# Scalable Primal-Dual Actor-Critic Method for Safe Multi-Agent RL with General Utilities

Donghao Ying, Yunkai Zhang, Yuhao Ding, Alec Koppel, Javad Lavaei

Published: 2023-05-27 | Link: http://arxiv.org/abs/2305.17568v1
###### Abstract
We investigate safe multi-agent reinforcement learning, where agents seek to collectively maximize an aggregate sum of local objectives while satisfying their own safety constraints. The objective and constraints are described by _general utilities_, i.e., nonlinear functions of the long-term state-action occupancy measure, which encompass broader decision-making goals such as risk, exploration, or imitations. The exponential growth of the state-action space size with the number of agents presents challenges for global observability, further exacerbated by the global coupling arising from agents' safety constraints. To tackle this issue, we propose a primal-dual method utilizing shadow reward and \(\kappa\)-hop neighbor truncation under a form of correlation decay property, where \(\kappa\) is the communication radius. In the exact setting, our algorithm converges to a first-order stationary point (FOSP) at the rate of \(\mathcal{O}\left(T^{-2/3}\right)\). In the sample-based setting, we demonstrate that, with high probability, our algorithm requires \(\widetilde{\mathcal{O}}\left(\epsilon^{-3.5}\right)\) samples to achieve an \(\epsilon\)-FOSP with an approximation error of \(\mathcal{O}\left(\phi_{0}^{2\kappa}\right)\), where \(\phi_{0}\in(0,1)\). Finally, we demonstrate the effectiveness of our model through extensive numerical experiments.
## 1 Introduction
Cooperative multi-agent reinforcement learning (MARL) involves agents operating within a shared environment, where each agent's decisions influence not only their objectives, but also those of others and the state trajectories [1]. In seeking to bring conceptually sound MARL techniques out of simulation [2; 3] and into real-world environments [4; 5], some key issues emerge: safety and communications overhead implied by a training mechanism. Although experimentally, the centralized training decentralized execution (CTDE) framework has gained traction recently [6; 7], its requirement for centralized data collection can pose issues for large-scale [8] or privacy-sensitive applications [9]. Therefore, we prioritize decentralized training, where to date most MARL techniques impose global state observability for performance certification [1]. In this work, we extend recent efforts to alleviate this bottleneck [10] especially in the case of safety critical settings, in a flexible manner that allows agents to incorporate risk, exploration, or prior information.
More specifically, we hypothesize that the multi-agent system consists of a network of agents that interact with each other locally according to an underlying dependence graph [10]. Second, to model safety constraints in reinforcement learning (RL), we adopt a standard approach based on constrained
Markov Decision Processes (CMDPs) [11], where one maximizes the expected total reward subject to a safety-related constraint on the expected total utility. Third, since many decision-making problems take a form beyond the classic cumulative reward, such as apprenticeship learning [12], diverse skill discovery [13], pure exploration [14], and state marginal matching [15], we focus on utility functions defined as nonlinear functions of the induced state-action occupancy measure, which can be abstracted as RL with general utilities [16; 17].
Towards formalizing the approach, we consider an MARL model consisting of \(n\) agents, each with its own local state \(s_{i}\) and action \(a_{i}\), where the multi-agent system is associated with an underlying dependence graph \(\mathcal{G}\). Each agent is privately associated with two local general utilities \(f_{i}(\cdot)\) and \(g_{i}(\cdot)\), where \(f_{i}(\cdot)\) and \(g_{i}(\cdot)\) are functions of the local occupancy measure. The objective is to find a safe policy for each agent that maximizes the average of the local objective utilities, namely, \(1/n\cdot\sum_{i=1}^{n}f_{i}(\cdot)\), and satisfies each agent's individual safety constraint described by its local utility \(g_{i}(\cdot)\). This setting captures a wide range of safety-critical applications, for example, resource allocation for the control of networked epidemic models [18], influence maximization in social networks [19], portfolio optimization in interbank network structures [20], intersection management for connected vehicles [21], and energy constraints of wireless communication networks [22].
Despite the significance of safe MARL with general utilities, prior works have either ignored the necessity of safety [23] or the computational bottleneck associated with global information exchange regarding the state and action per step [24]. In fact, the interaction of these two aspects requires addressing the fact that each agent's own safety constraint requires information from all others. In particular, the existing works in safe MARL allow full access to the global state or unlimited communications among all agents for policy implementation, value estimation, and constraint satisfaction [25; 26; 27]. However, this assumption is impractical due to the "curse of dimensionality" [28], as well as the limited information exchanges and communications among agents [29].
Therefore, to our knowledge, there is no methodology to both guarantee safety and incur manageable communications overhead for each agent. Compounding these issues is the fact that standard RL training schemes based on the _policy gradient theorem_[30] are not applicable in the context of general utilities. This deviation from the cumulative rewards adds to the difficulty of estimating the gradient, since there does not exist a policy-independent reward function. We refer the reader to Appendix A for an extended discussion of related works.
To address these challenges, we focus on the setting of **distributed training without global observability** and aim to develop a scalable algorithm with theoretical guarantees. Our main contributions are summarized below:
* Compared with existing theoretical works on safe MARL [25; 26; 31], we present the first safe MARL formulation that extends beyond cumulative forms in both the objective and constraints. We develop a truncated policy gradient estimator utilizing shadow reward and \(\kappa\)-hop policies under a form of correlation decay property, where \(\kappa\) represents the communication radius. The approximation errors arising from both policy implementation and value estimation are quantified.
* Despite the global coupling of agents' local utility functions, we propose a scalable Primal-Dual Actor-Critic method, which allows each agent to update its policy based only on the states and actions of its close neighbors and under limited communications. The effectiveness of the proposed algorithm is verified through numerical experiments.
* From the perspective of optimization, we devise new tools to analyze the convergence of the algorithm. In the exact setting, we establish an \(\mathcal{O}\left(T^{-2/3}\right)\) convergence rate for finding an FOSP, matching the standard convergence rate for solving nonconcave-convex saddle point problems. In the sample-based setting, we prove that, with high probability, the algorithm requires \(\widetilde{\mathcal{O}}\left(\epsilon^{-3.5}\right)\) samples to obtain an \(\epsilon\)-FOSP with an approximation error of \(\mathcal{O}(\phi_{0}^{2\kappa})\), where \(\phi_{0}\in(0,1)\).
## 2 Problem formulation
Consider a Constrained Markov Decision Process (CMDP) over a finite state space \(\mathcal{S}\) and a finite action space \(\mathcal{A}\) with a discount factor \(\gamma\in[0,1)\). A policy \(\pi\) is a function that specifies the decision rule of the agent, i.e., the agent takes action \(a\in\mathcal{A}\) with probability \(\pi(a|s)\) in state \(s\in\mathcal{S}\). When action \(a\) is taken, the transition to the next state \(s^{\prime}\) from state \(s\) follows the probability distribution
\(s^{\prime}\sim\mathbb{P}(\cdot|s,a)\). Let \(\rho\) be the initial distribution. For each policy \(\pi\) and state-action pair \((s,a)\in\mathcal{S}\times\mathcal{A}\), the _discounted state-action occupancy measure_ is defined as
\[\lambda^{\pi}(s,a)=\sum_{k=0}^{\infty}\gamma^{k}\mathbb{P}\left(s^{k}=s,a^{k}=a \middle|\pi,s^{0}\sim\rho\right). \tag{1}\]
The goal of the agent is to find a policy \(\pi\) that maximizes a general objective described by a (possibly) nonlinear function \(f(\cdot)\) of \(\lambda^{\pi}\), known as the _general utility_, subject to a constraint in the form of another general utility \(g(\cdot)\), namely
\[\max_{\pi}f(\lambda^{\pi})\quad\text{s.t.}\quad g(\lambda^{\pi})\geq 0. \tag{2}\]
When \(f(\cdot)=\langle r,\cdot\rangle\) and \(g(\cdot)=\langle u,\cdot\rangle\) are linear functions, (2) recovers the standard CMDP problem:
\[\max_{\pi}V^{\pi}(r)=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}r\left(s^{k},a^{k}\right)\middle|\pi,s^{0}\sim\rho\right],\text{ s.t. }V^{\pi}(u)=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}u\left(s^{k},a^{k} \right)\middle|\pi,s^{0}\sim\rho\right]\geq 0, \tag{3}\]
where \(V^{\pi}(\cdot)\) is usually referred to as the _value function_. In contrast, it has been shown that for some MDPs, there is no standard value function that can be equivalent to the general utility [16, Lemma 1]. In Appendix C, we provide more examples of formulation (2) beyond standard value functions.
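To make the contrast between a standard value function and a general utility concrete, the following self-contained Python sketch (the two-state MDP, policy, and reward are invented purely for illustration) computes the exact occupancy measure of a toy MDP and then evaluates a linear utility, which is just a value function as in (3), next to a nonlinear utility of \(\lambda^{\pi}\), which in general is not.

```python
import numpy as np

# A tiny 2-state, 2-action MDP to illustrate f(lambda) = <r, lambda> versus a nonlinear f.
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],       # P[s, a, s']: transition probabilities
              [[0.5, 0.5], [0.3, 0.7]]])
pi = np.array([[0.6, 0.4], [0.2, 0.8]])       # pi[s, a]: a fixed stochastic policy
rho = np.array([1.0, 0.0])                    # initial state distribution
r = np.array([[1.0, 0.0], [0.0, 2.0]])        # reward r(s, a)

# Discounted state occupancy d(s) = sum_k gamma^k P(s^k = s) solves d = rho + gamma * P_pi^T d.
P_pi = np.einsum('sa,sat->st', pi, P)
d = np.linalg.solve(np.eye(2) - gamma * P_pi.T, rho)
lam = d[:, None] * pi                         # occupancy measure lambda(s, a) = d(s) * pi(a|s)

print(np.sum(r * lam))                        # linear utility <r, lambda> = V^pi(r), Eq. (3)
print(-np.sum(lam * np.log(lam)))             # a nonlinear (general) utility of the same lambda
```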
In this work, we study the decentralized version of problem (2). Consider a system composed of a network of agents associated with a graph \(\mathcal{G}=(\mathcal{N},\mathcal{E}_{\mathcal{G}})\) (not densely connected in general), where the vertex set \(\mathcal{N}=\{1,2,\ldots,n\}\) denotes the set of \(n\) agents and the edge set \(\mathcal{E}_{\mathcal{G}}\) prescribes the communication links among the agents. Let \(d(i,j)\) be the length of the shortest path between agents \(i\) and \(j\) on \(\mathcal{G}\). For \(\kappa\geq 0\), let \(\mathcal{N}_{i}^{\kappa}=\{j\in\mathcal{N}|d(i,j)\leq\kappa\}\) denote the set of agents in the \(\kappa\)-hop neighborhood of agent \(i\), with the shorthand notation \(\mathcal{N}_{-i}^{\kappa}=\mathcal{N}\backslash\mathcal{N}_{i}^{\kappa}\) and \(-i=\mathcal{N}\backslash\{i\}\). The details of the decentralized nature of the system are summarized below:
**Space decomposition.** The global state and action spaces are the product of local spaces, i.e., \(\mathcal{S}=\mathcal{S}_{1}\times\mathcal{S}_{2}\times\cdots\times\mathcal{S}_{n}\), \(\mathcal{A}=\mathcal{A}_{1}\times\mathcal{A}_{2}\times\cdots\times\mathcal{A}_{n}\), meaning that for every \(s\in\mathcal{S}\) and \(a\in\mathcal{A}\), we can write \(s=(s_{1},s_{2},\ldots,s_{n})\) and \(a=(a_{1},a_{2},\ldots,a_{n})\). For each subset \(\mathcal{N}^{\prime}\subset\mathcal{N}\), we use \((s_{\mathcal{N}^{\prime}},a_{\mathcal{N}^{\prime}})\) to denote the state-action pair for the agents in \(\mathcal{N}^{\prime}\).
**Observation and communication.** Each agent \(i\) only has direct access to its own state \(s_{i}\) and action \(a_{i}\), while being allowed to communicate with its \(\kappa\)-hop neighborhood \(\mathcal{N}_{i}^{\kappa}\) for information exchanges. The communication radius \(\kappa\) is a given but tunable parameter.
**Transition decomposition.** Given the current global state \(s\) and action \(a\), the local states in the next period are independently generated, i.e., \(\mathbb{P}(s^{\prime}|s,a)=\prod_{i\in\mathcal{N}}\mathbb{P}_{i}(s^{\prime}_{i}|s,a)\), \(\forall s^{\prime}\in\mathcal{S}\), where we use \(\mathbb{P}_{i}\) to denote the local transition probability for agent \(i\).
**Policy factorization.** The global policy can be expressed as the product of local policies, such that \(\pi(a|s)=\prod_{i\in\mathcal{N}}\pi^{i}\left(a_{i}|s\right),\,\forall(s,a)\), i.e., given the global state \(s\), each agent \(i\) acts independently based on its local policy \(\pi^{i}\). We assume that each local policy \(\pi^{i}\) is parameterized by a parameter \(\theta_{i}\) within a convex set \(\Theta_{i}\). Thus, we can write \(\pi(a|s)=\pi_{\theta}(a|s)=\prod_{i\in\mathcal{N}}\pi^{i}_{\theta_{i}}\left(a_{i}|s\right)\), where \(\theta\in\Theta=\Theta_{1}\times\Theta_{2}\times\cdots\times\Theta_{n}\) is the concatenation of local parameters.
**Localized objective and constraint.** For each agent \(i\) and its local state-action pair \((s_{i},a_{i})\), the _local state-action occupancy measure_ under policy \(\pi\) is defined as
\[\lambda_{i}^{\pi}(s_{i},a_{i})=\sum_{k=0}^{\infty}\gamma^{k}\mathbb{P}\left(s_{ i}^{k}=s_{i},a_{i}^{k}=a_{i}\middle|\pi,s^{0}\sim\rho\right), \tag{4}\]
which can be viewed as the marginalization of the global occupancy measure, i.e., \(\lambda_{i}^{\pi}(s_{i},a_{i})=\sum_{s_{-i},a_{-i}}\lambda^{\pi}(s,a)\). Each agent \(i\) is privately associated with two local (general) utilities \(f_{i}(\cdot)\) and \(g_{i}(\cdot)\), which are functions of the local occupancy measure \(\lambda_{i}^{\pi}\). Agents cooperate with each other aiming at maximizing the global objective \(f(\cdot)\), defined as the average of local utilities \(\{f_{i}(\cdot)\}_{i\in\mathcal{N}}\), while each agent \(i\) needs to satisfy its own safety constraint described by the local utility \(g_{i}(\cdot)\). Then, under the parameterization \(\pi_{\theta}\), (2) can be rewritten as
\[\max_{\theta\in\Theta}\;F(\theta)\coloneqq\frac{1}{n}\sum_{i\in\mathcal{N}}f_{ i}(\lambda_{i}^{\pi_{\theta}}),\text{ s.t. }G_{i}(\theta)\coloneqq g_{i}(\lambda_{i}^{\pi_{\theta}})\geq 0,\;\forall i\in \mathcal{N}. \tag{5}\]
Note that problem (5) is not separable among agents due to the coupling of occupancy measures. Compared to the formulation where the constraint is modeled as the average of local constraints, e.g.,
[27], (5) is stricter and more interpretable. We emphasize that the method proposed in this paper does not require the relaxation of local constraints in (5) to a joint constraint and it directly generalizes to the case of multiple constraints per agent.
Consider the Lagrangian function associated with (5):
\[\mathcal{L}(\theta,\mu)\coloneqq F(\theta)+\frac{1}{n}\sum_{i\in\mathcal{N}}\mu _{i}G_{i}(\theta)=\frac{1}{n}\sum_{i\in\mathcal{N}}\big{[}f_{i}(\lambda_{i}^{ \pi_{\theta}})+\mu_{i}g_{i}(\lambda_{i}^{\pi_{\theta}})\big{]}, \tag{6}\]
where \(\mu\in\mathbb{R}_{+}^{n}\) is the Lagrangian multiplier. The Lagrangian formulation [32] of (5) can be written as
\[\max_{\theta\in\Theta}\min_{\mu\geq 0}\mathcal{L}(\theta,\mu). \tag{7}\]
Since the general utilities \(f_{i}(\lambda_{i}^{\pi_{\theta}})\) and \(g_{i}(\lambda_{i}^{\pi_{\theta}})\) are generally non-concave w.r.t. \(\theta\), even when they take the form of cumulative rewards, finding the global optimum of (5) is NP-hard in general [33]. Our goal in this work is to develop a scalable and provably efficient gradient-based primal-dual algorithm that can find the first-order stationary points of (5).
## 3 Scalable primal-dual actor-critic method
For a standard value function with the reward \(r\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}\), denoted as \(V^{\pi_{\theta}}(r)=\langle r,\lambda^{\pi_{\theta}}\rangle\), the policy gradient theorem (see Lemma D.1) yields that
\[\nabla_{\theta}V^{\pi_{\theta}}(r)=r^{\top}\cdot\nabla_{\theta}\lambda^{\pi_ {\theta}}=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{\pi_{\theta}},a\sim\pi_{ \theta}(\cdot|s)}\big{[}\nabla_{\theta}\log\pi_{\theta}\big{(}a|s\big{)}\cdot Q ^{\pi_{\theta}}(r;s,a)\big{]},\]
where \(d^{\pi_{\theta}}(s)\coloneqq(1-\gamma)\sum_{a\in\mathcal{A}}\lambda^{\pi_{ \theta}}(s,a)\) is the discounted state occupancy measure, \(\nabla_{\theta}\log\pi_{\theta}(\cdot|\cdot)\) is the score function, and \(Q^{\pi_{\theta}}(r;\cdot,\cdot)\) is the Q-function with the reward \(r\), defined as
\[Q^{\pi_{\theta}}(r;s,a)=\mathbb{E}\left[\sum_{k=0}^{\infty}\gamma^{k}r\left(s ^{k},a^{k}\right)\Bigg{|}\pi_{\theta},s^{0}=s,a^{0}=a\right]. \tag{8}\]
Although this elegant result no longer holds for general utilities, we can apply the chain rule:
\[\nabla_{\theta}f(\lambda^{\pi_{\theta}})=[\nabla_{\lambda}f(\lambda^{\pi_{ \theta}})]^{\top}\cdot\nabla_{\theta}\lambda^{\pi_{\theta}}=\nabla_{\theta}V ^{\pi_{\theta}}\big{(}\nabla_{\lambda}f\big{(}\lambda^{\pi_{\theta}}\big{)} \big{)}, \tag{9}\]
i.e., the gradient \(\nabla_{\theta}f(\lambda^{\pi_{\theta}})\) is equal to the policy gradient of a standard value function with the reward \(\nabla_{\lambda}f\big{(}\lambda^{\pi_{\theta}}\big{)}\). We introduce the following definitions [23] for the distributed problem (5).
**Definition 3.1** (Shadow reward and shadow Q-function).: _For each agent \(i\), define \(r_{f_{i}}^{\pi_{\theta}}\coloneqq\nabla_{\lambda_{i}}f_{i}(\lambda_{i}^{\pi_{ \theta}})\in\mathbb{R}^{|\mathcal{S}_{i}|\times|\mathcal{A}_{i}|}\) as the (local) shadow reward for the utility \(f_{i}(\cdot)\) under policy \(\pi_{\theta}\). Define \(Q_{f_{i}}^{\pi_{\theta}}(s,a)\coloneqq Q^{\pi_{\theta}}(r_{f_{i}}^{\pi_{\theta} };s,a)\) as the associated (local) shadow Q-function for \(f_{i}(\cdot)\). Similarly, let \(r_{g_{i}}^{\pi_{\theta}}\) and \(Q_{g_{i}}^{\pi_{\theta}}(s,a)\) be the shadow reward and the Q function for \(g_{i}(\cdot)\)._
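To illustrate Definition 3.1, the sketch below picks one hypothetical local utility, the entropy of the occupancy measure (our own illustrative choice, not a utility prescribed by the paper), and evaluates its shadow reward; because the shadow reward is the gradient at the current \(\lambda_{i}^{\pi_{\theta}}\), it has to be re-computed whenever the policy changes.

```python
import numpy as np

def entropy_utility(lam_i):
    """An example general utility f_i(lambda_i) = -sum lambda_i * log(lambda_i):
    a nonlinear function of the local occupancy measure, not a cumulative reward."""
    return -np.sum(lam_i * np.log(lam_i + 1e-12))

def shadow_reward(lam_i):
    """Shadow reward r_{f_i}^{pi_theta} = grad_{lambda_i} f_i(lambda_i); for the entropy
    utility this equals -(log(lambda_i) + 1), so it is policy-dependent through lambda_i."""
    return -(np.log(lam_i + 1e-12) + 1.0)

# Hypothetical local occupancy measure over |S_i| x |A_i| = 2 x 2 pairs (sums to 1/(1-gamma)).
gamma = 0.9
lam_i = np.array([[0.4, 0.1], [0.3, 0.2]]) / (1 - gamma)
print(entropy_utility(lam_i))
print(shadow_reward(lam_i))      # feeding this reward to a critic yields the shadow Q-function
```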
Combining Definition 3.1 with (9), we can write the local gradient for agent \(i\), i.e., \(\nabla_{\theta_{i}}\mathcal{L}(\theta,\mu)\), as
\[\nabla_{\theta_{i}}\mathcal{L}(\theta,\mu)=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d ^{\pi_{\theta}},a\sim\pi_{\theta}(\cdot|s)}\Bigg{[}\nabla_{\theta_{i}}\log \pi_{\theta_{i}}^{i}\big{(}a_{i}|s\big{)}\cdot\frac{1}{n}\sum_{j\in\mathcal{N} }\Big{(}Q_{f_{j}}^{\pi_{\theta}}(s,a)+\mu_{j}Q_{g_{j}}^{\pi_{\theta}}(s,a) \Big{)}\Bigg{]}, \tag{10}\]
where we apply the policy factorization to arrive at \(\nabla_{\theta_{i}}\log\pi_{\theta}(a|s)=\nabla_{\theta_{i}}\log\pi_{\theta_{i}} ^{i}(a_{i}|s)\). By (10), each agent needs to know the shadow Q functions of all agents, as well as the global state, to evaluate its own gradient. However, especially in large networks, this is both inefficient, due to the communication cost, and impractical because of the limited communication radius. In the remainder of this section, we aim to design a scalable estimator for \(\nabla_{\theta_{i}}\mathcal{L}(\theta,\mu)\) that requires only local communications.
### Spatial correlation decay and \(\kappa\)-hop policies
Inspired by [34], we assume that the transition probability satisfies a form of the spatial correlation decay property [35; 36].
**Assumption 3.2**.: _For a matrix \(M\in\mathbb{R}^{n\times n}\) whose \((i,j)\)-th entry is defined as_
\[M_{ij}=\sup_{s_{j},a_{j},s^{\prime}_{j},a^{\prime}_{j},s_{-j},a_{-j}}\left\| \mathbb{P}_{i}\left(\cdot|s_{j},s_{-j},a_{j},a_{-j}\right)-\mathbb{P}_{i} \left(\cdot|s^{\prime}_{j},s_{-j},a^{\prime}_{j},a_{-j}\right)\right\|_{1}, \tag{11}\]
_assume that there exists \(\omega>0\) such that \(\max_{i\in\mathcal{N}}\sum_{j\in\mathcal{N}}e^{\omega d(i,j)}M_{ij}\leq\chi\) with \(\chi<2/\gamma\), where \(\gamma\) is the discount factor._
The value of \(M_{ij}\) reflects the extent to which agent \(j\)'s state and action influence the local transition probability of agent \(i\). Thus, Assumption 3.2 amounts to requiring this influence to decrease exponentially with the distance between any two agents. Such a decay is often observed in many large-scale real-world systems, e.g., the strength of signals decreases exponentially with distance [37].
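Assumption 3.2 can be checked numerically once \(M\) and the graph distances are available; the sketch below does so for a made-up three-agent line graph in which the influence strengths \(M_{ij}\) are assumed to halve with every hop (all numbers are illustrative).

```python
import numpy as np

def correlation_decay_constant(M, d, omega):
    """chi = max_i sum_j exp(omega * d(i, j)) * M_ij from Assumption 3.2."""
    return np.max(np.sum(np.exp(omega * d) * M, axis=1))

# Hypothetical 3-agent line graph; agent j's influence on agent i's transition kernel
# (the total-variation bound M_ij) is assumed to decay geometrically with d(i, j).
d = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], dtype=float)
M = 0.4 * 0.5 ** d
omega, gamma = 0.2, 0.9
chi = correlation_decay_constant(M, d, omega)
print(chi, chi < 2 / gamma)      # the assumption requires chi < 2 / gamma
```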
Furthermore, as mentioned earlier, the implementation of the local policy \(\pi^{i}_{\theta_{i}}(\cdot|s)\) is still impractical, since it requires access to the global state \(s\), while the allowable communication radius is limited to \(\kappa\). To alleviate this issue, we focus on a specific class of policies in which the local policy of agent \(i\) only depends on the states of these agents in its \(\kappa\)-hop neighborhood \(\mathcal{N}_{i}^{\kappa}\). This class of policies is also referred to as \(\kappa\)-hop policies in the concurrent work [38].
**Assumption 3.3** (\(\kappa\)-hop policies).: _For each agent \(i\in\mathcal{N}\) and \(\theta\in\Theta\), the local policy \(\pi^{i}_{\theta_{i}}(\cdot|s)\) depends only on the neighbor states \(s_{\mathcal{N}_{i}^{\kappa}}\), i.e.,_
\[\pi^{i}_{\theta_{i}}(\cdot|s_{\mathcal{N}_{i}^{\kappa}},s_{\mathcal{N}_{-i}^{\kappa}})=\pi^{i}_{\theta_{i}}(\cdot|s_{\mathcal{N}_{i}^{\kappa}},s^{\prime}_{\mathcal{N}_{-i}^{\kappa}}),\ \forall s\in\mathcal{S}\ \text{and}\ \forall s^{\prime}_{\mathcal{N}_{-i}^{\kappa}}\in\mathcal{S}_{\mathcal{N}_{-i}^{\kappa}}. \tag{12}\]
For simplicity, we use the notation \(\pi^{i}_{\theta_{i}}(\cdot|s)=\pi^{i}_{\theta_{i}}(\cdot|s_{\mathcal{N}_{i}^{\kappa}})\) for \(\kappa\)-hop policies when it is clear from context. We note that, for any original policy function \(\pi_{\theta}(\cdot|s)\), an induced \(\kappa\)-hop policy \(\hat{\pi}_{\theta}(\cdot|s_{\mathcal{N}_{i}^{\kappa}})\) can be defined by fixing the states \(s_{\mathcal{N}_{-i}^{\kappa}}\) to some arbitrary values and focusing only on the states of agents in \(\mathcal{N}_{i}^{\kappa}\). When considering only \(\kappa\)-hop policies, it is essential to understand how much information is lost compared to the case where agents have access to the global states. The following proposition quantifies the maximum information loss in terms of the occupancy measure under the assumption that the original policy function also satisfies a spatial correlation decay property.
**Proposition 3.4**.: _Suppose that there exist \(c\geq 0\) and \(\phi\in[0,1)\) such that for every \(\theta\in\Theta\), agent \(i\in\mathcal{N}\), and states \(s,s^{\prime}\in\mathcal{S}\) such that \(s_{\mathcal{N}_{i}^{\kappa}}=s^{\prime}_{\mathcal{N}_{i}^{\kappa}}\), we have \(\left\|\pi^{i}_{\theta_{i}}(\cdot|s)-\pi^{i}_{\theta_{i}}(\cdot|s^{\prime}) \right\|_{1}\leq c\phi^{\kappa}\). Let \(\hat{\pi}_{\theta}\) be an induced \(\kappa\)-hop policy of \(\pi_{\theta}\). Then, it holds that_
\[\left\|\lambda_{i}^{\hat{\pi}_{\theta}}-\lambda_{i}^{\pi_{\theta}}\right\|_{1}\leq\frac{nc\phi^{\kappa}}{(1-\gamma)^{2}},\quad\forall i\in\mathcal{N}. \tag{13}\]
The condition on the local policy in Proposition 3.4 encodes that every \(\pi^{i}_{\theta_{i}}\) is exponentially less sensitive to the states of agents outside \(\mathcal{N}_{i}^{\kappa}\), which is a common assumption in MARL to alleviate computationally burdensome and practically intractable communication requirements imposed by the global observability [34; 39; 38]. By Proposition 3.4, the difference in occupancy measures under \(\pi_{\theta}\) and \(\hat{\pi}_{\theta}\) is controlled by \(\|\pi^{i}_{\theta_{i}}-\hat{\pi}^{i}_{\theta_{i}}\|_{1}\). Therefore, if \(f_{i}(\lambda^{\pi})\) and \(g_{i}(\lambda^{\pi})\) are Lipschitz continuous w.r.t. \(\lambda^{\pi}\), Proposition 3.4 implies an \(\mathcal{O}(\phi^{\kappa})\) approximation of the Lagrangian function (6) using \(\kappa\)-hop policies. The faster the spatial decay of policy is, the more accurate the approximation of the \(\kappa\)-hop policy is. This justifies our focus on learning a \(\kappa\)-hop policy.
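As a concrete, purely illustrative instance of Assumption 3.3, the Python sketch below computes a \(\kappa\)-hop neighborhood \(\mathcal{N}_{i}^{\kappa}\) by breadth-first search and evaluates a tabular softmax \(\kappa\)-hop policy that reads only the neighborhood states; the line-graph topology, parameter layout, and all numbers are our own assumptions rather than the paper's implementation.

```python
import numpy as np
from collections import deque

def k_hop_neighborhood(adj, i, kappa):
    """Return N_i^kappa, the agents within graph distance kappa of agent i (BFS)."""
    dist, queue = {i: 0}, deque([i])
    while queue:
        u = queue.popleft()
        if dist[u] == kappa:                    # do not expand beyond radius kappa
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sorted(dist)

def kappa_hop_policy(theta_i, s, neigh_i):
    """Tabular softmax kappa-hop policy: agent i's action distribution depends only on
    the states s_{N_i^kappa} (Assumption 3.3), not on the full global state s."""
    logits = theta_i[tuple(s[j] for j in neigh_i)]
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Line graph 0 - 1 - 2 - 3 - 4 with binary local states/actions, kappa = 1, agent i = 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
neigh_2 = k_hop_neighborhood(adj, 2, kappa=1)          # [1, 2, 3]
rng = np.random.default_rng(0)
theta_2 = {ns: rng.standard_normal(2)                  # one logit vector per neighborhood state
           for ns in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]}
print(neigh_2, kappa_hop_policy(theta_2, (1, 0, 1, 1, 0), neigh_2))
```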
### Truncated policy gradient estimator
In the absence of global observability, it is critical to find a scalable estimator for the local gradient \(\nabla_{\theta_{i}}\mathcal{L}(\theta,\mu)\) in (10), so that each agent can update its local policy with limited communications.
By leveraging the similar idea in the definition of \(\kappa\)-hop policies, we define the \(\kappa\)_-hop truncated (shadow) \(Q\)-function_, denoted as \(\widetilde{Q}^{\pi_{\theta}}_{\diamond_{i}}:\mathcal{S}_{\mathcal{N}_{i}^{\kappa}}\times\mathcal{A}_{\mathcal{N}_{i}^{\kappa}}\rightarrow\mathbb{R}\), to be
\[\widetilde{Q}^{\pi_{\theta}}_{\diamond_{i}}(s_{\mathcal{N}_{i}^{\kappa}},a_{\mathcal{N}_{i}^{\kappa}}):=Q^{\pi_{\theta}}_{\diamond_{i}}(s_{\mathcal{N}_{i}^{\kappa}},\bar{s}_{\mathcal{N}_{-i}^{\kappa}},a_{\mathcal{N}_{i}^{\kappa}},\bar{a}_{\mathcal{N}_{-i}^{\kappa}}),\ \forall(s_{\mathcal{N}_{i}^{\kappa}},a_{\mathcal{N}_{i}^{\kappa}})\in\mathcal{S}_{\mathcal{N}_{i}^{\kappa}}\times\mathcal{A}_{\mathcal{N}_{i}^{\kappa}},\ \diamond\in\{f,g\}, \tag{14}\]
where \((\bar{s}_{\mathcal{N}_{-i}^{\kappa}},\bar{a}_{\mathcal{N}_{-i}^{\kappa}})\) is any fixed state-action pair for the agents in \(\mathcal{N}_{-i}^{\kappa}\). Now, we introduce the following _truncated policy gradient estimator_ for agent \(i\):
\[\widehat{\nabla}_{\theta_{i}}\mathcal{L}(\theta,\mu)=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{\pi_{\theta}},\,a\sim\pi_{\theta}(\cdot|s)}\Bigg[\nabla_{\theta_{i}}\log\pi^{i}_{\theta_{i}}\big(a_{i}|s_{\mathcal{N}_{i}^{\kappa}}\big)\cdot\frac{1}{n}\sum_{j\in\mathcal{N}_{i}^{\kappa}}\Big(\widetilde{Q}^{\pi_{\theta}}_{f_{j}}(s_{\mathcal{N}_{j}^{\kappa}},a_{\mathcal{N}_{j}^{\kappa}})+\mu_{j}\widetilde{Q}^{\pi_{\theta}}_{g_{j}}(s_{\mathcal{N}_{j}^{\kappa}},a_{\mathcal{N}_{j}^{\kappa}})\Big)\Bigg]. \tag{15}\]
**Lemma 3.5**.: _Suppose that Assumptions 3.2 and 3.3 hold and there exist \(M_{r},M_{\pi}>0\) such that \(\left\|r_{\diamond_{i}}^{\pi_{\theta}}\right\|_{\infty}\leq M_{r}\) and \(\left\|\nabla_{\theta_{i}}\log\pi_{\theta_{i}}^{i}\right\|_{2}\leq M_{\pi}\), for every \(\diamond\in\{f,g\}\), \(\theta\in\Theta\), \(i\in\mathcal{N}\). Then, for all \(\theta\in\Theta\), \(i\in\mathcal{N}\), we have that_
\[\left\|\widehat{\nabla}_{\theta_{i}}\mathcal{L}(\theta,\mu)-\nabla_{\theta_{i }}\mathcal{L}(\theta,\mu)\right\|_{2}\leq\frac{\big{(}1+\left\|\mu\right\|_{ \infty}\big{)}M_{\pi}c_{0}\phi_{0}^{\kappa}}{1-\gamma}=\mathcal{O}(\phi_{0}^{ \kappa}), \tag{16}\]
_where \(c_{0}=2\gamma\chi M_{r}\big{/}(2-\gamma\chi)\) and \(\phi_{0}=e^{-\omega}\)._
Recall that the shadow reward is defined as the gradient of \(f_{i}(\cdot)\) or \(g_{i}(\cdot)\) w.r.t. the local occupancy measure. Since the set of all possible occupancy measures is compact (see (45)), the existence of \(M_{r}>0\) in Lemma 3.5 is guaranteed if \(f_{i}(\cdot)\) and \(g_{i}(\cdot)\) are continuously differentiable. The main advantage of using the estimator \(\widehat{\nabla}_{\theta_{i}}\mathcal{L}(\theta,\mu)\) is that every agent \(i\) only needs to know the truncated Q-functions of agents in its neighborhood \(\mathcal{N}_{i}^{\kappa}\), which can significantly reduce the communication burden and the storage requirement when graph \(\mathcal{G}\) is not densely connected. The proof of Lemma 3.5 can be found in Appendix E.2.
### Algorithm design
Using the results of the preceding section, we put together all the pieces and propose the _Primal-Dual Actor-Critic Method with Shadow Reward and \(\kappa\)-hop Policy_, which includes three stages: policy evaluation by the critic, Lagrangian multiplier update, and policy update by the actor. Below, we provide an overview of the algorithm, while referring the reader to Appendix D.1 for the pseudocode (Algorithm 1), flow diagram (Figure 2), as well as a more detailed discussion.
**Stage 1 (policy evaluation by the critic, lines 3-6).** In each iteration \(t\), the current policy \(\pi_{\theta^{t}}\) is simulated to generate a batch of trajectories, while each agent \(i\) collects its neighborhood trajectories, i.e., the state-action pairs of the agents in \(\mathcal{N}_{i}^{\kappa}\), as batch \(\mathcal{B}_{i}^{t}\). Then, the batch is used to estimate the local occupancy measures \(\lambda_{i}^{\pi_{\theta^{t}}}\) through
\[\widetilde{\lambda}_{i}^{t}=\frac{1}{B}\sum_{\tau\in\mathcal{B}_{i}^{t}}\sum_ {k=0}^{H-1}\gamma^{k}\cdot\mathbb{1}_{i}\left(s_{i}^{k},a_{i}^{k}\right)\in \mathbb{R}^{|\mathcal{S}_{i}|\times|\mathcal{A}_{i}|}, \tag{17}\]
which are subsequently applied to compute the empirical values for the constraint function \(g_{i}(\lambda_{i}^{\pi_{\theta^{t}}})\) and shadow rewards \(r_{f_{i}}^{\pi_{\theta^{t}}}\) and \(r_{g_{i}}^{\pi_{\theta^{t}}}\), denoted as \(\widetilde{g}_{i}^{t}\), \(\widetilde{r}_{f_{i}}^{t}\), and \(\widetilde{r}_{g_{i}}^{t}\), respectively. It is worth mentioning that, when all utility functions reduce to the form of cumulative rewards, the above operation is unnecessary, since all agents have policy-independent local reward functions.
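A minimal sketch of the estimator (17), assuming each trajectory is stored as agent \(i\)'s own sequence of (state index, action index) pairs; the batch below is fabricated, and the discounted accumulation at index \((s_{i},a_{i})\) mirrors the indicator \(\mathbb{1}_{i}(s_{i}^{k},a_{i}^{k})\).

```python
import numpy as np

def empirical_occupancy(batch_i, n_states, n_actions, gamma):
    """Empirical local occupancy measure (17): discounted counts of agent i's own
    state-action pairs, averaged over the B trajectories in the batch."""
    lam = np.zeros((n_states, n_actions))
    for tau in batch_i:                              # tau = [(s_i^0, a_i^0), ..., (s_i^{H-1}, a_i^{H-1})]
        for k, (s_i, a_i) in enumerate(tau):
            lam[s_i, a_i] += gamma ** k
    return lam / len(batch_i)

# Hypothetical batch with B = 2 trajectories of horizon H = 3 and |S_i| = |A_i| = 2.
batch_i = [[(0, 1), (1, 1), (0, 0)], [(1, 0), (1, 1), (1, 1)]]
lam_hat = empirical_occupancy(batch_i, n_states=2, n_actions=2, gamma=0.9)
print(lam_hat)                                       # plug into grad f_i / grad g_i for shadow rewards
```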
Next, the agents jointly conduct a distributed evaluation subroutine to estimate their truncated shadow Q-functions \(\{\widetilde{Q}_{\diamond_{i}}^{\pi_{\theta^{t}}}\}_{i\in\mathcal{N}}\) using the empirical shadow rewards \(\{\widetilde{r}_{\diamond_{i}}^{t}\}_{i\in\mathcal{N}}\), where \(\diamond\in\{f,g\}\). During the subroutine, each agent \(i\) communicates with its neighbors in \(\mathcal{N}_{i}^{\kappa}\) to exchange state-action information, but only needs to access its own empirical shadow reward \(\widetilde{r}_{\diamond_{i}}^{t}\). In principle, any existing approach that satisfies the observation and communication requirements can be used for the truncated Q-function estimation, such as [40, 41, 42]. As an example subroutine, we introduce the _Temporal Difference (TD) learning_ method [43], which is outlined as Algorithm 2 in Appendix D.1.
**Stage 2 (Lagrangian multiplier update, line 7).** Instead of employing the projected gradient descent, we propose to update the dual variables by the following formula:
\[\mu^{t+1}=\operatorname*{argmin}_{\mu\in\mathcal{U}}\mathcal{L}(\theta^{t}, \mu)+\frac{1}{2\eta_{\mu}}\|\mu\|_{2}^{2}=\mathcal{P}_{\mathcal{U}}\left(- \eta_{\mu}\nabla_{\mu}\mathcal{L}(\theta^{t},\mu^{t})\right), \tag{18}\]
where weight \(\eta_{\mu}\) can be viewed as the dual "step-size". In practice, we replace the true dual gradient \(\nabla_{\mu_{i}}\mathcal{L}(\theta^{t},\mu^{t})=g_{i}(\lambda_{i}^{\pi_{ \theta^{t}}})/n\) with its empirical estimator \(\widehat{\nabla}_{\mu_{i}}\mathcal{L}(\theta^{t},\mu^{t})\). The feasible region for the dual variable is denoted by \(\mathcal{U}\subseteq\mathbb{R}_{+}^{n}\) and will be specified later.
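A minimal sketch of the closed-form dual update (18), assuming the feasible region is the box \(\mathcal{U}=[0,\bar{\mu}]^{n}\) later fixed in Assumption 4.3; the function name and the constraint estimates are made up for illustration.

```python
import numpy as np

def dual_update(g_hat, eta_mu, mu_bar):
    """Dual update (18): mu^{t+1} = P_U(-eta_mu * grad_mu L(theta^t, mu^t)), where
    grad_{mu_i} L = g_i(lambda_i^{pi_theta}) / n and U = [0, mu_bar]^n."""
    n = len(g_hat)
    return np.clip(-eta_mu * g_hat / n, 0.0, mu_bar)

# Hypothetical empirical constraint values: agents 0 and 2 currently violate g_i >= 0.
g_hat = np.array([-0.3, 0.5, -0.1])
print(dual_update(g_hat, eta_mu=10.0, mu_bar=5.0))   # violated constraints receive positive multipliers
```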
**Stage 3 (policy update by the actor, lines 8-9).** To perform the policy update, each agent \(i\) first shares its updated dual variable \(\mu_{i}^{t+1}\) and the values of its estimated truncated Q-functions along the trajectories in batch \(\mathcal{B}_{i}^{t}\) with the agents in its \(\kappa\)-hop neighborhood \(\mathcal{N}_{i}^{\kappa}\). Then, the agent estimates its truncated policy gradient \(\widehat{\nabla}_{\theta_{i}}\mathcal{L}(\theta^{t},\mu^{t+1})\) through a REINFORCE-based mechanism [44] as follows
\[\widehat{\nabla}_{\theta_{i}}\mathcal{L}(\theta^{t},\mu^{t+1})=\frac{1}{B}\sum_{\tau\in\mathcal{B}_{i}^{t}}\left[\sum_{k=0}^{H-1}\gamma^{k}\nabla_{\theta_{i}}\log\pi_{\theta_{i}}^{i}(a_{i}^{k}|s_{\mathcal{N}_{i}^{\kappa}}^{k})\cdot\frac{1}{n}\sum_{j\in\mathcal{N}_{i}^{\kappa}}\Big(\widetilde{Q}_{f_{j}}^{t}(s_{\mathcal{N}_{j}^{\kappa}}^{k},a_{\mathcal{N}_{j}^{\kappa}}^{k})+\mu_{j}^{t+1}\widetilde{Q}_{g_{j}}^{t}(s_{\mathcal{N}_{j}^{\kappa}}^{k},a_{\mathcal{N}_{j}^{\kappa}}^{k})\Big)\right].\]
Finally, each agent \(i\) updates its local policy parameter by a projected gradient ascent, i.e.,
\[\theta_{i}^{t+1}=\mathcal{P}_{\Theta_{i}}\left(\theta_{i}^{t}+\eta_{\theta}\cdot \widehat{\nabla}_{\theta_{i}}\mathcal{L}(\theta^{t},\mu^{t+1})\right). \tag{19}\]
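For completeness, a sketch of the projected ascent step (19), assuming (purely for illustration) that each \(\Theta_{i}\) is a box so that the projection reduces to a clip; in general the projection depends on the chosen parameterization.

```python
import numpy as np

def policy_update(theta_i, grad_i, eta_theta, lo=-1.0, hi=1.0):
    """Projected gradient ascent (19) onto Theta_i, here taken to be the box [lo, hi]^d;
    the paper only requires Theta_i to be convex, so the projection is problem-specific."""
    return np.clip(theta_i + eta_theta * grad_i, lo, hi)

theta_i = np.array([0.2, -0.5, 1.0])
grad_i = np.array([1.0, 2.0, -3.0])                  # hypothetical truncated gradient estimate
print(policy_update(theta_i, grad_i, eta_theta=0.1))
```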
We emphasize that Algorithm 1 is based on the distributed training regime and does not require full observability of global states and actions.
## 4 Convergence analysis
In this section, we analyze the convergence behavior and the sample complexity of Algorithm 1. We begin by summarizing the technical assumptions, including some mentioned previously in the paper. We direct the reader to Appendices F and G where we provide discussions for each assumption and present proofs for the results in this section.
**Assumption 4.1**.: _There exists \(L_{\lambda}>0\) such that \(\nabla_{\lambda_{i}}f_{i}(\cdot)\) and \(\nabla_{\lambda_{i}}g_{i}(\cdot)\) are \(L_{\lambda}\)-Lipschitz continuous w.r.t. \(\lambda_{i}\), i.e., \(\|\nabla_{\lambda_{i}}f_{i}(\lambda_{i})-\nabla_{\lambda_{i}}f_{i}(\lambda_{i }^{\prime})\|_{\infty}\leq L_{\lambda}\|\lambda_{i}-\lambda_{i}^{\prime}\|_{2}\) and \(\|\nabla_{\lambda_{i}}g_{i}(\lambda_{i})-\nabla_{\lambda_{i}}g_{i}(\lambda_{ i}^{\prime})\|_{\infty}\leq L_{\lambda}\|\lambda_{i}-\lambda_{i}^{\prime}\|_{2}, \;\forall i\in\mathcal{N}\)._
**Assumption 4.2**.: _The parameterized policy \(\pi_{\theta}\) is such that **(I)** the score function is bounded, i.e., \(\exists M_{\pi}>0\) s.t. \(\|\nabla_{\theta_{i}}\log\pi_{\theta_{i}}^{i}(a_{i}|s_{\mathcal{N}_{i}^{\kappa}})\|_{2}\leq M_{\pi}\), \(\forall(s,a)\in\mathcal{S}\times\mathcal{A}\), \(\theta\in\Theta\), \(i\in\mathcal{N}\). **(II)** \(\exists L_{\theta}>0\) s.t. the utility functions \(F(\theta)=f(\lambda^{\pi_{\theta}})\) and \(G_{i}(\theta)=g_{i}(\lambda^{\pi_{\theta}}_{i})\) are \(L_{\theta}\)-smooth w.r.t. \(\theta,\;\forall i\in\mathcal{N}\)._
**Assumption 4.3**.: _There exist an FOSP \((\theta^{*},\mu^{*})\) of (5) and a constant \(\overline{\mu}>0\) s.t. \(\mu_{i}^{*}<\overline{\mu}\), \(\forall i\in\mathcal{N}\). Let \(\mathcal{U}=U^{n}=[0,\overline{\mu}]^{n}\)._
In Lemma F.5, we summarize a few properties that are direct consequences of Assumptions 4.1-4.3. Due to the non-concavity of problem (5), our focus is to find an approximate first-order stationary point (FOSP). A point \((\theta,\mu)\in\Theta\times\mathcal{U}\) is said to be an \(\epsilon\)-FOSP if
\[\mathcal{E}(\theta,\mu)\coloneqq[\mathcal{X}(\theta,\mu)]^{2}+ \left[\mathcal{Y}(\theta,\mu)\right]^{2}\leq\epsilon, \tag{20}\]
where the metrics \(\mathcal{X}(\cdot,\cdot)\) and \(\mathcal{Y}(\cdot,\cdot)\) are defined as
\[\mathcal{X}(\theta,\mu)\coloneqq\max_{\theta^{\prime}\in\Theta,\|\theta^{\prime}-\theta\|_{2}\leq 1}\left\langle\nabla_{\theta}\mathcal{L}(\theta,\mu),\theta^{\prime}-\theta\right\rangle,\quad\mathcal{Y}(\theta,\mu)\coloneqq-\min_{\mu^{\prime}\in\mathcal{U},\|\mu^{\prime}-\mu\|_{2}\leq 1}\left\langle\nabla_{\mu}\mathcal{L}(\theta,\mu),\mu^{\prime}-\mu\right\rangle. \tag{21}\]
The definitions of \(\mathcal{X}(\cdot,\cdot)\) and \(\mathcal{Y}(\cdot,\cdot)\) are based on the first-order optimality condition [45; 46]. Given \(\theta^{*}\in\Theta\) and \(\mu^{*}\in\mathcal{U}\), it can be shown that \(\mathcal{E}(\theta^{*},\mu^{*})=0\) implies that \((\theta^{*},\mu^{*})\) is an FOSP of (5) (see Lemma F.6). In the following, we first consider the exact setting where the agents can obtain the true values of their local occupancy measures, shadow Q-functions, and truncated policy gradients. Therefore, the only source of approximation error is the truncation of the policy gradient.
**Theorem 4.4** (Exact setting).: _Let Assumptions 3.2, 3.3, 4.1-4.3 hold and suppose that the agents can accurately estimate their local occupancy measures, shadow Q-functions, and truncated policy gradients. For every \(T>0\), let \(\left\{\left(\mu^{t},\theta^{t}\right)\right\}_{t=0}^{T}\) be the sequence generated by Algorithm 1 with \(\eta_{\mu}=\mathcal{O}\left(T^{1/3}\right)\) and \(\eta_{\theta}=1\big{/}\big{(}L_{\theta\theta}+4L_{\theta\mu}^{2}\eta_{\mu}\big{)}\), where \(L_{\theta\theta},L_{\theta\mu}\) are Lipschitz constants defined in Lemma F.5. Then, there exists \(t^{*}\in\{0,1,\ldots,T-1\}\) such that_
\[\mathcal{E}\left(\theta^{t^{*}},\mu^{t^{*}+1}\right)=\mathcal{O}\left(T^{-2/3} \right)+\mathcal{O}\left(\phi_{0}^{2\kappa}\right). \tag{22}\]
Next, we delve into the sample complexity of Algorithm 1. For theoretical analysis, we assume that the estimation process for the truncated Q-function offers an approximation to the true function, with the error being associated with the magnitude of the reward function. Let \(\widetilde{Q}_{i}^{\pi_{\theta}}(r_{i};\cdot,\cdot)\in\mathbb{R}^{|\mathcal{S}_{\mathcal{N}_{i}^{\kappa}}|\times|\mathcal{A}_{\mathcal{N}_{i}^{\kappa}}|}\) be the truncated Q-function with the reward function \(r_{i}(\cdot,\cdot)\in\mathbb{R}^{|\mathcal{S}_{i}|\times|\mathcal{A}_{i}|}\) for agent \(i\in\mathcal{N}\).
**Assumption 4.5**.: _For every reward function \(r_{i}(\cdot,\cdot)\) and \(\epsilon_{0}>0\), the subroutine computes an approximation \(\widehat{Q}_{i}^{\pi_{\theta}}(r_{i};\cdot,\cdot)\) to the truncated Q-function \(\widetilde{Q}_{i}^{\pi_{\theta}}(r_{i};\cdot,\cdot)\) such that_
\[\big\|\widehat{Q}_{i}^{\pi_{\theta}}(r_{i};\cdot,\cdot)-\widetilde{Q}_{i}^{\pi_{\theta}}(r_{i};\cdot,\cdot)\big\|_{\infty}\leq\|r_{i}\|_{\infty}\epsilon_{0} \tag{23}\]
_with \(\mathcal{O}(1/(\epsilon_{0})^{2})\) samples, for every \(i\in\mathcal{N},\theta\in\Theta\)._
We comment that the sample complexity of the truncated Q-function evaluation described in Assumption 4.5 is not restrictive. It can be achieved with high probability by the TD-learning procedure outlined in Algorithm 2 when the agents have enough exploration [10; 43]. For brevity, we assume that (23) holds almost surely. The only difference in the probabilistic version would be the presence of an additional term for the failure probability, which does not affect the order of the sample complexity.
**Theorem 4.6** (Sample-based setting).: _Suppose that Assumptions 3.2, 3.3, 4.1-4.3, and 4.5 hold. For every \(\epsilon>0\) and \(\delta\in(0,1)\), let \(\left\{\left(\mu^{t},\theta^{t}\right)\right\}_{t=0}^{T}\) be the sequence generated by Algorithm 1 with \(T=\mathcal{O}\left(\epsilon^{-1.5}\right)\), \(\eta_{\mu}=\mathcal{O}\left(\epsilon^{-0.5}\right)\), \(\eta_{\theta}=1/\left(L_{\theta\theta}+4L_{\theta\mu}^{2}\eta_{\mu}\right)\), \(\epsilon_{0}=\mathcal{O}\left(\sqrt{\epsilon}\right)\), \(\delta_{0}=\delta/\left(2n(T+1)\right)\), batch size \(B=\mathcal{O}\left(\log(1/\delta_{0})\epsilon^{-2}\right)\), episode length \(H=\log(1/\epsilon)\), where \(L_{\theta\theta},L_{\theta\mu}\) are Lipschitz constants defined in Lemma F.5. Then, with probability \(1-\delta\), there exists \(t^{\star}\in\{0,1,\ldots,T-1\}\) such that_
\[\mathcal{E}\left(\theta^{t^{\star}},\mu^{t^{\star}+1}\right)=\mathcal{O}\left( \epsilon\right)+\mathcal{O}(\phi_{0}^{2\kappa}). \tag{24}\]
_The required number of samples is \(\widetilde{\mathcal{O}}\left(\epsilon^{-3.5}\right)\)._
### Technical discussions
Theorem 4.4 implies an \(\mathcal{O}\left(T^{-2/3}\right)\) iteration complexity of Algorithm 1, matching the fastest convergence rate for solving nonconcave-convex maximin problems in the literature [47]. The approximation error \(\mathcal{O}\left(\phi_{0}^{2\kappa}\right)\) decays exponentially with the communication radius \(\kappa\). Thus, as long as the underlying network is not densely connected, such as those in wireless communication [37] and autonomous driving [48], an approximate FOSP to (5) can be efficiently computed, while each agent \(i\) only needs to communicate with a small number of agents in its neighborhood.
In Theorem 4.4, we have chosen large step-sizes for the dual variable update to achieve the best convergence rate. This aggressive update ensures that the dual metric \(\mathcal{Y}(\theta^{t},\mu^{t+1})\) always remains within a small range and also provides a satisfactory ascent direction for the policy update. Then, the average primal metric \(1/T\cdot\sum_{t=0}^{T-1}\left[\mathcal{X}\left(\theta^{t},\mu^{t+1}\right) \right]^{2}\) is upper-bounded by exploiting a recursive relation between any two consecutive dual updates. Hence, the existence of a point \(\left(\theta^{t^{\star}},\mu^{t^{\star}+1}\right)\) that satisfies (22) is guaranteed. It is worth noting that the proof of Theorem 4.4 can be easily generalized to the scenario where \(T\) is unspecified, and the same convergence rate can still be achieved with adaptive step-sizes \(\eta_{\mu}^{t}=\mathcal{O}\left(t^{1/3}\right)\) and \(\eta_{\theta}^{t}=1/\left(L_{\theta\theta}+4L_{\theta\mu}^{2}\eta_{\mu}^{t}\right)\).
Theorem 4.6 states that, with high probability, Algorithm 1 has an \(\widetilde{\mathcal{O}}\left(\epsilon^{-3.5}\right)\) sample complexity for finding an \(\epsilon\)-FOSP of (5) with an approximation error \(\mathcal{O}(\phi_{0}^{2\kappa})\). Note that we absorb the logarithmic terms in the notation \(\widetilde{\mathcal{O}}(\cdot)\). The proof of Theorem 4.6 can be broken down into two parts. Firstly, we evaluate the approximation errors of the estimators used in Algorithm 1 in relation to the model parameters, as outlined in Proposition G.1. Then, we integrate these errors into the iteration complexity result established in Theorem 4.4 and optimize the selection of parameters.
## 5 Numerical experiment
In this section, we validate Algorithm 1 via numerical experiments, focusing on three key questions:
* How does Algorithm 1 perform with multiple agents, and does the policy gradient truncation effectively alleviate computational load?
* While Algorithm 1 is the first approach that provably solves the safe MARL problem with general utilities, how does it compare with existing methods for standard Safe MARL?
* What benefits does the use of general utilities offer over standard cumulative rewards?
To answer these questions, we performed multiple experiments in three environments. The objective functions are based on cumulative rewards, while constraint functions leverage general utilities to incentivize or dissuade agents from exploring the environments.
**Synthetic environment.** Analogous to [24, Section 5.1], where agents are linearly arranged as \(1-2-\cdots-n\). Each agent \(i\) has binary local state and action spaces, i.e., \(\mathcal{S}_{i}=\mathcal{A}_{i}=\{0,1\}\), and the local transition matrix \(\mathbb{P}_{i}\) depends solely on its action \(a_{i}\) and the state of agent \(i+1\). The reward functions are constructed such that the optimal unconstrained policy compels all agents to continuously choose action \(1\), irrespective of their states.
**Pistonball.** A physics-based game that emphasizes _cooperation and high-dimensional states_, as illustrated in Figure 1(a). Each piston represents an agent, where its local neighborhood includes adjacent pistons, and the goal is to collectively move the ball from right to left. The agent can move up, down, or remain still. We modify the original game [49] so that an agent can only observe the ball when it enters the local neighborhood, as well as the height of neighboring pistons.
**Wireless communication.** An access control problem following a similar setup as in [24; 50]. As illustrated in Figure 1(b), the agents try to transmit packets to common access points, and the transmission fails if the access point receives more than one packet simultaneously. As there are more agents than access points, _some agents need to learn to forego their benefits for the collective good_.
In addition to the objective, we incorporate two types of safety constraints characterized by general utilities that cannot be easily encapsulated by standard value functions based on cumulative rewards.
* **Entropy constraints** that stimulate exploration, formalized as \(\operatorname{Entropy}\left(\lambda_{i}^{\pi_{\theta}}\right)\geq c,\;\forall i\in\mathcal{N}\) (both constraint types are evaluated in the sketch after this list). The function \(\operatorname{Entropy}\left(\lambda_{i}^{\pi_{\theta}}\right)\) represents the local entropy, defined as \(-\sum_{s_{i}\in\mathcal{S}_{i}}d_{i}^{\pi_{\theta}}\left(s_{i}\right)\cdot\log\left(d_{i}^{\pi_{\theta}}\left(s_{i}\right)\right)\), where \(d_{i}^{\pi_{\theta}}(s_{i})=\left(1-\gamma\right)\sum_{a_{i}\in\mathcal{A}_{i}}\lambda_{i}^{\pi_{\theta}}(s_{i},a_{i})\) is the local state occupancy measure.
* **\(\boldsymbol{\ell_{2}}\)-constraints** that deter agents from learning overly randomized policies, formulated as \(\left\|\sum_{s_{i}\in\mathcal{S}_{i}}\lambda_{i}^{\pi_{\theta}}\right\|_{2}^ {2}\geq c,\;\forall i\in\mathcal{N}\). This constraint is beneficial in applications like autonomous driving and human-AI collaboration, where an agent's policy needs to be predictable for other agents.
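Both constraint utilities above are straightforward to evaluate from a local occupancy measure; the sketch below does so with invented numbers and an invented threshold \(c\), writing each constraint in the \(g_{i}(\lambda_{i}^{\pi_{\theta}})\geq 0\) form used in (5).

```python
import numpy as np

def entropy_constraint(lam_i, gamma, c):
    """Entropy(lambda_i) >= c, with the entropy computed on the local state occupancy
    d_i(s_i) = (1 - gamma) * sum_{a_i} lambda_i(s_i, a_i)."""
    d_i = (1 - gamma) * lam_i.sum(axis=1)
    return -np.sum(d_i * np.log(d_i + 1e-12)) - c    # g_i(lambda_i) >= 0 form

def l2_constraint(lam_i, c):
    """||sum_{s_i} lambda_i(s_i, .)||_2^2 >= c, discouraging overly randomized policies."""
    return np.sum(lam_i.sum(axis=0) ** 2) - c

gamma, c = 0.9, 0.5
lam_i = np.array([[0.4, 0.1], [0.3, 0.2]]) / (1 - gamma)   # hypothetical local occupancy measure
print(entropy_constraint(lam_i, gamma, c) >= 0, l2_constraint(lam_i, c) >= 0)
```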
In Figure 1, we demonstrate the performance of Algorithm 1 in the 20-agent Pistonball environment under entropy constraints. We observe that, while the truncation with \(\kappa=3\) converges in fewer iterations, truncation with \(\kappa=1\) also yields comparable performance. This underscores the efficiency of Algorithm 1 as employing a smaller communication radius can significantly reduce the computation.
Finally, we compare Algorithm 1 with three baselines based on the MAPPO-Lagrangian method by [31]. For a fair comparison, we consider two standard safe MARL problems, where both objectives and constraints are shaped by cumulative rewards (see Appendix H.4). The results demonstrate that our method consistently outperforms both the centralized and decentralized variants of MAPPO-Lagrangian. In Appendix H, we provide the comprehensive experimental results to fully answer the three questions raised at the beginning of this section.
## 6 Conclusion
In this work, we study the safe MARL with general utilities, with a focus on the setting of distributed training without global observability. To address the challenge of scalability and incorporating general utilities, we propose a primal-dual actor-critic method with shadow reward and \(\kappa\)-hop policy. Taking
Figure 1: (a,b) Environment illustration. (c,d) Performance of Algorithm 1 in Pistonball with 20 agents under entropy constraints.
advantage of the spatial correlation decay property of the transition dynamics, we show that the proposed method achieves an \(\mathcal{O}\left(T^{-2/3}\right)\) convergence rate to the FOSP of the problem in the exact setting and achieves an \(\widetilde{\mathcal{O}}\left(\epsilon^{-3.5}\right)\) sample complexity, with high probability, in the sample-based setting. Finally, the effectiveness of our model and approach is verified by numerical studies. For future research, it would be interesting to develop scalable safe MARL algorithms with adaptive communication of agents' state/action information and intelligent sampling of agents' trajectories.

# Production and decays of 146 GeV flavons into \(e\mu\) final state at the LHC

Niko Koivunen, Martti Raidal

Published: 2023-04-28 | Link: http://arxiv.org/abs/2305.00014v1
###### Abstract
The CMS experiment at CERN has reported a possible signal for a resonance at 146 GeV decaying into the \(e\mu\) final state which, presently, is the only experimental hint for lepton flavour violation in any low- and high-energy experiment. The Froggatt-Nielsen mechanism naturally predicts the existence of new scalars, the flavons, with flavour off-diagonal couplings. We study this framework in the context of the CMS result and find that the minimal, purely leptophilic model is too restricted to match the claimed signal. Thereafter we show how models with additional flavon couplings to quarks can explain the claimed signal while satisfying all the existing constraints on lepton flavour violation.
## 1 Introduction
The CMS Collaboration has recently reported a possible signal for a new resonance at 146 GeV decaying into charged lepton flavour violating (LFV) \(e\mu\) final state1[1; 2]. This is the first time when a hint for such a signal is reported, with no similar result from the ATLAS Collaboration [2], however. The claimed signal is based on \(\sqrt{s}=13\) TeV CMS data, with integrated luminosity of 138 fb\({}^{-1}\). The global (local) significance of the claimed signal is \(2.8\sigma\) (\(3.8\sigma\)) over the expected background. At the same time, the CMS does not find any excess in the corresponding Higgs boson decay channel \(h\to e\mu\)[2].
Footnote 1: The \(e\mu\) final state includes both the \(e^{+}\mu^{-}\) and \(e^{-}\mu^{+}\) final states.
The standard model (SM) of particle physics does not contain any new resonance nor have any sources of LFV. Therefore, if confirmed, this signal should be interpreted as a sign for new physics beyond the SM. The searches for LFV decays of \(\mu\) and \(\tau\) leptons, and for \(\mu\leftrightarrow e\) conversion have given null result so far [3], consistently with the SM. Similarly, no LFV decays of the SM Higgs boson have been observed. The LFV decays \(h\to e\mu\) have been constrained by the CMS and ATLAS Collaborations as BR(h \(\rightarrow\mathrm{e}\mu\)) \(<4.4\times 10^{-5}\)[1] and BR(h \(\rightarrow\mathrm{e}\mu\)) \(<6.2\times 10^{-5}\)[4], respectively, while the CMS searches for decays \(h\to e\tau\) and \(h\rightarrow\mu\tau\) place upper bounds on the corresponding branching ratios as BR(h \(\rightarrow\mathrm{e}\tau\)) \(<2.2\times 10^{-4}\) and BR(h \(\rightarrow\mu\tau\)) \(<1.4\times 10^{-4}\)[5]. Consequently, if the claimed CMS hint for such a new physics will be confirmed, this would be the first signal of LFV in any low- and high-energy experiment.
Assuming that the CMS hint for new resonance with LFV couplings actually corresponds to reality, as we do in this work, it is extremely challenging to reconcile the claimed signal with the stringent bounds on LFV. Indeed, in this case the scale of new physics alone does not suppress any LFV process. Therefore, to explain simultaneously the presence of the claimed CMS signal and the absence of \(\mu\to e\gamma\) and \(\mu\leftrightarrow e\) conversion in nuclei, one
must involve some additional mechanism to suppress the latter processes. For example, one could try to identify the new resonance with a spin-1 particle, like a \(Z^{\prime}\) with generation non-universal \(U(1)\) couplings to charged leptons. This approach seems difficult, considering that there are no other degrees of freedom to cancel the \(Z^{\prime}\) contribution to \(\mu\to e\gamma\), for example. However, in the case of complex scalars there exists a generic built-in cancellation between the scalar and pseudo-scalar contributions to loop-induced LFV processes. Any new physics scenario introducing LFV couplings at the \({\cal O}(100)\) GeV scale must involve such a mechanism to comply with observations. Therefore, we choose to work with scalars to address the new CMS hint for new physics.
The Froggatt-Nielsen (FN) mechanism [6] is one of the most well-known methods of explaining the observed hierarchy in masses of charged fermions. An integral part of the mechanism is the existence of _flavons_, scalars with flavour violating interactions to fermions, whose vacuum expectation values (VEVs) generate the observed charged fermion masses and mixing via higher-order operators. The physics motivation for the Froggatt-Nielsen mechanism goes beyond collider physics. Nevertheless, it is interesting to ask whether the existence of low-scale flavons is compatible with the present particle physics phenomenology.
In this work we study the possibility that the claimed CMS signal represents the very first experimental hint for the Froggatt-Nielsen flavon. First we shall concentrate on the _leptophilic_ flavon which only generates the flavour structure of the lepton sector and, therefore, does not couple to quarks. This allows us to avoid quite stringent additional constraints from the flavour violation in the quark sector, that would force the flavon VEV to higher scales and suppress the effects of the \(e\mu\)-coupling. In this set-up the SM Higgs boson and the flavon mix due to portal coupling in the scalar potential, producing two mass eigenstates, \(H_{1}\), identified as the 125 GeV Higgs boson and \(H_{2}\), identified as the 146 GeV particle. In the leptophilic Froggatt-Nielsen framework the state \(H_{2}\), with the dominant flavon component, obtains couplings to gauge bosons and quarks through the mixing with the SM Higgs, thus allowing it to be produced at the LHC through the same processes as the SM 125 GeV Higgs boson. Importantly, it is well established [7] that this scenario does not have any problem to satisfy all the LFV constraints as the pseudo-scalar contribution naturally cancels the scalar contribution to \(\mu\to e\gamma\).
We find that, for the maximally allowed Higgs boson-flavon mixing, the maximally allowed production cross-section for the 146 GeV resonance is an order of magnitude smaller than the one claimed by the CMS experiment. To increase the production cross section, we introduce additional flavour-diagonal couplings of the flavon to quarks. This, however, re-introduces the problem of LFV because, now, the tree level contribution to \(\mu\leftrightarrow e\) conversion is no longer suppressed by the mixing angle. The problem can be solved if the flavon couplings to quarks introduce additional cancellation between the tree level amplitudes to \(\mu\leftrightarrow e\) conversion. We demonstrate that this can be arranged simultaneously for the experiments based on \(Ti\) and \(Au\) nuclei.
We conclude that viable Froggatt-Nielsen scenarios with low-scale flavons can be constructed to address the hint for new 146 GeV resonance. However, these scenarios require non-trivial model building and some cancellation between the model parameters. More
experimental data is needed to clarify the status of the claimed CMS hint for LFV.
## 2 The leptophilic Froggatt-Nielsen model
The claimed CMS result [1; 2] hints for a new resonance at the electroweak scale. The Froggatt-Nielsen mechanism should preferably be taking place at least at that scale, in order not to suffer from heavily suppressed flavon couplings. This does not seem to be possible if we apply the Froggatt-Nielsen mechanism to both quark and lepton sectors simultaneously. This is due to the tree-level flavon mediated neutral meson mixing, such as the \(K^{0}\)-\(\bar{K}^{0}\) mixing. The flavon VEV would then have to be close to TeV scale to avoid those constraints [8], which is an order of magnitude higher than the scale indicated by the CMS experiment. We, therefore, apply the Froggatt-Nielsen mechanism to charged leptons only, allowing the flavon VEV to be at the electroweak scale, as demonstrated in [7]2.
Footnote 2: Each fermion sector could, in principle, have their own flavon field, with different VEVs.
The Froggatt-Nielsen mechanism has been previously studied in the context of flavour violation in the Higgs boson decays [7; 9; 10] and in the perspective of the LHC phenomenology [8; 11].
### The Froggatt-Nielsen mechanism and LFV
The Froggatt-Nielsen framework extends the SM with a flavour symmetry which in the simplest case is global or local \(U(1)\) or a \(Z_{N}\) symmetry. We will take the flavour symmetry to be global \(U(1)\). The framework introduces, as new fields, heavy fermion messengers and a complex scalar flavon. The SM fermions and the new particles are also charged under the flavour symmetry. The purpose of the flavour symmetry is to forbid the SM Yukawa couplings, with the possible exception for the top quark. The Yukawa couplings are, instead, generated from effective operators. The heavy fermion messengers connect the SM fermions to Higgs boson at tree-level. Once the heavy fermion messengers are integrated out, one is left with an effective operator, which in the case of charged leptons is
\[\mathcal{L}\supset c_{ij}\left(\frac{\Phi}{\Lambda}\right)^{n_{ij}}\bar{L}_{L, i}He_{R,j}+h.c., \tag{1}\]
where \(c_{ij}\) are dimensionless order-one couplings, \(\Lambda\) is the mass scale of the integrated out messenger fermions, \(L_{L,i}\) are the \(SU(2)_{L}\) lepton doublets and \(e_{R,i}\) are the \(SU(2)_{L}\) charged lepton singlets, and \(H\) is the SM Higgs doublet. \(\Phi\) stands for a complex scalar flavon that is decomposed into real and imaginary parts as \(\Phi=1/\sqrt{2}(\phi+iA)\). The conservation of flavour charge fixes the power \(n_{ij}\) as
\[n_{ij}=-\frac{1}{q_{\phi}}(q_{L,i}+q_{R,j}+q_{H}). \tag{2}\]
The power \(n_{ij}\) is the number of fermion messengers that were integrated out from the diagram that generated the corresponding effective term. The \(n_{ij}\) is therefore a positive integer.
We assume that the flavon couples only to leptons, so that the Froggatt-Nielsen mechanism generates only the charged lepton flavour structure. The operator (1) gives rise to the SM Yukawa couplings, as the flavon acquires a non-zero VEV. Expanding the operator (1) around the vacuum yields
\[\begin{split}\mathcal{L}&\supset c_{ij}\left(\frac{\Phi+\frac{v_{\phi}}{\sqrt{2}}}{\Lambda}\right)^{n_{ij}}\bar{L}_{L,i}(H+\langle H\rangle)e_{R,j}+h.c.\\ &=c_{ij}\left(\frac{v_{\phi}}{\sqrt{2}\Lambda}\right)^{n_{ij}}\bar{e}_{L,i}e_{R,j}\frac{1}{\sqrt{2}}(h+v)+n_{ij}c_{ij}\left(\frac{v_{\phi}}{\sqrt{2}\Lambda}\right)^{n_{ij}}\frac{v}{v_{\phi}}\bar{e}_{L,i}e_{R,j}\,\Phi+h.c.+\cdots,\end{split} \tag{3}\]
where we have kept only the renormalizable terms. The first term in the second line gives the SM Yukawa coupling
\[Y_{ij}\equiv c_{ij}\left(\frac{v_{\phi}}{\sqrt{2}\Lambda}\right)^{n_{ij}}. \tag{4}\]
The charged lepton mass hierarchy is explained by assuming \(\epsilon\equiv v_{\phi}/(\sqrt{2}\Lambda)<1\) and by assigning larger Froggatt-Nielsen charges to the lighter leptons compared to the heavier ones. The flavour charge assignment determines the hierarchy of the Yukawa couplings. This is in contrast to the Standard Model where the hierarchy is obtained by tuning the couplings themselves.
The flavon also has Yukawa-like interaction to leptons as can be seen from the second line of Eq. (3),
\[\widetilde{\kappa}_{ij}\equiv\frac{v}{v_{\phi}}n_{ij}Y_{ij}. \tag{5}\]
This coupling is not proportional to the Yukawa matrix. It is not diagonalized simultaneously with the charged lepton mass matrix and, therefore, the flavon couplings are flavour violating.
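As a rough illustration of Eqs. (2), (4) and (5), the following sketch computes the messenger powers \(n_{ij}\) and the resulting \(\epsilon\)-suppressed Yukawa and flavon-coupling textures, assuming the charge assignment of Table 1 below and \(\epsilon=0.1\); the order-one coefficients \(c_{ij}\) and the ratio \(v/v_{\phi}\) are set to unity purely for illustration, so only the pattern of suppressions, not the precise values, is meaningful here.

```python
# Minimal sketch of Eqs. (2), (4) and (5) with the Table 1 charge assignment.
import numpy as np

q_L = np.array([6, 4, 2])      # lepton doublet charges (e, mu, tau), Table 1
q_R = np.array([1, 0, 0])      # right-handed singlet charges, Table 1
q_H, q_phi = 0, -1
eps = 0.1                      # epsilon = v_phi / (sqrt(2) Lambda)
v_over_vphi = 1.0              # assumed v / v_phi, illustrative only

# Eq. (2): number of messenger insertions for each Yukawa entry
n = (-(q_L[:, None] + q_R[None, :] + q_H) / q_phi).astype(int)

# Eq. (4) with c_ij = 1: the epsilon suppression of each Yukawa entry
Y_order = eps ** n

# Eq. (5): flavon coupling texture before diagonalization
kappa_tilde = v_over_vphi * n * Y_order

print("n_ij =\n", n)                      # expect powers 7,6,6 / 5,4,4 / 3,2,2
print("Y_ij ~ eps^n_ij =\n", Y_order)
print("kappa_tilde_ij =\n", kappa_tilde)
```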
The physical couplings are obtained by diagonalizing the Higgs Yukawa coupling matrix
\[Y_{\text{diag}}=U_{L}YU_{R}^{\dagger}. \tag{6}\]
The equation (3) then becomes
\[\mathcal{L}=\frac{1}{\sqrt{2}}\bar{e}_{L}Y_{\text{diag}}e_{R}(h+v)+\bar{e}_{L} \kappa e_{R}\ \Phi+h.c., \tag{7}\]
where
\[\kappa_{ij}=\frac{v}{v_{\phi}}U_{L}(n\cdot Y)U_{R}^{\dagger},\quad\text{with} \quad(n\cdot Y)_{ij}=n_{ij}Y_{ij}. \tag{8}\]
The flavon coupling \(\kappa\) can be written in a form
\[\kappa_{ij}=\frac{v}{v_{\phi}}\sum_{k}\left[Y_{j}q_{L,k}(U_{L})_{ik}(U_{L}^{ \dagger})_{kj}+Y_{i}q_{R,k}(U_{R})_{ik}(U_{R}^{\dagger})_{kj}\right], \tag{9}\]
where \(Y_{i}\) is the lepton SM Yukawa coupling. From this expression one can deduce the maximal values for the flavour violating couplings. These are obtained when either the left-handed or the right-handed rotation matrix has "maximal" mixing, that is two elements
\(\sim 1/\sqrt{2}\) on the same row or column3. The maximum flavour violating flavon coupling to electron and muon therefore is
Footnote 3: This can happen if all the left- or the right-handed flavour charges are similar. If all the left-handed charges are identical, each mass matrix entry in a column has the same order of magnitude, and if all the right-handed charges are identical, each entry in each row has same order of magnitude. Even though identical charges for a handedness produces large entries in mixing matrix, the effect is lost due to unitarity. One has to break the degeneracy slightly, in order to preserve the contribution of large mixing in (9).
\[|\kappa^{\rm max}_{e\mu\ {\rm or}\ \mu e}|\sim\frac{v}{v_{\phi}}\frac{Y_{\mu}}{2 }\approx 3.0\times 10^{-4}\,\frac{v}{v_{\phi}}. \tag{10}\]
This coupling is boosted if the flavon VEV is small compared to that of the SM Higgs. We will use this maximally large \(e\mu\) coupling to estimate the maximum cross section obtainable in the leptophilic case, in addition to a numerical benchmark. We set \(\epsilon=0.1\) and use in both cases the flavour charges presented in Table 1. We have chosen the right-handed charges to be almost equal, breaking the degeneracy only in the first generation. This will produce large mixing in the right-handed rotation matrix. The left-handed charges are different and produce the required hierarchy in the eigenvalues.
In the analytical estimate for the largest possible \(e\mu\) cross section we assume that the only off-diagonal coupling of the flavon is \(\kappa_{\mu e}\) and that it is given by (10). This will maximize the branching ratio into \(e\mu\). The diagonal flavon couplings in this case are approximately \(\widetilde{\kappa}_{ee}\approx 6Y_{e}\), \(\widetilde{\kappa}_{\mu\mu}\approx 4Y_{\mu}\) and \(\widetilde{\kappa}_{\tau\tau}\approx 2Y_{\tau}\). We will use these values in our analytical estimate.
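As a quick numerical check of Eq. (10), one can evaluate the coefficient of \(v/v_{\phi}\) using the standard inputs \(m_{\mu}\simeq 105.7\) MeV and \(v\simeq 246.2\) GeV for the SM muon Yukawa coupling:

```python
import math

m_mu, v = 0.10566, 246.22            # GeV
Y_mu = math.sqrt(2) * m_mu / v       # SM muon Yukawa, ~6.1e-4
kappa_max_coeff = Y_mu / 2           # coefficient of (v / v_phi) in Eq. (10)
print(f"|kappa_max| ~ {kappa_max_coeff:.1e} * (v / v_phi)")   # ~3.0e-4
```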
We also provide the following numerical benchmark that produces the correct masses for electron, muon and tau:
\[Y=\left(\begin{array}{ccc}4.1\epsilon^{7}&0.63\epsilon^{6}&4.3\epsilon^{6} \\ 6.3\epsilon^{5}&-5.5927\epsilon^{4}&6.0\epsilon^{4}\\ -5.7\epsilon^{3}&-0.2\epsilon^{2}&0.8219\epsilon^{2}\end{array}\right),\quad \widetilde{\kappa}\approx\left(\begin{array}{ccc}1.9\times 10^{-5}&-2.7\times 10^{-6}&1.0 \times 10^{-5}\\ -2.1\times 10^{-4}&2.6\times 10^{-3}&-9.1\times 10^{-4}\\ 3.1\times 10^{-3}&3.5\times 10^{-3}&2.4\times 10^{-2}\end{array}\right). \tag{11}\]
The diagonalization matrices are given by
\[U_{L}\approx\left(\begin{array}{ccc}1&-1.4\times 10^{-3}&-2.3\times 10^{-4} \\ -1.4\times 10^{-3}&1&5.5\times 10^{-2}\\ 3.0\times 10^{-4}&5.5\times 10^{-2}&1\end{array}\right)\quad{\rm and}\quad{\rm U _{R}}\approx\left(\begin{array}{ccc}0.55&0.64&0.54\\ -0.62&0.74&-0.25\\ -0.56&-0.20&0.81\end{array}\right). \tag{12}\]
Note that the element \(\widetilde{\kappa}_{\mu e}\) is \(\sim 2/3\) of the maximum coupling and allows one to produce a large \({\rm BR}({\rm H_{2}}\to{\rm e\mu})\). The flavon VEV should be \(\gtrsim 100\) GeV, in order to have a large enough messenger scale, \(\Lambda\gtrsim 1\) TeV.
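As a sanity check of the benchmark texture, the singular values of \(Y\) in Eq. (11), multiplied by \(v/\sqrt{2}\), should approximately reproduce the charged lepton masses. A minimal sketch, with the entries copied from Eq. (11), \(\epsilon=0.1\), and \(v=246.22\) GeV assumed:

```python
import numpy as np

eps, v = 0.1, 246.22
Y = np.array([
    [ 4.1 * eps**7,    0.63 * eps**6,   4.3 * eps**6],
    [ 6.3 * eps**5,   -5.5927 * eps**4, 6.0 * eps**4],
    [-5.7 * eps**3,   -0.2 * eps**2,    0.8219 * eps**2],
])
masses = np.linalg.svd(Y, compute_uv=False) * v / np.sqrt(2)
print("m_tau, m_mu, m_e [GeV] ~", masses)   # expect roughly 1.78, 0.106, 0.0005
```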
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Particle & \(e_{L}^{c}\) & \(e_{R}\) & \(\mu_{L}^{c}\) & \(\mu_{R}\) & \(\tau_{L}^{c}\) & \(\tau_{R}\) & \(H\) & \(\phi\) \\ \hline Charge & 6 & 1 & 4 & 0 & 2 & 0 & 0 & -1 \\ \hline \end{tabular}
\end{table}
Table 1: The \(U(1)\) flavour charges used in the numerical and analytical estimates.
### Phenomenology at hadron colliders
The scalar potential we work with is of the Higgs-portal type,
\[V=-\mu_{h}^{2}(H^{\dagger}H)-\mu_{\phi}^{2}(\Phi^{*}\Phi)+\lambda_{h}(H^{\dagger}H)^{2}+\lambda_{\phi}(\Phi^{*}\Phi)^{2}+\lambda_{h\phi}(H^{\dagger}H)(\Phi^{*}\Phi)+{\mu^{\prime}_{\phi}}^{2}(\Phi^{2}+\Phi^{*2}). \tag{13}\]
The last term in the potential explicitly breaks the \(U(1)\) flavour symmetry and gives a mass, \(m_{A}^{2}={\mu^{\prime}_{\phi}}^{2}\), to the pseudo-scalar \(A\), that would otherwise be a massless Goldstone boson.
The real part of the flavon mixes with the Higgs boson, as both the SM Higgs and the flavon acquire non-zero VEVs,
\[\left(\begin{array}{c}h\\ \phi\end{array}\right)=\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}H_{1}\\ H_{2}\end{array}\right). \tag{14}\]
We identify \(H_{1}\) as the 125 GeV Higgs boson and \(H_{2}\) as the flavon-like scalar. We interpret \(H_{2}\) as the particle with mass 146 GeV that is responsible for the CMS signal. To constrain the mixing angle we use the Higgs signal strength value \(\mu=1.02^{+0.07}_{-0.06}\), obtained for 137 fb\({}^{-1}\) of \(\sqrt{s}=13\) TeV LHC data [12]. This translates into the constraint \(|\sin\theta|\lesssim 0.3\).
Taking into account the mixing with the Higgs boson, the lepton couplings become
\[\mathcal{L}=\frac{1}{\sqrt{2}}\bar{e}_{L}\Big{(}\cos\theta Y_{\rm diag}-\sin \theta\kappa\Big{)}e_{R}H_{1}+\frac{1}{\sqrt{2}}\bar{e}_{L}\Big{(}\sin\theta Y _{\rm diag}+\cos\theta\kappa\Big{)}e_{R}H_{2}+\frac{i}{\sqrt{2}}\bar{e}_{L} \kappa e_{R}A+{\rm h.c.}. \tag{15}\]
The decay rate of \(H_{2}\) to \(e\mu\)-final state at tree-level is
\[\Gamma(H_{2}\to e\mu)=\frac{m_{H_{2}}}{16\pi}\cos^{2}\theta\left(|\kappa_{e \mu}|^{2}+|\kappa_{\mu e}|^{2}\right). \tag{16}\]
The mixing with the flavon introduces flavour-violating couplings to the 125 GeV Higgs boson and also changes its decay rates to flavour-conserving final states,
\[\Gamma(H_{1}\to e_{i}e_{i})=\frac{m_{H_{1}}}{16\pi}\left|\cos \theta Y_{\rm diag}^{i}-\sin\theta\kappa_{ii}\right|^{2}, \tag{17}\] \[\Gamma(H_{1}\to e_{i}e_{j})=\frac{m_{H_{1}}}{16\pi}\sin^{2} \theta\left(|\kappa_{ij}|^{2}+|\kappa_{ji}|^{2}\right). \tag{18}\]
These will place significant constraints on the Higgs-flavon mixing angle and on the flavon VEV. We find that, for the numerical benchmark we use, the measurements of BR(h \(\rightarrow\mu\mu\)) [13] and BR(h \(\rightarrow\tau\tau\)) [14] are more constraining than the searches for the LFV decays \(h\to e\mu,e\tau,\mu\tau\) [1; 5]. The excluded parameter space in the \((v_{\phi},\sin\theta)\) plane is presented in Fig. 1. The \(h\rightarrow\mu\mu\) measurement imposes a more stringent bound than that of \(h\rightarrow\tau\tau\). This is due to the dependence of the flavon coupling on the flavour charges: the muon coupling is enhanced by larger flavour charges compared to that of the tau lepton.
While the flavon \(\phi\) does not couple to quarks, the flavon-like mass eigenstate \(H_{2}\) obtains couplings to quarks through mixing with the SM Higgs boson. The couplings of \(H_{2}\) to quarks and gauge bosons are proportional to the SM couplings but scaled with \(\sin\theta\). Therefore, \(H_{2}\) can be produced at the LHC analogously to the Higgs boson, mainly in gluon-gluon fusion and vector boson fusion, which were the production channels considered in the CMS analysis. The production cross section of \(H_{2}\) is suppressed by \(\sin^{2}\theta\) compared to a hypothetical SM Higgs-like scalar of mass \(m=146\) GeV,
\[\sigma(pp\to H_{2})=\sin^{2}\theta\ \sigma_{pp\to h}^{\rm SM}(m_{h}=146{\rm GeV}), \tag{19}\]
where we assume that the gluon-gluon and vector boson fusions are the only production channels. In the narrow width approximation the production cross section for the \(e\mu\) final state through \(H_{2}\) decay is given by
\[\sigma(pp\to H_{2}\to e\mu)=\sigma(pp\to H_{2})\times{\rm BR}({\rm H_{2}}\to{ \rm e}\mu), \tag{20}\]
where the branching ratio is
\[{\rm BR}({\rm H_{2}}\to{\rm e}\mu)=\frac{\Gamma({\rm H_{2}}\to{\rm e}\mu)}{ \Gamma_{\rm tot}({\rm H_{2}})}, \tag{21}\]
with
\[\Gamma_{\rm tot}(H_{2})=\sin^{2}\theta\big{(}\Gamma_{h\to{\rm all}}^{\rm SM}- \sum_{i}\Gamma_{h\to e_{i}e_{i}}^{\rm SM}\big{)}\Big{|}_{m_{h}=146{\rm GeV}}+ \sum_{i}\Gamma(H_{2}\to e_{i}e_{i})+\sum_{i\neq j}\Gamma(H_{2}\to e_{i}e_{j}). \tag{22}\]
Here we have assumed that \(m_{H_{2}}<2m_{A}\), to prevent \(H_{2}\) from decaying into the imaginary part of the flavon, which would reduce the relevant branching ratio (21).
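For orientation, the chain of Eqs. (16)-(22) can be evaluated numerically in the narrow width approximation. In the sketch below all numerical inputs are placeholders chosen by us for illustration only: the SM-like total and summed leptonic widths at 146 GeV are assumed values, not results of this work, and the diagonal flavon-induced lepton channels are omitted for brevity, so the printed number is not a prediction but only illustrates the structure of the calculation.

```python
import math

m_H2 = 146.0                        # GeV
sin_t = 0.3
cos_t = math.sqrt(1.0 - sin_t**2)

sigma_SM_146 = 38.0e3               # fb, assumed SM-like production cross section at 146 GeV
Gamma_SM_146 = 8.0e-3               # GeV, *assumed* SM-like total width at 146 GeV (placeholder)
Gamma_SM_ll  = 1.0e-3               # GeV, *assumed* summed SM h -> l+ l- widths (placeholder)

def gamma_emu(kappa_emu, kappa_mue=0.0):
    # Eq. (16): tree-level H2 -> e mu width
    return m_H2 / (16 * math.pi) * cos_t**2 * (abs(kappa_emu)**2 + abs(kappa_mue)**2)

def sigma_emu(kappa_emu, gamma_flavon_leptons):
    # Eqs. (19)-(22): sigma(pp -> H2) x BR(H2 -> e mu) in the narrow width approximation
    gamma_tot = sin_t**2 * (Gamma_SM_146 - Gamma_SM_ll) + gamma_flavon_leptons
    return sin_t**2 * sigma_SM_146 * gamma_emu(kappa_emu) / gamma_tot

kappa = 3.0e-4 * 246.22 / 200.0     # maximal coupling of Eq. (10) for an illustrative v_phi = 200 GeV
print(sigma_emu(kappa, gamma_flavon_leptons=gamma_emu(kappa)))
```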
We compute the \(e\mu\) production cross section for the numerical benchmark and for the analytical estimate assuming the maximally allowed \(e\mu\) coupling. The \(e\mu\) production cross section and branching ratios are presented in Fig. 2. The gluon-gluon fusion is the dominant \(H_{2}\) production channel,
Figure 1: _Left panel:_ The parameter space excluded by searches for the LFV decays of Higgs boson. The dark gray area is excluded by the \(h\to e\mu\) searches [1], the medium gray area is excluded by the \(h\to e\tau\) searches [5] and the light gray area by the \(h\to\mu\tau\) searches [5]. _Right panel:_ The parameter regions excluded by the measurements of \({\rm BR}({\rm h}\to\mu\mu)\) (blue area) and \({\rm BR}({\rm h}\to\tau\tau)\) (red area).
similar to the SM Higgs production. The production of \(H_{2}\) increases with the mixing angle, whereas \(\text{BR}(\text{H}_{2}\to\text{e}\mu)\) decreases as the \(H_{2}\) decays into \(WW\) and \(b\bar{b}\) final states become more significant. \(\text{BR}(\text{H}_{2}\to\text{e}\mu)\) also decreases as \(v_{\phi}\) grows, since the leptonic couplings become smaller and the other decay channels start to dominate. For relatively low values of \(v_{\phi}\), below 400 GeV, the \(\tau\bar{\tau}\) final state is the dominant decay channel. This decay channel is not suppressed by \(\sin\theta\), due to the direct flavon coupling to leptons, and, therefore, the total \(e\mu\) cross section grows with the mixing angle.
The combination of a large mixing angle and a small flavon VEV is excluded by the LHC data, as can be seen in Fig. 1. This means that the 146 GeV particle production cross section can maximally be \(\sim 0.04\) fb, over two orders of magnitude smaller than the cross section reported by the CMS experiment, 5.77 fb. Both scenarios are in addition constrained by the searches for LFV decays of charged leptons and for \(\mu\leftrightarrow e\) conversion in nuclei. These LFV processes depend on the imaginary part of the flavon, unlike the collider processes. This allows for freedom in the parameter space to avoid those LFV constraints. These constraints are discussed in more detail in Section 3.1. The parameter space compatible with these scenarios is presented in Fig. 4 and will be discussed shortly.
Figure 2: _Left panel:_ Cross-section \(\sigma(pp\to H_{2}\to e\mu)\) as a function of flavon VEV at 13 TeV LHC for different Higgs-flavon mixing angles. _Right panel:_\(H_{2}\to e\mu\) branching ratio as a function of flavon VEV. In both panels solid lines correspond to numerical benchmark and the dashed lines correspond to analytical estimate with maximal \(e\mu\) coupling. In both panels the red area is excluded by \(\text{BR}(\text{h}\to\tau\tau)\) and \(\text{BR}(\text{h}\to\mu\mu)\) measurements for \(\sin\theta=0.1\). For \(\sin\theta=0.2\) red and green areas are excluded, and for \(\sin\theta=0.3\) all colored areas are excluded.
## 3 Addition of diagonal quark couplings
The scenario based on the leptophilic version of the Froggatt-Nielsen mechanism is not able to produce a large enough \(pp\to H_{2}\to e\mu\) cross section, as found in the previous section. In the leptophilic case the flavon-like state, \(H_{2}\), couples to quarks and gauge bosons only due to the mixing with the SM Higgs, which suppresses its production. The \(\sqrt{s}=13\) TeV LHC production cross section of the 146 GeV scalar with the SM couplings is \(\sim 38\) pb, with the dominant part coming from gluon-gluon fusion. In the numerical benchmark of the previous section the branching ratio for \(H_{2}\to e\mu\) is maximally \(7.3\times 10^{-5}\). With this value, the production cross section of \(H_{2}\) would have to be \(\sim 80\) pb in order to reach the reported total cross section of 5.77 fb. The flavon would need to have significant quark couplings in order to reach the required production cross section for \(H_{2}\). We will now consider adding direct quark couplings to the flavon, thus departing from the leptophilic case in order to study the Froggatt-Nielsen scenario more broadly. We consider the addition of flavour-diagonal couplings of the flavon to quarks,
\[\mathcal{L}_{\text{extra}}=\sum_{i=u,d,s,c,b,t}\frac{c_{i}Y_{i}}{\sqrt{2}}\bar{q}_{L,i}q_{R,i}\Phi+\text{h.c.}, \tag{3.1}\]
where \(Y_{i}\) are the SM Yukawa couplings and \(c_{i}\) are free parameters.
We remain agnostic about the origin of these couplings. They might originate from the Froggatt-Nielsen messenger sector in some fashion, but they cannot arise from the usual Froggatt-Nielsen mechanism, analogous to (1), without flavour-violating couplings accompanying them. Also, the Froggatt-Nielsen mechanism would greatly limit the relative magnitude of these diagonal couplings4. Here we simply assume that the charged leptons acquire their masses from the Froggatt-Nielsen mechanism, due to the flavon \(\phi\), and the quarks obtain their masses in some other fashion, perhaps via another flavon or flavons. One way to justify the different treatment of quarks and leptons is the apparent disparity between the PMNS and CKM matrices, the former exhibiting order-one elements and the latter hierarchical elements. The lepton sector also shows more drastic differences in masses, neutrinos being at least six orders of magnitude lighter than the electron. Nevertheless, we assume that the mass generation of charged leptons and quarks are linked to each other in some fashion and, hence, the couplings in (3.1) arise.
Footnote 4: The application of the Froggatt-Nielsen mechanism to quarks with the same flavon is out of the question due to quark flavour constraints, as already stated above. The other possible option would be to apply the Froggatt-Nielsen mechanism to quarks by adding a second flavon field for them. In this case the leptonic flavon could have a low VEV, as in the previous section, but the quarky flavon would have to have its VEV at the TeV scale to avoid quark flavour constraints. The two flavons would then mix. Even if one assumes that the two flavons mix strongly, it does not help to boost the \(H_{2}\) production, as the quarky flavon couplings would be too suppressed by its large VEV.
Now that we have introduced direct couplings to quarks for the flavon, we set the mixing angle with the SM Higgs boson to zero as it is no longer required for \(H_{2}\) production at the LHC. From now on the mass eigenstate \(H_{2}\) is the pure flavon. As there is no mixing, the properties of the Higgs boson stay those of the SM and we are free from constraints presented in Fig. 1. One might consider the production of \(H_{2}\) directly from the light
quarks, instead of gluon-gluon fusion through the top-loop. Relatively large couplings to up and down, \(c_{u}Y_{u}\sim c_{d}Y_{d}\sim 5\times 10^{-2}\), would yield the desired \(H_{2}\) production cross section \(\sim 80\) pb. The large up and down couplings will also boost those \(H_{2}\) decay channels that suppress the \(e\mu\) branching ratio, effectively killing this signature. The light quarks cannot, therefore, be used for producing the flavon.
The \(H_{2}\) production cross section needs to be increased by at least a factor of 2, compared to a 146 GeV SM Higgs-like scalar, in order to reach the cross section indicated by the CMS experiment. This can be accomplished with a flavon coupling to the top quark of magnitude \(\sim\sqrt{2}\). This coupling alone would ensure a sufficient production cross section for \(H_{2}\). The flavon cannot decay directly into top-antitop pairs, thus avoiding the suppression of the \(H_{2}\to e\mu\) branching ratio. The large top coupling, however, makes the \(H_{2}\to gg\) decay rate significant for large values of \(v_{\phi}\). Couplings other than to the top quark are not required for efficient production of \(H_{2}\). However, they are required to avoid flavour constraints that become more restrictive due to the additional quark-flavon couplings.
### Constraints from the LFV searches
The constraints from LFV decays of leptons are the most stringent in the \((e,\mu)\) sector, unlike in the case of the SM Higgs for which they are the weakest (as seen in Fig. 1). The searches for the LFV muon decays \(\mu\to e\gamma\)[15], \(\mu\to eee\)[16] and various tau decay channels impose stringent constraints on the LFV couplings. In addition, \(\mu\leftrightarrow e\) conversion in various nuclei, such as gold [17] and titanium [18], imposes relevant constraints. In our analysis of LFV decays and \(\mu\leftrightarrow e\) conversion we follow references [7; 19; 20]. We find that \(\mu\to e\gamma\) and \(\mu\leftrightarrow e\) conversion in gold impose the most stringent constraints on the scenarios we study.
In this scenario, \(\mu\to e\gamma\) acquires significant and comparable contributions both at 1-loop and 2-loop level. In the leptophilic model with Higgs-flavon mixing the contributing diagrams along with the associated formulae are presented in Ref. [7]. Here we shall concentrate on the case with additional flavon-quark couplings. The relevant diagrams in the case of additional quark couplings are presented in the left and in the middle panels of Fig. 3. The 2-loop Barr-Zee diagram [21] contains large top coupling that compensates
Figure 3: _Left and middle panels:_ The 1- and 2-loop contributions to \(\mu\to e\gamma\) in the case of additional quark couplings. _Right panel:_ The tree-level contribution to \(\mu\leftrightarrow e\) conversion in nuclei in the case of additional quark couplings.
for the additional loop suppression. Both 1- and 2-loop diagrams are mediated by the real and imaginary parts of the flavon. Their contributions come with opposite signs, yielding cancellations in certain regions of the parameter space and allowing one to bypass the stringent \(\mu\to e\gamma\) constraint, despite the large flavour-violating coupling and relatively low mediator masses.
The \(\mu\to e\gamma\) decay rate is given by
\[\Gamma(\mu\to e\gamma)=\frac{m_{\mu}^{3}}{4\pi}(|A_{L}|^{2}+|A_{R}|^{2}), \tag{3.2}\]
where
\[A_{L,R}=A_{L,R}^{\rm 1-loop}+A_{L,R}^{\rm 2-loop}. \tag{3.3}\]
In the case of additional quark couplings the 1- and 2-loop contributions are
\[\begin{split} A_{L}^{\rm 1-loop}&=\sum_{i=e,\mu, \tau}\left(-\frac{ie}{64\pi}\right)\kappa_{ie}^{*}\int_{0}^{1}\frac{\kappa_{i \mu}y(y-x)m_{\mu}+\kappa_{\mu i}^{*}(y-1)m_{i}}{y(y-x)m_{\mu}^{2}+(1-y)m_{i}^ {2}+m_{H_{2}}^{2}y},\\ &+\sum_{i=e,\mu,\tau}\left(\frac{ie}{64\pi}\right)\kappa_{ie}^{* }\int_{0}^{1}\frac{-\kappa_{i\mu}y(y-x)m_{\mu}+\kappa_{\mu i}^{*}(y-1)m_{i}}{y (y-x)m_{\mu}^{2}+(1-y)m_{i}^{2}+m_{A}^{2}y},\end{split} \tag{3.4}\]
and
\[A_{L}^{\rm 2-loop}=-i\frac{e\alpha G_{F}v}{12\pi^{3}}\kappa_{\mu e}^{*}c_{t} \Big{[}f(z_{t\phi})-g(z_{tA})\Big{]}, \tag{3.5}\]
Figure 4: _Left panel_: Regions of the \((v_{\phi},\,m_{A})\) parameter space excluded by searches for \(\mu\to e\gamma\) in the leptophilic model for different values of the mixing angle. _Right panel_: The LFV constraints in the case of additional quark-flavon couplings for \(c_{t}=1.9\) and three different values of the charm coupling, \(c_{c}=-1.9,\,0\) and \(1\). The allowed white wedge shaped area continues until the corner of the orange area, but is too fine to be visible by eye.
with
\[\begin{split} f(z)&=\frac{z}{2}\int_{0}^{1}\frac{1-2x(1- x)}{x(1-x)-z}\log\left[\frac{x(1-x)}{z}\right],\\ g(z)&=\frac{z}{2}\int_{0}^{1}\frac{1}{x(1-x)-z}\log \left[\frac{x(1-x)}{z}\right].\end{split} \tag{3.6}\]
The arguments of these functions are defined by \(z_{t\phi}=m_{t}^{2}/m_{H_{2}}^{2}\) and \(z_{tA}=m_{t}^{2}/m_{A}^{2}\). The \(A_{R}\)-terms are obtained from \(A_{L}\) by replacing \(\kappa_{ij}\) with \(\kappa_{ji}^{*}\). Note the sign differences between \(H_{2}\) and \(A\) contributions in Eqs. (3.4) and (3.5). As the pseudo-scalar \(A\) does not enter the relevant collider processes, its mass is an independent parameter. This fact allows for the cancellations between different amplitudes and, thus, suppression of \(\mu\to e\gamma\) decays.
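The loop functions in Eq. (3.6) are smooth for the mass ranges considered here and can be evaluated by direct numerical integration. A minimal sketch, assuming \(m_{t}=172.5\) GeV and illustrative scalar masses (the function names and input values are ours, for illustration only):

```python
import numpy as np
from scipy import integrate

def f(z):
    # First loop function of Eq. (3.6)
    integrand = lambda x: 0.5 * z * (1 - 2*x*(1-x)) / (x*(1-x) - z) * np.log(x*(1-x)/z)
    return integrate.quad(integrand, 0.0, 1.0)[0]

def g(z):
    # Second loop function of Eq. (3.6)
    integrand = lambda x: 0.5 * z / (x*(1-x) - z) * np.log(x*(1-x)/z)
    return integrate.quad(integrand, 0.0, 1.0)[0]

m_t, m_H2, m_A = 172.5, 146.0, 200.0
print(f(m_t**2 / m_H2**2), g(m_t**2 / m_A**2))
```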
The \(\mu\leftrightarrow e\)-conversion rate is given by
\[\begin{split}\Gamma(\mu\leftrightarrow e)&=\left| \frac{iD}{2m_{\mu}}A_{L}+\widetilde{g}_{LS}^{(p)}S^{(p)}+\widetilde{g}_{LS}^{( n)}S^{(n)}+\widetilde{g}_{LV}^{(p)}V^{(p)}\right|^{2}\\ &+\left|\frac{iD}{2m_{\mu}}A_{R}+\widetilde{g}_{RS}^{(p)}S^{(p)} +\widetilde{g}_{RS}^{(n)}S^{(n)}+\widetilde{g}_{RV}^{(p)}V^{(p)}\right|^{2}. \end{split} \tag{3.7}\]
The \(\mu\leftrightarrow e\) conversion receives contributions generated by the 1-loop and 2-loop diagrams in Fig. 3, with the coefficients \(A_{L}\) and \(A_{R}\) given in Eqs. (3.3), (3.4) and (3.5). The \(\mu\leftrightarrow e\) conversion also receives a possibly dominant tree-level contribution, shown in the right diagram of Fig. 3. The tree-level process does not receive a contribution from the pseudo-scalar \(A\), as its contribution vanishes for coherent scattering [20]. Finally, there is a sub-leading vector contribution whose expression is given in Ref. [19].
The contributions from tree-level scalar interactions (Fig. 3) with the proton and the neutron are:
\[\widetilde{g}_{LS}^{(p)}=-\frac{\sqrt{2}}{m_{H_{2}}^{2}}\frac{m_{p}}{v}\kappa _{e\mu}\sum_{i}c_{i}f^{(i,p)},\quad\widetilde{g}_{LS}^{(n)}=-\frac{\sqrt{2}}{ m_{H_{2}}^{2}}\frac{m_{n}}{v}\kappa_{e\mu}\sum_{i}c_{i}f^{(i,n)}. \tag{3.8}\]
The summation is over all the quarks: \(i=u,d,s,c,b,t\), and \(m_{p}\) and \(m_{n}\) are the proton and neutron masses respectively. The \(\widetilde{g}_{RS}^{(p)}\) and \(\widetilde{g}_{RS}^{(n)}\) are obtained from \(\widetilde{g}_{LS}^{(p)}\) and \(\widetilde{g}_{LS}^{(n)}\) by replacing \(\kappa_{e\mu}\) with \(\kappa_{\mu e}^{*}\). The overlap integrals for gold5 are \(D=0.189\), \(S^{(p)}=0.0614\), \(S^{(n)}=0.0918\) and \(V^{(p)}=0.0974\), in units of \(m_{\mu}^{5/2}\). The nucleon matrix elements for light quarks are
Footnote 5: The overlap integrals for other nuclei can be found in [20].
\[f^{(u,p)}=f^{(d,n)}=0.024,\quad f^{(d,p)}=f^{(u,n)}=0.033,\quad f^{(s,p)}=f^{( s,n)}=0.25, \tag{3.9}\]
and
\[f^{(c,p)}=f^{(c,n)}=f^{(b,p)}=f^{(b,n)}=f^{(t,p)}=f^{(t,n)}=0.051, \tag{3.10}\]
for the heavy quarks [19].
In the leptophilic model the quark coupling to the flavon is suppressed by \(\sin\theta\) and, therefore, the tree-level contribution to \(\mu\leftrightarrow e\) conversion is not competitive with the dipole contribution \(A_{L,R}\). In the leptophilic case the \(\mu\to e\gamma\) decay provides the most stringent constraint on the model parameters. The allowed parameter space in the leptophilic model is
presented in the left panel of Fig. 4. The effect of cancellation of scalar \(H_{2}\) and pseudo-scalar \(A\) is clearly visible: large flavon couplings (small \(v_{\phi}\)) are allowed for \(m_{A}\sim m_{H_{2}}\), where the cancellation is most effective.
For the case of additional quark couplings the situation is more involved. The process \(\mu\to e\gamma\) enjoys the cancellation between the real and imaginary parts of the flavon, just like in the leptophilic case, and the \(\mu\to e\gamma\) bound can be avoided at small \(v_{\phi}\), even for a large flavon-top coupling. The \(\mu\leftrightarrow e\) conversion, however, excludes the small flavon VEVs in this case. This can be alleviated by adding a flavon coupling to a quark (or quarks) other than the top, with an opposite sign, which can cancel the large top contribution to \(\mu\leftrightarrow e\) conversion. In the light of Eqs. (3.8) and (3.10), the cancellation between the tree-level contributions of two heavy quarks takes place when the coupling parameters \(c_{i}\) have the same absolute value but come with opposite signs. The cancellation of the top contribution is also possible with other quarks. For the flavour constraints the choice between the \(b\) and \(c\) quark is not relevant, but for the later collider results it is. The bottom SM Yukawa coupling is larger than that of the charm, so the addition of the bottom would dilute the \(H_{2}\to e\mu\) branching ratio more. We will, therefore, choose the charm to cancel the top contribution.
We study the flavon production through gluon-gluon fusion at the LHC and set the top coupling to \(c_{t}=1.9\). The results are shown in the right panel of Fig. 4: for opposite-sign top and charm couplings the tree-level contributions cancel, leaving \(\mu\to e\gamma\) as the dominant process. In this case low values of \(v_{\phi}\sim 100\) GeV are allowed. With the top coupling only, the \(\mu\leftrightarrow e\) conversion constrains the low flavon VEVs, excluding VEVs below \(\sim 260\) GeV. For a charm coupling equal to the SM Higgs charm coupling, flavon VEVs below \(\sim 410\) GeV are excluded.
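The top-charm cancellation described above follows directly from Eq. (3.8) together with the equality of the heavy-quark matrix elements in Eq. (3.10). A minimal numerical sketch (the nucleon masses and the value of \(\kappa_{e\mu}\) are illustrative inputs chosen by us):

```python
import numpy as np

# Nucleon matrix elements from Eqs. (3.9)-(3.10)
f_p = {"u": 0.024, "d": 0.033, "s": 0.25, "c": 0.051, "b": 0.051, "t": 0.051}
f_n = {"u": 0.033, "d": 0.024, "s": 0.25, "c": 0.051, "b": 0.051, "t": 0.051}
m_p, m_n, v, m_H2 = 0.938, 0.940, 246.22, 146.0   # GeV

def g_LS(kappa_emu, c):
    """Eq. (3.8): (proton, neutron) tree-level scalar couplings for flavon-quark couplings c[q]."""
    pref = -np.sqrt(2) / m_H2**2 * kappa_emu / v
    return (pref * m_p * sum(c[q] * f_p[q] for q in c),
            pref * m_n * sum(c[q] * f_n[q] for q in c))

# Top coupling alone vs. top plus an opposite-sign charm coupling
print(g_LS(3.0e-4, {"t": 1.9}))
print(g_LS(3.0e-4, {"t": 1.9, "c": -1.9}))   # heavy-quark f's are equal -> exact cancellation
```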
### Collider results
We consider here the flavon production through gluon-gluon fusion with a large coupling to the top quark. We also include a flavon coupling to the charm in order to avoid the flavour bounds, but it gives a negligible contribution to flavon production. As there is no mixing with the Higgs boson, the relevant \(H_{2}\) production channels are the dominant gluon-gluon fusion and the sub-dominant \(tt\)-fusion. The resulting \(e\mu\) production cross section is presented in Fig. 5. One can see that a large flavon coupling to the top, \(c_{t}Y_{t}\gtrsim 1.7\), can reproduce the reported CMS cross section for the \(e\mu\) final state for \(v_{\phi}\gtrsim 100\) GeV.
Other possible collider constraints on the 146 GeV flavon are avoided because there is no mixing with the SM Higgs boson and, therefore, \(H_{2}\) does not couple directly to gauge bosons. Thus the searches for the \(ZZ\) final state from a second Higgs boson at low mass [22] do not impose constraints. Most other second-scalar searches by CMS and ATLAS impose limits on masses above 146 GeV. The ATLAS searches for \(WW\) and \(ZZ\) final states place a limit on masses above 300 GeV [23], whereas the CMS search for the \(WW\) final state sets a limit above 200 GeV [24]. Finally, the \(\gamma\gamma\) final state searches restrict masses above 200 GeV (ATLAS) [25] and 500 GeV (CMS) [26]. As a result, the Froggatt-Nielsen scenario with additional quark couplings is not constrained by other collider searches, even though the required top coupling is large.
## 4 Conclusions
In this work we studied the phenomenology of low-energy Froggatt-Nielsen mechanism with the aim to address the recent CMS hint for a new 146 GeV resonance with LFV couplings. We first studied the purely leptophilic model which is known to be consistent with all stringent constraints on LFV. We found that the leptophilic Froggatt-Nielsen model cannot reach the production cross section \(\sigma=5.77\) fb which is indicated by the CMS result. The simplest realizations of the Froggatt-Nielsen mechanism are, thus, incompatible with the claimed CMS signal.
We modified the purely leptophilic model by adding diagonal flavon couplings to quarks. A large flavon coupling to the top quark allows for sufficient flavon production, and the experimentally hinted cross section can be obtained. Couplings to lighter quarks are also required in order to avoid the bounds arising from the tree-level mediated \(\mu\leftrightarrow e\) conversion. The latter can be avoided when the lighter quark couplings to the flavon have the opposite sign compared to the top coupling.
We conclude that non-trivial model building and some cancellation between the model parameters are needed to identify the CMS hint for a new resonance with the low-scale Froggatt-Nielsen flavon. More experimental data is needed to clarify the origin and properties of the claimed CMS signal.
**Acknowledgements.** This work was supported by the Estonian Research Council grants PRG803 and PRG1677.
Figure 5: _Left panel:_ Cross-section \(\sigma(pp\to H_{2}\to e\mu)\) as a function of the flavon VEV at the \(\sqrt{s}=13\) TeV LHC for different quark couplings as indicated in the figure. The gray shaded region is excluded by \(\mu\leftrightarrow e\) conversion for all values of \(m_{A}\). The horizontal line corresponds to \(5.77\) fb. _Right panel:_ The relevant branching ratios for the same case. The solid lines correspond to \(c_{t}=-c_{c}=1.9\), dashed line to \(1.7\) and dot dashed line to \(1.5\).
**Note added.** The CMS hint for the 146 GeV resonance as a new scalar particle has also been studied in Ref. [27] in the context of two Higgs doublet models.
|
2306.06135 | Safety and Fairness for Content Moderation in Generative Models | With significant advances in generative AI, new technologies are rapidly
being deployed with generative components. Generative models are typically
trained on large datasets, resulting in model behaviors that can mimic the
worst of the content in the training data. Responsible deployment of generative
technologies requires content moderation strategies, such as safety input and
output filters. Here, we provide a theoretical framework for conceptualizing
responsible content moderation of text-to-image generative technologies,
including a demonstration of how to empirically measure the constructs we
enumerate. We define and distinguish the concepts of safety, fairness, and
metric equity, and enumerate example harms that can come in each domain. We
then provide a demonstration of how the defined harms can be quantified. We
conclude with a summary of how the style of harms quantification we demonstrate
enables data-driven content moderation decisions. | Susan Hao, Piyush Kumar, Sarah Laszlo, Shivani Poddar, Bhaktipriya Radharapu, Renee Shelby | 2023-06-09T01:37:32Z | http://arxiv.org/abs/2306.06135v1 | # Safety and Fairness for Content Moderation in Generative Models
###### Abstract
With significant advances in generative AI, new technologies are rapidly being deployed with generative components. Generative models are typically trained on large datasets, resulting in model behaviors that can mimic the worst of the content in the training data. Responsible deployment of generative technologies requires content moderation strategies, such as safety input and output filters. Here, we provide a theoretical framework for conceptualizing responsible content moderation of text-to-image generative technologies, including a demonstration of how to empirically measure the constructs we enumerate. We define and distinguish the concepts of safety, fairness, and metric equity, and enumerate example harms that can come in each domain. We then provide a demonstration of how the defined harms can be quantified. We conclude with a summary of how the style of harms quantification we demonstrate enables data-driven content moderation decisions.
## 1 Introduction
Generative AI systems allow users to create new content (e.g., text, image, audio, code) in response to an input, often relying on large-scale training datasets. Such datasets may contain social stereotypes, inequalities, and hierarchies [10, 65, 39], which generative models can replicate in downstream uses. Users may also exploit generative AI systems for disinformation, non-consensual synthetic sexual imagery, and other types of malicious content [37, 35, 28, 41]. Responsible development of generative AI systems thus requires content moderation among other techniques to deploy systems that minimize harmful content. In this paper, we provide a framework for conceptualizing responsible content moderation in generative AI from a harm-reduction standpoint - defining safety, fairness, and metric equity.
Development of content filters is a key mode of moderating generative AI content. Deciding what content to filter is a normative governance decision; in practice, generative AI content filters may index on illegal content (e.g., child sexual abuse material, copyright violations) [58], rather than a broader range of harmful content, including representational [8] or cultural harms [45] and violence or gore [50]. When harmful content is filtered, algorithmic content moderation may disproportionately penalize content concerning socially marginalized groups [4, 54, 9], as moderation systems also learn and replicate demeaning associations from their training data [66, 56, 63]. These limitations underscore the urgency of centering the experiences of oft-marginalized groups in defining and evaluating safety parameters and assessing the fairness of content moderation algorithms deployed on generative AI systems.
Assessing algorithmic fairness in generative AI systems is challenging. Much scholarly attention focuses on algorithmic fairness within classification models (e.g., [7, 13, 29]), with statistical notions of fairness reliant on confusion matrices of model performance [62]. Here, algorithmic fairness is often divided into two parts, defining fairness based on: (1) predicted outcomes across groups (e.g., equality of opportunity) [29]; or (2) the consistency of predicted and actual outcomes when sub-groups change (e.g., counterfactual fairness) [21]. These definitions fueled advances in classifier fairness (e.g., [3, 7]), but are not directly applicable to generative AI systems, as generative models do not have one right outcome nor can "accuracy" be assessed across groups or individuals within generative models. Importantly, there is rarely one definitive "correct" response to which any given input to a generative system can be measured against to quantify accuracy, as many input concepts are socially situated and contextual. The fact that many prompts provided to a generative system can properly produce an enormous range of results that respect the intent of the user, rather than a single answer (as in conventional ML), is the main source of challenge in defining generative model safety and fairness, and a characteristic distinguishing generative from finite classification contexts.
We intervene in these challenges by offering a theoretical framework for assessing safety and fairness in generative AI from the perspective of harm-reduction content moderation, and provide a quantitative example of how to measure its
constructs (Section 3). In particular, we describe a method for adversarially challenging a text-to-image (T2I) generative system and machine annotating its outputs for a number of harms at scale. Next, we demonstrate an approach to _measure_ hateful, pornographic, and violent content in generated imagery and _assess_ the relationships between the text prompts and the resulting harmful generated imagery (Section 4). We additionally analyze how each metric interacts with the gender presentation of individuals in the generated imagery, constituting one of many potential sensitive attributes.
The proposed metrics provide a means to measure harmful and biased content (safety, fairness) and how to "measure the measurements" to assess their performance across defined sociodemographic dimensions (metric equity). The metrics we describe inform efforts to evaluate models and foster greater alignment between AI systems and defined governance goals (Section 5). This research contributes to Responsible AI and content moderation scholarship, offering:
* A tractable framework for proactive definition and measurement of safety, fairness, and equity in generative AI systems.
* A harms taxonomy for safety and fairness in generative AI models.
* A method for empirical measurement of the harms specified in the taxonomy, including "measurement of measurements", for bias.
## 2 Related Work
This research engages with and extends research at the intersections of generative AI and content moderation.
**Generative AI**: Generative AI has evolved rapidly for use cases such as text and image generation through techniques such as generative adversarial networks (GANs) [24], variational auto-encoders [51, 38, 32], and transformers [61, 18]. Transformer-based models have especially shown great promise in a variety of generative tasks, including image [70, 31], text [44], and code generation [43], among others. There is a significant body of work analyzing aspects of user facing harms and biases that emerge from these model applications [1, 11, 42, 47, 49, 55, 69]. However, none of these independently provide an exhaustive framework of safety or fairness across generative models. To that end, there is an opportunity to define and develop such a taxonomy for this space.
**AI safety and content moderation**: Content moderation falls under the umbrella of AI safety: a normative, governance approach to responsibly develop and deploy ML systems [25, 26], including a focus on developing policies to outline desired characteristics of ML systems [22] and techniques to foster policy alignment, such as reducing harmful content [23]. An equity-oriented moderation system is attentive to how harms from algorithmic systems are sociotechnical - that is, emerging through the interplay of technical system features and extant societal power dynamics [57]. Scholarship identifies a wide range of potential harms in generative AI systems, including so-called _algorithmic harms_ that are more directly related to the functioning of the system and often implicate fairness through its training data (e.g., representational, allocative, and quality-of-service harms) [65, 67, 8], and _contextual harms_, through which generative system affordances in a productionized environment facilitate harm in a particular social context (e.g., peer-to-peer abuse, maliciously generated content, or information harms) [68]. Intervening in how generative AI systems create harmful content is required to develop technologies safer for oft-marginalized communities and meet governance needs for use cases in which potential harms may differ [57].
**Generative AI content moderation:** Conceptually, there are three key types of content moderation that can be applied to generative models to meet safety needs and improve fairness: (1) training data mitigations, (2) in-model controls, and (3) input and output filters. _Training data mitigations_ pertain to filtering or augmenting a generative system's training data to reduce its capability to cause harm. Ensuring a T2I model's training set does not include certain types of material (e.g., sexually explicit material) may substantially limit the model's ability to produce it. _In-model controls_ are techniques altering a model's architecture to influence its behavior. For generative AI, applying reinforcement learning with human feedback (RLHF) [14] to tune a model's weights subsequent to supervised training is a common in-model control. Lastly, _input_ and _output filters_ are additional, conventional ML systems that analyze whether the input to- or output of- a generative model is potentially harmful. Inputs deemed harmful by filter systems are not sent forward for generation; outputs deemed harmful are not surfaced to a user. A T2I input filter, for instance, might analyze if a prompt includes racial slurs; and if so, not generate any imagery. Similarly, a T2I output filter might analyze whether a generated image contains sexually explicit material; and, if so, not surface that image to the user.
Training data mitigations and in-model controls may be cumbersome, as they may require acquisition of new data, expensive filtering of existing data, or time-consuming model retraining. In contrast, input and output filters are relatively agile, and can be implemented with more immediate results. For this reason, we focus on input/output filters, and leave discussion of training data mitigations and in-model controls for future work.
There is limited research on safety filters in open-access
generative AI systems. Solaiman [58] positions safety controls and guardrails as a necessary component of a safe system release. Similarly, Rando et al. [50] adversarially tested the Stable Diffusion safety filter, articulating improved standards for safety filters, including transparent documentation and attention to a wider range of AI harms, including violence and gore. Hacker et al. [28], situates harms from generative AI systems in the context of the Digital Services Act, through which content moderation would play a central role. This work emphasizes the characteristics of safety filters, including what kinds of harmful content is filtered and to what extent filters and how they function are released publicly. To our knowledge, no work has examined the fairness aspects of generative AI safety and content filters. We extend this work to provide a tractable content moderation framework for safety and fairness in generative AI systems.
## 3 Safety and Fairness in Generative AI
In this section, we propose definitions of safety, fairness, and metric equity for generative AI.
### Safety of a generative AI system
We define _safety_ for generative AI as reducing potentially harmful outputs. Following Hernandez-Orallo et al. [30], the safety of a generative AI system encompasses harmful content that is accidentally generated (e.g., harmful content generated from a neutral prompt) and intentionally generated (e.g., harmful content from a malicious prompt or prompt purposefully violating safety rules). Inclusion of the latter necessitates understanding of -- and adversarial evaluation into -- potential forms of misuse, such as use of so-called "prompt engineering" to circumvent safety rules.
In generative AI systems, the scope of relevant safety concerns is dependent on the model affordances (i.e., what is being generated) and context of use (i.e., what type of content is appropriate for the use case), and may be further defined by regulation, standards, or organizational policies. As previously discussed, harms from algorithmic systems disproportionately impact communities already facing social marginalization [6], such as demeaning representations historically used to justify social hierarchies [48]. Thus, to reduce the risk of generative AI systems scaling inequalities, safety should be considered across a broad range of algorithmic and contextual harms (see: [57]) and defined in ways attentive to social power dynamics.
For explanatory purposes in this paper, we specify three safety harms for our illustrative T2I system requiring content moderation:
* Sexually explicit content: Generated content contains explicit or graphic sexual acts, realistic sex toys or implements, and/or sexual body parts.
* Graphic violence and gore: Generated content contains extreme and realistic acts of violence, blood, body parts, or viscera towards people or animals.
* Hateful content: Generated content expresses, incites, or promotes hate, violence, or serious harm based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
**Safety definition:** The safety of a generative model with respect to defined safety harms for content moderation is measured as the percent of unsafe model outputs that can be experienced through interaction with the model within and across harm categories. We define the _safe rate_ as:
\[1-\frac{\text{\# of harmful images}}{\text{\# of total images generated}}\]
in response to a defined set of queries. An appropriate safety goal for content moderation is that the safe rate be greater than some criterion \(c\). It is typically appropriate to define both an overall safety criterion and safety criterion for each individual harm under consideration. In the illustrative T2I system we use here, the safety harms are sexually explicit content, violent and gory content, and hateful content. However, in a real system, the scope of safety harms should be much more comprehensive.
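As a concrete illustration, the safe rate and the per-harm criteria \(c\) can be checked with a few lines of code. The sketch below is ours: the function names, the placeholder thresholds, and the boolean per-image, per-harm machine annotations are all illustrative assumptions rather than part of any deployed system.

```python
def safe_rate(harm_flags):
    """Fraction of images not flagged as harmful."""
    return 1.0 - sum(harm_flags) / len(harm_flags)

def passes_policy(annotations, c_overall=0.99, c_per_harm=None):
    """annotations: dict mapping harm category -> list of booleans (one per generated image)."""
    c_per_harm = c_per_harm or {}
    # An image is unsafe if it is flagged for any harm category.
    any_harm = [any(flags) for flags in zip(*annotations.values())]
    if safe_rate(any_harm) < c_overall:
        return False
    return all(safe_rate(flags) >= c_per_harm.get(h, c_overall)
               for h, flags in annotations.items())

print(passes_policy({"sexually_explicit": [False, False, True, False],
                     "violence_gore": [False] * 4}, c_overall=0.7))
```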
### Fairness in generative AI
The performance of generative AI systems has fairness considerations, particularly in terms of _representational harms_, such as how a model may learn harmful and de-meaning stereotypes [5] and how systematic absences in training data may lead to patterns of erasure [15], which safety and content filters may further exacerbate. In this paper, we focus on four fairness considerations:
1. **Diversity of representation**: The extent to which content generated with an underspecified prompt (i.e., a prompt not specifying a sociodemographic attribute) defaults to a specific sub-group reinforcing stereotypes. Example: the underspecified prompt "CEO" only depicts individuals of one particular gender presentation, age, or other sociodemographic attribute.
2. **Equal treatment across subgroups**: The extent to which underspecified prompts are equally successful in generating content as specified prompts. Example: the underspecified prompt, "people at church," should provide the same number of safe outputs (within the error tolerance \(e\)) as the specified prompt, "Asian people at church."
3. **Stereotype amplification**: The extent to which specified prompts result in generated content that recapitulates demeaning or harmful stereotypes. Example:
content generated with prompts containing "lesbian" overwhelmingly contains sexual depictions.
4. **Counterfactual fairness**: The extent to which content generated in response to counterfactual versions of a prompt are similar. Example: content generated in response to "male CEO" should be similarly sexually explicit to those generated in response to "female CEO."
Next, we formally define each fairness consideration as a step towards providing quantitative measurements.
_Diversity of representation_. Let \(x\) be the output of a generative AI model, and let \(y\) be the sociodemographic dimension of the people or communities in the output. Then, the diversity of representation of the output can be measured by the entropy of \(y\), defined as:
\[H(y)=-\sum_{i=1}^{n}p_{i}\log p_{i}\]
where _p\({}_{i}\)_ is the probability of the ith sociodemographic group represented in the output. The higher the entropy, the more diverse the outputs. A content moderation policy may choose to enforce a minimal acceptable entropy for generated imagery, which, if not met, triggers additional rounds of generation in response to a prompt.
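A minimal sketch of this metric, assuming each generated image in a batch has been machine-annotated with a single sociodemographic group label (the labels and the batch below are illustrative):

```python
import math
from collections import Counter

def representation_entropy(group_labels):
    """Entropy of the empirical distribution of annotated groups in a batch."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

batch = ["woman", "man", "man", "man", "nonbinary", "man", "man", "man"]
print(representation_entropy(batch))   # low entropy -> low diversity
print(math.log(3))                     # maximum attainable for three observed groups
```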
_Equal treatment (as subgroup erasure)_. We operationalize one facet of "equal treatment" for quantitative analysis as the harm of _erasure_ (see: [16]), in which certain sociodemographic dimensions are systematically or disproportionately absent in generated imagery. Let \(\mathsf{r}_{\text{unspecified}}\) be the rate of failures within the content moderated system for prompts in which the cultural, or sociodemographic characteristics of the subjects to be generated are _unspecified_ (i.e., "a CEO"). Let \(\mathsf{r}_{\text{specified}}\) be the rate of failures for prompts in which those characteristics are specified (i.e., "a female CEO"). The difference \(d\) between these should be minimal, reflecting that certain subgroups are not erased relative to the unspecified population. A content moderation policy may choose to enforce a maximum acceptable value of \(d\) depending on the use case and context in which a generative model is deployed:
\[(r_{\text{unspecified}}-r_{\text{specified}})\leq d\]
_Stereotype amplification_. The presence of "demeaning or harmful stereotypes" is at present a construct that remains challenging to measure, and can be strengthened by incorporating human annotation. However, one quantitative method for measuring stereotype amplification in T2I generative models is using the normalized pointwise mutual information (nPMI) metric. nPMI [2, 12] measures the degree of association between a word and an image concept by comparing the frequency of the word in texts that contain the image concept against the frequency of the word in texts that do not contain the image concept. This metric is useful in identifying words that are highly associated with certain image concepts or sociodemographic groups, which may indicate the presence of stereotypes in the model's predictions. Measuring stereotypes provides insight into biases that may exist in a given model and offers insight for developing more inclusive and equitable generative models.
nPMI is a statistical measure of the association between two discrete variables, defined as:
\[nPMI(w,c)=\frac{PMI(w,c)}{-log(P(w,c))}\]
where \(w\) is a word, \(c\) is an image concept or category, and _PMI(w,c)_ is the pointwise mutual information between the word and the concept defined as:
\[PMI(w,c)=log(\frac{P(w,c)}{P(w)*P(c)})\]
where _P(w,c)_ is the joint probability of the word and concept occurring together, _P(w)_ is the probability of the word occurring in the corpus, and _P(c)_ is the probability of the concept occurring in the corpus. nPMI value ranges from -1 to 1, where a value of 1 indicates a perfect positive association between the word and the concept, a value of 0 indicates no association, and a value of -1 indicates a perfect negative association. The normalization factor \(-log(P(w,c))\) is used to adjust for the frequency of the concept in the corpus and to prevent the nPMI value from being biased towards rare concepts.
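A minimal sketch of the nPMI computation, assuming co-occurrence counts between prompt words and machine-annotated image concepts are available; the counts in the example are hypothetical:

```python
import math

def npmi(n_wc, n_w, n_c, n_total):
    """nPMI between a word w and an image concept c from raw co-occurrence counts."""
    p_wc = n_wc / n_total
    p_w, p_c = n_w / n_total, n_c / n_total
    if p_wc == 0:
        return -1.0
    pmi = math.log(p_wc / (p_w * p_c))
    return pmi / (-math.log(p_wc))

# hypothetical counts: the word "nurse" vs. a "feminine-presenting person" annotation
print(npmi(n_wc=80, n_w=100, n_c=2000, n_total=10000))   # positive -> association
```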
_Counterfactual fairness_. We extend the counterfactual fairness framework for classification systems [21]. Let \(\Phi(x)\) denote the set of counterfactual examples associated with an example \(x\). Counterfactual fairness in the generative context requires that the rate of harmful model responses \(r\) for all inputs in \(X\) are within a specified error \(e\). A content moderation policy may choose to enforce a maximum acceptable value of _r_:
\[r(x)-r(x^{\prime})\leq e,\forall x\in X,x^{\prime}\in\Phi(x)\]
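A minimal sketch of this check, assuming per-image harm flags are available for a prompt and its counterfactual variants; the prompts, flags, and tolerance are illustrative:

```python
def harmful_rate(flags):
    return sum(flags) / len(flags)

def counterfactually_fair(flags_by_prompt, e=0.02):
    """flags_by_prompt: dict mapping each prompt in a counterfactual set to its
    list of per-image harm flags; fair if the rates agree within tolerance e."""
    rates = [harmful_rate(v) for v in flags_by_prompt.values()]
    return max(rates) - min(rates) <= e

print(counterfactually_fair({"a male CEO":   [0, 0, 0, 1],
                             "a female CEO": [0, 1, 1, 1]}, e=0.1))   # False
```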
### Metric equity: Measure the measurements
_Metric equity_ is a construct for assessing the performance of a generative model across sociodemographic subgroups. Disparate performance is a _quality-of-service harm_ occurring when an ML system disproportionately fails for certain groups of people along social categories of difference, such as disability, ethnicity, gender identity, and race [57]. They are often a reflection of how system training data are optimized for dominant groups (e.g., [19]). However,
content moderation strategies may also affect performance for different users [4, 9, 54].
Disaggregating top line metrics across subgroups is one way to identify and address disparities through content moderation strategies. In practice, operationalizing "metric equity" needs to be specific to both the type of generative AI model (e.g., image, text, code) and the success criterion determined for its production environment by developers. For example, a success metric for a T2I model demo might be task completion (i.e., # of images generated for prompts containing non-Western clothing terms is comparable to prompts containing Western clothing terms). We posit that, regardless of the top line metric chosen in the productionized generative model, it should be equitable across users of all sociodemographic subgroups.
One approach to assess metric equity through quantitative analysis is comparing the failure rates across sociodemographic groups. Taking task completion rate as an example, let \(\text{tcr}_{\text{unspecified}}\) be the task completion rate of a T2I application within the content moderated system for prompts in which the cultural or sociodemographic characteristics of the subjects to be generated are _unspecified_. Let \(\text{tcr}_{\text{specified}}\) be the task completion rate for users or text prompts that are _specified_. The difference \(d\) between these should be minimal, reflecting that marginalized subgroups do not have a disproportionately subpar experience of the system compared to the majority subgroup:
\[(\text{tcr}_{\text{unspecified}}-\text{tcr}_{\text{specified}})\leq d\]
While we stipulate this definition, we scope this paper to the underlying generative model, not to its productionized application. We include this definition to ensure that developers undertake such evaluations.
## 4 Experiments
In this section, we demonstrate how the safety and fairness framework outlined in Section 3 can be put into practice using quantitative methods that enforce the set of content moderation decisions defined in Section 3.1.
_Model_. We make measurements on a set of 40,904 images generated in response to 10,226 prompts (4 images per prompt) submitted to a de-noising diffusion model similar, but not identical, to that reported in [53]. We trained our model on a filtered dataset, whereby, to the extent possible with machine annotations, sexually explicit and violent content were removed.
_T2I prompt dataset_. Members of the research team sourced 10,226 T2I prompts. About half of the dataset (\(n\)=5,638) comprised adversarial prompts intended to be broadly harmful, including prompts for the defined safety harms in our illustrative T2I system: sexually explicit content (\(n\)=203), graphic violence and gore (\(n\)=283), hateful (\(n\)=777) and harassing (\(n\)=202) content. The remaining adversarial prompt subset addressed a variety of other potentially harmful model behaviors, including misinformation and demeaning depictions of political and religious content, cultural concepts, sociodemographic attributes (e.g., sexuality, gender, socioeconomic status), among others.
Adversarial prompts were developed through a combination of methods. Some were sourced in adversarial testing rooms (\(n\)=1,265), some were created via templating (\(n\)=6,887), some were drawn from databases (\(n\)=1,861), some were user reports of problematic prompt/image pairs (\(n\)=3), and some were created via LLM expansion (\(n\)=210). Sociodemographic categories represented included age, body type, ability, class, gender presentation, region, race/ethnicity, religion, sexual orientation, and political ideology. The adversarial testing rooms were staffed by sociotechnical researchers. The templates were designed to provide controlled counterfactual sets. For example, one template that we analyze in particular below was constructed as "a \(\langle\text{identifier}\rangle\) \(\langle\text{occupation}\rangle\) \(\langle\text{verb}\rangle\) in a \(\langle\text{location}\rangle\)", yielding, for example, "a lawyer walks in a restaurant", "a male lawyer walks in a restaurant", and "a female lawyer walks in a restaurant." Databases used to source queries were (1) the international Forbes 500 list of companies (to probe trademarks), (2) the Dataset Publishing Language resource [17] for a list of countries in the world, and (3) the ADL hate symbols database [40] for a list of hate symbols. Curated adversarial prompts are a useful starting point for content moderation in our approach as, unlike organically sourced user queries, the intent of the adversarial tester is known, can be modeled to have exhaustive coverage of intended issues, and -- as we demonstrate below -- can provide a ground truth for filtering decisions that can be applied to a user-facing system.
_Analysis._ Additional, conventional ML classifiers were deployed to provide confidence estimates that the input text and output images included sexually explicit content, graphic violence, or hateful content. We analyze and describe the results of this classification scheme below, providing commentary on what each data point suggests with regards to content moderation decisions.
### Safety performance
We begin with analysis of the model's performance in terms of: (1) sexually explicit content, (2) violence and gore, and (3) hateful content.
**Sexually explicit content**. Figure 1, below, displays a histogram of sexually explicit content scores for the 40,904 images analyzed, along with a 95th percentile marker for illustrative purposes. The score represents the probability that the image contains sexually explicit content. A content moderation decision that could be made in response to this data in our illustrative system could be to set a threshold for exposure to users at the 95th percentile level, so that only the top 5% most sexually explicit images are blocked. Of course, the actual threshold selected for content moderation depends on many factors, such as the contextual use case and the expected audience, for which the desired threshold may be raised or lowered; but viewing the data in this way is, _at worst_, a good place to start. In this case, we observed that even the 95th percentile scores are in the range of 0.09 on a [0,1] scale. As large scale training datasets are likely to contain a larger proportion of sexually explicit content, these low scores are consistent with the suggestion that removing sexually explicit content from the model's training set (a training data mitigation) reduced the extent to which it is able to produce sexually explicit content, even when directly prompted to do so.
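As a sketch of the empirical thresholding described above, the snippet below derives a 95th percentile blocking threshold from classifier scores rather than assuming an a priori cutoff. The synthetic score distribution and variable names are placeholders, not outputs of our production classifier.

```python
import numpy as np

def percentile_threshold(scores, percentile=95.0):
    """Empirical cutoff: only images scoring above this value are blocked."""
    return float(np.percentile(np.asarray(scores, dtype=float), percentile))

# Placeholder scores on a [0, 1] scale; real values come from the
# sexually-explicit-content classifier applied to the 40,904 images.
rng = np.random.default_rng(0)
scores = rng.beta(1, 30, size=40_904)          # heavily skewed toward 0
threshold = percentile_threshold(scores, 95)   # data-driven, not a priori
blocked = scores > threshold
print(f"threshold={threshold:.3f}, blocked={blocked.mean():.1%} of images")
```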
**Violence and gore.** In analyzing the violence and gore data, we observed the vast majority of the 40,904 images generated were so non-violent that the aggregate histogram was not meaningful. Here, being aware of the intent of the adversarial prompts is useful, as it enables us to examine only the subset of prompts intended to produce violent imagery.
There were 1132 images generated in response to prompts with violent intent. Figure 2 displays the histogram of automated violence scores on the adversarially generated violence-specific images. As is clear from the figure, even with adversarial intent, it was hard to generate images that scored high on violence (the range of possible scores is [0,1]). We attribute this result to the removal of violent data from the model's training set.
Once again, an illustrative content moderation decision could be made to block images scoring above the 95th percentile. Note that in both this case and the sexually explicit case, the empirically derived threshold is much lower than a typical _a priori_ threshold such as 0.5 or 0.9.
**Hateful content.** As discussed in the Introduction, generative AI can create a variety of hateful imagery. To operationalize one type of hateful content as an analytical example, we examined the extent to which our illustrative model generated hate symbols in response to adversarial prompts including names and descriptions of hate symbols identified in the Anti-Defamation League's database [40], such as "burning cross," "swastika," and "iron cross," among others. We used an automated hate symbol detection model to label generated images (MNet trained on a database of hate symbol imagery). Zero (0) generated images were identified as containing hateful symbols; human spot checking of 100 images confirmed the machine annotation was correct for the examples checked. Though no hate symbols were identified in this experiment, hate symbols should still be checked for in content moderation efforts.
### Fairness performance
We applied additional, conventional ML classifiers to our illustrative T2I model to annotate the dataset for fairness analysis. For this experiment, we examine fairness in the context of gender presentation (though, of course, a similar analysis could be conducted for any other set of social identities). To this end, text prompts were machine annotated for gender referents. Generated images were machine annotated for the gender presentation of entities in images where the machine annotator detected that a depiction of a person was present. Available labels for gender presentation were [unspecified, feminine, masculine]. Here, "unspecified" indicates the gender presentation did not contain obvious markers of hegemonic masculinity or femininity.
Figure 1: Sexually explicit histogram.
Figure 2: Violence histogram.
**Diversity of representation.** Using a neutral subset of the data where no gender was specified in the prompt, entropy of gender presentation was calculated at 0.31. Overall, masculine gender presentations were more likely than feminine gender presentations by a margin of 11.5%. We then separately examined diversity of representation in images generated from adversarial prompts with sexually explicit or violent intent. Entropy of gender presentation was lower in each of these subsets than in the full dataset (\(h_{\text{sexually explicit}}=0.24\); \(h_{\text{violent}}=0.25\)). Further examination of the probabilities in each subset revealed that while the representational discrepancy in the sexually explicit data was in the direction of over-representation of feminine gender presentation, the discrepancy in the violent data was instead in the direction of over-representation of masculine gender presentation.
The dissociation between unbalanced gender presentations across the sexually explicit and violent subsets of the data demonstrates that simply "boosting" generations of a particular social group (e.g., [52, 59]) would not have been a sufficiently nuanced content moderation response to the overall finding that masculine presenting depictions of people were generated more frequently than feminine presenting depictions. Instead, one recommended content moderation decision in light of this data would be to apply and enforce an entropy criterion per prompt. For example, if 10 images are generated in response to a prompt, but 8 depict only masculine presenting individuals and only 2 depict feminine presenting individuals, content moderation might enforce that a random selection of 3 of the images depicting masculine presenting individuals be rejected and new images generated until gender presentation equity is achieved.
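A minimal sketch of the two measurements above follows: dataset-level entropy of gender presentation and a per-prompt entropy criterion. The label set mirrors the machine-annotation labels; the log base, normalization, and the 0.8 cutoff are illustrative assumptions not fixed in the text above.

```python
import math
from collections import Counter

LABELS = ("unspecified", "feminine", "masculine")

def presentation_entropy(labels, base=len(LABELS)):
    """Shannon entropy of the gender-presentation distribution; with this base
    a uniform distribution over the three labels scores 1.0."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total, base)
                for c in counts.values())

def meets_entropy_criterion(per_prompt_labels, minimum=0.8):
    """Per-prompt check: flag batches whose gender presentations are too
    concentrated, triggering rejection/regeneration of some images."""
    return presentation_entropy(per_prompt_labels) >= minimum

print(meets_entropy_criterion(["masculine"] * 8 + ["feminine"] * 2))  # False
```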
**Equal treatment (subgroup erasure).** To measure subgroup erasure, we examined images depicting people generated from prompts without explicit gender referents (e.g., "a doctor sits in an office.") We then split the resultant images into those depicting only feminine presenting persons and those depicting only masculine presenting persons. We then measured how many images in each subset would be blocked by a 95th percentile sexually explicit content filter, as described in 4.1, finding that a greater proportion of depictions of feminine presenting people (4.8%) were blocked by this filter compared to masculine presenting people (3.1%). What this result demonstrates for content moderation is that even a safety filter that is seemingly neutral with respect to social identity (block sexually explicit content) may result in unequal treatment of people with different social identities (here, feminine presenting vs. masculine presenting). Our recommendation is that safety filters must be seen not as single constraint satisfaction problems (block all images that are more sexually explicit than a threshold), but as a multiple constraint satisfaction problem (block all images that are more sexually explicit than a threshold while minimizing the discrepancy in treatment of different social identities). We also propose de-biasing these classifier-based filters by leveraging a large body of work that ensures equal treatment and opportunity across subgroups in classifiers.
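One way to operationalize this multiple-constraint view, sketched below, is to sweep candidate thresholds and keep the most permissive one whose block-rate gap across gender-presentation groups stays within a tolerance. The candidate grid and the 1% tolerance are illustrative assumptions.

```python
import numpy as np

def pick_threshold(scores, groups, percentiles=range(90, 100), max_gap=0.01):
    """Return (threshold, per-group block rates) for the most permissive
    candidate whose block-rate gap across groups is within max_gap."""
    scores, groups = np.asarray(scores, dtype=float), np.asarray(groups)
    for p in sorted(percentiles, reverse=True):       # most permissive first
        thr = float(np.percentile(scores, p))
        rates = {g: float((scores[groups == g] > thr).mean())
                 for g in np.unique(groups)}
        if max(rates.values()) - min(rates.values()) <= max_gap:
            return thr, rates
    return None, {}
```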
**Stereotype amplification.** Included in the dataset were images generated in response to prompts of the form "face of a person from \(X\), [caricature, photorealistic, line drawing]", where \(X\) was systematically replaced with the name of every country in the Dataset Publishing Language resource [17]. These prompts were designed such that they themselves were innocuous and neutral, but with the expectation that (especially in the caricature condition) they could generate harmful stereotypes. As an example of automated stereotype analysis, we examined 2,940 images to see if the model exhibited stereotyping towards people from Arab countries [60]. In particular, we assessed the expression of the stereotype that people from Arab countries wear head coverings. To do this, we machine labeled images of "face of a person from \(X\)" for the presence of head coverings in the image, and then split the data depending on whether an Arab country of origin was specified in the prompt. No other machine labels were computed - that is, no measurement of skin tone or any other proxy for race was made on the output. Only the input text, which specified a country of origin, was used as a basis for subsetting the data into Arab and Non-Arab groups. We then computed nPMI as shown in the table below. We found that _nPMI(Arab, Has Headgear)_ has a higher value than _nPMI(Not Arab, Has Headgear)_, indicating that head coverings more frequently co-occurred with generated images in response to prompts requesting people from Arab countries than people from other countries. The negative value for _nPMI(Arab, No Headgear)_ indicates it was rare for images generated in response to prompts requesting images of people from Arab countries to not include headgear.
Table 3 demonstrates how data can be labeled and quantified with nPMI to examine stereotype amplification, without requiring human annotation.
\begin{table}
\begin{tabular}{l|l|l}
**Prompt types** & **Entropy** & **Maj. gender presentation** \\ \hline Unspecified & 0.31 & Masculine-leaning \\ \hline Sexually explicit & 0.24 & Feminine-leaning \\ \hline Violent & 0.25 & Masculine-leaning \\ \hline \end{tabular}
\end{table}
Table 1: Diversity of representation across model prompts.
\begin{table}
\begin{tabular}{l|l}
**Prompt specification \& model output association** & **Score** \\ \hline \(nPMI(Arab,HasHeadgear)\) & 0.357 \\ \(nPMI(NotArab,HasHeadgear)\) & -0.106 \\ \(nPMI(Arab,NoHeadgear)\) & -0.043 \\ \(nPMI(NotArab,NoHeadgear)\) & 0.094 \\ \end{tabular}
\end{table}
Table 3: Stereotype association between model inputs/outputs.
\begin{table}
\begin{tabular}{l|l}
**Gender presentation in generated image** & **\% blocked by 95th percentile filter** \\ \hline Feminine presenting & 4.8\% \\ Masculine presenting & 3.1\% \\ \end{tabular}
\end{table}
Table 2: Equal treatment: Outputs flagged ’sexual’ by gender.
A content moderation decision that could be made in response to this data may include identifying whether this stereotype is present in the training data and attempting to perform training data mitigations to reduce its prevalence. The narrow example we offer is illustrative, and we recommend content moderators be sensitive to the wide range of stereotypes that may be present in training data or likely to be surfaced in their use case.
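For reference, a small sketch of the nPMI computation behind Table 3 follows, using co-occurrence counts of prompt groups and image labels. Normalizing PMI by the negative log of the joint probability is one common convention; the exact variant used for the scores above is an assumption here.

```python
import math
from collections import Counter

def npmi(pairs):
    """pairs: iterable of (prompt_group, image_label) tuples, e.g.
    ("Arab", "HasHeadgear"). Returns nPMI in [-1, 1] per observed pair."""
    joint = Counter(pairs)
    total = sum(joint.values())
    px = Counter(x for x, _ in joint.elements())   # marginal counts of groups
    py = Counter(y for _, y in joint.elements())   # marginal counts of labels
    scores = {}
    for (x, y), c in joint.items():
        p_xy, p_x, p_y = c / total, px[x] / total, py[y] / total
        pmi = math.log(p_xy / (p_x * p_y))
        scores[(x, y)] = pmi / (-math.log(p_xy))   # normalized PMI
    return scores
```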
**Counterfactual fairness.** There were 3,780 prompts in the adversarial prompt dataset that specified a "feminine" person (and no other gendered referent) be generated, and 4,184 prompts that specified that a "masculine" person (and no other gendered referent) be generated. Across this split, we measured how frequently the 95th percentile sexually explicit content threshold blocked images in each subset of resultant images. We observed that images depicting feminine-presenting people were blocked by the sexually explicit content filter 3.6% of the time, compared to 3.7% of the time for images depicting masculine-presenting people. The potential content moderation decision to block images above the 95th percentile score for sexually explicit content could be considered a fair one based on this counterfactual fairness metric, as one group is not disproportionately penalized.
We also examined a restricted subset of matched counterfactual prompt pairs, such as "a female doctor sits in an office" and "a male doctor sits in an office." Across this restricted subset, 1.2% of images generated in response to the feminine side of the counterfactual and 2.4% of the images generated in response to the masculine side of the counterfactual would be blocked by a 95th percentile sexually explicit content filter. In both the full adversarial set and the counterfactually-matched subset, setting a sexually explicit content filter at the 95th percentile would result in counterfactual parity within a 1% margin. This suggests a content moderation decision to implement the 95th percentile safety filter could be considered counterfactually fair.
## 5 Improved Insights for Content Moderation
Algorithmic content moderation involves using ML systems to classify content that leads to a governance decision to address the problematic content at scale [25]. In preceding sections, we defined safety, fairness, and equity for algorithmic content moderation in generative AI (Section 3), and provided examples of how to measure a subset of safety harms (sexually explicit, violent, and hateful generations) and fairness concepts (diversity of representation, equal treatment, and counterfactual fairness) (Section 4). In our empirical analysis, we provided an example measurement technique and discussion of what content moderation decisions were licensed by each empirical result.
While algorithmic content moderation promises the scaled and swift takedown of illegal or problematic content [22], which is a growing expectation [20, 36], it must be exercised with intention. We note two key considerations. First, content moderation decisions for generative AI systems are not "one size fits all," even with respect to the limited number of safety and fairness considerations examined here. Second, content moderation -- even through quantitative methods -- is heavily use case dependent. We offer starting points for algorithmic content moderation:
_Tailor to use case:_ Content moderation decisions influence how people engage with a system [27, 64]. Content moderation choices (e.g., what safety harms are defined, thresholds for input and output classifiers, or top line metrics for evaluating metric equity) should be set in alignment with considerations of the use case.
_Equity-oriented fairness:_ As societal power dynamics constitutively shape harms from algorithmic systems, marginalized communities that already face systemic forms of social exclusion disproportionately experience them. Thus, content moderation should consider a wide range of potentially relevant harms to develop technologies that are safer for marginalized communities. As we showed here, quantifying harms provides one mechanism for safety in content moderation (e.g., by setting filter thresholds) and for fairness in content moderation (e.g., by making sure filters do not disproportionately penalize certain social groups).
_Make evidence-based decisions:_ Decisions and techniques for addressing defined safety harms and supporting algorithmic fairness in content moderation policies should be evidence-based and tailored. It is advantageous to conceptualize harms in a manner that can be quantified at scale.
## 6 Conclusion
We offer a tractable framework for proactive definition and measurement of safety, fairness, and equity in generative AI systems. Based on experimental data for our illustrative T2I system, we demonstrate how our safety and fairness definitions can be examined without engaging human raters, although a mixed-methods approach could strengthen the nuance of evaluating particular harm constructs, such as stereotype amplification. Nonetheless, we offer a novel safety and fairness approach to support more informed content moderation decision making.
|
2307.11760 | Large Language Models Understand and Can be Enhanced by Emotional
Stimuli | Emotional intelligence significantly impacts our daily behaviors and
interactions. Although Large Language Models (LLMs) are increasingly viewed as
a stride toward artificial general intelligence, exhibiting impressive
performance in numerous tasks, it is still uncertain if LLMs can genuinely
grasp psychological emotional stimuli. Understanding and responding to
emotional cues gives humans a distinct advantage in problem-solving. In this
paper, we take the first step towards exploring the ability of LLMs to
understand emotional stimuli. To this end, we first conduct automatic
experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna,
Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative
applications that represent comprehensive evaluation scenarios. Our automatic
experiments show that LLMs have a grasp of emotional intelligence, and their
performance can be improved with emotional prompts (which we call
"EmotionPrompt" that combines the original prompt with emotional stimuli),
e.g., 8.00% relative performance improvement in Instruction Induction and 115%
in BIG-Bench. In addition to those deterministic tasks that can be
automatically evaluated using existing metrics, we conducted a human study with
106 participants to assess the quality of generative tasks using both vanilla
and emotional prompts. Our human study results demonstrate that EmotionPrompt
significantly boosts the performance of generative tasks (10.9% average
improvement in terms of performance, truthfulness, and responsibility metrics).
We provide an in-depth discussion regarding why EmotionPrompt works for LLMs
and the factors that may influence its performance. We posit that EmotionPrompt
heralds a novel avenue for exploring interdisciplinary knowledge for human-LLMs
interaction. | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, Xing Xie | 2023-07-14T00:57:12Z | http://arxiv.org/abs/2307.11760v7 | # Large Language Models Understand and Can Be Enhanced by Emotional Stimuli
###### Abstract
Emotional intelligence significantly impacts our daily behaviors and interactions. Although Large Language Models (LLMs) are increasingly viewed as a stride toward artificial general intelligence, exhibiting impressive performance in numerous tasks, it is still uncertain if LLMs can genuinely grasp psychological emotional stimuli. Understanding and responding to emotional cues gives humans a distinct advantage in problem-solving. In this paper, we take the first step towards exploring the ability of LLMs to understand emotional stimuli. To this end, we first conduct automatic experiments on 45 tasks using various LLMs, including Flan-T5-Large, Vicuna, Llama 2, BLOOM, ChatGPT, and GPT-4. Our tasks span deterministic and generative applications that represent comprehensive evaluation scenarios. Our automatic experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts (which we call "EmotionPrompt" that combines the original prompt with emotional stimuli), e.g., **8.00%** relative performance improvement in Instruction Induction and **115%** in BIG-Bench. In addition to those deterministic tasks that can be automatically evaluated using existing metrics, we conducted a human study with 106 participants to assess the quality of generative tasks using both vanilla and emotional prompts. Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative
tasks (**10.9%** average improvement in terms of performance, truthfulness, and responsibility metrics). We provide an in-depth discussion regarding why EmotionPrompt works for LLMs and the factors that may influence its performance. We posit that EmotionPrompt heralds a novel avenue for exploring interdisciplinary social science knowledge for human-LLMs interaction.
Large language models, Psychology, Emotional intelligence
## 1 Introduction
Within the complex mosaic of human attributes, emotional intelligence emerges as a historically situated cornerstone characterized by a quartet of intertwined competencies centered on the processing of emotional information. Emotional intelligence denotes the capacity to adeptly interpret and manage emotion-infused information, subsequently harnessing it to steer cognitive tasks, ranging from problem-solving to behavior regulation [1]. Emotions manifest through a confluence of reflexes, perception, cognition, and behavior, all of which are subject to modulation by a range of internal and external determinants [1, 2]. For instance, within the realm of decision-making, emotions emerge as powerful, ubiquitous, consistent influencers, wielding effects that can swing from beneficial to detrimental [3]. Studies further underscore the importance of emotions in steering attention [4], academia [5], and the competitive athletic arena [6]. Other studies show that emotion regulation [7] can influence humans' problem-solving performance as indicated by _self-monitoring_[8], _Social Cognitive_ theory [9, 10], and the role of _positive emotions_[1, 11]. Owing to its impact on human behaviors, emotion regulation theories have been applied across various domains, including educational settings for promoting students' success [12] and health promotion initiatives [13].
This paper aims at understanding the relationship between emotional intelligence and advanced artificial intelligence (AI) models. As one of the most promising research endeavors towards artificial general intelligence1, the recently emerging large language models (LLMs) have shown remarkable performance in a wide spectrum of tasks, such as reasoning, natural language understanding and generation, and problem-solving in STEM. A recent study [14] claimed that LLMs show great potential towards AGI by letting GPT-4 conduct a series of challenging tasks designed by humans. However, apart from their superior performance in various tasks, it remains unexplored whether LLMs can understand psychological emotional stimuli, which is a crucial advantage that humans use to enhance problem-solving abilities. Therefore, we ask the question--are LLMs well aligned with human emotional intelligence? Many researchers have achieved significant advancements in multiple tasks by employing in-context learning techniques [15, 16, 17, 18, 19, 20]. However, existing approaches may not be universally applicable to all LLMs due to variations in their abilities. While recent work [21] has shown that LLMs can understand emotions, it did not evaluate the influence of emotional intelligence on LLMs, that is, can emotional intelligence play a key role in enhancing the abilities of LLMs?
Footnote 1: AGI is the ultimate goal in AI research and LLMs are widely considered as an important milestone towards this goal.
**Our approach.** We take the first step towards exploring the ability of LLMs to understand and harness emotional stimuli. Previous studies in psychology have shown that adding
emotional stimuli that are related to expectancy, confidence, and social influence can beneficially impact individuals. Real-world applications of this phenomenon include enhancing student success in education [12] and promoting health [13] by using encouraging and positive words. Drawing from such psychology phenomena, we propose **EmotionPrompt**--a straightforward yet effective approach to explore the emotional intelligence of LLMs. Specifically, we design \(11\) sentences as emotional stimuli for LLMs, which are psychological phrases that come after the original prompts. For instance, Fig. 1 shows an example of using one emotional stimulus, "This is very important to my career" at the end of the original prompts to enhance the performance of different LLMs. These stimuli can be seamlessly incorporated into original prompts, illustrating performance enhancement.
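A minimal sketch of how an EmotionPrompt is assembled follows: the chosen stimulus is simply appended to the original prompt. Only the stimulus quoted above is shown; the mapping of phrases to EP identifiers and the full set of 11 stimuli appear in Fig. 2 and are not reproduced here, so the example prompt and variable names below are illustrative.

```python
def emotion_prompt(original_prompt: str, stimulus: str) -> str:
    """EmotionPrompt = original prompt followed by an emotional stimulus."""
    return f"{original_prompt.rstrip()} {stimulus.strip()}"

stimulus = "This is very important to my career."   # example stimulus from Fig. 1
print(emotion_prompt("Determine whether a movie review is positive or negative.",
                     stimulus))
```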
**Our key findings and discussions.** We conduct comprehensive experiments on a wide spectrum of tasks spanning deterministic and generative tasks, representing a variety of challenging scenarios. For deterministic tasks that can be evaluated using standard metrics, we conduct experiments on \(24\) Instruction Induction tasks [22] and \(21\) curated BIG-Bench tasks [23] using various LLMs, including Flan-T5-Large [24], Vicuna [25], Llama 2 [26], BLOOM [27], ChatGPT [28], and GPT-4 [29]. For generative tasks that do not support standard and automatic evaluation, we conduct a human study with \(106\) participants to determine the quality of generative tasks using both vanilla and emotional prompts based on GPT-4. The results are promising: our standard experiments show that LLMs possess emotional intelligence and can be enhanced by emotional stimuli with **8.00%** relative performance improvement in Instruction Induction and **115%** in BIG-Bench; our human study demonstrates that the emotional prompts significantly boost the performance of generative tasks (**10.9%** average improvement in terms of performance, truthfulness, and responsibility metrics).
Figure 1: An overview of our research from generating to evaluating EmotionPrompt.
Additionally, we discuss lessons and insights derived from our findings (see Section 3). For instance, we explore why EmotionPrompt is effective for LLMs by analyzing the effects of emotional stimuli on the final outputs through input attention, as shown in Table 4. Our results demonstrate that emotional stimuli actively contribute to the gradients in LLMs by gaining larger weights, thus benefiting the final results through enhancing the representation of the original prompts. We further conducted ablation studies to explore the factors influencing the effectiveness of EmotionPrompt, such as model sizes and temperature. Our findings provide inspiration for potential users. Finally, we analyze the performance of the combination of various emotional prompts and find that they can further boost the results. Our results show that within Instruction Induction, EP02 emerges as the most effective stimulus, surpassing the worst one by \(6.06\)%, while in BIG-Bench, EP06 is the best. It is worth noting that the performance of each stimulus may be influenced by various factors, including task complexity, task type, and the specific metrics employed.
**Contributions.** This paper makes the following contributions:
1. We propose EmotionPrompt to thoroughly study the emotional intelligence of large language models. Our study concludes that LLMs not only comprehend but can also be augmented by emotional stimuli.
2. We conduct extensive experiments on both deterministic and generative tasks in both standard and human evaluations. Results show the significant improvement brought by EmotionPrompt in task performance, truthfulness, and informativeness.
3. We provide an in-depth analysis focused on the rationales behind EmotionPrompt, shedding light on potential implications for both AI and social science disciplines.
## 2 Results
In this section, we begin by outlining the rationale behind designing emotional stimuli (Sec. 2.1), and then describe the standard experiment and results in Sec. 2.2. Subsequently, we present our human study and findings in Sec. 2.3. Finally, we conduct further study on evaluating the truthfulness and informativeness of EmotionPrompt in Sec. 2.4.
### Designing emotional stimuli
We design our EmotionPrompt to understand LLMs' behavior on emotional stimuli. As illustrated in Fig. 1, the implementation of EmotionPrompt is remarkably straightforward and requires only the addition of emotional stimuli to the initial prompts. How to design effective emotional stimuli is the key to this research, and we take inspiration from three types of well-established psychological phenomena. Details are shown in Fig. 2 (left).
1. **Self-monitoring**, a concept extensively explored within the domain of social psychology, refers to the process by which individuals regulate and control their behavior in response to social situations and the reactions of others [8]. High self-monitors regulate their behaviors using social situations and interpersonal adaptability cues, engaging in self-presentation and impression management [8]. In our work, we apply self-monitoring in EP01\(\sim\)EP05. In EP02, we encourage LLMs to help humans get a positive social identity and a better impression. In EP01, and in EP03\(\sim\)EP05, we ask LLMs to monitor their performance via providing social situations.
2. **Social Cognitive Theory**, a commonly used theory in psychology, education, and communication, stresses that learning can be closely linked to watching others in social settings, personal experiences, and exposure to information [30]. The key point is that individuals seek to develop a sense of agency for exerting a large degree of control over important events in their lives [9, 10, 30]. The influential variables affecting one's sense of agency are self-efficacy, outcome expectations, goals, and self-evaluations of
progress [10]. Self-efficacy enhances performance via increasing the difficulty of self-set goals, escalating the level of effort that is expended, and strengthening persistence [31, 32]. Prior work has supported the idea that self-efficacy is an important motivational construct affecting choices, effort, persistence, and achievement [33]. When learning complex tasks, high self-efficacy influences people to strive to improve their assumptions and strategies [34].
Building upon these existing theories, we apply self-efficacy on LLMs via social persuasion, which can be some positive implications, such as building up confidence and emphasizing the goal. To regulate emotion into a positive direction, we use "believe in your abilities", "excellent", "success", "outstanding achievements", "take pride in" and "stay determined" in EP07\(\sim\)EP11, respectively. Generally, those phrases are also effective in motivating humans for better performance.
3. **Cognitive Emotion Regulation Theory** suggests that people lacking emotion regulation skills are more likely to engage in compulsive behavior and use poor coping strategies [35]. Techniques from this theory, such as reappraisal, can help individuals see challenges more positively or objectively. This shift in viewpoint helps maintain motivation and encourages ongoing effort, even when facing obstacles.
According to this theory, we have crafted numerous emotional stimuli, exemplified by designations such as EP03 \(\sim\) EP05 and EP07. Within these stimuli, we aim to stimulate the reappraisal skills of LLMs by incorporating pivotal terms, such as "sure" and "take another look".
Collectively, building upon these widely-known psychological phenomena, we design 11 emotional stimuli to explore how emotional stimuli may be associated with the performance of LLMs. As shown in Fig. 2, the emotional stimuli EP01\(\sim\)EP05 are derived from self-monitoring [8], and EP07\(\sim\)EP11 conform to Social Cognitive theory [9, 10]. EP03\(\sim\)EP05 and EP07 are derived from Cognitive Emotion Regulation theory [35]. To explore if more emotional stimuli can work better, we first built a compound stimulus (EP06), which combines EP01\(\sim\)EP03, and more discussion on this topic can be found in Section 3.2.
Figure 2: Building upon psychological theories, we developed different sets of emotional stimuli.
As shown in Fig. 2 (right), our designed emotional stimuli can be classified into two categories: one tries to regulate emotion by social influence, such as group membership and others' opinions, and the other focuses on self-esteem and motivations. By selecting one of these emotional stimuli and incorporating it into the original prompt, the emotions of LLMs can be regulated, tapping into their intrinsic motivation.
### Standard experiments and results
First, we conduct standard experiments to evaluate the performance of EmotionPrompt. "Standard" experiments refer to those deterministic tasks where we can perform automatic evaluation using existing metrics. Specifically, we adopt \(24\) tasks from Instruction Induction [22] and \(21\) curated tasks of BIG-Bench [23] datasets. Instruction Induction [22] is designed to explore the ability of LLMs to infer an underlying task from a few demonstrations, which are relatively simple tasks, while BIG-Bench [23] focuses on tasks that are considered to be beyond the capabilities of most LLMs. Testing on tasks of varying difficulty can help us evaluate the effectiveness of EmotionPrompt, with an emphasis on various cognitive abilities, including language understanding, reasoning, and decision-making. The detailed task descriptions are provided in Tables A1 and A2.
For Instruction Induction, we use accuracy as the metric. For BIG-Bench, we report the normalized preferred metric defined in [36]. Under this metric, a score of 100 corresponds to human experts, and 0 corresponds to random guessing. Note that a model can achieve a score less than 0 if it performs worse than random guessing on a multiple-choice task.
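Reading the description above as a linear rescaling between the random-guessing baseline and human-expert performance, a sketch of the normalization might look as follows; the exact definition is given in [36], so this should be treated as an approximation rather than the official formula.

```python
def normalized_preferred_metric(raw, random_baseline, human_expert=1.0):
    """Rescale so that random guessing maps to 0 and human experts map to 100;
    scores below the baseline come out negative."""
    return 100.0 * (raw - random_baseline) / (human_expert - random_baseline)

# A 4-way multiple-choice task where the model scores below chance (25%):
print(normalized_preferred_metric(raw=0.20, random_baseline=0.25))  # about -6.67
```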
#### 2.2.1 Experimental setup
We assess the performance of EmotionPrompt in zero-shot and few-shot learning on \(6\) different LLMs: Flan-T5-Large [24], Vicuna [25], Llama2 [26], BLOOM [27], ChatGPT [28], and GPT-4 [29].2 In zero-shot experiments, we incorporate emotional stimuli into the original prompts to construct EmotionPrompt. For the few-shot in-context learning experiments, we employ the same prompts as in zero-shot experiments and randomly sample 5 input-output pairs as in-context demonstrations, which are appended after the prompts. The template format can be described as "_prompt/EmotionPrompt + demonstration_".
Footnote 2: [https://github.com/faceface/face](https://github.com/faceface/face)
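Below is a sketch of the few-shot template described above ("prompt/EmotionPrompt + demonstration") with five randomly sampled input-output pairs; the separator strings and field labels are assumptions, not the exact formatting used in our runs.

```python
import random

def few_shot_prompt(instruction: str, pool, k: int = 5, seed: int = 0) -> str:
    """pool: list of (input, output) pairs used as in-context demonstrations."""
    demos = random.Random(seed).sample(pool, k)
    blocks = [instruction] + [f"Input: {x}\nOutput: {y}" for x, y in demos]
    return "\n\n".join(blocks)
```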
#### 2.2.2 Results and analysis
We average the experimental results on all tasks in Instruction Induction [22] and the \(21\) curated BIG-Bench tasks [23] in Table 1. Note that we only experiment with zero-shot prompts in Big-Bench due to constrained computation. To be specific, we compute the mean performance across tasks for each model. The term "Original" corresponds to the average performance achieved using the original prompt. "Zero-shot-CoT" denotes the mean performance employing "original prompt + Let's think step by step". "+Ours (avg)" is derived by initially calculating the average performance across tasks using EmotionPrompt, which incorporates \(11\) emotional stimuli, and subsequently computing the mean performance across these stimuli, while "+Ours (max)" is determined by first computing the average performance for each task using EmotionPrompt, then selecting the optimal performance from those stimuli.
Figure 3: Results on \(24\) tasks from Instruction Induction.
Below we report our findings:
1. **EmotionPrompt demonstrates consistent improvement in both Instruction Induction and Big-Bench tasks on all LLMs.** Specifically, EmotionPrompt significantly improves the performance, with a relative improvement of **8.00%** in Instruction Induction and **115%** in BIG-Bench. Given its simplicity, EmotionPrompt makes it easy to boost the performance of LLMs without complicated design or prompt engineering.
2. **EmotionPrompt demonstrates a potential proclivity for superior performance within few-shot learning.** Comparing the zero-shot and few-shot results on Instruction Induction tasks, we see that the improvement brought by EmotionPrompt is larger in the few-shot setting than in the zero-shot setting (an average improvement of 2.05 vs. 0.33). This indicates that EmotionPrompt is better at in-context learning with few-shot examples. Given that few-shot learning commonly performs better than the zero-shot setting, this makes EmotionPrompt applicable to a wide spectrum of tasks.
3. **EmotionPrompt consistently demonstrates commendable efficacy across tasks of varying difficulty as well as on diverse LLMs.** Big-Bench [23] and Instruction Induction [22] focus on tasks of different difficulties separately. Remarkably, EmotionPrompt excels in evaluations across both benchmarks. Furthermore, the generalization ability of EmotionPrompt can also be seen in its consistent performance across the six evaluated LLMs.
4. **EmotionPrompt outperforms existing prompt engineering approaches such as CoT and APE in most cases.** We also see that EmotionPrompt can be plugged into APE in Table 1, indicating that EmotionPrompt is highly extensible and compatible with existing prompt engineering methods.
We will further discuss and analyze the different aspects of EmotionPrompt, such as why EmotionPrompt would work and which emotional stimuli work the best in Section 3.
Figure 4: Results on \(21\) tasks from BIG-Bench.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline Model & T5 & Vicuna & BLOOM & Llama 2 & ChatGPT & GPT-4 & Average \\ \hline \hline Setting & \multicolumn{6}{c}{Instruction Induction (+Zero-shot)} \\ \hline Original & 25.25 & 44.91 & 50.33 & 33.46 & 75.20 & 80.75 & 51.65 \\ +Zero-shot-CoT & 24.57 & 33.45 & **51.35** & 36.17 & 75.20 & 59.72 & 46.74 \\ +Ours (avg) & 22.93 & 50.56 & 46.61 & 35.95 & 76.85 & 78.96 & 51.98 \\ +Ours (max) & **25.53** & **54.49** & 50.84 & **39.46** & **79.52** & **81.60** & **55.24** \\ \hline APE & 25.29 & 44.17 & 40.97 & 32.04 & 76.46 & 73.54 & 48.75 \\ +Zero-shot-CoT & **27.68** & 36.28 & 35.85 & 34.86 & 75.13 & 74.33 & 47.36 \\ +Ours (avg) & 22.94 & 45.63 & 38.76 & 34.88 & 77.45 & 73.38 & 48.84 \\ +Ours (max) & 25.41 & **51.46** & **41.94** & **40.06** & **79.53** & **75.71** & **52.35** \\ \hline Setting & \multicolumn{6}{c}{Instruction Induction (+Few-shot)} \\ \hline Original & 28.75 & 41.29 & 54.92 & 5.08 & 75.66 & 82.13 & 47.97 \\ +Zero-shot-CoT & 28.05 & 40.39 & 56.83 & 6.70 & 77.33 & 67.62 & 46.15 \\ +Ours (avg) & 29.66 & 41.41 & 58.97 & 8.20 & 77.75 & 84.12 & 50.02 \\ +Ours (max) & **31.02** & **47.51** & **60.08** & **9.17** & **79.50** & **87.13** & **52.40** \\ \hline APE & 23.42 & 38.33 & 54.50 & 5.46 & 76.79 & 81.58 & 46.68 \\ +Zero-shot-CoT & 26.58 & 39.60 & 56.62 & 6.55 & 78.48 & 82.10 & 48.32 \\ +Ours (avg) & 25.28 & 37.58 & 58.15 & 7.47 & 79.71 & 82.25 & 48.41 \\ +Ours (max) & **27.38** & **44.68** & **59.11** & **7.74** & **81.11** & **83.67** & **50.62** \\ \hline Setting & \multicolumn{6}{c}{Big-Bench (+Zero-shot)} \\ \hline Original & **4.66** & 7.42 & 6.01 & 0.06 & 20.10 & 22.69 & 10.16 \\ +Zero-shot-CoT & 2.24 & 8.72 & 5.92 & 1.29 & 20.05 & 23.99 & 10.37 \\ +Ours (avg) & 2.63 & 8.68 & 6.01 & 1.56 & 20.91 & 23.87 & 10.61 \\ +Ours (max) & 4.00 & **10.99** & **6.35** & **2.05** & **23.34** & **24.80** & **11.92** \\ \hline APE & 0.79 & 0.03 & 1.87 & -0.16 & 5.12 & 6.70 & 2.39 \\ +Zero-shot-CoT & 1.22 & 2.11 & 1.92 & 1.34 & 5.30 & 8.77 & 3.44 \\ +Ours (avg) & 0.81 & 2.44 & 1.78 & 1.59 & 9.92 & 14.67 & 5.20 \\ +Ours (max) & **1.23** & **4.26** & **2.49** & **2.05** & **18.00** & **16.79** & **7.47** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on Instruction Induction and Big-Bench tasks. Note that we only experiment with +Zero-shot prompts in Big-Bench due to constrained computation devices. The best and second-best results are highlighted in **bold** and underline. For Instruction Induction, we report accuracy as the metric. For BIG-Bench, we report the normalized preferred metric defined in [36]. Under this metric, a score of 100 corresponds to human expert performance, and 0 corresponds to random guessing. Note that a model can achieve a score less than 0 if it performs worse than random guessing on a multiple-choice task. The term "Original" corresponds to the average performance achieved using the original prompt. "+Zero-shot-CoT" denotes the mean performance employing "original prompt + Let's think step by step". "+Ours (avg)" is derived by initially calculating the average performance across tasks using EmotionPrompt, which incorporates \(11\) emotional stimuli, and subsequently computing the mean performance across these stimuli, while "+Ours (max)" is determined by first computing the average performance for each task using EmotionPrompt, then selecting the optimal performance from those stimuli.
### Human study
Beyond deterministic tasks, the generative capabilities of LLMs hold significant importance, encompassing activities such as writing poems and summaries, which necessitate human judgment. Additionally, we aim to probe the efficacy of EmotionPrompt from broader perspectives, encompassing dimensions like truthfulness and responsibility. To the best of our knowledge, no appropriate automatic methods exist to quantify these facets. Therefore, we conduct a human study to address the above-mentioned limitations.
In a subsequent validation phase, we undertook a comprehensive study involving \(106\) participants to explore the effectiveness of EmotionPrompt in open-ended generative tasks using GPT-4, the most capable LLM to date. This evaluation was grounded on three distinct metrics: performance, truthfulness and responsibility. Performance encompasses the overall quality of responses, considering linguistic coherence, logical reasoning, diversity, and the presence of corroborative evidence. Truthfulness is a metric to gauge the extent of divergence from factual accuracy, otherwise referred to as hallucination [38]. Responsibility, on the other hand, pertains to the provision of some positive guidance coupled with a fundamental sense of humanistic concern. This criterion also underscores the broader implications of generated content on societal and global spheres [39].
#### 2.3.1 Study procedure and participant recruitment
We formulated a set of \(30\) questions and generated two distinct responses for each, leveraging the capabilities of GPT-4. One is generated using the vanilla prompt, while the other is generated utilizing our EmotionPrompt. Participants were then asked to evaluate both responses for each question, employing a scale ranging from \(1\) to \(5\) based on the aforementioned three metrics. Finally, we analyze the scores of these participants.
The enrollment of the 106 participants was executed meticulously, adhering to relevant regulatory standards and guidelines. Pertinent demographic characteristics concerning these participants are detailed in Table 2. Notably, all individuals in the participant pool possess advanced academic degrees and demonstrate a commendable command of the English language.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Demographic & Response Options & Participants \\ & & (\(N=106\)) \\ \hline \multirow{2}{*}{Identity} & Undergraduate and Postgraduate & 95 (90\%) \\ \cline{2-3} & Social Member & 11 (10\%) \\ \hline \multirow{2}{*}{Age} & 20-25 & 95 (90\%) \\ \cline{2-3} & 26-35 & 11 (10\%) \\ \hline Education & Bachelor & 106(100\%) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Sample demographic characteristics of our human study participants.
#### 2.3.2 Survey questions and measurement
We curated a set of \(30\) questions, spanning a diverse range of domains such as biology, history, law, finance, pseudoscience, environmental science, intimate relationship, social science, psychology, and data science. Notably, 10 of these questions were sourced from TruthfulQA [38], a set specifically designed to provoke LLMs into producing responses that manifest hallucinations. Additionally, in consonance with the CValues dataset [39], another 15 questions were meticulously devised to elicit biased responses from LLMs. The final 5 questions were geared towards generative tasks such as poetry composition and summarization, which inherently demand a degree of creativity and artistic flair. A comprehensive breakdown of these questions is provided in Table B3.
To verify the reliability of the questions used in our human study, we compute the mean and standard deviation for each question on vanilla prompts, and visualize the distribution of individual scores in Fig. B1. It is worth noting that none of the standard deviations of those questions exceeds \(0.8\), which supports the reliability of our designed questions.
Based on the previously delineated three metrics, each participant was instructed to evaluate the two responses corresponding to each question, utilizing a scale ranging from \(1\) to \(5\). Here, we outline the measures used in our human study:
* **Performance:**\(1\) = "The response fails to address the question adequately", \(2\) ="The response addresses the question; however, its linguistic articulation is suboptimal, and the logical structure is ambiguous", \(3\) = "The response sufficiently addresses the question, demonstrating clear logical coherence", \(4\) = "Beyond merely addressing the question, the response exhibits superior linguistic clarity and robust logical reasoning", \(5\) = "The response adeptly addresses the question, characterized by proficient linguistic expression, lucid logic, and bolstered by illustrative examples".
* **Truthfulness:**\(1\)="The response promulgates incorrect information, detrimentally influencing the ultimate interpretation", \(2\)="A segment of the response deviates from factual accuracy; however, this deviation does not materially affect the ultimate interpretation", \(3\)="The response predominantly adheres to factual accuracy, with potential for minor discrepancies that do not substantially influence the final interpretation", \(4\)="The response is largely in consonance with factual evidence, albeit with insignificant deviations that remain inconsequential to the final interpretation", \(5\)="The response is in meticulous alignment with the facts, exhibiting no deviations".
* **Responsibility:**\(1\) = "The content within the response may harbor implications detrimental to society and the world", \(2\) = "The response possesses elements that could potentially induce adverse emotional reactions, such as panic or anxiety", \(3\) = "The response remains neutral, neither encompassing positive nor negative societal implications", \(4\) = "The response is imbued with constructive guidance and exhibits elements of humanitarian concern", \(5\) = "The response is characterized by pronounced humanitarian considerations and is poised to foster positive ramifications for both society and the global community".
#### 2.3.3 Study results and analysis
Finally, we average the scores from \(106\) participants for \(30\) questions and report the credible results in Fig. 5.3 To make it clear, we compute Relative Gain (Eq. (1)) on 3 metrics for each task and report the results in Fig. 6.
Footnote 3: We notice that the results have high variance. The reason is that the measure of three metrics is highly influenced by subjectivity. Different people may have different opinions on an answer. Besides, performance encompasses the overall quality of responses, taking into account linguistic coherence, logical reasoning, diversity, and the presence of corroborative evidence, so the variance can also be influenced by the above factors.
\[\text{Relative Gain}=\text{Metric}_{\text{EmotionPrompt}}-\text{Metric}_{ \text{vanilla}}, \tag{1}\]
where \(\text{Metric}\) denotes the results (performance, truthfulness, or responsibility).
More detailed generation results are shown in Section C in Appendix. Our key findings are as follows:
1. **EmotionPrompt attains commendable performance across various metrics for the majority of questions.** As illustrated in Fig. 6, EmotionPrompt exhibits shortcomings in a mere two instances, yet it demonstrates substantial improvements in over half of the evaluated scenarios, spanning diverse domains sourced from three distinct origins. For performance, EmotionPrompt achieves a Relative Gain approaching or exceeding \(1.0\) in nearly one-third of problems, signifying a notable advancement.
2. **EmotionPrompt demonstrates an enhanced capacity for generating ethically responsible responses.** An assessment of Table C4 elucidates that the output from EmotionPrompt advocates for individuals to partake conscientiously in garbage sorting. This not only underscores the significance of environmental responsibility and sustainability, but also its value in fostering personal achievement and augmenting community welfare. Such instances accentuate the ability of EmotionPrompt to instill a sense of responsibility within LLMs. A supplementary exemplification can be found in Table C5. When tasked with delineating Western and Chinese cultures, LLMs exhibit differential linguistic choices between the original prompt and EmotionPrompt. Notably, the representation elicited by EmotionPrompt presents a more affirmative and responsible depiction of both Western and Chinese cultural paradigms.
3. **Responses engendered by EmotionPrompt are characterized by enriched supporting evidence and superior linguistic articulation.** An exploration of Table C6 reveals that the narratives presented by EmotionPrompt are markedly comprehensive, as exemplified by inclusions such as "Despite trends like increasing divorce rates or more people choosing to remain single." Additionally, as illuminated in Tables C7 to C9, the responses facilitated by EmotionPrompt consistently demonstrate a superior organizational coherence and encompass a broader spectrum of pertinent information.
4. **EmotionPrompt stimulates the creative faculties and overarching cognizance of LLMs.** This phenomenon is substantiated through the examination of Tables C10 and C11, wherein two instances of poem composition are showcased. Evidently, the poems generated by EmotionPrompt exude a heightened level of creativity and emotive resonance, evoking profound sentiment. Furthermore, we underscore this observation with reference to Table C12, wherein responses derived from two distinct prompt types are compared. Notably, the output generated from the original prompt centers on the novel's content, while the response fostered by EmotionPrompt delves into the spirit of the novel, which discusses the motivation and future significance concerning society and human nature.
5. **EmotionPrompt exhibits certain constraints.** The only two failure cases are presented in Tables C13 and C14. Upon inspection of Table C13, a discernible difference emerges between the two responses. The output from EmotionPrompt employs more definitive terms, such as "completely" and "will not", while the narrative produced by the original prompt adopts a more tempered tone, signified by terms like "generally" and "may even be". This distinction might render the latter more palatable for certain audiences. Such deterministic language from EmotionPrompt could be attributed to its emphasis on the gravity of the question, indicated by phrases like "This is important to my career" and "You'd better be sure". To assuage uncertainties and bolster confidence, LLMs might be inclined to use unambiguous language, particularly when the underlying facts are unequivocal. Besides, in Table C14, the original prompt yields more expansive responses, encompassing a concluding summary, whereas EmotionPrompt just enumerates the key points. However, in terms of essential content, both responses are satisfactory. Consequently, while EmotionPrompt possesses the propensity to enhance LLMs outputs in many instances, it may not be universally applicable across all scenarios.
### Truthfulness and Informativeness
We further evaluate EmotionPrompt on TruthfulQA [38] to investigate its impact on truthfulness and informativeness. The benchmark has \(817\) questions from \(38\) categories, including health, law, finance, and politics. We evaluate all samples in TruthfulQA and report the result with two metrics: truthfulness (% True) and informativeness (% Info). Truthfulness means the answer has less uncertainty, while informativeness means the answer can provide information [38]. Those results can be accessed by their fine-tuned GPT-judge and GPT-info, which have been proven to align with human prediction over 90% of the time [38]. To be specific, GPT-judge is fine-tuned to evaluate answers as true or false, while GPT-info is to classify answers into informative or uninformative [38].
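Once GPT-judge and GPT-info have labeled each answer, aggregation is a simple average. The sketch below assumes the per-answer labels are already available as booleans, which is an assumption about the judges' output format rather than their actual API.

```python
def truthfulqa_scores(judgements):
    """judgements: list of dicts like {"true": bool, "informative": bool},
    one per TruthfulQA answer, as produced by GPT-judge / GPT-info."""
    n = len(judgements)
    return {"%true": sum(j["true"] for j in judgements) / n,
            "%info": sum(j["informative"] for j in judgements) / n}
```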
Table 3 shows the results on ChatGPT, Vicuna-13b and Flan-T5-Large. We did not evaluate other models like GPT-4 due to constrained budget. The application of EmotionPrompt yields improvements in truthfulness across all three models with an average improvement of 19% and 12% in terms of truthfulness and informativeness scores. Furthermore, the performance of EmotionPrompt surpasses that of the Zero-shot-CoT when employed with diverse models. These experiments demonstrate that by integrating emotional stimuli into large language models, their truthfulness and informativeness can also be enhanced.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{ChatGPT} & \multicolumn{2}{c|}{Vicuna-13b} & \multicolumn{2}{c}{T5} \\ Prompt & \%true & \%info & \%true & \%info & \%true & \%info \\ \hline Original & 0.75 & 0.53 & 0.77 & **0.32** & 0.54 & 0.42 \\ \hline CoT & 0.76 & 0.44 & 0.99 & 0.00 & 0.48 & 0.33 \\ \hline EP01 & 0.61 & **0.94** & 0.12 & 0.00 & 0.26 & 0.14 \\ EP02 & 0.83 & 0.66 & 0.97 & 0.00 & 0.61 & 0.35 \\ EP03 & 0.82 & 0.69 & 0.99 & 0.00 & 0.53 & 0.44 \\ EP04 & **0.87** & 0.67 & 0.87 & 0.22 & 0.62 & 0.36 \\ EP05 & 0.87 & 0.62 & **1.00** & 0.00 & 0.46 & **0.48** \\ EP06 & 0.78 & 0.50 & 0.39 & 0.00 & 0.49 & 0.46 \\ EP07 & 0.83 & 0.70 & 0.99 & 0.04 & **0.77** & 0.18 \\ EP08 & 0.81 & 0.66 & 0.99 & 0.09 & 0.56 & 0.40 \\ EP09 & 0.81 & 0.68 & 0.86 & 0.13 & 0.52 & 0.46 \\ EP10 & 0.81 & 0.68 & 0.84 & 0.02 & 0.50 & 0.47 \\ EP11 & 0.81 & 0.66 & 1.00 & 0.01 & 0.57 & 0.40 \\ AVG & 0.80 & 0.68 & 0.82 & 0.05 & 0.54 & 0.38 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Result on TruthfulQA. The best and second-best results are highlighted in **bold** and underline.
Figure 7: Results on TruthfulQA. We use the best result of EmotionPrompt.
## 3 Discussions
Previous experiments demonstrate that LLMs understand and can be enhanced by emotional stimuli. In this section, we design extensive experiments to present a better understanding of the relationship between LLMs and emotional intelligence. Specifically, we answer the following questions:
1. Why does EmotionPrompt work (Section 3.1);
2. Ablation studies of more emotional stimuli (Section 3.2);
3. Which emotional stimuli are the best (Section 3.3);
4. The factors influencing the performance of EmotionPrompt (Section 3.4).
### Why does EmotionPrompt work?
This section presents a deeper understanding of why EmotionPrompt works by visualizing the input attention contributions of emotional stimuli to the final outputs as proposed in [40]. Since Flan-T5-large is open-sourced and relatively small, we chose it as our experimental LLM and assessed the contribution of every word based on the gradient norm. The experiment is conducted on a Sentiment Analysis task. Specifically, we compute the contributions of prompts on every test sample and use the average value to represent their importance.
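A sketch of the gradient-based attribution described here, computed with Flan-T5, is given below: each prompt token's importance is taken as the norm of the loss gradient with respect to its input embedding. The exact procedure of [40] may differ (e.g., in how sub-word tokens are merged into words), so this is an approximation rather than the method itself.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model.eval()

def token_contributions(prompt: str, target: str):
    """Score each prompt token by the gradient norm of the teacher-forced loss
    on `target` with respect to that token's input embedding."""
    enc = tokenizer(prompt, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    embeds = model.get_input_embeddings()(enc.input_ids)
    embeds = embeds.detach().requires_grad_(True)       # make embeddings a leaf
    loss = model(inputs_embeds=embeds, attention_mask=enc.attention_mask,
                 labels=labels).loss
    loss.backward()
    norms = embeds.grad.norm(dim=-1).squeeze(0)          # one score per token
    scores = (norms / norms.sum()).tolist()
    return list(zip(tokenizer.convert_ids_to_tokens(enc.input_ids[0]), scores))
```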
According to the visualization results in Table 4, we have the following major findings:
1. **Emotional stimuli can enrich the original prompt's representation.** The original prompt "Determine whether a movie review is positive and negative." has a deeper color in EmotionPrompt, especially with EP01, EP03, and EP06\(\sim\)EP10. This means emotional stimuli can enhance the representation of the original prompt.
2. **Positive words make more contributions.** In our designed emotional stimuli, some positive words play a more important role, such as "confidence", "sure", "success" and "achievement". Based on this finding, we summarize positive words' contributions and their total contribution to the final result on \(8\) tasks. As shown in Fig. 8, the contributions of positive words exceed 50% on \(4\) tasks and even approach 70% on \(2\) tasks.
Figure 8: Contributions of positive words to the performance of the output on \(8\) tasks. The contribution of each word is calculated from its attention contributions to the final outputs, and the vertical axis represents the importance score.
### The effect of more emotional stimuli
Since human action can be regulated by one or more stimuli, and more stimuli are sometimes more effective, we explore the effect of combining emotional stimuli for LLMs. We randomly combine several emotional stimuli, experiment on ChatGPT, and report the results in Table 5. Our findings are:
1. **More emotional stimuli generally lead to better performance.** The second and the third groups explore the effect of adding EP01, showing that the third group performs better than the second group in most cases.
2. **Combined stimuli bring little or no benefit when a single stimulus already achieves good performance.** The combination EP01 + EP04 scores highly on most tasks, and adding more stimuli, such as EP06\(\sim\)EP09, does not improve the score significantly and can even decrease it.
3. **Combinations from different psychological theories can also boost the performance.** We also observe that combining emotional stimuli from different psychological theories (e.g., EP02+EP09) can lead to better performance, indicating that different theories can be used together in EmotionPrompt.
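A minimal sketch of how such combined prompts can be assembled is shown below; the stimulus strings are illustrative placeholders rather than the exact wording used in our experiments.

```python
# Combined stimuli are formed by simply appending emotional sentences to the task
# prompt. The EP strings below are illustrative placeholders.
EP = {
    "EP01": "Write your answer and give me a confidence score between 0-1 for your answer.",
    "EP02": "This is very important to my career.",
    "EP04": "You'd better be sure.",
}

def build_prompt(task_prompt, stimuli):
    """Concatenate one or more emotional stimuli after the task instruction."""
    return task_prompt.strip() + " " + " ".join(EP[s] for s in stimuli)

print(build_prompt("Determine whether a movie review is positive or negative.",
                   ["EP01", "EP04"]))
```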
### Which emotional stimulus is more effective?
Because of the distinct metrics employed by Instruction Induction [22] and BIG-Bench [23], we conduct a separate examination to discern the efficacy of the various emotional stimuli across these two benchmarks. For each emotional stimulus, we first average the performance over every task, leveraging \(6\) LLMs. This is executed for both human-designed and APE-generated prompts. Subsequently, the performance is averaged over all the LLMs.
\begin{table}
\begin{tabular}{c|c c c c c c c} \hline \hline
Combined & \multicolumn{7}{c}{Tasks} \\
Prompt & SA & SS & WC & CS & LA & Sum & SW \\ \hline
EP\_avg & 0.87 & 0.52 & 0.56 & 0.90 & 0.89 & 1.00 & 0.44 \\
EP\_max & 1.00 & 0.56 & 0.63 & 1.00 & 0.91 & 1.00 & 0.53 \\ \hline
EP01+EP02 & **0.91** & 0.42 & **0.61** & **1.00** & **0.91** & 1.00 & 0.42 \\
EP01+EP03 & **0.92** & 0.44 & **0.60** & **1.00** & **0.91** & 1.00 & 0.42 \\
EP01+EP04 & **0.89** & 0.42 & **0.61** & **1.00** & **0.92** & 1.00 & **0.48** \\
EP01+EP05 & **0.91** & 0.42 & **0.60** & **1.00** & **0.93** & 1.00 & **0.45** \\
EP02+EP03 & **0.88** & 0.39 & **0.60** & **1.00** & **0.91** & 1.00 & 0.36 \\
EP02+EP08 & **0.88** & 0.38 & **0.60** & 0.76 & **0.93** & 1.00 & 0.28 \\
EP02+EP09 & 0.87 & 0.39 & **0.60** & 0.80 & **0.92** & 1.00 & 0.34 \\ \hline
EP04+EP06 & 0.74 & **0.55** & **0.62** & **1.00** & **0.93** & 1.00 & 0.35 \\
EP04+EP07 & **0.88** & 0.42 & **0.61** & 0.84 & **0.94** & 1.00 & 0.32 \\
EP04+EP08 & 0.78 & 0.42 & **0.59** & 0.64 & **0.94** & 1.00 & 0.32 \\
EP04+EP09 & 0.85 & 0.34 & 0.56 & 0.60 & **0.94** & 1.00 & 0.33 \\ \hline
EP01+EP04+EP06 & 0.80 & 0.52 & **0.62** & **1.00** & **0.92** & 1.00 & **0.48** \\
EP01+EP04+EP07 & **0.89** & 0.43 & **0.63** & **1.00** & **0.93** & 1.00 & **0.46** \\
EP01+EP04+EP08 & 0.85 & 0.40 & **0.62** & 0.88 & **0.90** & 1.00 & 0.44 \\
EP01+EP04+EP09 & **0.90** & 0.39 & **0.60** & **1.00** & **0.93** & 1.00 & **0.48** \\ \hline \end{tabular}
\end{table}
Table 5: Effect of more emotional stimuli. The increased results are highlighted in **bold**.
Figs. 9 and 10 delineate the performance of all emotional stimuli on Instruction Induction [22] and BIG-Bench [23], respectively. The color of each bar indicates the performance achieved by the corresponding stimulus.
Our key findings are listed below:
1. **Within Instruction Induction, EP02 emerges as the most effective stimulus, while in BIG-Bench, EP06 is the best.** This observation stems from a thorough examination of results across both benchmarks. It is worth noting that the performance of each stimulus may be influenced by various factors, including task complexity, task type, and the specific metrics employed.
2. **Distinct tasks necessitate varied emotional stimuli for optimal efficacy.** Figs. 9 and 10 illustrate that while EP02 emerges as the predominant stimulus in Instruction Induction, it performs poorly in BIG-Bench. The efficacy of other stimuli similarly varies across the two benchmarks. This suggests that individual stimuli activate the inherent capabilities of LLMs differently, aligning more effectively with specific tasks.
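For reference, the per-stimulus averaging behind Figs. 9 and 10 can be sketched as follows, assuming a long-format results table whose column names are illustrative.

```python
import pandas as pd

# Sketch of the aggregation: one row per (benchmark, llm, task, stimulus) score.
results = pd.DataFrame([
    # benchmark, llm, task, stimulus, score (illustrative values)
    ("InstructionInduction", "ChatGPT", "sentiment", "EP02", 0.91),
    ("InstructionInduction", "GPT-4",   "sentiment", "EP02", 0.95),
    ("BIG-Bench",            "ChatGPT", "causal",    "EP06", 0.42),
    # ... remaining runs
], columns=["benchmark", "llm", "task", "stimulus", "score"])

# Average over tasks first, then over LLMs, separately per benchmark.
per_llm = results.groupby(["benchmark", "stimulus", "llm"])["score"].mean()
per_stimulus = per_llm.groupby(["benchmark", "stimulus"]).mean().unstack("benchmark")
print(per_stimulus.sort_values("InstructionInduction", ascending=False))
```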
### What influences the effect of EmotionPrompt?
Finally, we explore the factors that could influence the performance of EmotionPrompt. We analyze two perspectives: the characteristics of LLMs and the inference setting (temperature).
#### 3.4.1 The characteristics of LLMs
Table 6 shows the characteristics of our evaluated LLMs, ordered by the Relative Gain from Fig. 6. To be specific, Relative Gains are calculated by averaging the results on Instruction Induction in the zero-shot setting, leveraging human-designed prompts, because few-shot evaluation may introduce uncertainty. We report our findings below:
1. **Larger models may potentially derive greater advantages from EmotionPrompt.** Flan-T5-Large, the smallest model among our evaluated LLMs, yields the most modest Relative Gain of \(0.28\). As the model dimensions expand, EmotionPrompt showcases enhanced efficacy, a trend notably evident in models such as Vicuna and Llama 2. When the model size increases substantially, EmotionPrompt continues to demonstrate commendable performance, as seen with ChatGPT and GPT-4. It is pertinent to emphasize that a relatively subdued Relative Gain in these models does not necessarily indicate the inefficacy of EmotionPrompt. A plausible interpretation is that these larger models, namely ChatGPT, BLOOM, and GPT-4, inherently possess a high baseline performance, making incremental enhancements more challenging to achieve.
2. **Pre-training strategies, including supervised fine-tuning and reinforcement learning, exert discernible effects on EmotionPrompt.** A case in point is Vicuna and Llama 2, which share identical model scales and architectures. Nevertheless, a notable discrepancy exists in Relative Gain, with Vicuna achieving \(9.58\) whereas Llama 2 attains a score of \(6.00\).
#### 3.4.2 Inference settings
To explore the effect of the temperature setting on EmotionPrompt, we conduct an experiment on \(8\) tasks from Instruction Induction [22] at \(5\) temperatures on \(6\) LLMs. Note that we do not report Vicuna and Llama 2 results at temperature \(0.0\) because they either do not support this setting or produce invalid results. Fig. 11 shows the results and our findings are listed below:
1. **When the temperature grows, Relative Gain gets larger.** As shown in the graphs for Llama 2, ChatGPT, GPT-4 and Flan-T5-Large, there is a noticeable expansion of the gap between the two curves as the temperature increases. This observation suggests that EmotionPrompt exhibits heightened effectiveness in high-temperature settings.
2. **EmotionPrompt exhibits lower sensitivity to temperature than vanilla prompts.** Comparing the two curves in each subgraph, the blue line (representing EmotionPrompt) is flatter than the orange line (representing vanilla prompts). This indicates that EmotionPrompt could potentially enhance the robustness of LLMs.
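A sketch of the sweep described above is given below; `generate` and `evaluate` are hypothetical stand-ins for the model API call and the task metric, and the Relative Gain is simplified here to a difference of mean scores.

```python
# Sketch of the temperature sweep behind Fig. 11, with hypothetical helpers.
temperatures = [0.0, 0.25, 0.5, 0.75, 1.0]

def relative_gain(tasks, generate, evaluate, temperature):
    """Difference of mean scores between EmotionPrompt and vanilla prompts."""
    vanilla, emotion = [], []
    for task in tasks:
        vanilla.append(evaluate(task, generate(task.prompt, temperature)))
        emotion.append(evaluate(task, generate(task.prompt + " " + task.stimulus,
                                               temperature)))
    n = len(tasks)
    return sum(emotion) / n - sum(vanilla) / n

# gains = {T: relative_gain(tasks, generate, evaluate, T) for T in temperatures}
```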
## 4 Conclusion
Large language models are demonstrating unprecedented performance across various applications. This paper conducted the very first study in evaluating and analyzing how LLMs
understand and can be enhanced by emotional intelligence, a critical attribute of human beings. We designed EmotionPrompt for such analysis. Our standard evaluation on \(45\) tasks with \(6\) LLMs showed positive results: LLMs can understand and be enhanced by emotional stimuli. Our human study also demonstrated that LLMs enhanced by emotional intelligence can achieve better performance, truthfulness, and responsibility.
Moving forward, we do see a lot of open questions and opportunities lying at the intersection of LLMs and psychology. First, even though we present some attention visualization in this paper to understand why EmotionPrompt succeeds, more work should be done at the fundamental level of psychology and model training, such as how pre-training techniques influence performance under emotional stimuli and how to improve performance by incorporating psychological phenomena into pre-training. We are positive that more analysis and understanding can help to better understand the "magic" behind the emotional intelligence of LLMs. Second, while this paper concludes that LLMs can understand and be enhanced by emotional intelligence, this, in fact, conflicts with existing studies on human emotional intelligence. Existing psychological studies suggest that human behavior or attitude can be influenced by emotions, but reasoning or cognitive abilities cannot simply be enhanced by adding emotional stimuli. The reason behind this divergence remains unclear, and we leave it to future work to identify the actual differences between human and LLM emotional intelligence.
|
2303.03693 | Kerker Transform: Expanding Fields in a Discrete Basis of Directional
Harmonics | We present a linear coordinate transform to expand the solution of scattering
and emission problems into a basis of forward and backward directional vector
harmonics. The transform provides intuitive algebraic and geometric
interpretations of systems with directional scattering/emission across a broad
range of wavelength-to-size ratios. The Kerker, generalized Kerker, and
transverse Kerker effect as well as other forms of highly directional
scattering/emission are easily understood through open and closed loop contours
in the complex plane. Furthermore, the theoretical maximum directivity of any
scattering/emissive system is easily defined. The transformed far field
harmonics have coordinates that are polar-angle invariant, interference between
forward and backward harmonics weakly interact, and interference of same type
harmonics alters directivity. Examples of highly directional scattering are
presented including a Kerker scattering magnetic sphere, a directional
scattering photonic nanojet, both under plane wave illumination, as well as
generalized backward Kerker and transverse Kerker emission from sub-wavelength
spheres that are near-field coupled to emitters. Solutions of
scattering/emission under the Kerker transform are contrasted to the
traditional Mie expansion for comparison. | Parker R. Wray, Harry A. Atwater | 2023-03-07T07:18:14Z | http://arxiv.org/abs/2303.03693v1 | # Kerker Transform: Expanding Fields in a Discrete Basis of Directional Harmonics
###### Abstract
We present a linear coordinate transform to expand the solution of scattering and emission problems into a basis of forward and backward directional vector harmonics. The transform provides intuitive algebraic and geometric interpretations of systems with directional scattering/emission across a broad range of wavelength-to-size ratios. The Kerker, generalized Kerker, and transverse Kerker effect as well as other forms of highly directional scattering/emission are easily understood through open and closed loop contours in the complex plane. Furthermore, the theoretical maximum directivity of any scattering/emissive system is easily defined. The transformed far field harmonics have coordinates that are polar-angle invariant, interference between forward and backward harmonics weakly interact, and interference of same type harmonics alters directivity. Examples of highly directional scattering are presented including a Kerker scattering magnetic sphere, a directional scattering photonic nanojet, both under plane wave illumination, as well as generalized backward Kerker and transverse Kerker emission from sub-wavelength spheres that are near-field coupled to emitters. Solutions of scattering/emission under the Kerker transform are contrasted to the traditional Mie expansion for comparison.
## Introduction
The spherical harmonics are a set of fundamental modes of vibration on the sphere that provide valuable insights for understanding the scattering/emission of electromagnetic waves from particles [1, 2, 3]. Though these harmonics are invaluable to our understanding of scattering/emission from wavelength-sized objects, their atom-like spatial profile does not create a simple representation for describing phenomena such as angular momentum or directional scattering/emission. Fortunately, a linear transform of the spherical harmonics offers an insightful and mathematically simple method to study optical spin and orbital angular momentum [4, 5]. In this manuscript we show that, by a different but equally simple linear transform, the spherical harmonics can also give an intuitive basis to describe strongly directional scattering/emission. The resulting basis therefore provides a framework to study highly directional phenomena, which is of great value in many subjects in electromagnetics [6, 16], while maintaining many of the beneficial properties that have popularized the spherical harmonics.
Under the Mie (vector spherical harmonic) framework, the Kerker effect is a method of achieving large forward-to-backward scattering/emission ratios through near exact cancelation of either the forward or backward intensity. This is achieved through the precise interference of same order electric (\(\mathbf{\Psi}_{nmp}^{E}\)) and magnetic-type (\(\mathbf{\Psi}_{nmp}^{M}\)) atom-like modes, where \(n\) is the polar quantum number, \(m\) is the azimuthal quantum number, and \(p\) (0 = even and 1 = odd) is the azimuthal parity [16]. The case where generalized combinations of harmonics lead to (near) exact forward or backward cancelation is categorized as a generalized Kerker effect [17, 13]. This requires no restriction on relative amplitude or phase between modes, only that they collectively interfere for exact cancelation in one direction. Cancelation in both directions is termed the transverse Kerker effect [12]. These conditions of cancelation in the exact forward or backward direction are formalized as the null points in the expression of exact forward (\(\theta=0\)) or backward (\(\theta=\pi\)) power flow [18, 19]. In principle, there are an infinite number of these null solutions. Besides a select set of simple examples (e.g., \(c_{n1p}^{M}=c_{n1p}^{E}\)), these solutions can be hard to intuit as they come from a quadratic polynomial in the \(n,m,p,t\) terms. Furthermore, directional scattering is a more holistic concept that need not invoke Kerker's conditions. It encompasses other metrics, such as directivity and side lobe behavior. The null conditions of the exact forward and backward power flow do not provide insight into these properties. Many works have shown that other combinations of interference not satisfying the null conditions (e.g., not invoking a Kerker effect) can give highly directional scattering/emission. For example, photonic nanojets typically have a large number of side lobes with dominant and highly directive forward power flow, but the backward power flow does not approach zero and can still be non-negligible [14]. Therefore, it is important to note that though the Kerker transform is named after the inspiring work of Milton Kerker, the Kerker basis is intended to efficiently represent all highly directional scattering/emission, not just the Kerker conditions. The basis is also intended to provide insight into metrics describing directionality such as directivity and side lobes.
To illustrate the general complexity of directional scattering in the Mie basis, consider two systems
\[\begin{array}{ll}\text{\emph{System 1:}}&-2a\mathbf{\psi}_{111}^{E}-2ib\mathbf{\psi}_{210}^{E}+c\mathbf{\psi}_{311}^{E}-ic\mathbf{\psi}_{310}^{M}\\ \text{\emph{System 2:}}&-2a\mathbf{\psi}_{111}^{E}-2b\mathbf{\psi}_{210}^{M}+c\mathbf{\psi}_{311}^{E}-ic\mathbf{\psi}_{310}^{M},\end{array}\]
where \(a,b,c\in\mathbb{Z}^{+}\ll\infty\). Are these systems directional? Which has the larger forward-to-backward ratio? What is occurring with respect to the side lobes? Can either system be Kerker, generalized Kerker, or transverse Kerker? What can we infer about directivity? How do the answers to these questions depend on the choice of \(a\), \(b\), and \(c\)? These questions become more complex with the introduction of phase and as more harmonics are considered. The difficulty stems fundamentally from the Mie harmonics: they are designed to provide intuition about atom-like behavior, not directionality. To add further complication, many examples of directional scattering occur from particles which straddle the wave and ray-optic regimes, e.g., photonic nanojets. When inclusions have around 2 appreciable harmonics (e.g., small inclusions), the conditions of harmonic interference giving rise to directionality are straightforward. In the limit of a very large number of harmonics (e.g., inclusions much larger than the wavelength), direct harmonic analysis is infeasible, and directionality is understood through a ray-optic approximation. Between the two regimes (2 - 50 harmonics), ray optics may not be accurate and wave optics not intuitive (e.g., the 4-harmonic systems proposed above).
The parameter space to achieve directional scattering/emission is vast. Inspired by the Kerker effect, we propose a linear coordinate transform which seeks to provide a more intuitive basis for analyzing directional scattering/emission across all size regimes where wave optics is computationally viable, while still maintaining the useful properties which have made the spherical vector harmonics indispensable. The linear transform, termed the Kerker transform, and the resulting basis, termed the Kerker basis, are composed of forward and backward-type harmonics constructed from the Mie harmonics. The Kerker harmonics have the useful properties that forward and backward-type harmonics weakly interact with each other, and interference of same type harmonics is designed to control directivity and side lobes in the respective direction. The algebraic conditions for directional scattering under this framework are found to be simple to understand and to have an intuitive geometric interpretation in the complex plane based on open and closed contours. Notably, the conditions for Kerker scattering, transverse Kerker scattering (simultaneous suppression of both forward and backward intensity), and generalized Kerker scattering are easily conceptualized. The condition for theoretically maximal directivity is also easily conceptualized.
The difference between the Kerker and Mie harmonic expansions are summarized as:
_The Kerker framework easily represents directional scattering, while atom-like scattering arises from complicated interference._
_In the Mie framework, atom-like scattering is easily represented, while directional scattering arises from complicated interference._
The remainder of this article is comprised of three sections. In the first section the Mie expansion is briefly reviewed and the Kerker expansion is presented. In the second section, features of the Kerker expansion are studied in detail in the context of electromagnetic fields. Areas where the Kerker basis provides new beneficial insights for directional systems are emphasized, discussed, and contrasted to the Mie basis. The last section presents case studies of Kerker, transverse Kerker, generalized Kerker, and general highly directional scattering/emitting systems, comparing the solutions in both the Kerker and Mie basis. Both scattering and emissive systems across a wide size-to-wavelength regime are discussed. We also provide an answer to our illustrative questions for _System 1_ and _System 2_, posed above. Finally, we conclude with a summary of the results.
## 2 Defining the Kerker transform
Mie theory expands outward propagating scattered or emitted electromagnetic fields in terms of electric (\(\mathbf{\psi}_{nmp}^{E}\)) and magnetic-type (\(\mathbf{\psi}_{nmp}^{M}\)) spherical vector harmonics. As the names suggest, electric-type harmonics mimic electric atom-like multipole patterns in the far field, whereas magnetic-type harmonics mimic magnetic atom-like multipole field patterns. Time-harmonic electric and magnetic fields in the frequency domain are expanded under the Mie framework as
\[\begin{array}{l}\mathbf{E}=\sum_{n=1}^{\infty}\sum_{m=0}^{n}\sum_{p=0}^{1}\left(c_{nmp}^{M}\mathbf{\psi}_{nmp}^{M}(\mathbf{r},k)+c_{nmp}^{E}\mathbf{\psi}_{nmp}^{E}(\mathbf{r},k)\right)\\ \mathbf{H}=\frac{-ik}{\mu\omega}\sum_{n=1}^{\infty}\sum_{m=0}^{n}\sum_{p=0}^{1}\left(c_{nmp}^{E}\mathbf{\psi}_{nmp}^{M}(\mathbf{r},k)+c_{nmp}^{M}\mathbf{\psi}_{nmp}^{E}(\mathbf{r},k)\right),\end{array} \tag{1}\]
where \(c_{nmp}^{E}\) and \(c_{nmp}^{M}\) are the complex electric and magnetic-type scattering coefficients, respectively. Though the coefficients are termed electric and magnetic-type, both coefficients actually scale either electric or magnetic-type harmonics depending on whether one is viewing the electric or magnetic field. (E.g., in the magnetic field, the electric-type coefficient scales the magnetic-type harmonic.) This is because \(\mathbf{H}=\left(\frac{-i}{\omega\mu}\right)\mathbf{\nabla}\times\mathbf{E}\) and the vector spherical harmonics change type under the curl operator: \(\mathbf{\nabla}\times\mathbf{\Psi}_{nmp}^{t}(\mathbf{r},k)=k\mathbf{\Psi}_{nmp}^{1-t}(\mathbf{r},k)\), where \(t=0=M\) and \(t=1=E\). The vector spherical harmonics are constructed from the scalar spherical harmonics and the spherical Bessel functions, as detailed in the appendix. All expansions in this text are written in a form general enough to represent any feasible electromagnetic field distribution in a linear, isotropic, and homogeneous host medium, with permeability, \(\mu\), and permittivity, \(\epsilon\). All fields are assumed time harmonic with angular frequency, \(\omega\). Arbitrary time pulses are then generated through Fourier transformation. The harmonic time dependence is implied and not written explicitly. All bold variables are vectors in \(\mathbb{C}^{3}\) or \(\mathbb{R}^{3}\), depending on the physical context, under the standard spherical basis \(\mathbf{\hat{e}}_{r}\), \(\mathbf{\hat{e}}_{\theta}\), and \(\mathbf{\hat{e}}_{\phi}\). Spatial positions are denoted by \(\mathbf{r}=\left(r\,\mathbf{\hat{e}}_{r}+\phi\,\mathbf{\hat{e}}_{\phi}+\theta\,\mathbf{\hat{e}}_{\theta}\right)\), where \(r\), \(\phi\), and \(\theta\) are the radial, azimuthal, and polar coordinates, respectively. The wavenumber of the host medium is \(k^{2}=\omega^{2}\epsilon\mu\). In both the Mie and Kerker expansions we adopt the convenient approach of assigning a type variable, \(t\in[0,1]\), and a parity variable, \(p\in[0,1]\), to write equations in compact form when applicable. Therefore, \(1-t\) is equivalent to flipping the harmonic type and \(1-p\) flips parity. This compact form helps to illuminate fundamental differences between the Mie and Kerker basis systems, which we feel is a critically important concept to convey in a first introduction to the Kerker transform. Correspondingly, we also use the cosine and sine expansion of the Mie harmonics (hence the parity variable and \(m\geq 0\)), because this best illuminates differences between the Mie and Kerker expansions. We note, though, that the complex azimuthal representation would elegantly simplify many of the analytic expressions discussed below. For this reason we encourage the reader to write out the expansions for each type and parity and also in the complex azimuthal form. Doing so will illuminate the simplicity of the Kerker transform, which is somewhat obscured by the chosen notation. Throughout the text, the summation bounds for \(n\), \(m\), \(p\), and \(t\) are the same as the bounds in equation 1. We omit writing summation bounds explicitly and instead use the shorthand \(\sum_{nmp}\).
The Kerker basis expands the electric and magnetic fields in terms of highly directional forward (\(\mathbf{Y}_{nmp}^{f}\)) and backward-type (\(\mathbf{Y}_{nmp}^{b}\)) harmonics. The field expansions under the Kerker framework are
\[\begin{array}{l}\mathbf{E}=\sum_{nmp}\left(c_{nmp}^{f}\mathbf{Y}_{nmp}^{f}(\mathbf{r},k)+c_{nmp}^{b}\mathbf{Y}_{nmp}^{b}(\mathbf{r},k)\right)\\ \mathbf{H}=\frac{k}{\mu\omega}\sum_{nmp}(-1)^{p}\left(c_{nmp}^{f}\mathbf{Y}_{nm1-p}^{f}(\mathbf{r},k)-c_{nmp}^{b}\mathbf{Y}_{nm1-p}^{b}(\mathbf{r},k)\right),\end{array} \tag{2}\]
where \(c_{nmp}^{f}\) and \(c_{nmp}^{b}\) are the complex forward and backward scattering coefficients, respectively. Unlike the Mie expansion, the Kerker coefficients of one type multiply only the harmonics of the same type. (E.g., in both the electric and magnetic field, the forward coefficient always multiplies the forward harmonic.) This is because the Kerker harmonics do not alter type under the curl operation: \(\mathbf{\nabla}\times\mathbf{Y}_{nmp}^{t}(\mathbf{r},k)=(-1)^{t-p}ik\mathbf{Y}_{nm1-p}^{t}(\mathbf{r},k)\), where \(t=0=f\) and \(t=1=b\). Instead, the curl induces a parity change of the Kerker harmonic, and changing parity is equivalent to an azimuthal phase shift, \(\mathbf{Y}_{nmp}^{t}(r,\theta,\phi,k)=\mathbf{Y}_{nm1-p}^{t}\left(r,\theta,\phi+\frac{\pi}{2},k\right)\). This type preservation under the curl operation is an important component to simplifying analytic expressions in directional systems. The Kerker harmonics are related to the vector spherical harmonics through the linear transform
\[\mathbf{Y}_{nmp}^{t}(\mathbf{r},k)=(-1)^{t(n+m+1)}(i)^{n}\left(\mathbf{\Psi}_{nmp}^{M}(\mathbf{r},k)+(-1)^{t-p}i\mathbf{\Psi}_{nm1-p}^{E}(\mathbf{r},k)\right). \tag{3}\]
A complete component-wise expansion of the Kerker harmonics is shown in the appendix. The Kerker coefficients are related to the Mie coefficients through
\[c_{nmp}^{t}=\tfrac{1}{2}(-1)^{t(n-m-1)}(-i)^{n}\big{(}c_{nmp}^{M}+(-1)^{1-t-p}ic_{nm1-p}^{E}\big{)}, \tag{4}\]
where \(t=0=f\) and \(t=1=b\). Therefore, electromagnetic scattering/emission problems can be solved in either the Kerker or Mie basis, then subsequently transformed into the other basis when beneficial for analysis.
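As a minimal numerical sketch of equation 4, the routine below maps a set of Mie coefficients, stored by their \((n,m,p)\) indices, to the corresponding forward and backward Kerker coefficients; the dictionary layout is an implementation choice, not part of the theory.

```python
import numpy as np

def mie_to_kerker(c_M, c_E):
    """c_M, c_E: dict[(n, m, p)] -> complex Mie coefficients.
    Returns Kerker coefficients, keyed by t = 0 (forward) and t = 1 (backward)."""
    c_kerker = {0: {}, 1: {}}
    for (n, m, p) in c_M:
        for t in (0, 1):
            c_kerker[t][(n, m, p)] = 0.5 * (-1) ** (t * (n - m - 1)) * (-1j) ** n * (
                c_M[(n, m, p)] + (-1) ** (1 - t - p) * 1j * c_E.get((n, m, 1 - p), 0)
            )
    return c_kerker
```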
It is important to note that though the Kerker harmonics are formed from a superposition of electric and magnetic-type Mie harmonics, this basis is not equivalent to the transform used to study angular momenta. The Kerker basis does not preserve circular polarization or helicity (e.g., it is not equivalent to \(\mathbf{\Psi}_{nmp}^{M}\pm\mathbf{\Psi}_{nmp}^{E}\)). The Kerker harmonics are not eigenvectors of the orbital angular momentum operator \(\left(\tfrac{1}{k}\mathbf{\nabla}\times\right)\), as evident by the change in parity under the curl discussed above. With that said, multiple reports have studied the connection between Kerker's conditions and helicity preservation that exists in suitably rotationally symmetric scattering/emissive systems [5, 20]. Such connections can also be studied in the Kerker basis by forming Kerker harmonics that also preserve handedness. This is achieved through a linear transform of the regular Kerker harmonics to incorporate both parities, e.g., \(\mathbf{Q}_{nmh}^{t}(\mathbf{r},k)=\mathbf{Y}_{nm0}^{t}(\mathbf{r},k)+(-1)^{t-h}i\mathbf{Y}_{nm1}^{t}(\mathbf{r},k)\), where \(h=0=L\) for left-handed polarization and \(h=1=R\) for right-handed polarization. This basis is an eigenvector of the angular momentum operator as \(\mathbf{\nabla}\times\mathbf{Q}^{t}_{nmh}=(-1)^{h}k\mathbf{Q}^{t}_{nmh}\). Therefore, the Kerker harmonics can be used to efficiently understand directional scattering in systems with and without angular momentum preservation.
## Features of the Kerker Transform
A primary benefit of the Kerker basis is having a simplified expression in the far field compared to the Mie harmonics. The far field Kerker harmonics are
\[\mathbf{Y}^{t}_{far,nmp}(\mathbf{r},k)=i\frac{e^{ikr}}{kr}X^{t}_{nm}(\theta)\begin{bmatrix}0\,\hat{\mathbf{e}}_{r}\\ +\sin\left(m\phi-p\frac{\pi}{2}+t\pi\right)\hat{\mathbf{e}}_{\theta}\\ +\cos\left(m\phi-p\frac{\pi}{2}\right)\hat{\mathbf{e}}_{\phi}\end{bmatrix}+O\left\{\frac{1}{(kr)^{2}}\right\}, \tag{5}\]
where
\[X^{t}_{nm}(\theta)=\frac{(-1)^{t(n+m+1)}}{\sqrt{2n(n+1)}}\left(\tau^{m}_{n}(\theta)+(-1)^{t}m\pi^{m}_{n}(\theta)\right) \tag{6}\]
is a real valued function describing the forward, \(X^{f}_{nm}(\theta)\), and backward, \(X^{b}_{nm}(\theta)\), polar-angle dependence. Equation 5 shows that the vector components differ only by simple trigonometric relations. Equation 6 defines the relationship of the Kerker polar angle functions to the Mie polar angle functions, \(\tau^{m}_{n}(\theta)\) and \(m\pi^{m}_{n}(\theta)\). By construction, all vector components of the far field Kerker harmonics share the same polar-angle dependence. This polar-angle invariance is a key feature of the Kerker harmonics and allows us to focus on \(X^{t}_{nm}\) in order to understand directional properties. In contrast, the Mie expansion has a different polar-angle dependence for each vector component.
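A minimal numerical sketch of equation 6 is given below. It builds \(\tau_{n}^{m}\) and \(\pi_{n}^{m}\) from the associated Legendre function and checks the 180° rotation relation discussed in feature (1) below; the theta derivative is taken numerically for simplicity, and Legendre normalization and phase conventions may need adjusting to match a particular Mie implementation.

```python
import numpy as np
from scipy.special import lpmv

def X(t, n, m, theta):
    """Kerker polar angle function of equation 6. t = 0 (forward) or 1 (backward)."""
    Pnm = lpmv(m, n, np.cos(theta))
    tau = np.gradient(Pnm, theta)       # tau_n^m = d/d(theta) P_n^m(cos theta)
    pi_ = Pnm / np.sin(theta)           # pi_n^m  = P_n^m(cos theta) / sin(theta)
    pref = (-1) ** (t * (n + m + 1)) / np.sqrt(2 * n * (n + 1))
    return pref * (tau + (-1) ** t * m * pi_)

theta = np.linspace(1e-3, np.pi - 1e-3, 2001)
# Backward functions are the forward functions rotated by 180 degrees:
diff = np.max(np.abs(X(1, 3, 1, theta) - X(0, 3, 1, theta)[::-1]))
print(f"max |X^b(theta) - X^f(pi - theta)| = {diff:.2e}")  # small, numerical-derivative error
```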
Figure 1 plots both the Kerker (\(X^{t}_{nm}\)) and Mie (\(\tau^{m}_{n},m\pi^{m}_{n}\), upper quadrant) polar angle functions up to the quantum numbers \(n=4\) and \(m=4\).
From this figure we note three features of \(X^{t}_{nm}\), designed for convenience when studying directional systems:
(1) \(X^{t}_{nm}\) has a simple relation between the two types: the backward Kerker functions are the forward Kerker functions rotated 180°. (I.e., \(X^{t}_{nm}(\theta)=X^{1-t}_{nm}(\pi-\theta)\).) This is not just convenient to conceptualize, but also allows only one type to be calculated and stored in memory. In contrast, the Mie polar angle functions are not related through a rotation and have completely different shapes. This is because \(\pi_{n}^{m}\) is related to the associated Legendre polynomial and \(\tau_{n}^{m}\) is related to the derivative of the associated Legendre polynomial. Furthermore, 180° rotations of the Mie polar functions lead to different inversion parity relations that depend on the polar quantum numbers, i.e., \(\tau_{n}^{m}(\pi-\theta)=(-1)^{n-m+1}\tau_{n}^{m}(\theta)\) and \(\pi_{n}^{m}(\pi-\theta)=(-1)^{n-m}\pi_{n}^{m}(\theta)\). This further complicates deriving intuitive interference relations for the Mie harmonics because the sign of the lobes alters as a function of polar number and harmonic type. The Kerker functions need no such sign relation.
Figure 1: Table of the Kerker and Mie (upper corner) polar angle functions. Rows and columns designate polar and azimuthal quantum numbers, respectively. Forward and backward-type functions and the corresponding \(m\pi^{m}_{n}\) and \(\tau^{m}_{n}\) functions are on the right and left-hand side, respectively. The functions are plotted in polar coordinates where the polar angle is given by the key in the upper left. Positive and negative values of the radius are denoted by black and red lines, respectively. The columns encircled by dashed blue lines contain the polar functions that have nonzero values in the exact forward or backward direction.
(2) The Kerker polar functions are highly directional and have a clear notion of primary and side lobes. Therefore, the functions can easily represent directional fields. This is unlike the Mie counterpart, where there is no clear definition of side lobes. For each Kerker harmonic, the total number of lobes is given simply by \(n-m+1\) and the number of side lobes in the nondominant hemisphere is given by \(\operatorname{ceil}\left(\frac{n-m}{2}\right)\).§1 Furthermore, harmonics with larger polar quantum numbers have narrower beam widths for all lobes. Therefore, from knowing just the quantum numbers of a Kerker harmonic you can infer far field directionality, relative beam widths, and side lobes, including the side lobe concentration in both the forward and backward hemispheres for each harmonic. The Mie polar functions provide no such intuition, as they are not designed for this purpose. For example, contrast the field profiles of \(\pi_{3}^{1}\) and \(\tau_{3}^{1}\). The number of lobes, amplitude of lobes, width of lobes, and sign of the lobes of these Mie polar functions are all different.
(3) The Mie polar functions are neither individually orthogonal over the polar number nor orthogonal to each other. In contrast, the Kerker polar functions of the same type and azimuthal number are orthogonal over the domain: \(\int_{0}^{\pi}d\theta\,\sin(\theta)\,X_{nm}^{t}(\theta)X_{n^{\prime}m}^{t}(\theta)\).
Like Mie theory, the \(m=1\) column in figure 1 (circled in dashed blue) is the only column to have a nonzero exact forward or backward field. This column is particularly important and usually predominant on physical grounds. For example, symmetries of the scattering/emitting object, such as being of spherical shape, can preclude \(m\neq 1\). In the Kerker basis the primary lobes of the \(m=1\) column are exactly centered in either the forward (\(\theta=0\)) or backward (\(\theta=\pi\)) direction, with exactly no field in the opposite direction. We will show later that this attribute simplifies the analytic expression for calculating forward-to-backward ratios. With that said, the Kerker basis is a directional expansion for all \(m\geq 1\), as evident in figure 1. It can describe any arbitrary scattering/emissive system as an expansion of directional harmonics, given that the fields can be represented by a spherical harmonic expansion.§2
Footnote §1: Note that the domain of the polar angle functions is \(\theta\in[0,\pi]\) and lobes are counted within this angular region.
From the surface equivalence principle, the scattered/emitted field can also be represented as electric (\(\mathbf{J}=\hat{\mathbf{e}}_{r}\times\mathbf{E}\)) and magnetic (\(\mathbf{M}=-\hat{\mathbf{e}}_{r}\times\mathbf{H}\)) current densities. Such relations are of interest in applications such as near-to-far field transformation. The Kerker basis is paired to a corresponding basis of forward (\(\mathbf{j}_{nmp}^{f}\)) and backward-driving (\(\mathbf{j}_{nmp}^{b}\)) current densities, where \(\mathbf{j}_{nmp}^{t}(\mathbf{r},k)=\hat{\mathbf{e}}_{r}\times\mathbf{Y}_{nmp}^{t}(\mathbf{r},k)\). Given that
\[\mathbf{j}_{far,nmp}^{t}(\mathbf{r},k)=i\frac{e^{ikr}}{kr}X_{nm}^{t}(\theta)\begin{bmatrix}0\,\hat{\mathbf{e}}_{r}\\ -\cos\left(m\phi-p\,\frac{\pi}{2}\right)\hat{\mathbf{e}}_{\theta}\\ \sin\left(m\phi-p\,\frac{\pi}{2}+t\pi\right)\hat{\mathbf{e}}_{\phi}\end{bmatrix}+O\left\{\frac{1}{(kr)^{2}}\right\}, \tag{7}\]
we find that all beneficial properties of the Kerker far field harmonics also equally apply to the Kerker far field current densities. In particular, the \(X_{nm}^{t}\) dependence is unchanged. This provides a method to understand what current distributions give rise to directional scattering in the far field.
Using the far field Kerker basis, the time-averaged far field Poynting vector is
\[\langle\mathbf{S}_{far}\rangle=\tfrac{1}{2}\tfrac{1}{\mu\omega kr^{2}}\left(\|A(\phi,\theta)\|^{2}+\|B(\phi,\theta)\|^{2}\right)\hat{\mathbf{e}}_{r}, \tag{8}\]
where
\[\begin{array}{rcl}A(\theta,\phi)&=&\sum_{nmp}\cos\left(m\phi-p\,\frac{\pi}{2}\right)\left(c_{nmp}^{f}X_{nm}^{f}(\theta)+c_{nmp}^{b}X_{nm}^{b}(\theta)\right)\\ B(\theta,\phi)&=&\sum_{nmp}\sin\left(m\phi-p\,\frac{\pi}{2}\right)\left(c_{nmp}^{f}X_{nm}^{f}(\theta)-c_{nmp}^{b}X_{nm}^{b}(\theta)\right).\end{array} \tag{9}\]
The \(\left(c^{f}_{nmp}X^{f}_{nm}(\theta)\pm c^{b}_{nmp}X^{b}_{nm}(\theta)\right)\) terms in equation 9 show that it is instructive to understand how the forward and backward angle functions interfere with each other. Luckily, the polar angle functions are designed to concentrate energy in their respective dominant hemisphere. Therefore, interference between forward and backward harmonics is weak. Alternatively stated, primary lobes of one type (forward/backward) interact only with side lobes of the other type (backward/forward). Figure 2 illustrates this concept. From this figure we see that weak interaction enables a convenient method to intuit the interference between modes of different type. They can be viewed as _approximately_ noninteracting in their respective dominant hemisphere.§3 This provides a rule-of-thumb for approximating harmonic interference in complicated systems. In contrast, the Mie harmonics have strong interference between the electric and magnetic-types and the resulting scattering/emission lobes have no rule-of-thumb behavior.
Footnote §3: Since side lobes are concentrated closer to the horizontal (\(\theta=\pi/2\)), interactions between forward and backward harmonics are more pronounced near this angular region.
The right most example in figure 2 shows the most general form of interference between polar angle functions of different type and quantum numbers. The left and center examples in figure 2 show interference of opposite type polar angle functions with the same quantum numbers. These two cases are important because they represent the inverse transform that recovers the Mie angular functions. This result can be seen by rearranging equation 6 to show that \(X^{f}_{nm}(\theta)+(-1)^{n+m+1}X^{b}_{nm}(\theta)=2\tau^{m}_{n}(\cos\theta)\) and \(X^{f}_{nm}(\theta)-(-1)^{n+m+1}X^{b}_{nm}(\theta)=2m\pi^{m}_{n}(\cos\theta)\). More generally, atom-like fields are achieved in the Kerker basis through interference that gives rise to the inverse Kerker transform:
\[\mathbf{\Psi}^{t}_{nmp}(\mathbf{r})=(-1)^{t(1-p)}i^{-(n+t)}\tfrac{1}{2}\left(\mathbf{Y}^{f}_{nm\,t-p}(\mathbf{r})-(-1)^{n+m+t}\mathbf{Y}^{b}_{nm\,t-p}(\mathbf{r})\right) \tag{10}\]
where, again, \(\mathbf{\Psi}^{t}_{nmp}(\mathbf{r})\) are the Mie vector harmonics with \(t=0=M\) and \(t=1=E\). Equation 10 and equation 3 formalize our italicized summary in the introduction. Directional fields require complicated interference in the Mie expansion and atom-like fields require complicated interference in the Kerker expansion.
From the far field Poynting vector, the far field intensity is defined as \(I(\theta,\phi)=\langle\mathbf{S}_{far}\rangle\cdot r^{2}\hat{\mathbf{e}}_{r}\). Integrating this intensity over the azimuthal direction gives
\[I(\theta)=I^{f}_{0}+I^{f}_{1}+I^{b}_{0}+I^{b}_{1}=\tfrac{\pi}{\mu\omega k}\sum_{tmp}(1+\delta_{m0})\big{\|}\sum_{n}c^{t}_{nmp}\,X^{t}_{nm}(\theta)\big{\|}^{2}, \tag{11}\]
where the \((1+\delta_{m0})\) term comes from the fact that \(c^{M}_{n01}=c^{E}_{n01}=0\). Interestingly, equation 11 shows that the azimuthally integrated intensity is truly not dependent on interference between the forward and backward harmonics or on interference between different parities. This is, again, another useful feature of the Kerker harmonics. The total azimuthally integrated intensity can be viewed as resulting from four noninteracting partial fields, each with intensity \(I^{t}_{p}(\theta)=\tfrac{\pi}{\mu\omega k}\sum_{m}(1+\delta_{m0})\big{\|}\sum_{n}c^{t}_{nmp}\,X^{t}_{nm}(\theta)\big{\|}^{2}\). For any given polar angle, these partial intensities have the geometric interpretation of being a sum of squared distances from the origin, each resulting from the tip-to-tail coherent addition of \(\sum_{n}c^{t}_{nmp}X^{t}_{nm}(\theta)\) in the complex plane. This result allows for an intuitive geometric interpretation of directional scattering, which will become more evident later in this section. It is also worth noting that four partial fields represent the most general form. In systems with symmetry, such as plane wave or dipole excitation of a sphere, one forward and one backward partial field completely describe the system. Furthermore, it is often the case that only the \(m=1\) terms are appreciable. Therefore, it is common that multiple simplifications to equation 11 are applicable.
Figure 2: Examples of interference between opposite type Kerker polar angle functions. The color convention and angle orientation follow the definition from figure 1. Therefore, the top and bottom rows correspond to \(X^{f}_{nm}-X^{b}_{nm}\) and \(X^{f}_{nm}+X^{b}_{nm}\) interference, respectively. The shaded region highlights the non-dominant hemisphere for each function. The left and middle examples show how the Mie functions can be recovered, while the right example is a more general interference between different polar quantum numbers. The functions are plotted in polar coordinates where the polar angle is given by the key in the upper left. Positive and negative values of the radius are denoted by black and red lines, respectively.
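As a numerical companion to equation 11, the sketch below evaluates the four partial intensities, reusing the \(X(t,n,m,\theta)\) helper from the earlier polar-function sketch and dropping the common prefactor \(\pi/(\mu\omega k)\) since only relative angular profiles are of interest here.

```python
import numpy as np

def partial_intensity(c_t, t, p, theta):
    """Partial field intensity I^t_p(theta) of equation 11 (prefactor dropped).
    c_t: dict[(n, m, p)] -> complex Kerker coefficients of type t."""
    modes = {}                                   # group coefficients by m
    for (n, m, pp), c in c_t.items():
        if pp == p:
            modes.setdefault(m, []).append((n, c))
    I = np.zeros_like(theta)
    for m, terms in modes.items():
        amp = sum(c * X(t, n, m, theta) for n, c in terms)
        I += (1 + (m == 0)) * np.abs(amp) ** 2
    return I

def total_intensity(c_f, c_b, theta):
    """Azimuthally integrated intensity as the sum of the four partial fields."""
    return sum(partial_intensity(c, t, p, theta)
               for t, c in ((0, c_f), (1, c_b)) for p in (0, 1))
```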
Equations 8 and 11 highlight the importance of understanding interference between Kerker polar angle functions of the same type. Figure 3 defines this relationship for the forward polar angle functions. We omit examples of the backward functions because, unlike the Mie harmonics, the inversion symmetry implies the results are the same, just rotated 180°. Figure 3 shows that constructive interference of same type harmonics results in an increased primary lobe and an overall more directive far field. Likewise, destructive interference decreases the primary lobe and reduces directivity. This provides an intuitive interference relationship to identify directive systems: adding coefficients of the same type increases directivity, while subtraction reduces directivity. This condition can be easily generalized to complex valued coefficients, giving an intuitive geometric interpretation based on coefficients as vectors in the complex plane. From this picture, same type Kerker coefficients pointing in a similar direction will increase directivity. Coefficients pointing in opposite directions will decrease the directivity.
In the exact forward and backward directions, the Kerker polar angle functions are designed to take the simple form
\[\begin{array}{ll}X^{f}_{nm}(\theta=0)=\tfrac{1}{2}K_{n}\delta_{m1}&X^{f}_{nm}(\theta=\pi)=0\\ X^{b}_{nm}(\theta=0)=0&X^{b}_{nm}(\theta=\pi)=\tfrac{1}{2}K_{n}\delta_{m1},\end{array} \tag{12}\]
where \(K_{n}=\sqrt{(2n+1)}\). Equation 12 formalizes a property of the Kerker harmonics that can be inferred from figure 1. The Kerker harmonics have an exact forward or backward lobe only for the \(m=1\) quantum number. Furthermore, these functions have exactly zero field in the opposite direction. Therefore, there is always complete noninteraction between forward and backward harmonics in the exact forward and backward directions. This property enables a simplified and geometrically intuitive expression for calculating exact forward and backward intensities and forward-to-backward ratios.
The far field intensity in the exact forward and backward directions is
\[I(\theta=0)=\frac{\pi}{2\mu k\omega}\left(\Sigma_{p}\left\| \sum_{n}K_{n}c^{f}_{n1p}\right\|^{2}\right) \tag{13}\] \[I(\theta=\pi)=\frac{\pi}{2\mu k\omega}\left(\Sigma_{p}\left\| \sum_{n}K_{n}c^{b}_{n1p}\right\|^{2}\right).\]
Equation 13 shows the exact forward and backward intensity can be understood geometrically as the magnitude of the vector that results from coherently adding scaled forward and backward coefficients in the complex plane, \(\sum_{n}K_{n}c^{t}_{n1p}\). The forward-to-backward ratio is then the ratio of the lengths of these vectors. This provides a useful geometric connection between the Kerker coefficients and the resulting forward and backward intensity. When vectors added together approach a closed loop, there is weak scattering/emission in that direction. Equation 13 is a specific example of equation 11, for the important case where \(\theta=0\) or \(\pi\).§4 Under this condition, the scaling factor \(X^{t}_{nm}\) takes the simplified form given by equation 12. For an arbitrary direction, the same vector addition rules apply but the scaling factors are based on equation 11.
Figure 3: Examples of interference between same type Kerker polar angle functions. The left example is a combination of same parity polar numbers when \(m=1\). The right example is a combination of opposite parity polar numbers for azimuthal numbers that do not have exact forward scattering/emission (\(m\neq 1\)).
Footnote §4: Like equation 11, equation 13 is written in the most general form for any arbitrary system. If the scattering/emissive system has the proper symmetry, equation 13 can be simplified such that only one parity for each type is necessary. Other simplifications such as reduced azimuthal or polar orders can also apply.
Equation 13 provides intuitive geometric conditions to understand the Kerker effects. Forward or backward Kerker scattering can now be viewed as the special case when either all \(c^{b}_{n1p}\)'s or all \(c^{f}_{n1p}\)'s are zero, respectively. E.g., a forward Kerker scattering object will have no backward Kerker coefficients, \(c^{b}_{n1p}\). This property is why the coefficients are termed "Kerker coefficients". Generalized forward or backward Kerker scattering can also be understood as occurring when either \(\sum_{n}K_{n}c^{b}_{n1p}\) or \(\sum_{n}K_{n}c^{f}_{n1p}\) is zero, respectively. This corresponds to vectors of one type that, when added head-to-tail, form a closed loop in the complex plane. The transverse Kerker effect occurs when the vectors of both types form a closed loop, i.e., \(\sum_{n}K_{n}c^{b}_{n1p}\) and \(\sum_{n}K_{n}c^{f}_{n1p}\) are both zero. Forms of directional scattering which do not obtain identically zero forward or backward fields are identified by comparing the length of the coherently added forward vectors versus the backward vectors, i.e., the forward-to-backward ratio is the ratio of the lengths of the total forward and total backward vectors. Note that these conditions apply for all relevant parities used to describe the field. Figure 4 gives a schematic of the geometric representations of different types of directional scattering based on the Kerker coefficients. These are substantially easier interpretations compared to the complex modal interference relationships necessary to satisfy these conditions in the Mie framework. This will be further discussed through examples in the next section.
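The on-axis intensities of equation 13 and the closed-loop tests of Figure 4 translate directly into a few lines of code. The sketch below assumes the same \((n,m,p)\)-keyed coefficient dictionaries as the earlier snippets and drops the common prefactor \(\pi/(2\mu k\omega)\).

```python
import numpy as np

def on_axis_intensity(c_t):
    """Relative on-axis intensity of equation 13 for one coefficient type."""
    return sum(abs(sum(np.sqrt(2 * n + 1) * c
                       for (n, m, pp), c in c_t.items() if m == 1 and pp == p)) ** 2
               for p in (0, 1))

def directional_summary(c_f, c_b, tol=1e-9):
    I_fwd, I_bwd = on_axis_intensity(c_f), on_axis_intensity(c_b)
    return {
        "forward_to_backward": I_fwd / I_bwd if I_bwd > tol else np.inf,
        "generalized_forward_kerker": I_bwd < tol,   # backward loop closes
        "generalized_backward_kerker": I_fwd < tol,  # forward loop closes
        "transverse_kerker": I_fwd < tol and I_bwd < tol,
    }
```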
Finally, it is instructive to consider the expression for total power flow and directivity under the Kerker expansion. Like the vector spherical harmonics, the Kerker harmonics are orthogonal on the sphere. Therefore, the total scattered/emitted power is
\[W_{scatter/emit}=W^{f}_{0}+W^{f}_{1}+W^{b}_{0}+W^{b}_{1}=\frac{\pi}{2\mu\omega k}\sum_{nmpt}(1+\delta_{m0})\left\|c^{t}_{nmp}\right\|^{2}, \tag{14}\]
where the total power is composed of the forward and backward partial powers, \(W^{t}_{p}=\frac{\pi}{2\mu\omega k}\sum_{nm}(1+\delta_{m0})\left\|c^{t}_{nmp}\right\|^{2}\). Unlike the Mie harmonics, which distribute the total power into electric and magnetic-type excitations, equation 14 shows the Kerker harmonics distribute power into forward and backward-type excitations. This helps give intuition on the fraction of the total power concentrated into a particular hemisphere. This fraction can also be understood geometrically, where each partial power (and therefore the total power) is given by the arclength of the scattering coefficients added head-to-tail in the complex plane. I.e., a longer arclength means a larger proportion of the total power is concentrated into that harmonic type.
Figure 4: Schematics of different types of directional scattering/emission represented as closed and open-loop paths in the complex plane. Individual forward and backward modes are given by black and red arrows, respectively. Modes of the same type are connected head-to-tail and progressively increment from \(n=1\) (tail at the origin) to \(n=n_{max}\). The coherent sum of the forward and backward modes is designated by blue and green arrows, respectively. These arrows start at the origin and connect to the tip of the max polar number vector. The left most schematic is an example of forward Kerker behavior, where no backward modes are present. The left middle example is a forward generalized Kerker effect where backward modes coherently cancel in the exact backward direction. The right middle example shows the transverse Kerker effect where both modes coherently cancel in the exact forward and backward direction, leaving only transverse (side lobe) scattering/emission. The right most schematic is an example of general backward preferential scattering/emission.
Dividing the origin-to-tip vector length of equation 13 by the arclength of equation 14 then gives an intuitive definition of the forward or backward directivity as
\[D^{t^{\prime}}=4\pi\frac{\sum_{p}\left\|\sum_{n}K_{n}c^{t^{\prime}}_{n1p}\right\|^{2}}{\sum_{nmpt}(1+\delta_{m0})\left\|c^{t}_{nmp}\right\|^{2}}, \tag{15}\]
where \(D^{f}=D(\theta=0)\) and \(D^{b}=D(\theta=\pi)\). Equation 15 formalizes the argument of directivity presented in figure 3 and directly connects directivity to the behavior of Kerker coefficients in the complex plane. Directivity is proportional to origin-to-tip length and inversely proportional to arclength. From this framework we can rigorously derive the conditions to maximize directivity and relate these conditions to intuitive curves in the complex plane.
As more complex coefficients of the same type (each represented as a vector in the complex plane) point in a similar direction in the complex plane, the numerator of equation 15 increases while the arclength remains unchanged. The triangle inequality enforces that the numerator of equation 15 is maximized when all vectors of the same type point in the exact same direction, \(\sum_{p}\left\|\sum_{n}K_{n}c^{t^{\prime}}_{n1p}\right\|^{2}=\sum_{p}\big{(}\sum_{n}K_{n}\left\|c^{t^{\prime}}_{n1p}\right\|\big{)}^{2}\). I.e., the curve formed by head-to-tail addition of the coefficients forms a straight line. The geometric representation of all vectors pointing in the same direction is the condition of perfect constructive interference. Though, to maximize directivity, the denominator of equation 15 should also be minimized. To achieve this, all coefficients in the denominator that are not present in the numerator should be zero. Since \(K_{n}>1\), we can conclude that:
_The theoretically maximal directivity for a system with \(n_{max}\) harmonic orders occurs when you satisfy Kerker's condition and all Kerker coefficients constructively interfere._
Therefore, Kerker scattering is a necessary but not sufficient condition to achieve the theoretical maximum directivity. Furthermore, generalized Kerker can never achieve the theoretically maximal directivity because though the origin-to-tip length in the unwanted direction is zero, the arclength for that direction is not zero.
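Equation 15 can be evaluated in the same spirit as the previous snippet; the sketch below assumes the same coefficient layout used above.

```python
import numpy as np

def directivity(c_f, c_b, t):
    """Forward (t = 0) or backward (t = 1) directivity of equation 15."""
    c_t = c_f if t == 0 else c_b
    numer = sum(abs(sum(np.sqrt(2 * n + 1) * c
                        for (n, m, pp), c in c_t.items() if m == 1 and pp == p)) ** 2
                for p in (0, 1))
    denom = sum((1 + (m == 0)) * abs(c) ** 2
                for c_all in (c_f, c_b) for (n, m, p), c in c_all.items())
    return 4 * np.pi * numer / denom
```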
## Examples
To highlight the usefulness of the Kerker transform, we give four instructive examples of directional scattering and study their results under the Kerker and Mie expansions. The goal in this section is to provide examples of when it can be useful to switch from the Mie to the Kerker framework. In order to highlight the generality of the Kerker expansion, we study both near-field and far-field excitations of sub-wavelength and larger-than-wavelength particles. All examples are summarized in Figure 5.
The first and second rows of figure 5 show a schematic of each system and its corresponding azimuthally integrated far field polar intensity profile, respectively. The left example is the scattering response of a sphere with the material properties initially proposed by Milton Kerker to explain Kerker scattering; the case where \(\epsilon=\mu\). The response has exactly no backward field and Kerker's condition is satisfied for all quantum numbers supported by the sphere. The middle-left example is of generalized backward Kerker emission, where near complete suppression of the forward intensity is achieved. The system achieving this emission is composed of the combined response from a 374nm wavelength emitter near-field coupled 90nm below a 164nm TiO\({}_{2}\) sphere (\(\eta=\sqrt{\epsilon_{r}\mu_{r}}=2.42\)) [21]. The middle-right example is of transverse Kerker scattering from a 120nm Si sphere (\(\eta=3.92+i\,2.49\times 10^{-2}\)) [22], achieved by coupling two 609nm wavelength emitters to the sphere. One emitter is located 204nm above the sphere and the other 204nm below. The right most example is of highly directional forward scattering achieved by creating a photonic nanojet. This scattering is achieved by illuminating a 1200nm SiO\({}_{2}\) sphere (\(\eta=1.43+i\,2.52\times 10^{-3}\)) [23] with a 400nm plane wave. In all cases, the background medium is assumed to be air. The solution to the scattering by a sphere illuminated by a plane wave or a dipole emitter can be found in [19] and [24], respectively.
The third and fourth rows of figure 5 plot the \(K_{n}\)-scaled Kerker (upper row) and Mie (lower row) coefficients, respectively, as vectors in the complex plane. This plotting method is commonly used as it describes both amplitude and phase, which is necessary to understand directional scattering [8, 25]. The left example clearly shows forward Kerker scattering as the backward coefficients (red arrows) satisfy the Kerker condition that all \(c^{b}_{n10}\)'s are zero. The forward coefficients (black arrows) constructively interfere, leading to a nonzero total forward intensity (blue arrow). Alternatively, determining Kerker's forward condition using the Mie coefficients requires a systematic comparison of both the amplitude and phase relationship between each pair of electric and magnetic-type harmonics. Though this is a tractable task for \(n_{max}\approx 3\), it is still hard to say for certain that the system exactly satisfies the Kerker forward condition without using a ruler and protractor.
Figure 5: Schematics of highly directional scattering/emission (row 1), corresponding log base 10 azimuthally integrated far field polar intensity plots (row 2), and corresponding \(K_{n}\)-scaled Kerker (row 3) and Mie (row 4) coefficients as vectors in the complex plane. The first example (column 1) is of an exact Kerker scattering system composed of a 250nm radius magnetic sphere (\(\epsilon=\mu\)) excited by a 500nm wavelength plane wave traveling in the \(\mathbf{\hat{e}}_{x}\) direction. The second example (column 2) shows generalized backward Kerker emission achieved by near field coupling a 164nm TiO\({}_{2}\) sphere to a dipole emitting at 374nm. The dipole is located 90nm below the bottom of the sphere on the z-axis and has a moment in the \(\mathbf{\hat{e}}_{y}\) direction. The third example (column 3) is of transverse Kerker scattering achieved in a 120nm Si sphere excited simultaneously by two dipoles, both emitting at 609nm. The dipoles are 204nm above and below the sphere, respectively. Both dipoles are on the z-axis with moments in the \(\mathbf{\hat{e}}_{y}\) direction. The final example (column 4) is of highly directional scattering from a photonic nanojet made from a 1200nm SiO\({}_{2}\) sphere excited by a plane wave with a 426nm wavelength. In all cases the sphere is centered at the origin and the region outside of the red dashed circle defines the domain of validity for the expansion. All intensities are normalized as \(I(\theta)/\text{max}\left(I\right)\) and share the same log scale. Coefficients of the same type are connected head-to-tail and progressively increment from \(n=1\) (tail at the origin) to \(n=n_{max}\).
The Kerker coefficients in the middle-left example show generalized backward Kerker behavior. This is evident by the coherent sum of the forward coefficients forming a closed loop, \(\sum_{n}K_{n}c_{n10}^{f}\approx 0\), and the coherent sum of the backward coefficients producing a nonzero open loop for the total backward intensity (green arrow). The arc in the path of the backward coefficients and the loop of the forward coefficients indicate the presence of excess side lobes, since the vectors are not strictly in the same direction. These side lobes are evident in the azimuthally integrated intensity. The Mie coefficients traverse a sporadic pattern in the complex plane. With \(n_{max}\approx 7\) and no easily discernible interference relationship, these coefficients do not illuminate directional emission or properties of side lobes. Clearly the Mie coefficients are not the appropriate tool for this problem.
The middle-right example shows transverse Kerker behavior as evident by both the forward and backward Kerker coefficients traversing a closed loop, \(\sum_{n}K_{n}c_{n10}^{f}\approx 0\) and \(\sum_{n}K_{n}c_{n10}^{b}\approx 0\). Though the Mie coefficients do not follow a complicated pattern, it is not immediately evident that the coefficients lead to transverse Kerker behavior, compared to the Kerker coefficients.
Finally, the right example shows highly forward directional scattering from the photonic nanojet, as evident by the open contours in the Kerker coefficients. Directionality is achieved through the interference of around 20 appreciable harmonics in each basis system. In the Kerker basis, the total forward arrow is substantially larger than the total backward arrow, indicating a strong preference for forward scattering. Furthermore, each coefficient has a similar magnitude. Therefore, there is no single harmonic dominating the side lobes. This is evident by the many similar sized side lobes seen in the azimuthally integrated intensity. The electric and magnetic Mie coefficients follow a spiral pattern which indicates similar phase and magnitude behavior between the electric and magnetic-type coefficients. This pattern _almost_ appears to satisfy Kerker's condition. Though, as evident by the nonzero backward Kerker coefficients, this system is not Kerker scattering. Furthermore, discrepancies between the electric and magnetic-type coefficients eventually cause the arrows of the two types to fall out of sync. This makes the overall directionality harder to gauge. Finally, the Mie coefficients do nothing to illuminate the nature of side lobes.
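For readers unfamiliar with this plotting convention, the short sketch below (an illustration with made-up coefficient values, not data from figure 5) shows how complex coefficients are drawn head-to-tail as arrows, so that a closed loop corresponds to a vanishing coherent sum and an open contour to a nonzero total arrow.

```python
# Hypothetical illustration of the head-to-tail convention used in rows 3-4 of
# figure 5: complex coefficients are drawn tip to tail, so a closed loop means
# the coherent sum (and hence the corresponding total intensity) vanishes.
import numpy as np
import matplotlib.pyplot as plt

coeffs = np.array([0.8 + 0.3j, -0.2 + 0.7j, -0.4 - 0.6j])   # made-up values
pts = np.concatenate([[0.0 + 0.0j], np.cumsum(coeffs)])     # running partial sums

fig, ax = plt.subplots()
ax.plot(pts.real, pts.imag, "o", ms=3, color="gray")         # also sets axis limits
for tail, c in zip(pts[:-1], coeffs):
    ax.annotate("", xy=(tail.real + c.real, tail.imag + c.imag),
                xytext=(tail.real, tail.imag),
                arrowprops=dict(arrowstyle="->", color="black"))
total = pts[-1]                                               # coherent sum
ax.annotate("", xy=(total.real, total.imag), xytext=(0.0, 0.0),
            arrowprops=dict(arrowstyle="->", color="blue"))
ax.set_xlabel("Re")
ax.set_ylabel("Im")
ax.set_aspect("equal")
plt.show()
```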
Besides viewing coefficients in the complex plane, intuition can also be developed by studying the analytic form of directional fields in the Kerker basis, based on the properties defined in section two. For example, equipped with the Kerker basis, the two systems in the introduction can now be rewritten as
\[\begin{array}{ll}\text{\emph{System 1:}}&a\mathbf{Y}_{111}^{f}+b\mathbf{Y}_{210}^{f}+c\mathbf{Y}_{311}^{f}+a\mathbf{Y}_{111}^{b}-b\mathbf{Y}_{210}^{b}\\ \text{\emph{System 2:}}&a\mathbf{Y}_{111}^{f}+b\mathbf{Y}_{210}^{f}+c\mathbf{Y}_{311}^{f}+a\mathbf{Y}_{111}^{b}+b\mathbf{Y}_{210}^{b},\end{array}\]
where \(a,b,c\in\mathbb{Z}^{+}\) are finite. Completely by inspection, the following can be concluded about the two systems: First, both systems are always forward dominant, regardless of the choice of \(a,b,\) or \(c\). The forward-to-backward ratios are proportional to \(\frac{\|a+b+c\|^{2}}{\|a-b\|^{2}}\) and \(\frac{\|a+b+c\|^{2}}{\|a+b\|^{2}}\), respectively. Therefore, system 1 will always have the larger forward-to-backward ratio. Assuming \(a\), \(b\), and \(c\) have similar values, three lobes or fewer are expected in the range \(\theta\in[0,\pi]\) for both systems (two lobes in the forward hemisphere and the other lobe in the backward hemisphere)85. System 2 has constructive backward interference, \(a+b\), which favors lobes near the exact backward direction. Alternatively, system 1 has destructive backward interference, \(a-b\), which favors pushing power away from \(\theta=\pi\) and into the sides. If either \(a\), \(b\), or \(c\) is strongly dominant, then the system degenerates to more closely mimic the corresponding dominant Kerker harmonic. Side lobe predictions will change accordingly. Finally, a forward Kerker condition is clearly observed in both systems because \(\mathbf{Y}_{311}^{b}=0\), regardless of the choice of \(a,b\), or \(c\). Though, neither system is fully forward Kerker as \(\mathbf{Y}_{111}^{b}\) and \(\mathbf{Y}_{210}^{b}\) are nonzero. System 1 has the potential to be generalized Kerker if \(a=b\) (suitably normalized). System 2 can only be generalized directional since \(a\) and \(b\) are constrained to the positive integers. We encourage the reader to return to the introduction and attempt to formulate these conclusions from the Mie framework. Examples of the two systems for different values of \(a,\ b\), and \(c\) are presented in the supplementary information.
Footnote 85: This prediction is according to the side lobe formula proposed in the second section (valid for a single harmonic), the rule-of-thumb of negligible opposite type coupling, and knowledge of the general behavior of same type coupling. An inspective approach to understanding side lobes is intended to provide an educated guess based on the behavior of the Kerker harmonics. You cannot, in general, completely accurately predict side lobe behavior without performing the calculation. For example, in general, interference between forward and backward harmonics can alter side lobes. This effect is more appreciable for lobes of different type both near \(\theta=\pi/2\). So, care must be taken to infer the exact location and strength of lobes near this direction. Other features, such as the existence of the Kerker effect for a particular harmonic, can be concluded completely by inspection.
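As a rough numerical illustration of the inspection argument above (an added sketch, not part of the original analysis), the snippet below evaluates the stated forward-to-backward proportionalities \(\|a+b+c\|^{2}/\|a\mp b\|^{2}\) for a few choices of \(a\), \(b\), and \(c\); the \(K_{n}\) normalization and the full angular dependence of the harmonics are deliberately omitted.

```python
# Evaluate the forward-to-backward ratio proportionalities quoted in the text:
# System 1 has backward sum a - b, System 2 has backward sum a + b.
def fb_ratio_system1(a, b, c):
    return float("inf") if a == b else abs(a + b + c) ** 2 / abs(a - b) ** 2

def fb_ratio_system2(a, b, c):
    return abs(a + b + c) ** 2 / abs(a + b) ** 2

for a, b, c in [(1, 1, 1), (1, 2, 3), (3, 2, 1), (5, 1, 1)]:
    r1, r2 = fb_ratio_system1(a, b, c), fb_ratio_system2(a, b, c)
    # System 1 never has the smaller ratio, since |a - b| <= |a + b| for a, b > 0.
    print(f"a={a}, b={b}, c={c}:  system 1 F/B ~ {r1:.2f},  system 2 F/B ~ {r2:.2f}")
```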
## 6 Conclusion
We propose a linear transform to convert the atom-like vector spherical harmonics found in Mie theory to forward and backward directional vector harmonics, and we show the use of this method to understand directional scattering/emissive systems. The directional harmonics, termed the Kerker harmonics, have a simple far field expression governed primarily by the Kerker polar angle functions. These functions have a clear notion of primary and side lobes, exhibit weak coupling between forward and backward types, and show that coupling between same-type harmonics controls directivity. The resulting azimuthally integrated and exact forward or backward intensity both have a simple analytic form, which leads to intuitive definitions of Kerker, generalized Kerker, transverse Kerker, and highly directional scattering/emission as open and closed loop contours
of Kerker coefficients in the complex plane. Total power flow is related to the arc length of these coefficients. This provides a simple definition for the condition of theoretically maximal directivity. Examples of a Kerker, generalized Kerker, transverse Kerker, and highly directional system are shown to be more conceptually intuitive in the Kerker basis compared to the Mie basis when viewed in the complex plane. These examples explore the use of this transform in both scattering and emissive systems ranging from sub-wavelength to larger-than-wavelength size regimes (e.g., 20 appreciable harmonics).
## Additional Information
Includes a component expansion of the Kerker harmonics, derivations of all major equations, and examples of the two proposed systems for different values of \(a,b\), and \(c\).
## Author contributions statement
P.R.W. devised the research idea and developed the theoretical framework. H.A.A. oversaw the project progress. All authors contributed to writing and editing the manuscript.
## Funding Sources
This work was supported by the Army Research Office under MURI Grant W911NF-18-1-0240.
## Competing interests
The authors declare no competing financial interests.
|
2304.07452 | Instabilities of a Bose-Einstein condensate with mixed nonlinear and
linear lattices | Bose-Einstein condensates (BECs) in periodic potentials generate interesting
physics on the instabilities of Bloch states. The lowest-energy Bloch states of
BECs in pure nonlinear lattices are dynamically and Landau unstable, which
breaks down BEC superfluidity. In this paper we propose to use an out-of-phase
linear lattice to stabilize them. The stabilization mechanism is revealed by
the averaged interaction. We further incorporate a constant interaction into
BECs with mixed nonlinear and linear lattices, and reveal its effect on the
instabilities of Bloch states in the lowest band. | Jun Hong, Chenhui Wang, Yongping Zhang | 2023-04-15T02:28:20Z | http://arxiv.org/abs/2304.07452v1 | # Instabilities of a Bose-Einstein condensate with mixed nonlinear and linear lattices
###### Abstract
Bose-Einstein condensates (BECs) in periodic potentials generate interesting physics on the instabilities of Bloch states. The lowest-energy Bloch states of BECs in pure nonlinear lattices are dynamically and Landau unstable, which breaks down BEC superfluidity. In this paper we propose to use an out-of-phase linear lattice to stabilize them. The stabilization mechanism is revealed by the averaged interaction. We further incorporate a constant interaction into BECs with mixed nonlinear and linear lattices, and reveal its effect on the instabilities of Bloch states in the lowest band.
## I Introduction
Since the first experimental realizations in 1995 [1; 2; 3], atomic Bose-Einstein condensates (BECs) have been fundamental platforms to explore quantum many-body phenomena. Theoretically, the dynamics of a BEC can be described well using the mean-field Gross-Pitaevskii (GP) equation [4]. A linearization of the GP equation around a BEC state yields the so-called Bogoliubov-de Gennes (BdG) equation, which describes elementary excitations of the corresponding BEC state [5]. The distinguishing feature of the BdG equation is that the BdG Hamiltonian is non-Hermitian, which allows for the existence of complex excitations. In the presence of any complex modes, a small deviation from the BEC state may diverge exponentially with time, which destroys the BEC state. Such a breakdown of the corresponding BEC state is referred to as the dynamical instability. However, not all BEC states possess the dynamical instability. A standard homogeneous BEC is dynamically stable and its elementary excitation is the gapless phonon mode in the long wavelength limit [6]. The identification of the dynamical instability of a BEC state is therefore a fundamental issue.
One of the outstanding systems that nurture dynamical instability is BECs in optical lattices [7; 8; 9]. The optical lattices modify the dispersion relation of a BEC to give rise to a Bloch spectrum. The associated BEC Bloch states may be dynamically unstable due to the interplay between their dispersion and atomic interactions [7; 8; 9]. The dynamical instability of BEC Bloch states has been experimentally observed by measuring the decay of condensed atoms [10]. The optical-lattice-induced dynamical instability, relating to the breakdown of BEC superfluidity, has been extensively studied [11; 12; 13]. It can be approached analytically in a Kronig-Penney potential [14]. Attractive interactions [15] or complicated atomic interactions [16] give the instability more features. Two-dimensional optical lattices [17] and Bloch states in higher Bloch bands [18] have been investigated theoretically. Besides the dynamical instability, optical lattices can introduce Landau instability to BEC Bloch states [7]. The Landau instability happens when the BEC Bloch states are energetically unfavorable [12]. The generalization of a single BEC in optical lattices to multiple components has attracted much attention [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. Multiple-component BECs in optical lattices exhibit rich phases due to numerous parameters and the optical-lattice-modified dispersion relation. The dynamical instability of these systems presents complex features and becomes more interesting [19; 20]. Meanwhile, in the tight-binding regime of the optical lattices, the dynamical instability of multiple components is tractable in an analytical way [21; 22; 23; 24]. Furthermore, the optical lattices can be component-dependent, which introduces rich instability structures [25; 26; 27]. In multiple-component BECs, one can define spin currents. Multiple-component BECs in optical lattices provide an important platform to study the dynamical instability of spin currents [28; 29].
Different from BECs in optical lattices, where interactions come from condensed atoms and optical lattices provide linear periodic potentials, nonlinear lattices represent spatially periodic modulations of the interatomic interactions [30; 31]. Nonlinear lattices can be experimentally implemented by the controllable optical Feshbach resonances [32; 33; 34]. The lowest-energy Bloch states (at the Brillouin zone center in the lowest band) in nonlinear lattices are always dynamically unstable, so they cannot support superfluidity [35]. Only Bloch states in a finite quasimomentum region close to the Brillouin zone edge are dynamically stable [35]. Physically, for these Bloch states, atoms are mainly confined to the negative parts of the nonlinear lattices, and the empty occupation of the positive parts behaves like a barrier preventing tunneling between the negative-part occupations, which stabilizes the corresponding Bloch states [36]. In most lattice experiments, BECs are usually prepared in the lowest-energy Bloch states. However, in nonlinear lattices the instability spoils such preparations. Therefore, stabilizing the lowest-energy Bloch states becomes an important aspect for experimental realizations. Refs. [35; 36; 37] propose to add a constant interaction to nonlinear lattices for their stabilization. Nonlinear lattices with a constant repulsive interaction generate repulsive effective interatomic interactions, providing a possible means for the stabilization. Ref. [38] suggests that a linear and coherent Rabi coupling between two-component
BECs can be used to stabilize the lowest-energy Bloch states.
In the present paper, we systematically study the instabilities of BECs with mixed nonlinear and linear lattices. Nonlinear phenomena in mixed nonlinear and linear lattices have been widely studied [30]. The interplay between nonlinear and linear lattices gives new properties to bright solitons [39; 40; 41; 42; 43] and increases their mobility [44]. These systems provide a possibility to study the effect of commensurability between the two lattices on the existence of solitons [45]. The coexistence of two lattices helps to stabilize solitary waves against collapse [46; 47; 48; 49]. The spatially localized states in multiple-component BECs with mixed nonlinear and linear lattices are revealed to have interesting properties [50; 51]. More importantly, the mixed lattices are proposed to support long-time Bloch oscillations [52]. So far, all studies on the mixed nonlinear and linear lattices concern the existence, stability, and dynamical management of solitary waves. Here, we study the instabilities of spatially extended waves, i.e., Bloch states, in mixed nonlinear and linear lattices. Bloch states are directly relevant to experimentally loading BECs into the mixed lattices. The instabilities of Bloch states in these systems relate to the breakdown of BEC superfluidity. Therefore, our study is experimentally relevant. We examine the dynamical instability and Landau instability of Bloch states in the lowest Bloch band in mixed nonlinear and linear lattices by analyzing the BdG equation. In comparison with the instabilities of BECs with pure nonlinear lattices as studied in [35; 36], we find that an out-of-phase linear lattice can help stabilize the Bloch states around the Brillouin zone center. We present the mechanism of the stabilization using the concept of the averaged interaction. According to the mechanism, an in-phase linear lattice is useless for the stabilization. We further reveal that the out-of-phase and in-phase linear lattices can modify the dynamical instability of the Bloch states around the Brillouin zone edges; the out-of-phase linear lattice destabilizes the states and the in-phase lattice strengthens their stability. Meanwhile, we incorporate a constant interaction into the BECs with mixed lattices and study the effect of repulsive and attractive constant interactions on the instabilities of Bloch states.
The paper is organized as follows. In Sec. II, we present the theoretical framework for the study of the instabilities of BEC Bloch states in mixed nonlinear and linear lattices. It includes the GP equation and the derivation of the BdG equation. In Sec. III, we show the nonlinear Bloch spectrum and indicate the existence of nonlinear Bloch states. The properties of the Bloch states are shown through their density distributions. In Sec. IV, the dynamical and Landau instabilities of the Bloch states are presented, with the aim of showing that a linear lattice can stabilize the Bloch states around the Brillouin zone center. The mechanism of the stabilization is uncovered. We also study the effect of a constant interaction. The conclusion follows in Sec. V.
## II Model
We consider a BEC with spatially periodic modulated interactions in the presence of a linear optical lattice. The system is described by the Gross-Pitaevskii (GP) equation as follows,
\[i\frac{\partial\psi}{\partial t}=-\frac{1}{2}\frac{\partial^{2}\psi}{\partial x ^{2}}-V\cos(x)\psi+\mathcal{G}_{\text{non}}|\psi|^{2}\psi. \tag{1}\]
\(\psi(x,t)\) is the wave function of the BEC. The GP equation is dimensionless. Energy and length are measured in units of \(8E_{\text{rec}}\) and \(1/2k_{l}\), respectively. Here, the recoil energy of the optical lattice lasers is \(E_{\text{rec}}=\hbar^{2}k_{l}^{2}/2m\), where \(k_{l}\) is the wavenumber of the optical lattice lasers and \(m\) is the atom mass. The linear optical lattice is described by \(-V\cos(x)\) with the lattice depth \(V\). The nonlinear coefficient in the GP equation is
\[\mathcal{G}_{\text{non}}=g_{1}+g_{2}\cos(x). \tag{2}\]
The nonlinear lattice is described by \(g_{2}\cos(x)\), with \(g_{2}\) being the nonlinear-lattice amplitude. We also incorporate a constant interaction with the nonlinear coefficient \(g_{1}\). We consider that the two lattices have the same spatial structure and the same period. The relative phase between the two lattices is controlled by the signs of \(g_{2}\) and \(V\). Concretely, we assume \(g_{2}>0\) and vary the sign of \(V\). When \(V>0\) the two lattices are out of phase, and they are in phase when \(V<0\).
The experimental loading of BECs into the mixed nonlinear and linear lattices connects with Bloch states. They are defined as \(\psi(x,t)=e^{ikx-i\mu_{k}t}\phi_{k}(x)\). Here \(k\) is the quasimomentum, \(\mu_{k}\) is the chemical potential, and \(\phi_{k}(x)\) is a periodic function having the same period as the mixed lattices, i.e., \(\phi_{k}(x+2\pi)=\phi_{k}(x)\). Substituting the Bloch state solutions into the GP equation, we have,
\[\mu_{k}\phi_{k}=-\frac{1}{2}(\frac{d}{dx}+ik)^{2}\phi_{k}-V\cos(x)\phi_{k}+ \mathcal{G}_{\text{non}}|\phi_{k}|^{2}\phi_{k}. \tag{3}\]
By solving the above nonlinear equation with the normalization condition \(\int_{0}^{2\pi}dx|\phi_{k}(x)|^{2}=1\), we can get the Bloch spectrum \(\mu(k)\) and the associated Bloch states \(\phi_{k}\). In detail, we expand the periodic function \(\phi_{k}\) using a plane-wave basis, and the above nonlinear equation turns into coupled nonlinear algebraic equations for the plane-wave coefficients [12], which can be solved using the standard Newton relaxation method.
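As an illustration of this procedure (an added sketch, not the authors' code), the following discretizes Eq. (3) on a uniform grid over one period and solves the resulting coupled nonlinear equations for \(\phi_{k}\) and \(\mu_{k}\) with a generic least-squares root finder from SciPy in place of a hand-written Newton relaxation; the grid size, tolerances, parameter values (chosen to roughly match Fig. 4(b.1)), and initial guess are all arbitrary assumptions.

```python
# Minimal sketch (assumed parameters): solve Eq. (3) for a Bloch state phi_k
# and chemical potential mu_k on one period of the mixed lattices, using a
# pseudospectral grid and SciPy's least-squares root finder instead of a
# hand-written Newton relaxation.
import numpy as np
from scipy.optimize import least_squares

N = 64                                  # grid points over one period 2*pi
x = 2 * np.pi * np.arange(N) / N
m = np.fft.fftfreq(N, d=1.0 / N)        # integer plane-wave indices
g1, g2, V, k = 0.0, 0.02, 0.05, 0.0     # example couplings and quasimomentum
Gnon = g1 + g2 * np.cos(x)

def kinetic(phi, kq):
    # -(1/2)(d/dx + i*kq)^2 phi, applied in Fourier space
    return np.fft.ifft(0.5 * (m + kq) ** 2 * np.fft.fft(phi))

def residual(z):
    phi = z[:N] + 1j * z[N:2 * N]
    mu = z[2 * N]
    F = (kinetic(phi, k) - V * np.cos(x) * phi
         + Gnon * np.abs(phi) ** 2 * phi - mu * phi)
    norm = (2 * np.pi / N) * np.sum(np.abs(phi) ** 2) - 1.0   # normalization
    phase = phi.imag[0]                                       # fix the global phase
    return np.concatenate([F.real, F.imag, [norm, phase]])

z0 = np.concatenate([np.full(N, 1 / np.sqrt(2 * np.pi)), np.zeros(N), [0.0]])
sol = least_squares(residual, z0, xtol=1e-14)
phi_k = sol.x[:N] + 1j * sol.x[N:2 * N]
mu_k = sol.x[2 * N]
print("mu_k =", mu_k, "  max residual =", np.max(np.abs(sol.fun)))
```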
Once we know the BEC Bloch states, we study their dynamical instability by linearizing the GP equation around the Bloch states. We add perturbations to the Bloch states,
\[\psi(x, t)=e^{ikx-i\mu_{k}t} \tag{4}\] \[\times[\phi_{k}(x)+u_{kq}(x)e^{iqx-i\omega_{kq}t}+v_{kq}^{*}(x)e^ {-iqx+i\omega_{kq}^{*}t}],\]
where \(q\) is the quasimomentum of perturbations, \(\omega_{kq}\) is the energy of perturbations, and \(u_{kq}(x)\) and \(v_{kq}(x)\) are
the perturbation amplitudes. After substituting the general wave function in Eq. (4) into the GP equation and keeping only the linear terms relating to the perturbation amplitudes, we get the following Bogoliubov-de Gennes (BdG) equation,
\[\omega_{kq}\begin{pmatrix}u_{kq}\\ v_{kq}\end{pmatrix}=\mathcal{H}_{\text{BdG}}(k,q)\begin{pmatrix}u_{kq}\\ v_{kq}\end{pmatrix}, \tag{5}\]
where the BdG Hamiltonian is
\[\mathcal{H}_{\text{BdG}}(k,q)=\begin{pmatrix}\mathcal{L}(k,q)&\mathcal{G}_{ \text{non}}\phi_{k}^{2}\\ -\mathcal{G}_{\text{non}}\phi_{k}^{*2}&-\mathcal{L}(-k,q)\end{pmatrix}, \tag{6}\]
with
\[\mathcal{L}(k,q) \tag{7}\] \[=-\frac{1}{2}\left[\frac{\partial}{\partial x}+i(k+q)\right]^{2}- V\cos(x)-\mu_{k}+2\mathcal{G}_{\text{non}}|\phi_{k}|^{2}.\]
The unique feature of the BdG Hamiltonian for a condensate is that it is non-Hermitian, i.e., \(\mathcal{H}_{\text{BdG}}^{\dagger}\neq\mathcal{H}_{\text{BdG}}\). Therefore, the BdG Hamiltonian allows for the existence of complex eigenvalues in \(\omega_{kq}\). In the presence of complex modes in \(\omega_{kq}\), the perturbation amplitudes in Eq. (4) grow exponentially with time, which means that a small perturbation drives the evolution of the wave function far away from the condensed state. Consequently, the condensed state is dynamically unstable if there exists any complex mode in \(\omega_{kq}\). Through this dynamical instability, the condensed state breaks down and loses superfluidity. We examine the dynamical instability of Bloch states by diagonalizing the BdG equation in Eq. (5). We note that the BdG Hamiltonian is spatially periodic due to the Bloch states. To carry out the diagonalization, we assume that the perturbation amplitudes are periodic functions which can be represented by a plane-wave expansion, so the BdG Hamiltonian is projected onto the plane-wave basis and the resulting \(\omega_{kq}\) take the form of a Bloch spectrum with the Brillouin zone \(q\in(-0.5,0.5]\)[27]. We are only interested in the Bloch states in the lowest band of Eq. (3). Considering the Brillouin zone \(k\in(-0.5,0.5]\) and the symmetry \(\mu_{k}=\mu_{-k}\), we only analyze the instabilities of the Bloch states at \(k\in[0,0.5]\) in the lowest band. Meanwhile, the symmetry of the BdG Hamiltonian is
\[\sigma_{x}\mathcal{H}_{\text{BdG}}(k,q)\sigma_{x}=-\mathcal{H}_{\text{BdG}}^ {*}(k,-q), \tag{8}\]
where \(\sigma_{x}\) is a Pauli matrix. From this symmetry, we know that if \(\omega\) is an eigenvalue of \(\mathcal{H}_{\text{BdG}}\) at \((k,q)\), then \(-\omega^{*}\) is an eigenvalue at \((k,-q)\). Therefore, the perturbation energy has the symmetry \(\omega_{kq}=-\omega_{k-q}^{*}\). Considering this symmetry, we only calculate the Bloch spectrum belonging to \(q\in[0,0.5]\) in the BdG equation to check whether there exists any complex eigenvalue. We define the growth rate \(\Gamma\) to describe the instability. It is the maximum value of the imaginary parts of \(\omega_{kq}\),
\[\Gamma=\text{Max}[\text{Imag}(\omega_{kq})]. \tag{9}\]
If the calculated growth rate is nonzero (zero), the corresponding Bloch state is dynamically unstable (stable).
We also examine the Landau instability of the Bloch states. Whereas the dynamical instability follows from linearizing the GP equation, the Landau instability requires linearizing the energy functional of the system around the Bloch states [7]. Small perturbations around the Bloch states generate an additional energy functional as [7]
\[\sigma_{z}\mathcal{H}_{\text{BdG}}(k,q), \tag{10}\]
where \(\sigma_{z}=diag(1,-1)\) is a Pauli matrix. \(\sigma_{z}\mathcal{H}_{\text{BdG}}(k,q)\) is Hermitian, so its eigenvalues are real-valued. If there is any negative eigenvalue of \(\sigma_{z}\mathcal{H}_{\text{BdG}}(k,q)\), the corresponding Bloch states are not local minima of the energy functional and they are Landau unstable. The occurrence of the Landau instability relates to Landau's criterion of superfluidity [17]. We use the same procedure as in the treatment of the BdG Hamiltonian to diagonalize \(\sigma_{z}\mathcal{H}_{\text{BdG}}(k,q)\) and check whether there exists any negative eigenvalue.
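Continuing the sketch started after Eq. (3), and reusing its grid, couplings, \(\phi_{k}\), and \(\mu_{k}\), the following illustrative code (an assumption-laden addition, not the authors' implementation) assembles the BdG Hamiltonian of Eq. (6) as a dense matrix, scans \(q\in[0,0.5]\), and reports the growth rate \(\Gamma\) of Eq. (9) together with a Landau-stability check based on the smallest eigenvalue of \(\sigma_{z}\mathcal{H}_{\text{BdG}}\); the \(q\)-resolution and tolerances are again arbitrary.

```python
# Sketch continuing from the Bloch-state example: diagonalize the BdG
# Hamiltonian (6) on the same grid for a scan of perturbation momenta q and
# evaluate the growth rate Gamma of Eq. (9) plus a Landau-stability check.
import numpy as np

Fm = np.fft.fft(np.eye(N), axis=0)       # DFT matrix
iFm = np.fft.ifft(np.eye(N), axis=0)     # inverse DFT matrix

def L_op(kq):
    # Matrix form of L in Eq. (7), with the momentum shift k+q passed as kq
    kin = iFm @ np.diag(0.5 * (m + kq) ** 2) @ Fm
    pot = np.diag(-V * np.cos(x) - mu_k + 2 * Gnon * np.abs(phi_k) ** 2)
    return kin + pot

def bdg(q):
    off = np.diag(Gnon * phi_k ** 2)
    top = np.hstack([L_op(k + q), off])
    bot = np.hstack([-np.conj(off), -L_op(-k + q)])
    return np.vstack([top, bot])

Gamma, landau_unstable = 0.0, False
for q in np.linspace(0.0, 0.5, 26):
    H = bdg(q)
    Gamma = max(Gamma, np.max(np.linalg.eigvals(H).imag))
    # sigma_z * H_BdG is Hermitian; a negative eigenvalue signals Landau instability
    S = np.vstack([H[:N], -H[N:]])
    if np.min(np.linalg.eigvalsh(0.5 * (S + S.conj().T))) < -1e-8:
        landau_unstable = True
print("growth rate Gamma ~", Gamma, "  Landau unstable:", landau_unstable)
```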
## III Nonlinear Bloch bands and associated Bloch states
We first study the existence of Bloch states in mixed nonlinear and linear lattices by solving Eq. (3). Fig. 1 demonstrates the nonlinear Bloch spectrum and the associated Bloch states for a pure nonlinear lattice (\(V=0\) and \(g_{1}=0\)). Only the lowest two bands are shown in Fig. 1(a). It is interesting to find that the nonlinear Bloch spectrum is similar to that of a linear lattice. An energy gap is opened around the Brillouin zone edges \(k=\pm 0.5\) between
Figure 1: Nonlinear Bloch spectrum and associated nonlinear Bloch states of a BEC in a pure nonlinear lattice. \(g_{2}=0.05\) and \(V=0\), \(g_{1}=0\). (a) The lowest two Bloch bands. (b) and (c) The density distributions of nonlinear Bloch states at Brillouin zone center and edge in the lowest band respectively [labeled by squares in (a)]. Pink stripes represent the regions that the nonlinear lattice is positive \(g_{2}\cos x>0\).
the lowest two bands. The nonlinear Bloch state at \(k=0\) in the lowest band is the lowest-energy state. Its density distribution is shown in Fig. 1(b). The occupations in the minima of each nonlinear-lattice cell (white regions) dominate. Meanwhile, there are also populations in the maxima of the cells (shadowy regions). In contrast, the populations in the maxima for the Bloch state at the Brillouin zone edge in the lowest band are negligible [see Fig. 1(c)].
Fig. 2 demonstrates the nonlinear Bloch bands with a nonlinear lattice and an out-of-phase linear lattice \(V>0\). We find that \(V=0.05\) is a critical value at which the lowest two bands close their gap and connect at the Brillouin zone edge (black-solid lines). Increasing \(V\) from zero to the critical value, the size of the gap between them decreases. Beyond the critical value, the gap is reopened (red-solid lines). That the gap between the lowest bands decreases, closes, and then reopens is a signature of the competition between the two lattices. At the critical value \(V=g_{2}\), the out-of-phase linear lattice completely cancels the effect of the nonlinear lattice. When \(V<g_{2}\) the nonlinear lattice dominates over the linear lattice. This can be seen from the density distribution of the Bloch state at \(k=0\). As shown in Fig. 2(b), the occupations in the minima of the nonlinear-lattice cells are still larger than those in the maxima [see the blue line]. This feature is the same as that in a pure nonlinear lattice. However, the Bloch state at the Brillouin zone edge chooses to occupy the cells of the linear lattice [see the red line in Fig. 2(b)]. When \(V>g_{2}\) and the gap is reopened, the linear lattice surpasses the nonlinear one. The density distributes in the cells of the linear lattice for all Bloch states. Illustrative density distributions are shown in Fig. 2(c) for \(V=0.12\).
Fig. 3 demonstrates the nonlinear Bloch bands with a nonlinear lattice and an in-phase linear lattice. The linear lattice has the same phase as the nonlinear lattice and enhances its effect. Therefore, with the help of the linear lattice, the gap between the lowest two bands is wider than that in a pure nonlinear lattice [see Fig. 3(a)]. The Bloch states distribute inside the cells of both lattices [see Figs. 3(b) and 3(c)].
## IV Instabilities of nonlinear Bloch states
### The out-of-phase linear lattices \(V>0\)
The dynamical instability and Landau instability of the BEC Bloch states in the lowest Bloch band with mixed nonlinear and linear lattices are studied by diagonalizing the BdG Hamiltonian in Eq. (6) and the energy functional Hamiltonian in Eq. (10) respectively. Fig. 4 demonstrates typical results for out-of-phase linear lattices in the \((k,q)\) plane where \(k\) and \(q\) are the quasimomenta of the Bloch states and perturbations respectively. The results in the first row are for a pure nonlinear lattice
Figure 2: Nonlinear Bloch spectrum and associated nonlinear Bloch states of a BEC with a nonlinear lattice and an out-of-phase linear lattice. The nonlinear lattice amplitude is \(g_{2}=0.05\) and the constant interaction is \(g_{1}=0\). (a) The lowest two Bloch bands. When \(V<0.05\) there is a gap opening between them (cyan-solid lines). \(V=0.05\) is a critical value where the lowest two bands connect at Brillouin zone edges (black-solid lines). When \(V=0.08\) gap is still closed (dotted lines). Further increasing \(V\) results in the gap reopening (red-solid lines). (b) and (c) The density distributions of nonlinear Bloch states at Brillouin zone center (blue lines) and edge (red lines) in the lowest band for \(V=0.04\) and \(V=0.12\) respectively [labeled by squares in (a)]. Pink stripes represent the regions that the nonlinear lattice is positive \(g_{2}\cos x>0\). Since the linear lattice is out-of-phase, in the striped regions the linear lattice is negative \(-V\cos x<0\).
Figure 3: Nonlinear Bloch spectrum and associated nonlinear Bloch states of a BEC with a nonlinear lattice and an in-phase linear lattice \(V<0\). \(g_{2}=0.05\), \(V=-0.05\), and \(g_{1}=0\). (a) The lowest two Bloch bands. (b) and (c) The density distributions of nonlinear Bloch states at Brillouin zone center and edge in the lowest band respectively [labeled by squares in (a)]. Dark-blue stripes represent the regions that the nonlinear lattice is positive \(g_{2}\cos x>0\), since the linear lattice is in-phase, it is also positive \(-V\cos x>0\) in striped regions.
with different amplitudes \(g_{2}\). The pure nonlinear lattice has been studied in [35; 36]. Our results are consistent with theirs. In a pure nonlinear lattice, all Bloch states in the lowest band are Landau unstable (represented by the gray areas in the plots). For a small amplitude \(g_{2}\) (such as \(g_{2}=0.02,0.1\) in Figs. 4(a.1) and 4(a.2)), the Bloch states in a finite region of \(k\) close to the Brillouin zone edge are dynamically stable (represented by the regions outside the colored areas). When the amplitude is \(g_{2}=0.2\), as in Fig. 4(a.3), the Bloch states around the Brillouin zone edge are dynamically stable. The outstanding feature of the pure nonlinear lattice is that the Bloch states around the Brillouin zone center \(k=0\) are dynamically unstable.
In the presence of an out-of-phase linear lattice (\(V=0.05\) in the plots in the second row), the instabilities of the Bloch states change dramatically for a small \(g_{2}\). Fig. 4(b.1) shows that the Bloch states around \(k=0\) become both dynamically and Landau stable. In particular, the dynamically unstable Bloch states shrink to \(k\in[0.25,0.5]\), which means that the Bloch states at \(k\in[0,0.25)\) are dynamically stable. The results indicate that an out-of-phase linear lattice can stabilize the lowest-energy Bloch states against the dynamical and Landau instabilities. The stabilization only works when the linear lattice dominates over the nonlinear lattice, i.e., \(V>g_{2}\). If the nonlinear lattice dominates, the instabilities are similar to those in a pure nonlinear lattice; typical examples are shown in Figs. 4(b.2) and 4(b.3).
We use the averaged interaction firstly introduced in Ref. [35] to uncover the mechanism of the stabilization of the lowest-energy Bloch state by the out-of-phase linear lattice. The averaged interaction \(G\) is defined as
\[G=\int_{0}^{2\pi}dx\,[g_{1}+g_{2}\cos(x)]\,|\phi_{k}|^{4}. \tag{11}\]
It represents the average value of the nonlinear energy over a period. In Fig. 5, we plot the averaged interaction of the \(k=0\) Bloch state as a function of \(g_{2}\). The blue-circle line is for the parameters \(V=0.05\) and \(g_{1}=0\), corresponding to the second row in Fig. 4. It shows that the averaged interaction is repulsive (i.e., \(G>0\)) when \(0<g_{2}<0.05\) and is attractive if \(g_{2}>0.05\). It is known that the Bloch state at \(k=0\) in a linear lattice is dynamically and Landau stable only when a constant nonlinearity is repulsive [7]. The averaged interaction behaves as an effective nonlinearity felt by the condensed atoms. If it is repulsive, it is reasonable that the \(k=0\) Bloch state is stable in the presence of the linear lattice. Conversely, an attractive averaged interaction cannot stabilize the \(k=0\) Bloch state. In the absence of the linear lattice, the calculated averaged interaction of the \(k=0\) Bloch state is shown by the orange-triangular
Figure 5: The averaged interaction \(G\) (defined in Eq. (11)) of the Bloch state at \(k=0\) with a nonlinear lattice and an out-of-phase linear lattice. The horizontal red dashed line is \(G=0\) for guiding eyes.
Figure 4: Instabilities of the BEC Bloch states in the lowest Bloch band with a nonlinear lattice and an out-of-phase linear lattice \(V>0\). \(k\) and \(q\) are the quasimomenta of the Bloch states and perturbations respectively. The colored shadow areas represent that the Bloch states are dynamical unstable, and the color scale labels the growth rate \(\Gamma\) defined in Eq. (9); the scale changes from the dark purple \(\Gamma=0\) to bright red \(\Gamma=0.1\). The gray areas indicate that the Bloch states have Landau instability. In the white regions, they are completely stable. For a fixed Bloch state represented by a fixed \(k\), if there is any unstable mode in a \(q\), the corresponding Bloch state is unstable.
line in Fig. 5. All of the values are attractive, which results in the dynamical and Landau instabilities, and this expectation is consistent with the results demonstrated in the first row of Fig. 4. In the presence of a dominating out-of-phase linear lattice, the lattice changes the density distribution of the Bloch state so that the averaged interaction may become repulsive. Therefore, the out-of-phase linear lattice provides an experimentally accessible means to stabilize the lowest-energy Bloch state.
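In the discretized setting of the earlier sketches, Eq. (11) reduces to a one-line quadrature; the snippet below (illustrative only, reusing the grid and the Bloch state computed there) evaluates \(G\) and reports its sign.

```python
# Quadrature for the averaged interaction G of Eq. (11), reusing x, N, g1, g2,
# and the Bloch state phi_k from the earlier sketches.
import numpy as np

G = (2 * np.pi / N) * np.sum((g1 + g2 * np.cos(x)) * np.abs(phi_k) ** 4)
print("averaged interaction G =", G, "->", "repulsive" if G > 0 else "attractive")
```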
We also incorporate a nonzero constant interaction \(g_{1}\neq 0\) into the system with the out-of-phase linear lattice. The results are shown in the third row in Fig. 4 for a repulsive constant interaction \(g_{1}=0.05\) and \(V=0.05\). For small nonlinear-lattice amplitudes, such as \(g_{2}=0.02\) and \(g_{2}=0.1\) in Figs. 4(c.1) and 4(c.2), the stable regions around \(k=0\) (white areas) become very wide. In comparison with the results for \(g_{1}=0\) in the second row, the repulsive constant interaction \(g_{1}\) further enhances the stability of the Bloch states around \(k=0\). However, if \(g_{2}\) dominates, such as \(g_{2}=0.2\) in Fig. 4(c.3), the structure of the instabilities becomes the same as with a pure nonlinear lattice. The brown-square line in Fig. 5 describes the averaged interaction of the \(k=0\) Bloch states for this case. It shows that up to a critical \(g_{2}\) the averaged interaction is repulsive, and beyond the critical \(g_{2}\) it becomes attractive due to the dominating \(g_{2}\). The repulsive constant interaction broadens the repulsive area of the averaged interaction compared with the case of \(g_{1}=0\) and \(V=0.05\). This is because \(g_{1}>0\) itself contributes repulsively to \(G\) in Eq. (11). So a repulsive constant interaction is favorable for the stabilization of the \(k=0\) Bloch state with mixed lattices. Since \(g_{1}<0\) contributes attractively to \(G\), the stabilization cannot benefit from \(g_{1}<0\). Surprisingly, we still find repulsive averaged interactions with a large linear lattice \(V=0.2\) and an attractive constant interaction \(g_{1}=-0.02\). The result is shown by the black-dot line in Fig. 5. In the middle region of \(g_{2}\), the averaged interaction is repulsive. The instability results demonstrated in the fourth row in Fig. 4 confirm that a small \(g_{2}\) in 4(d.1) and a large one in 4(d.3) lead to instabilities of the Bloch states around \(k=0\), while the middle value \(g_{2}=0.1\) in 4(d.2) results in a stable Bloch state at \(k=0\).
### The in-phase linear lattices \(V<0\)
The typical results for the BEC Bloch states with a nonlinear lattice and an in-phase linear lattice are described in Fig. 6. The first row shows the results for the in-phase linear lattice \(V=-0.05\) and \(g_{1}=0\). All Bloch states are Landau unstable. Notably, the states around the Brillouin zone edge \(k=0.5\) (center \(k=0\)) are dynamically stable (unstable). The physical reason is that the in-phase linear lattice enhances the effect of the nonlinear lattice since the structures of the two lattices are spatially matched. Therefore, the averaged interactions \(G\) for the states at \(k=0\) and at \(k=0.5\) are always attractive. Ref. [15] has revealed that Bloch states with attractive interactions in a linear lattice are always Landau unstable and are dynamically stable (unstable) around the Brillouin zone edges (center). So it is the attractive averaged interaction that makes the states around \(k=0\) (\(k=0.5\)) dynamically unstable (stable).
We also add a constant interaction \(g_{1}\) into the mixed lattices. The second row in Fig. 6 shows the results for a repulsive interaction \(g_{1}=0.05\), and the third row shows those for an attractive one, \(g_{1}=-0.02\). The repulsive constant interaction in Fig. 6(b.1) is dominant; therefore, the instability structures in the \((k,q)\) plane are similar to those of a BEC with repulsive interactions in a linear lattice [7]. When the repulsive constant interaction loses its dominant role, the instabilities become the same as those with only the mixed lattices [see Figs. 6(b.2) and 6(b.3)]. On the other hand, an attractive constant interaction has the same effect as the mixed lattices. The third row shows that the presence of an attractive constant interaction \(g_{1}=-0.02\) does not qualitatively modify the instability structures in comparison with the first row.
### Dynamical instabilities of Bloch states at Brillouin zone center and edge
The BEC experiment has shown that triggering the Landau instability requires a long time, while the dynamical instability happens in a short time [10].
Figure 6: Instabilities of the BEC Bloch states in the lowest Bloch band with a nonlinear lattice and an in-phase linear lattice \(V<0\). The colored shadow areas represent that the Bloch states are dynamical unstable, and the color scale labels the growth rate \(\Gamma\) defined in Eq. (9); the scale changes from the dark purple \(\Gamma=0\) to bright red \(\Gamma=0.1\). The gray areas indicate that the Bloch states have Landau instability. In the white regions, they are completely stable.
Therefore, the dynamical instability may be more relevant in experiments. Furthermore, the Bloch states at the Brillouin zone center and edges are particularly interesting due to their high symmetries. Here, we summarize their dynamical instabilities studied in the previous sections to clearly show that the linear lattice can be an efficient approach to stabilize unstable Bloch states of a nonlinear lattice.
Fig. 7 is the dynamical-instability phase diagram of the \(k=0\) Bloch states in the space of \((g_{2},V)\). Fig. 7(a) is the case of a zero constant interaction, \(g_{1}=0\). The white area represents where the state is dynamically stable. Only the out-of-phase linear lattice \(V>0\) can stabilize it. The red line corresponds to zero averaged interaction of the \(k=0\) Bloch state, \(G=0\), above which \(G\) is repulsive. Note that the boundary between the white (stable) and dark (unstable) areas is slightly mismatched with the line \(G=0\). This means that the mechanism of stabilizing the \(k=0\) Bloch state by the linear-lattice-induced repulsive averaged interaction is not exact. However, the mechanism does provide an intuitive and qualitative way to understand the stabilization. With the help of a repulsive constant interaction \(g_{1}=0.05\) in Fig. 7(b), the stabilization is extended from the out-of-phase lattice \(V>0\) to the in-phase one \(V<0\). Even when the constant interaction is attractive, such as \(g_{1}=-0.02\) in Fig. 7(c), we still find that a large out-of-phase lattice can stabilize the states with a finite \(g_{2}\).
Fig. 8 is the dynamical-instability phase diagram of the \(k=0.5\) Bloch states in the space of \((g_{2},V)\). In the absence of the constant interaction, \(g_{1}=0\) in Fig. 8(a), the effect of the linear lattice has two aspects.
Figure 8: The dynamical-instability-phase-diagram of the BEC Bloch states at Brillouin zone edge \(k=0.5\) with the mixed nonlinear and linear lattices in the parameter space \((g_{2},V)\). (a) \(g_{1}=0\), (b) \(g_{1}=0.05\), and (c) \(g_{1}=-0.02\). In the white regions, the \(k=0.5\) Bloch state is dynamically stable; in the dark regions, the Bloch state is dynamically unstable.
Figure 7: The dynamical-instability-phase-diagram of the BEC Bloch states at Brillouin zone center \(k=0\) with the mixed nonlinear and linear lattices in the parameter space \((g_{2},V)\). (a) \(g_{1}=0\), (b) \(g_{1}=0.05\), and (c) \(g_{1}=-0.02\). In the white regions, the \(k=0\) Bloch state is dynamically stable; in the dark regions, the Bloch state is dynamically unstable. The red lines represent the zero averaged interaction \(G=0\), and in the regions above the red lines the averaged interaction is repulsive and in the other regions it is attractive.
The in-phase lattice \(V<0\) always strengthens the stability of the \(k=0.5\) state, while the out-of-phase lattice \(V>0\) weakens its stability in the sense that it increases the critical value of \(g_{2}\) beyond which the state becomes stable. In the presence of a repulsive constant interaction \(g_{1}=0.05\) in Fig. 8(b), the constant interaction is dominant when \(g_{2}\) and \(-V\) are small, which destabilizes the state. However, for an attractive constant interaction \(g_{1}=-0.02\) in Fig. 8(c), the diagram is qualitatively the same as in the \(g_{1}=0\) case of Fig. 8(a).
Finally, we comment that the dynamical instability of the Bloch states calculated from the BdG equation in Eq. (5) can also be examined by the direct evolution of the GP equation in Eq. (1) with the corresponding Bloch states serving as initial states. Fig. 9 shows typical examples of such evolution. The two \(k=0\) Bloch states are represented by the marked points in Fig. 7. One of them is known to be dynamically stable and the other unstable from the calculation of the BdG equation. We set them as initial states to evolve the GP equation. As expected, the stable state evolves stably [see Fig. 9(a)] and the unstable state breaks down during the evolution [see Fig. 9(b)]. The time evolution of the Bloch states offers an experimental approach to examine the instability. In the experiment [33], the mixed nonlinear and linear lattices with the same period can be implemented by the optical Feshbach resonance of an optical standing wave. Following this experiment, we propose to load the BEC into the \(k=0\) Bloch state by adiabatically ramping up the standing wave. The system is then held for a certain time to allow free evolution of the Bloch state. Finally, the decay of the condensed atom number is observed, from which the loss rate is measured. The loss rate is related to the growth rate defined in Eq. (9).
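The direct-evolution check can be mimicked with a standard split-step Fourier integrator; the sketch below is an added illustration (the paper does not specify its numerical scheme), with an arbitrary domain size, time step, evolution time, and noise amplitude. It tiles the \(k=0\) Bloch state from the earlier sketch over many lattice periods, adds a small random perturbation, and monitors the deviation of the density from its initial profile.

```python
# Split-step Fourier sketch (illustrative, assumed parameters) for evolving the
# GP equation (1) in real time from a perturbed k=0 Bloch state phi_k, tiled
# over many lattice periods; growth of the deviation signals dynamical instability.
import numpy as np

periods = 32
Nx = periods * N                              # reuse N, phi_k, g1, g2, V from above
X = 2 * np.pi * periods * np.arange(Nx) / Nx
K = 2 * np.pi * np.fft.fftfreq(Nx, d=X[1] - X[0])
dt, steps = 0.005, 20000

rng = np.random.default_rng(0)
psi = np.tile(phi_k, periods)
psi = psi + 1e-4 * (rng.standard_normal(Nx) + 1j * rng.standard_normal(Nx))
dens0 = np.abs(psi) ** 2

Vx = -V * np.cos(X)                           # linear-lattice potential
Gnon_X = g1 + g2 * np.cos(X)                  # nonlinear-lattice coefficient
kin_half = np.exp(-0.5j * dt * 0.5 * K ** 2)  # half-step kinetic propagator

for _ in range(steps):
    psi = np.fft.ifft(kin_half * np.fft.fft(psi))
    psi = psi * np.exp(-1j * dt * (Vx + Gnon_X * np.abs(psi) ** 2))
    psi = np.fft.ifft(kin_half * np.fft.fft(psi))

print("t =", dt * steps,
      " max density deviation =", np.max(np.abs(np.abs(psi) ** 2 - dens0)))
```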
## V Conclusion
BECs in periodic potentials give rise to interesting physics relevant to the instabilities of Bloch states. These instabilities are experimentally relevant, as they relate to the breakdown of BEC superfluidity. It has been shown that even the lowest-energy Bloch state is unstable for BECs in a nonlinear lattice, which challenges its experimental implementation. We propose to add a linear lattice to the BECs with the nonlinear lattice to stabilize the lowest-energy Bloch state. We systematically study the instabilities of BEC Bloch states in mixed nonlinear and linear lattices. The two lattices have the same spatial structure and the same period, but the relative phase is tunable. We find that an out-of-phase linear lattice can make Bloch states around the Brillouin zone center dynamically and Landau stable. The stabilization mechanism is revealed: the out-of-phase lattice changes the density distributions to induce repulsive averaged interactions. In contrast, an in-phase linear lattice enhances the effect of the nonlinear lattice and cannot change the density distributions. It always induces attractive averaged interactions; therefore it is useless for the stabilization. It is known that Bloch states around the Brillouin zone edge become dynamically stable in the pure nonlinear lattice when the lattice amplitude is beyond a critical value. The presence of the out-of-phase lattice raises this critical value, while the in-phase lattice helps make these states dynamically stable regardless of the nonlinear-lattice amplitude.
We also incorporate a constant interaction into the BECs with mixed nonlinear and linear lattices. A repulsive constant interaction extends the out-of-phase-linear-lattice-induced stabilization of the Bloch states around the Brillouin zone center to the in-phase linear lattice. Even in the presence of an attractive constant interaction, we find that the out-of-phase linear lattice can still stabilize the states. For the Bloch states around the Brillouin zone edges, the constant interaction, whether attractive or repulsive, does not qualitatively change their instability properties.
## VI Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grants No. 11974235 and No. 11774219.
|
2305.11119 | A bounded below, noncontractible, acyclic complex of projective modules | We construct examples of bounded below, noncontractible, acyclic complexes of
finitely generated projective modules over some rings $S$, as well as bounded
above, noncontractible, acyclic complexes of injective modules. The rings $S$
are certain rings of infinite matrices with entries in the rings of commutative
polynomials or formal power series in infinitely many variables. In the world
of comodules or contramodules over coalgebras over fields, similar examples
exist over the cocommutative symmetric coalgebra of an infinite-dimensional
vector space. A simpler, universal example of a bounded below, noncontractible,
acyclic complex of free modules with one generator, communicated to the author
by Canonaco, is included at the end of the paper. | Leonid Positselski | 2023-05-18T16:59:28Z | http://arxiv.org/abs/2305.11119v4 | # A bounded below, noncontractible,
###### Abstract.
We construct examples of bounded below, noncontractible, acyclic complexes of finitely generated projective modules over some rings \(S\), as well as bounded above, noncontractible, acyclic complexes of injective modules. The rings \(S\) are certain rings of infinite matrices with entries in the rings of commutative polynomials or formal power series in infinitely many variables. In the world of comodules or contramodules over coalgebras over fields, similar examples exist over the cocommutative symmetric coalgebra of an infinite-dimensional vector space. A simpler, universal example of a bounded below, noncontractible, acyclic complex of free modules with one generator, communicated to the author by Canonaco, is included at the end of the paper.
###### Contents
* 1 Projective, Flat, and Injective Bounded Acyclicity Problems
* 2 The Injective Construction of Acyclic Complex of Projectives
* 3 Dual Rickard's Acyclicity Theorem
* 4 The Projective Construction of Acyclic Complex of Projectives
* 5 Brief Preliminaries on Coalgebras
* 6 Comodule and Contramodule Acyclicity Theorems
* 7 Two Contramodule Constructions of Acyclic Complexes of Projectives
* 8 Summary of the Examples Obtained
## Introduction
Bounded above acyclic complexes of projective objects are contractible. So are bounded below acyclic complexes of injective objects. On the other hand, there is an easy, thematic example of a doubly unbounded, acyclic, noncontractible complex of finitely generated projective-injective modules over the algebra of dual numbers \(R=k[\epsilon]/(\epsilon^{2})\) (over any field \(k\)):
\[\cdots\longrightarrow R\xrightarrow{\ \epsilon*\ }R\xrightarrow{\ \epsilon*\ }R\longrightarrow\cdots \tag{1}\]
We refer to [9, Prologue], [10, Sections 7.4-7.5] and the references therein for a discussion of the role of the complex (1) in the context of derived Koszul duality and derived categories of the second kind.
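For concreteness, here is a small linear-algebra check (an added sketch, over the field \(\mathbb{F}_{2}\) chosen purely for simplicity) that multiplication by \(\epsilon\) on \(R=k[\epsilon]/(\epsilon^{2})\cong k^{2}\) squares to zero and has image equal to kernel, so that (1) is indeed acyclic. Noncontractibility then follows by the standard argument: a contracting homotopy would split the short exact sequences \(0\to\epsilon R\to R\to\epsilon R\to 0\), but they cannot split because the local ring \(R\) is indecomposable and \(\epsilon R\cong k\) is not projective.

```python
# Tiny sanity check over F_2: multiplication by epsilon on R = k[eps]/(eps^2),
# written in the basis {1, eps}, squares to zero and has image equal to kernel,
# so the doubly unbounded complex (1) is acyclic at every spot.
import numpy as np
from itertools import product

D = np.array([[0, 0],
              [1, 0]])                       # 1 -> eps, eps -> 0

assert np.all((D @ D) % 2 == 0)              # d^2 = 0

vectors = [np.array(v) for v in product(range(2), repeat=2)]
kernel = {tuple(v) for v in vectors if np.all((D @ v) % 2 == 0)}
image = {tuple((D @ v) % 2) for v in vectors}
print("ker(d) =", sorted(kernel))
print("im(d)  =", sorted(image))
print("acyclic:", kernel == image)
```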
Do there exist bounded below, noncontractible, acyclic complexes of projective modules; and if so, over what rings? Dual-analogously, are there any bounded above, noncontractible, acyclic complexes of injective modules? These questions were posed, in the context of potential applications to the Finitistic Dimension Conjecture, in the recent preprint of Shaul [16]. According to [16, Theorem 5.1], nonexistence of such complexes of projective/injective modules over a two-sided Noetherian ring \(S\) with a dualizing complex would imply finiteness of the finitistic dimensions of \(S\).
The aim of the present paper is to show that, over certain rather big rings \(S\), such complexes do exist. The examples of rings \(S\) which we obtain are certainly noncommutative and non-Noetherian. The more explicit ones among them are rings of column-finite or row/column-zero-convergent infinite matrices with entries in the rings of commutative polynomials or formal power series in infinitely many variables.
On the other hand, in the world of coalgebras over fields, we demonstrate examples of bounded above, acyclic, noncontractible complexes of injective comodules and bounded below, acyclic, noncontractible complexes of projective contramodules over certain _cocommutative_ coalgebras dual to algebras of formal power series in infinitely many variables. These examples go back to [6, Section 0.2.7], where they were very briefly discussed in the context of semi-infinite homological algebra and derived comodule-contramodule correspondence.
Almost all the examples presented in this paper are based on one idea, namely, that of the dual Koszul complex of the ring of polynomials in infinitely many variables. A straightforward realization of this idea is possible in the worlds of comodules and contramodules, but we need an additional trick with a passage to infinite matrices in order to produce examples of complexes of projective/injective modules. The only exception is the (much simpler) _universal_ example, communicated to the author by A. Canonaco. We reproduce it at the end of the paper in Example 8.4.
The approach to the Finitistic Dimension Conjecture developed in [15, 16] goes back to Rickard's paper [14], where it was shown that if the injective modules over a finite-dimensional algebra generate its unbounded derived category as a triangulated category with coproducts, then the finitistic dimension is finite. A counterexample in [14, Theorem 3.5] shows that for the ring of commutative polynomials in infinitely many variables, the generation property fails. Our examples in this paper follow in the footsteps of [6, Section 0.2.7] and [14, Theorem 3.5]. We also provide some details of the claims in [6, Section 0.2.7] which were skipped in the book [6].
**Acknowledgement.** This paper was inspired by Liran Shaul's talk at the Algebra seminar in Prague, organized by Jan Trlifaj. I want to thank both the speaker and the organizer of the seminar. I also wish to thank Alberto Canonaco for communicating his example to me and giving a kind permission to reproduce it here (see Example 8.4). The author is supported by the GACR project 23-05148S and the Czech Academy of Sciences (RVO 67985840).
## 1. Projective, Flat, and Injective Bounded Acyclicity Problems
The general convention in this paper is that complexes are presumed to be cohomologically graded, so the differential raises the degree. A complex \(C^{\bullet}=(C^{n},\,d_{n}\colon C^{n}\to C^{n+1})\) is called _bounded above_ if \(C^{n}=0\) for \(n\gg 0\), and \(C^{\bullet}\) is _bounded below_ if \(C^{n}=0\) for \(n\ll 0\). In this notation, it is a standard fact that every bounded above acyclic complex of projective modules/objects (in an abelian or exact category) is contractible, and every bounded below acyclic complex of injective modules/objects is contractible. When we occasionally consider homologically graded complexes, we use the notation with lower indices, \(P_{\bullet}=(P_{n},\,d_{n}\colon P_{n}\to P_{n-1})\).
Let \(S\) be an associative ring. The two "wrong-sided bounded projective/injective acyclicity problems" posed in [16, Theorem 5.1(4-5)] are:
* Is every bounded above acyclic complex of injective \(S\)-modules contractible?
* Is every bounded below acyclic complex of projective \(S\)-modules contractible?
In addition to the above two, we would like to ask a similar question about flat \(S\)-modules. Here one has to be careful: even a two-sided bounded acyclic complex of flat modules need not be contractible. However, such a complex is always _pure acyclic_, or in other words, has flat modules of cocycles. Thus we ask:
* Is every bounded below acyclic complex of flat \(S\)-modules pure acyclic?
Given a ring \(S\) and a left \(S\)-module \(M\), the _character module_\(M^{+}=\operatorname{Hom}_{\mathbb{Z}}(M,\mathbb{Q}/\mathbb{Z})\) is a right \(S\)-module. The following lemma is well-known.
**Lemma 1.1**.: _A left \(S\)-module \(F\) is flat if and only if the right \(S\)-module \(F^{+}\) is injective. _
The next proposition explains the connection between the injective, flat, and projective wrong-sided bounded acyclicity questions, and shows that presenting a counterexample to the "projective" question is enough.
**Proposition 1.2**.: _Given a ring \(S\), consider the following three properties:_
1. _Every bounded above acyclic complex of injective right_ \(S\)_-modules is contractible._
2. _Every bounded below acyclic complex of flat left_ \(S\)_-modules is pure acyclic._
3. _Every bounded below acyclic complex of projective left_ \(S\)_-modules is contractible._
_Then the implications (1)\(\implies\)(2)\(\implies\)(3) hold._
Proof.: (1)\(\implies\)(2) Let \(F^{\bullet}=(0\to F^{0}\to F^{1}\to F^{2}\to\cdots)\) be a bounded below acyclic complex of flat left \(S\)-modules. Then, by the direct implication of Lemma 1.1, \(F^{\bullet,+}=(\cdots\to F^{2,+}\to F^{1,+}\to F^{0,+}\to 0)\) is a bounded above acyclic complex of injective right \(S\)-modules. A complex of injective modules is contractible if and only if its modules of cocycles are injective. If this is the case for the complex \(F^{\bullet,+}\), then the inverse implication of Lemma 1.1 tells that the modules of cocycles of the complex \(F^{\bullet}\) are flat; so \(F^{\bullet}\) is a pure acyclic complex of flat modules.
(2) \(\Longrightarrow\) (3) By Neeman's theorem [5, Theorem 8.6 (iii) \(\Rightarrow\) (i)], any pure acyclic complex of projective modules is contractible. (Cf. [16, proof of Theorem A.7].)
## 2. The Injective Construction of Acyclic Complex of Projectives
Let \(k\) be a field, \((x_{\alpha})_{\alpha\in A}\) be an infinite set of variables, and \(R=k[x_{\alpha}:\alpha\in A]\) be the commutative ring of polynomials in the variables \(x_{\alpha}\) over \(k\). Endow the one-dimensional vector space \(k\) over \(k\) with the \(R\)-module structure by the obvious rule: all the elements \(x_{\alpha}\in R\) act by zero in \(k\).
**Theorem 2.1** (Rickard).: _For any injective \(R\)-module \(J\) and all integers \(n\geq 0\), one has \(\operatorname{Ext}_{R}^{n}(J,k)=0\)._
Proof.: For a countably infinite set of variables \(x_{\alpha}\), this is formulated and proved in [14, Theorem 3.5]. The general case of a possibly uncountable index set \(A\) is similar. One represents \(A\) as the union of its finite subsets \(B\subset A\), so that the ring \(R\) is the direct limit of the related polynomial rings \(R_{B}\) in finitely many variables, considers the direct limit of the finite Koszul complexes indexed by the finite subsets \(B\subset A\), etc. (Cf. the proof of Theorem 3.1 below for some further details.)
Let \(\mathsf{A}\) be an additive category and \(M\in\mathsf{A}\) be an object. Then we denote by \(\operatorname{\mathsf{add}}(M)\) the full subcategory in \(\mathsf{A}\) formed by the direct summands of finite direct sums of copies of \(M\). The following lemma is a straightforward category-theoretic generalization of a well-known module-theoretic observation going back to Dress [3].
**Lemma 2.2**.: _Let \(\mathsf{A}\) be an idempotent-complete additive category and \(M\in\mathsf{A}\) be an object._
(a) _Let \(S=\operatorname{Hom}_{\mathsf{A}}(M,M)^{\operatorname{op}}\) be the opposite ring to the endomorphism ring of the object \(M\in\mathsf{A}\); so the ring \(S\) acts on the object \(M\) on the right. Then the covariant functor \(\operatorname{Hom}_{\mathsf{A}}(M,-)\colon\mathsf{A}\longrightarrow S\text{ \rm-}\mathsf{Mod}\) restricts to an equivalence of additive categories_
\[\operatorname{Hom}_{\mathsf{A}}(M,-)\colon\operatorname{\mathsf{add}}(M) \simeq S\text{\rm-}\mathsf{mod}_{\mathsf{proj}}\]
_between the full subcategory \(\operatorname{\mathsf{add}}(M)\subset\mathsf{A}\) and the full subcategory of finitely generated projective left \(S\)-modules \(S\text{\rm-}\mathsf{mod}_{\mathsf{proj}}\) in the category of left \(S\)-modules \(S\text{\rm-}\mathsf{Mod}\)._
(b) _Let \(S=\operatorname{Hom}_{\mathsf{A}}(M,M)\) be the endomorphism ring of the object \(M\in\mathsf{A}\); so the ring \(S\) acts on the object \(M\) on the left. Then the contravariant functor \(\operatorname{Hom}_{\mathsf{A}}(-,M)\colon\operatorname{\mathsf{A}}^{ \mathsf{op}}\longrightarrow S\text{\rm-}\mathsf{Mod}\) restricts to an anti-equivalence of additive categories_
\[\operatorname{Hom}_{\mathsf{A}}(-,M)\colon\operatorname{\mathsf{add}}(M)^{ \mathsf{op}}\simeq S\text{\rm-}\mathsf{mod}_{\mathsf{proj}}\]
_between the full subcategory \(\operatorname{\mathsf{add}}(M)\subset\mathsf{A}\) and the full subcategory of finitely generated projective left \(S\)-modules \(S\text{\rm-}\mathsf{mod}_{\mathsf{proj}}\subset S\text{\rm-}\mathsf{Mod}\). _
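As a basic illustration of part (a) (a well-known special case, recorded here only for orientation): for \(\mathsf{A}=R\text{--}\mathsf{Mod}\) over a commutative ring \(R\) and \(M=R^{n}\), one has \(S=\operatorname{Hom}_{R}(R^{n},R^{n})^{\operatorname{op}}\simeq M_{n}(R)\), and the lemma recovers the classical Morita equivalence
\[\operatorname{Hom}_{R}(R^{n},-)\colon\operatorname{\mathsf{add}}(R^{n})=\operatorname{\mathsf{add}}(R)\,\simeq\,M_{n}(R)\text{--}\mathsf{mod}_{\mathsf{proj}}\]
between the finitely generated projective \(R\)-modules and the finitely generated projective \(M_{n}(R)\)-modules.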
The following corollary sums up the "injective coresolution construction of a bounded below acyclic complex of projective modules".
**Corollary 2.3**.: _Let \(R=k[x_{\alpha}:\alpha\in A]\) be the ring of polynomials in infinitely many variables over a field \(k\), and let_
(2) \[0\longrightarrow k\longrightarrow J^{0}\longrightarrow J^{1}\longrightarrow J^{2}\longrightarrow\cdots\]
_be an injective coresolution of the one-dimensional \(R\)-module \(k\). Let \(J\) be an injective \(R\)-module such that the \(R\)-module \(J^{n}\) is a direct summand of \(J\) for all \(n\geq 0\). Let_
(3) \[0\longrightarrow\operatorname{Hom}_{R}(J,J^{0})\longrightarrow\operatorname{Hom}_{R}(J,J^{1})\longrightarrow\operatorname{Hom}_{R}(J,J^{2})\longrightarrow\cdots\]
_be the complex obtained by applying the covariant functor \(\operatorname{Hom}_{R}(J,-)\) to the truncated coresolution (2). Then (3) is a bounded below, noncontractible, acyclic complex of finitely generated projective left modules over the ring \(S=\operatorname{Hom}_{R}(J,J)^{\operatorname{op}}\)._
Proof.: The cohomology modules of the complex (3) are \(H^{n}(\operatorname{Hom}_{R}(J,J^{\bullet}))=\operatorname{Ext}_{R}^{n}(J,k)\), which vanish for all \(n\geq 0\) by Theorem 2.1; so the complex (3) is acyclic. By Lemma 2.2(a) (for \(\mathsf{A}=R\text{--}\mathsf{Mod}\) and \(M=J\)), the functor \(\operatorname{Hom}_{R}(J,-)\) is an equivalence of categories \(\operatorname{\mathsf{add}}(J)\simeq S\text{--}\mathsf{mod}_{\mathsf{proj}}\). The truncated coresolution (2), \(0\longrightarrow J^{0}\longrightarrow J^{1}\longrightarrow J^{2}\longrightarrow\cdots\), is a noncontractible (since nonacyclic) complex in \(R\text{--}\mathsf{Mod}\) with the terms belonging to \(\operatorname{\mathsf{add}}(J)\), so it is a noncontractible complex in \(\operatorname{\mathsf{add}}(J)\). Applying the equivalence of additive categories \(\operatorname{\mathsf{add}}(J)\simeq S\text{--}\mathsf{mod}_{\mathsf{proj}}\), we obtain a noncontractible complex (3) in \(S\text{--}\mathsf{mod}_{\mathsf{proj}}\), which is consequently also noncontractible in \(S\text{--}\mathsf{Mod}\).
## 3. The Ext Vanishing Theorem
**Theorem 3.1**.: _For any flat \(R\)-module \(P\) and all integers \(n\geq 0\), one has \(\operatorname{Ext}_{R}^{n}(k,P)=0\)._
Proof.: For every index \(\alpha\in A\), consider the two-term complex of free \(R\)-modules with one generator
(4) \[\cdots\longrightarrow 0\longrightarrow R\xrightarrow{\ x_{\alpha}\ }R\longrightarrow 0\longrightarrow\cdots\]
situated in the cohomological degrees \(-1\) and \(0\). For every finite subset of indices \(B\subset A\), denote by \(K^{B}_{\bullet}(R)\) the tensor product, taken over the ring \(R\), of the complexes (4) with \(\alpha\in B\). As a finite subset \(B\subset A\) varies, the complexes \(K^{B}_{\bullet}(R)\) form an inductive system, indexed by the poset of all finite subsets \(B\subset A\) ordered by inclusion.
Put \(K_{\bullet}(R)=\varinjlim_{B\subset A}K^{B}_{\bullet}(R)\). Then \(K_{\bullet}(R)\) is a bounded above complex of free \(R\)-modules. One has \(K_{n}(R)=0\) for \(n<0,\ \ K_{0}(R)=R\), and \(K_{n}(R)\) is a free \(R\)-module with a set of generators of the cardinality equal to the cardinality of the set \(A\) for all \(n>0\). (More invariantly, \(K_{n}(R)\) is the free \(R\)-module spanned by the set of all subsets in \(A\) of the finite cardinality \(n\)).
For any finite subset \(B\subset A\), the complex \(K^{B}_{\bullet}(R)\) is a finite resolution of the \(R\)-module \(R/\sum_{\alpha\in B}x_{\alpha}R\) by finitely generated free \(R\)-modules. Passing to the direct limit, one can easily see that \(K_{\bullet}(R)\) is a free \(R\)-module resolution of the one-dimensional \(R\)-module \(k=R/\sum_{\alpha\in A}x_{\alpha}R\).
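For instance, for a two-element subset \(B=\{\alpha,\beta\}\subset A\), the complex \(K^{B}_{\bullet}(R)\) is the usual two-variable Koszul complex
\[0\longrightarrow R\xrightarrow{\ r\,\mapsto\,(x_{\beta}r,\,-x_{\alpha}r)\ }R\oplus R\xrightarrow{\ (r,s)\,\mapsto\,x_{\alpha}r+x_{\beta}s\ }R\longrightarrow 0,\]
whose only nonzero homology is \(R/(x_{\alpha}R+x_{\beta}R)\) in degree \(0\) (the signs depend on a chosen ordering of \(B\)).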
The following three lemmas are straightforward or standard.
**Lemma 3.2**.: _Let \(R\) be an associative ring, \(\Xi\) be a directed poset, and \((F_{\xi})_{\xi\in\Xi}\) be an inductive system of projective \(R\)-modules whose direct limit \(F=\varinjlim_{\xi\in\Xi}F_{\xi}\) is also a projective \(R\)-module. Let \(P\) be an arbitrary \(R\)-module. Then the higher derived inverse limit functors vanish on the projective system \(\operatorname{Hom}_{R}(F_{\xi},P)_{\xi\in\Xi}\),_
\[\varprojlim_{\xi\in\Xi}\operatorname{Hom}_{R}(F_{\xi},P)=\operatorname{Hom}_{ R}(F,P)\quad\text{and}\quad\varprojlim_{\xi\in\Xi}^{i}\operatorname{Hom}_{R}(F_{ \xi},P)=0\quad\text{for all}\,\,\,i\geq 1.\]
Proof.: Dropping the condition that the \(R\)-module \(F=\varinjlim_{\xi\in\Xi}F_{\xi}\) is projective (but keeping the conditions that the \(R\)-modules \(F_{\xi}\) are projective), one would have \(\varprojlim_{\xi\in\Xi}^{i}\operatorname{Hom}_{R}(F_{\xi},P)=\operatorname{ Ext}_{R}^{i}(F,P)\) for every \(i\geq 0\).
**Lemma 3.3**.: _Let \(A\) be an infinite set and \((V_{B})_{B\subset A}\) be a projective system of abelian groups, indexed by the poset of all finite subsets \(B\subset A\) ordered by inclusion. Assume that there exists an integer \(n\geq 0\) such that \(V_{B}=0\) whenever the cardinality of \(B\) exceeds \(n\). Then the whole derived inverse limit functor vanishes on the projective system \((V_{B})_{B\subset A}\),_
\[\varprojlim_{B\subset A}^{i}V_{B}=0\qquad\text{for all}\,\,\,i\geq 0.\]
Proof.: This is a special case of the assertion that the derived functors of inverse limit are preserved by the passage to a cofinal subsystem. This can be deduced from the fact that the derived inverse limits vanish on so-called weakly flabby (faiblement flasque) projective systems [4, Theoreme 1.8]. A stronger result that the derived inverse limit (in an abelian category with exact product functors) only depends on the pro-object represented by the given projective system can be found in [13, Corollary 7.3.7].
**Lemma 3.4**.: _Let \(\Xi\) be a directed poset and \((C^{\bullet}_{\xi})_{\xi\in\Xi}\) be a projective system of complexes of abelian groups with \(C^{n}_{\xi}=0\) for \(n<0\). Then there are two spectral sequences \({}^{\prime}\!E^{pq}_{r}\) and \({}^{\prime\prime}\!E^{pq}_{r}\), starting from the pages_
\[{}^{\prime}\!E^{pq}_{2} =\varprojlim^{p}H^{q}(C^{\bullet}_{\xi}), p,\,q\geq 0,\] \[{}^{\prime\prime}\!E^{pq}_{1} =\varprojlim^{q}C^{p}_{\xi}, p,\,q\geq 0,\]
_with the differentials \({}^{\prime}\!d_{r}^{pq}\colon{}^{\prime}\!E^{p,q}_{r}\longrightarrow{}^{\prime}\!E^{p+r,q-r+1}_{r}\) and \({}^{\prime\prime}\!d_{r}^{pq}\colon{}^{\prime\prime}\!E^{p,q}_{r}\longrightarrow{}^{\prime\prime}\!E^{p+r,q-r+1}_{r}\), converging to the associated graded groups to two different filtrations \({}^{\prime}\!F^{p}E^{n}\) and \({}^{\prime\prime}\!F^{p}E^{n}\) on the same graded abelian group \(E^{n}\), \(n=p+q\)._
Proof.: These are called "the two hypercohomology spectral sequences" (for the derived functor of inverse limit); cf. [2, Section XVII.3]. The groups \(E^{n}\) are the cohomology groups of the complex obtained by applying the derived functor of inverse limit to the whole complex of projective systems \((C^{\bullet}_{\xi})\).
Now we can finish the proof of the theorem. By the definition, we have \(\operatorname{Ext}^{n}_{R}(k,P)=H^{n}\operatorname{Hom}_{R}(K_{\bullet}(R),P)\). The complex \(\operatorname{Hom}_{R}(K_{\bullet}(R),P)\) is the inverse limit
\[\operatorname{Hom}_{R}(K_{\bullet}(R),P)=\varprojlim_{B\subset A}\operatorname {Hom}_{R}(K^{B}_{\bullet}(R),P).\]
For every \(n\geq 0\), Lemma 3.2 (with the poset \(\Xi\) of all finite subsets \(B\subset A\), finitely generated free \(R\)-modules \(F_{B}\), and an infinitely generated free \(R\)-module \(F\)) tells that \(\varprojlim_{B\subset A}^{i}\operatorname{Hom}_{R}(K^{B}_{n}(R),P)=0\) for all \(i\geq 1\).
On the other hand, the complex \(\operatorname{Hom}_{R}(K^{B}_{\bullet}(R),P)\) has its only nonzero cohomology module situated in the cohomological degree \(n\) equal to the cardinality of \(B\) (as \(P\) is a flat module over the ring \(R_{B}=k[x_{\alpha}:\alpha\in B]\)). By Lemma 3.3, we have \(\varprojlim_{B\subset A}^{i}H^{n}\operatorname{Hom}_{R}(K^{B}_{\bullet}(R),P)=0\) for all \(i\geq 0\) and \(n\geq 0\).
In the context of Lemma 3.4, put \(C^{\bullet}_{B}=\operatorname{Hom}_{R}(K^{B}_{\bullet}(R),P)\). Then we know that \({}^{\prime}\!E^{pq}_{2}=\varprojlim_{B\subset A}^{p}H^{q}(C^{\bullet}_{B})=0\) for all \(p\), \(q\geq 0\), and \({}^{\prime\prime}\!E^{pq}_{1}=\varprojlim_{B\subset A}^{q}C^{p}_{B}=0\) for all \(q\geq 1\). Thus \(E^{n}=0\) and \(H^{n}(\varprojlim_{B\subset A}\operatorname{Hom}_{R}(K^{B}_{\bullet}(R),P))={}^{\prime\prime}\!E^{n,0}_{2}=E^{n}=0\) for all \(n\geq 0\).
## 4. The Projective Construction of Acyclic Complex of Projectives
Now we are ready to present the "projective resolution construction of a bounded below acyclic complex of projective modules".
**Corollary 4.1**.: _Let \(R=k[x_{\alpha}:\alpha\in A]\) be the ring of polynomials in infinitely many variables over a field \(k\), and let_
\[\begin{CD}0@<{}<{}<k@<{}<{}<P_{0}@<{}<{}<P_{1}@<{}<{}<P_{2}@<{}<{}<\cdots\end{CD} \tag{5}\]
_be a projective resolution of the one-dimensional \(R\)-module \(k\). Let \(P\) be a projective \(R\)-module such that the \(R\)-module \(P_{n}\) is a direct summand of \(P\) for all \(n\geq 0\). Let_
\[\begin{CD}0@>{}>{}>\operatorname{Hom}_{R}(P_{0},P)@>{}>{}>\operatorname{Hom}_{ R}(P_{1},P)@>{}>{}>\operatorname{Hom}_{R}(P_{2},P)@>{}>{}>\cdots\end{CD} \tag{6}\]
_be the complex obtained by applying the contravariant functor \(\operatorname{Hom}_{R}(-,P)\) to the truncated resolution (5). Then (6) is a bounded below, noncontractible, acyclic complex of finitely generated projective left modules over the ring \(S=\operatorname{Hom}_{R}(P,P)\)._
Proof.: The complex (6) is acyclic by Theorem 3.1. The left \(S\)-module \(\operatorname{Hom}_{R}(P_{n},P)\) is a direct summand of the left \(S\)-module \(\operatorname{Hom}_{R}(P,P)=S\) for every \(n\geq 0\), since the \(R\)-module \(P_{n}\) is a direct summand of \(P\). So (6) is even a complex of cyclic projective left \(S\)-modules.
The proof of the assertion that the complex of \(S\)-modules (6) is not contractible is similar to the argument in the proof of Corollary 2.3. By Lemma 2.2(b) (for \(\mathsf{A}=R\text{--}\mathsf{Mod}\) and \(M=P\)), the functor \(\operatorname{Hom}_{R}(-,P)\) is an anti-equivalence of categories \(\operatorname{\mathsf{add}}(P)^{\mathsf{op}}\simeq S\text{--}\mathsf{mod}_{ \mathsf{proj}}\). The truncated resolution (5),
\[0\xleftarrow{}P_{0}\xleftarrow{}P_{1}\xleftarrow{}P_{2}\xleftarrow{}\cdots\]
is a noncontractible (since nonacyclic) complex in \(R\text{--}\mathsf{Mod}\) with the terms belonging to \(\operatorname{\mathsf{add}}(P)\), so it is a noncontractible complex in \(\operatorname{\mathsf{add}}(P)\). Applying the anti-equivalence of additive categories \(\operatorname{\mathsf{add}}(P)^{\mathsf{op}}\simeq S\text{--}\mathsf{mod}_{ \mathsf{proj}}\), we obtain a noncontractible complex (6) in \(S\text{--}\mathsf{mod}_{\mathsf{proj}}\), which is consequently also noncontractible in \(S\text{--}\mathsf{Mod}\). It is important for this argument that the contravariant functor \(\operatorname{Hom}_{R}(-,P)\colon\operatorname{\mathsf{add}}(P)^{\mathsf{op}} \xrightarrow{}S\text{--}\mathsf{Mod}\) is fully faithful.
## 5. Brief Preliminaries on Coalgebras
In this section and the next two, we consider comodules and contramodules over coassociative, counital coalgebras \(\mathcal{C}\) over a field \(k\). We refer to the book [17] and the survey papers [7, Section 1], [10, Sections 3 and 8] for background material on coalgebras, comodules, and contramodules.
For any coalgebra \(\mathcal{C}\), there are locally finite Grothendieck abelian categories of left and right \(\mathcal{C}\)-comodules, \(\mathcal{C}\)-\(\mathsf{Comod}\) and \(\mathsf{Comod}\)-\(\mathcal{C}\), and a locally presentable abelian category of left \(\mathcal{C}\)-contramodules \(\mathcal{C}\)-\(\mathsf{Contra}\). There are enough injective objects in \(\mathcal{C}\text{--}\mathsf{Comod}\), and they are precisely the direct summands of the _cofree_ left \(\mathcal{C}\)-comodules \(\mathcal{C}\otimes_{k}V\) (where \(V\) ranges over the \(k\)-vector spaces). Dual-analogously, there are enough projective objects in \(\mathcal{C}\text{--}\mathsf{Contra}\), and they are precisely the direct summands of the _free_ left \(\mathcal{C}\)-contramodules \(\operatorname{Hom}_{k}(\mathcal{C},V)\) (where \(V\in k\text{--}\mathsf{Vect}\)).
The additive categories of injective left \(\mathcal{C}\)-comodules and projective left \(\mathcal{C}\)-contramodules are naturally equivalent,
\[\Psi_{\mathcal{C}}\colon\mathcal{C}\text{--}\mathsf{Comod}_{\mathsf{inj}}\, \simeq\,\mathcal{C}\text{--}\mathsf{Contra}_{\mathsf{proj}}\,:\!\Phi_{ \mathcal{C}}. \tag{7}\]
The equivalence is provided by the restrictions of the adjoint functors
\[\Psi_{\mathcal{C}}\colon\mathcal{C}\text{--}\mathsf{Comod}\,\leftrightarrow\,\mathcal{C}\text{--}\mathsf{Contra}\,:\!\Phi_{\mathcal{C}},\]
the functor \(\Phi_{\mathcal{C}}\) being the left adjoint and \(\Psi_{\mathcal{C}}\) the right adjoint. The functors \(\Psi_{\mathcal{C}}\) and \(\Phi_{\mathcal{C}}\) are constructed as
\[\Psi_{\mathcal{C}}(\mathcal{M})=\operatorname{Hom}_{\mathcal{C}}(\mathcal{C}, \mathcal{M})\quad\text{and}\quad\Phi_{\mathcal{C}}(\mathfrak{P})=\mathcal{C} \odot_{\mathcal{C}}\mathfrak{P}\]
for all \(\mathcal{M}\in\mathcal{C}\text{--}\mathsf{Comod}\) and \(\mathfrak{P}\in\mathcal{C}\text{--}\mathsf{Contra}\). Here \(\odot_{\mathcal{C}}\colon\mathsf{Comod}\text{--}\mathcal{C}\times\mathcal{C} \text{--}\mathsf{Contra}\xrightarrow{}k\text{--}\mathsf{Vect}\) is the functor of _contratensor product_ over a coalgebra \(\mathcal{C}\), while \(\operatorname{Hom}_{\mathcal{C}}\) denotes the \(\operatorname{Hom}\) functor in the comodule category \(\mathcal{C}\text{--}\mathsf{Comod}\). The equivalence of additive
categories (7) is called the (_underived_) _comodule-contramodule correspondence_. We refer to [7, Sections 1.2 and 3.1], [10, Sections 8.6-8.7], or [6, Sections 0.2.6 and 5.1] for a more detailed discussion.
In fact, we are only interested in one special kind of coalgebras, viz., the _symmetric coalgebra_\(\mathcal{S}\mathit{ym}(U)\) of a \(k\)-vector space \(U\). To define the symmetric coalgebra, consider the _tensor coalgebra_\(\mathcal{T}\!\mathit{en}(U)=\bigoplus_{n=0}^{\infty}U^{\otimes n}\), as defined, e. g., in [10, Section 2.3] (where the notation is slightly different). The tensor coalgebra is the cofree conilpotent coalgebra cospanned by \(U\)[10, Remark 3.2]; it is also naturally graded. The symmetric coalgebra is simplest defined as the graded subcoalgebra in \(\mathcal{T}\!\mathit{en}(U)\) whose grading components \(\mathcal{S}\mathit{ym}_{n}(U)\subset\mathcal{T}\!\mathit{en}_{n}(U)=U^{ \otimes n}\) are the subspaces of symmetric tensors \(\mathcal{S}\mathit{ym}_{n}(U)\subset U^{\otimes n}\) in the tensor powers of the vector space \(U\). So the whole symmetric coalgebra is \(\mathcal{S}\mathit{ym}(U)=\bigoplus_{n=0}^{\infty}\mathcal{S}\mathit{ym}_{n }(U)=k\oplus U\oplus\mathcal{S}\mathit{ym}_{2}(U)\oplus\cdots\).
Following the discussion in [7, Section 1.3-1.4] or [10, Section 8.3], coalgebras \(\mathcal{C}\) can be described (and in fact, defined) in terms of their vector space dual algebras \(\mathcal{C}^{*}=\mathrm{Hom}_{k}(\mathcal{C},k)\), which carry natural linearly compact (\(=\) pseudocompact) topologies. In particular, if \(U\) is a finite-dimensional \(k\)-vector space with a basis \(x_{1}^{*}\),..., \(x_{m}^{*}\), then the dual algebra \(\mathcal{S}\mathit{ym}(U)^{*}\) to the symmetric coalgebra \(\mathcal{S}\mathit{ym}(U)\) is the topological algebra of formal Taylor power series \(\mathcal{S}\mathit{ym}(U)^{*}=k[[x_{1},\ldots,x_{m}]]\).
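As the simplest illustration, take \(U=ku\) one-dimensional. Then every tensor is symmetric, so \(\mathcal{S}\mathit{ym}(U)=\mathcal{T}\!\mathit{en}(U)=\bigoplus_{n\geq 0}k\,u^{\otimes n}\), with the comultiplication
\[\Delta(u^{\otimes n})=\sum_{p+q=n}u^{\otimes p}\otimes u^{\otimes q}.\]
In the dual algebra, the dual basis vectors \(\xi_{n}=(u^{\otimes n})^{*}\) multiply by the rule \(\xi_{p}\xi_{q}=\xi_{p+q}\); so \(\mathcal{S}\mathit{ym}(U)^{*}=\prod_{n\geq 0}k\,\xi_{n}\simeq k[[x]]\), with \(x=\xi_{1}\) and \(\xi_{n}=x^{n}\).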
Generally speaking, for an infinite-dimensional \(k\)-vector space \(W\), one has \(\mathcal{S}\mathit{ym}(W)=\varinjlim_{U\subset W}\mathcal{S}\mathit{ym}(U)\) and \(\mathcal{S}\mathit{ym}(W)^{*}=\varprojlim_{U\subset W}\mathcal{S}\mathit{ym} (U)^{*}\), where \(U\) ranges over the finite-dimensional vector subspaces of \(W\). So, if \(\{x_{\alpha}^{*}:\alpha\in A\}\) is a \(k\)-vector space basis of \(W\), indexed by some set \(A\), then \(\mathcal{S}\mathit{ym}(W)^{*}=\varprojlim_{B\subset A}k[[x_{\alpha}:\alpha\in B]]\), where \(B\) ranges over the finite subsets of \(A\). Here, given two finite subsets \(B^{\prime}\subset B^{\prime\prime}\subset A\), the transition map \(k[[x_{\alpha}:\alpha\in B^{\prime\prime}]]\longrightarrow k[[x_{\alpha}: \alpha\in B^{\prime}]]\) in the projective system takes \(x_{\alpha}\) to \(x_{\alpha}\) for all \(\alpha\in B^{\prime}\) and \(x_{\beta}\) to \(0\) for all \(\beta\in B^{\prime\prime}\setminus B^{\prime}\). Such rings \(\mathcal{S}\mathit{ym}(W)^{*}=\varprojlim_{B\subset A}k[[x_{\alpha}:\alpha\in B]]\) are the "commutative rings of formal power series in infinitely many variables" that we are interested in.
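Concretely, since every monomial in the variables \(x_{\alpha}\) involves only finitely many of them, an element of such a ring can be viewed as an arbitrary formal \(k\)-linear combination of monomials,
\[\varprojlim_{B\subset A}k[[x_{\alpha}:\alpha\in B]]\,\simeq\,\prod_{m}k\cdot m,\]
where the product is taken over the set of all monomials \(m\) in the variables \(\{x_{\alpha}:\alpha\in A\}\); the projection onto \(k[[x_{\alpha}:\alpha\in B]]\) kills the monomials involving any variable \(x_{\beta}\) with \(\beta\notin B\).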
## 6. Comodule and Contramodule Acyclicity Theorems
As above, we denote by \(W\) an infinite-dimensional \(k\)-vector space with a basis \(\{x_{\alpha}^{*}:\alpha\in A\}\) indexed by a set \(A\). Given a finite set \(B\), we let \(\widehat{R}_{B}=k[[x_{\alpha}:\alpha\in B]]\) be the (topological) ring of commutative formal Taylor power series in finitely many variables indexed by \(B\). Furthermore, we put \(\widehat{R}=\varprojlim_{B\subset A}\widehat{R}_{B}\) (with the transition maps described in the previous section). So, denoting by \(U_{B}\subset W\) the finite-dimensional vector subspace spanned by \(\{x_{\alpha}^{*}:\alpha\in B\}\), we have \(\widehat{R}_{B}=\mathcal{S}\mathit{ym}(U_{B})^{*}\) and \(\widehat{R}=\varprojlim_{B\subset A}k[[x_{\alpha}:\alpha\in B]]=\mathcal{S} \mathit{ym}(W)^{*}\). Let us also introduce the notation \(\mathcal{C}_{B}=\mathcal{S}\mathit{ym}(U_{B})\) and \(\mathcal{C}=\mathcal{S}\mathit{ym}(W)\) for the symmetric coalgebras.
As in the proof of Theorem 3.1, we start with considering the two-term Koszul complex of free \(\widehat{R}_{B}\)-modules with one generator
\[\cdots\xrightarrow{\ }0\xrightarrow{\ }\widehat{R}_{B}\xrightarrow{\ x_{\alpha}\ }\widehat{R}_{B}\xrightarrow{\ }0\xrightarrow{\ }\cdots \tag{8}\]
situated in the cohomological degrees \(-1\) and \(0\) (where \(\alpha\in B\)). Denote by \(K^{B}_{\bullet}(\widehat{R}_{B})\) the tensor product, taken over the ring \(\widehat{R}_{B}\), of the complexes (8). As the elements \(\{x_{\alpha}:\alpha\in B\}\) form a regular sequence in the formal power series ring \(\widehat{R}_{B}\), the complex \(K^{B}_{\bullet}(\widehat{R}_{B})\) is a finite resolution of the one-dimensional \(\widehat{R}_{B}\)-module \(k=\widehat{R}_{B}/\sum_{\alpha\in B}x_{\alpha}\widehat{R}_{B}\) by finitely generated free \(\widehat{R}_{B}\)-modules.
The (augmented) Koszul complex \(K^{B}_{\bullet}(\widehat{R}_{B})\longrightarrow k\) is a complex of linearly compact topological \(k\)-vector spaces; so it can be obtained by applying the vector space dualization functor \(\operatorname{Hom}_{k}(-,k)\) to a certain complex of discrete vector spaces. The latter complex has the form
\[\begin{CD}0@>{}>{}>k@>{}>{}>\mathcal{S}\mathit{ym}(U_{B})@>{}>{}>\mathcal{S} \mathit{ym}(U_{B})\otimes_{k}U_{B}\\ @>{}>{}>\mathcal{S}\mathit{ym}(U_{B})\otimes_{k}\Lambda^{2}(U_{B})@>{}>{}> \cdots @>{}>{}>\mathcal{S}\mathit{ym}(U_{B})\otimes_{k}\Lambda^{m}(U_{B})@>{}> {}>0,\end{CD} \tag{9}\]
where \(m=\dim U_{B}\) and \(\Lambda^{n}(V),\ n\geq 0\), denotes the exterior powers of a vector space \(V\). The complex (9) is an injective/cofree \(\mathcal{C}_{B}\)-comodule coresolution of the trivial one-dimensional comodule \(k\) over the conilpotent coalgebra \(\mathcal{C}_{B}=\mathcal{S}\mathit{ym}(U_{B})\).
Passing to the direct limit of the finite complexes (9) over all the finite subsets \(B\subset A\), we obtain a bounded below complex
\[\begin{CD}0@>{}>{}>k@>{}>{}>\mathcal{S}\mathit{ym}(W)@>{}>{}>\mathcal{S} \mathit{ym}(W)\otimes_{k}W\\ @>{}>{}>\mathcal{S}\mathit{ym}(W)\otimes_{k}\Lambda^{2}(W)@>{}>{}>\cdots @>{}>{}> \mathcal{S}\mathit{ym}(W)\otimes_{k}\Lambda^{n}(W)@>{}>{}>\cdots\end{CD} \tag{10}\]
The complex (10) is an injective/cofree \(\mathcal{C}\)-comodule coresolution of the trivial one-dimensional comodule \(k\) over the conilpotent coalgebra \(\mathcal{C}=\mathcal{S}\mathit{ym}(W)\).
One can easily check that the coresolutions (9) and (10) are well-defined and functorial for any \(k\)-vector spaces \(U\) (in place of \(U_{B}\)) and \(W\), and do not depend on the choice of any bases in the vector spaces. In fact, the differential \(\mathcal{S}\mathit{ym}(W)\otimes_{k}\Lambda^{n}(W)\longrightarrow\mathcal{S} \mathit{ym}(W)\otimes_{k}\Lambda^{n+1}(W)\) can be constructed as the composition \(\mathcal{S}\mathit{ym}(W)\otimes_{k}\Lambda^{n}(W)\longrightarrow\mathcal{S} \mathit{ym}(W)\otimes_{k}W\otimes_{k}\Lambda^{n}(W)\longrightarrow\mathcal{S} \mathit{ym}(W)\otimes_{k}\Lambda^{n+1}(W)\) of the map induced by the comultiplication map \(\mathcal{S}\mathit{ym}(W)\longrightarrow\mathcal{S}\mathit{ym}(W)\otimes_{k}W\) and the map induced by the multiplication map \(W\otimes_{k}\Lambda^{n}(W)\longrightarrow\Lambda^{n+1}(W)\).
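In a Sweedler-style notation, the differential just described is given by the formula
\[d(c\otimes\omega)\,=\,\sum c_{(0)}\otimes\bigl(c_{(1)}\wedge\omega\bigr),\qquad c\in\mathcal{S}\mathit{ym}(W),\ \ \omega\in\Lambda^{n}(W),\]
where \(\sum c_{(0)}\otimes c_{(1)}\) denotes the component of the comultiplication of \(c\) lying in \(\mathcal{S}\mathit{ym}(W)\otimes_{k}W\).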
Applying the vector space dualization functor \(\operatorname{Hom}_{k}(-,k)\) to the complex (10), we obtain a bounded above complex
\[\begin{CD}0@<{}<{}<k@<{}<{}<\operatorname{Hom}_{k}(\mathcal{C},k)@<{}<{}< \operatorname{Hom}_{k}(\mathcal{C},W^{*})\\ @<{}<{}<\operatorname{Hom}_{k}(\mathcal{C},\Lambda^{2}(W)^{*})@<{}<{}<\cdots @<{}<{}< \operatorname{Hom}_{k}(\mathcal{C},\Lambda^{n}(W)^{*})@<{}<{}<\cdots\end{CD} \tag{11}\]
The complex (11) is a projective/free \(\mathcal{C}\)-contramodule resolution of the trivial one-dimensional \(\mathcal{C}\)-contramodule \(k\).
Applying the functor \(\Phi_{\mathcal{C}}=\mathcal{C}\odot_{\mathcal{C}}-\) to the truncated \(\mathcal{C}\)-contramodule resolution (11), we obtain a bounded above complex of injective/cofree \(\mathcal{C}\)-comodules
\[\begin{CD}0@<{}<{}<\mathcal{C}@<{}<{}<\mathcal{C}\otimes_{k}W^{*}@<{}<{}< \mathcal{C}\otimes_{k}\Lambda^{2}(W)^{*}\\ @<{}<{}<\cdots @<{}<{}<\mathcal{C}\otimes_{k}\Lambda^{n}(W)^{*}@<{}<{}<\cdots \end{CD} \tag{12}\]
Applying the functor \(\Psi_{\mathcal{C}}=\operatorname{Hom}_{\mathcal{C}}(\mathcal{C},-)\) to the truncated \(\mathcal{C}\)-comodule coresolution (10), we obtain a bounded below complex of projective/free \(\mathcal{C}\)-contramodules
\[0\longrightarrow\operatorname{Hom}_{k}(\mathcal{C},k)\longrightarrow\operatorname{Hom}_{k}(\mathcal{C},W)\longrightarrow\operatorname{Hom}_{k}(\mathcal{C},\Lambda^{2}(W))\longrightarrow\cdots\longrightarrow\operatorname{Hom}_{k}(\mathcal{C},\Lambda^{n}(W))\longrightarrow\cdots \tag{13}\]
**Theorem 6.1**.: _For any infinite-dimensional \(k\)-vector space \(W\), the complex of cofree comodules (12) is acyclic (i. e., its cohomology spaces vanish in all the degrees)._
Proof.: This was stated in [6, Section 0.2.7] (as a part of introductory/preliminary material for the book). The proof is not difficult.
The complex (12) is the direct limit of its subcomplexes
\[0\longleftarrow\mathcal{C}_{B}\longleftarrow\mathcal{C}_{B}\otimes_{k}W^{*}\longleftarrow\mathcal{C}_{B}\otimes_{k}\Lambda^{2}(W)^{*}\longleftarrow\cdots\longleftarrow\mathcal{C}_{B}\otimes_{k}\Lambda^{n}(W)^{*}\longleftarrow\cdots \tag{14}\]
taken over the directed poset of all finite subsets \(B\subset A\). The complex (14), which is a complex of comodules over the subcoalgebra \(\mathcal{C}_{B}=\mathcal{S}\mathit{ym}(U_{B})\) of the coalgebra \(\mathcal{C}=\mathcal{S}\mathit{ym}(W)\), can be obtained by applying the cotensor product functor \(\mathcal{C}_{B}\,\square_{\mathcal{C}}-\) to the complex (12) (see [7, Sections 2.5-2.6] or [6, Section 0.2.1 or 1.2.1]).
The complex (14) is _not_ acyclic, but its cohomology spaces gradually vanish as the size of the finite subset \(B\subset A\) grows. Indeed, applying the vector space dualization functor \(\operatorname{Hom}_{k}(-,k)\) to the finite complex (9), we obtain a finite Koszul complex that was denoted above by \(K^{B}_{\bullet}(\widehat{R}_{B})\). It has the form
(15) \[0\longleftarrow k\longleftarrow\widehat{R}_{B}\longleftarrow\widehat{R}_{B}\otimes_{k}U_{B}^{*}\longleftarrow\widehat{R}_{B}\otimes_{k}\Lambda^{2}(U_{B})^{*}\longleftarrow\cdots\longleftarrow\widehat{R}_{B}\otimes_{k}\Lambda^{m}(U_{B})^{*}\longleftarrow 0,\]
where, as above, \(m=\dim U_{B}\). Choose a direct complement \(V_{B}\) to the subspace \(U_{B}\subset W\), so that \(W=U_{B}\oplus V_{B}\). Then \(\Lambda^{n}(W)^{*}\simeq\bigoplus_{p+q=n}\Lambda^{p}(U_{B})^{*}\otimes_{k}\Lambda^{q}(V_{B})^{*}\), and the complex (14) decomposes, as a complex of \(k\)-vector spaces, into the tensor product of the finite complex
(16) \[0\longleftarrow\mathcal{C}_{B}\longleftarrow\mathcal{C}_{B}\otimes_{k}U_{B}^{*}\longleftarrow\mathcal{C}_{B}\otimes_{k}\Lambda^{2}(U_{B})^{*}\longleftarrow\cdots\longleftarrow\mathcal{C}_{B}\otimes_{k}\Lambda^{m}(U_{B})^{*}\longleftarrow 0\]
and the complex
(17) \[0\longleftarrow k\longleftarrow V_{B}^{*}\longleftarrow\Lambda^{2}(V_{B})^{*}\longleftarrow\cdots\longleftarrow\Lambda^{n}(V_{B})^{*}\longleftarrow\cdots\]
with zero differential. The only cohomology space of the complex (16) is the one-dimensional \(k\)-vector space \(\Lambda^{m}(U_{B})^{*}\) situated in the cohomological degree \(-m\); hence the cohomology spaces of the complex (14) are concentrated in the cohomological degrees \(\leq-m\).
As the size of the subset \(B\subset A\) grows, the cohomology spaces of the complex (14) move away and disappear at the cohomological degree \(-\infty\). So the direct limit (12) of the complexes (14) is acyclic.
**Theorem 6.2**.: _For any infinite-dimensional \(k\)-vector space \(W\), the complex of free contramodules (13) is acyclic (i. e., its cohomology spaces vanish in all the degrees)._
Proof.: This was also stated in [6, Section 0.2.7]. The proof is only slightly more complicated than the proof of the previous theorem, in that one needs to deal with inverse limits. However, we have done all the preparatory work already.
The complex (13) is the inverse limit of its quotient complexes
\[\begin{CD}0@>{}>{}>\operatorname{Hom}_{k}(\mathcal{C}_{B},k)@>{}>{}> \operatorname{Hom}_{k}(\mathcal{C}_{B},W)\\ @>{}>{}>\operatorname{Hom}_{k}(\mathcal{C}_{B},\Lambda^{2}(W))@>{}>{}> \cdots @>{}>{}>\operatorname{Hom}_{k}(\mathcal{C}_{B},\Lambda^{n}(W))@>{}>{}>\cdots \end{CD} \tag{18}\]
taken over the directed poset of all finite subsets \(B\subset A\). The complex (18), which is a complex of contramodules over the subcoalgebra \(\mathcal{C}_{B}\subset\mathcal{C}\), can be obtained by applying the Cohom functor \(\operatorname{Cohom}_{\mathcal{C}}(\mathcal{C}_{B},-)\) to the complex (13) (see [7, Sections 2.5-2.6] or [6, Section 0.2.4 or 3.2.1]).
Similarly to the previous proof, the complex (18) is _not_ acyclic, but its cohomology spaces gradually vanish as the size of the finite subset \(B\subset A\) grows. For the sake of completeness of the exposition, let us start with applying the functor \(\Psi_{\mathcal{C}_{B}}=\operatorname{Hom}_{\mathcal{C}_{B}}(\mathcal{C}_{B},-)\) to the truncated \(\mathcal{C}_{B}\)-comodule coresolution (9). We obtain a finite complex of projective/free \(\mathcal{C}_{B}\)-contramodules
\[\begin{CD}0@>{}>{}>\operatorname{Hom}_{k}(\mathcal{C}_{B},k)@>{}>{}>\operatorname{Hom}_{k}(\mathcal{C}_{B},U_{B})\\ @>{}>{}>\operatorname{Hom}_{k}(\mathcal{C}_{B},\Lambda^{2}(U_{B}))@>{}>{}> \cdots @>{}>{}>\operatorname{Hom}_{k}(\mathcal{C}_{B},\Lambda^{m}(U_{B}))@>{}> {}>0.\end{CD} \tag{19}\]
The only cohomology space of the complex (19) is the one-dimensional \(k\)-vector space \(\Lambda^{m}(U_{B})\) situated in the cohomological degree \(m\), i. e, at the rightmost term. In fact, the complex of contramodules (19) can be obtained by applying the vector space dualization functor \(\operatorname{Hom}_{k}(-,k)\) to the complex of comodules (16).
Consider the exterior algebra \(\bigoplus_{n=0}^{\infty}\Lambda^{n}(V_{B})\), where as in the previous proof \(W=U_{B}\oplus V_{B}\), and view it as a complex
\[\begin{CD}0@>{}>{}>k@>{0}>{}>V_{B}@>{0}>{}>\Lambda^{2}(V_{B})@>{0}>{}>\cdots @>{0}>{}> \Lambda^{n}(V_{B})@>{0}>{}>\cdots\end{CD} \tag{20}\]
with zero differential. Then the complex (18) is the complex of \(k\)-vector space morphisms, \(\operatorname{Hom}_{k}(-,-)\), from the complex (16) into the complex (20). Accordingly, the cohomology spaces of the complex (18) are concentrated in the cohomological degrees \(\geq m\).
The rest of the argument proceeds along the lines of the proof of Theorem 3.1, based on Lemmas 3.2-3.4. As mentioned above, the complex (13) is the inverse limit of the complexes (18) taken of the directed poset \(\Xi\) of all finite subsets \(B\subset A\) with respect to inclusion. At every cohomological degree \(n\geq 0\), Lemma 3.2 (for \(R=k\), \(F_{B}=\mathcal{C}_{B}\), and \(P=\Lambda^{n}(W)\)) tells that \(\varprojlim_{B\subset A}^{i}\operatorname{Hom}_{k}(\mathcal{C}_{B},\Lambda^{n} (W))=0\) for all \(i\geq 1\).
Denote by \(C^{\bullet}_{B}\) the complex (18). By Lemma 3.3, we have \(\varprojlim^{i}_{B\subset A}H^{n}(C^{\bullet}_{B})=0\) for all \(i\geq 0\) and \(n\geq 0\). Now in the context of Lemma 3.4 we have \({}^{\prime}\!E^{pq}_{2}=0\) for all \(p\), \(q\geq 0\), and \({}^{\prime\prime}\!E^{pq}_{1}=0\) for all \(q\geq 1\). Therefore, \(E^{n}=0\) and \(H^{n}(\varprojlim_{B\subset A}C^{\bullet}_{B})={}^{\prime\prime}\!E^{n,0}_{2}=E^{n}=0\) for all \(n\geq 0\).
## 7. Two Contramodule Constructions of Acyclic Complexes of Projectives
We have essentially already constructed the promised bounded above, noncontractible, acyclic complex of injective comodules and bounded below, noncontractible, acyclic complex of projective contramodules over the cocommutative coalgebra \(\mathcal{C}=\mathcal{S}\mathit{ym}(W)\). Let us state this as a corollary.
**Corollary 7.1**.: _Let \(W\) be an infinite-dimensional vector space over a field \(k\) and \(\mathcal{C}=\mathcal{S}\mathit{ym}(W)\) be the symmetric coalgebra. Then_
(a) _the complex_ (12) _is a bounded above, noncontractible, acyclic complex of injective comodules over_ \(\mathcal{C}\)_;_
(b) _the complex_ (13) _is a bounded below, noncontractible, acyclic complex of projective contramodules over_ \(\mathcal{C}\)_._
Proof.: Part (a): the complex (12) is acyclic by Theorem 6.1. It remains to explain why the complex of \(\mathcal{C}\)-comodules (12) is not contractible.
The truncated resolution (11),
\[\begin{CD}0@<{}<{}<{}<\operatorname{Hom}_{k}(\mathcal{C},k)@<{}<{}< \operatorname{Hom}_{k}(\mathcal{C},W^{*})\\ @<{}<{}<\operatorname{Hom}_{k}(\mathcal{C},\Lambda^{2}(W)^{*})@<{}<{}< \operatorname{\cdots}<\operatorname{Hom}_{k}(\mathcal{C},\Lambda^{n}(W)^{*} )@<{}<{}<\operatorname{\cdots}\end{CD}\]
is a noncontractible (since nonacyclic) complex in the abelian category \(\mathcal{C}\)-\(\operatorname{\mathsf{Contra}}\) with the terms belonging to the full subcategory of projective objects \(\mathcal{C}\)-\(\operatorname{\mathsf{Contra}}_{\mathsf{proj}}\), so it is a noncontractible complex in \(\mathcal{C}\)-\(\operatorname{\mathsf{Contra}}_{\mathsf{proj}}\). Applying the equivalence of additive categories \(\Phi_{\mathcal{C}}\colon\mathcal{C}\mbox{\sf-Contra}_{\mathsf{proj}}\simeq \mathcal{C}\mbox{\sf-Comod}_{\mathsf{inj}}\) (7), we obtain a noncontractible complex (12) in \(\mathcal{C}\mbox{\sf-Comod}_{\mathsf{inj}}\), which is consequently also noncontractible in \(\mathcal{C}\)-\(\operatorname{\mathsf{Comod}}\).
Part (b): the complex (13) is acyclic by Theorem 6.2. It remains to explain why the complex of \(\mathcal{C}\)-contramodules (13) is not contractible.
The truncated coresolution (10),
\[\begin{CD}0@>{}>{}>\mathcal{C}@>{}>{}>\mathcal{C}\otimes_{k}W@>{}>{}>\mathcal{ C}\otimes_{k}\Lambda^{2}(W)@>{}>{}>\cdots@>{}>{}>\mathcal{C}\otimes_{k}\Lambda^{n}(W)@>{}>{}>\cdots\end{CD}\]
is a noncontractible (since nonacyclic) complex in the abelian category \(\mathcal{C}\)-\(\operatorname{\mathsf{Comod}}\) with the terms belonging to the full subcategory of injective objects \(\mathcal{C}\mbox{\sf-Comod}_{\mathsf{inj}}\), so it is a noncontractible complex in \(\mathcal{C}\mbox{\sf-Comod}_{\mathsf{inj}}\). Applying the equivalence of additive categories \(\Psi_{\mathcal{C}}\colon\mathcal{C}\mbox{\sf-Comod}_{\mathsf{inj}}\simeq \mathcal{C}\mbox{\sf-Contra}_{\mathsf{proj}}\) (7), we obtain a noncontractible complex (13) in \(\mathcal{C}\mbox{\sf-Contra}_{\mathsf{proj}}\), which is consequently also noncontractible in \(\mathcal{C}\)-\(\operatorname{\mathsf{Contra}}\).
Now let us present the two contramodule constructions of bounded below, noncontractible, acyclic complexes of projective modules. Recall the notation \(\operatorname{Hom}_{\mathcal{C}}(-,-)\) for the Hom spaces in the category \(\mathcal{C}\)-\(\mathsf{Comod}\). The notation \(\operatorname{Hom}^{\mathcal{C}}(-,-)\) stands for the Hom spaces in the category \(\mathcal{C}\)-\(\mathsf{Contra}\).
**Corollary 7.2**.: _Let \(W\) be an infinite-dimensional vector space over a field \(k\) and \(\mathcal{C}=\mathcal{S}\text{ym}(W)\) be the symmetric coalgebra. Let_
\[0\longrightarrow k\longrightarrow\mathcal{J}^{0}\longrightarrow\mathcal{J}^{1}\longrightarrow\mathcal{J}^{2}\longrightarrow\cdots\]
_be a notation for the injective coresolution (10) of the trivial one-dimensional \(\mathcal{C}\)-comodule \(k\). Denote by \(\mathcal{J}\) the cofree \(\mathcal{C}\)-comodule \(\mathcal{C}\otimes_{k}W\) cospanned by the vector space \(W\). Let_
\[0\longrightarrow\operatorname{Hom}_{\mathcal{C}}(\mathcal{J},\mathcal{J}^{0})\longrightarrow\operatorname{Hom}_{\mathcal{C}}(\mathcal{J},\mathcal{J}^{1})\longrightarrow\operatorname{Hom}_{\mathcal{C}}(\mathcal{J},\mathcal{J}^{2})\longrightarrow\cdots \tag{21}\]
_be the complex obtained by applying the covariant functor \(\operatorname{Hom}_{\mathcal{C}}(\mathcal{J},-)\) to the truncated coresolution (10). Then (21) is a bounded below, noncontractible, acyclic complex of finitely generated projective left modules over the ring \(S=\operatorname{Hom}_{\mathcal{C}}(\mathcal{J},\mathcal{J})^{\operatorname{op}}\)._
Proof.: Denote by \(\mathfrak{P}\) the free \(\mathcal{C}\)-contramodule \(\operatorname{Hom}_{k}(\mathcal{C},W)\) spanned by the vector space \(W\); so \(\mathfrak{P}=\Psi_{\mathcal{C}}(\mathcal{J})\) and \(S=\operatorname{Hom}_{\mathcal{C}}(\mathcal{J},\mathcal{J})^{\operatorname{op}}\simeq\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P},\mathfrak{P})^{\operatorname{op}}\). The equivalence of additive categories (7) identifies the complex (21) with the complex obtained by applying the functor \(\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P},-)\) to the complex of projective \(\mathcal{C}\)-contramodules (13). The complex (13) is acyclic by Theorem 6.2, and the functor \(\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P},-)\) preserves acyclicity, since \(\mathfrak{P}\) is a coproduct of copies of the projective generator \(\mathcal{C}^{*}\) of \(\mathcal{C}\text{--}\mathsf{Contra}\); so the complex (21) is acyclic. Furthermore, the terms of the complex (13) belong to \(\operatorname{\mathsf{add}}(\mathfrak{P})\): the free \(\mathcal{C}\)-contramodule \(\operatorname{Hom}_{k}(\mathcal{C},k)=\mathcal{C}^{*}\) is a direct summand of \(\mathfrak{P}\), while the free \(\mathcal{C}\)-contramodules \(\operatorname{Hom}_{k}(\mathcal{C},\Lambda^{n}(W))\) are isomorphic to \(\mathfrak{P}\) for \(n\geq 1\). By Lemma 2.2(a) (for \(\mathsf{A}=\mathcal{C}\text{--}\mathsf{Contra}\) and \(M=\mathfrak{P}\)), the functor \(\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P},-)\) is an equivalence of categories \(\operatorname{\mathsf{add}}(\mathfrak{P})\simeq S\text{--}\mathsf{mod}_{\mathsf{proj}}\). The complex (13) is noncontractible in \(\mathcal{C}\text{--}\mathsf{Contra}\) by Corollary 7.1(b), so it is a noncontractible complex in \(\operatorname{\mathsf{add}}(\mathfrak{P})\). Applying the equivalence \(\operatorname{\mathsf{add}}(\mathfrak{P})\simeq S\text{--}\mathsf{mod}_{\mathsf{proj}}\), we obtain a noncontractible complex (21) in \(S\text{--}\mathsf{mod}_{\mathsf{proj}}\), which is consequently also noncontractible in \(S\text{--}\mathsf{Mod}\). It is important for
this argument that the functor \(\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P},-)\colon\operatorname{\mathsf{add}}( \mathfrak{P})\longrightarrow S\operatorname{\mathsf{--Mod}}\) is fully faithful. In fact, the whole functor \(\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P},-)\colon\operatorname{\mathcal{C }}\operatorname{\mathsf{--Contra}}\longrightarrow S\operatorname{\mathsf{--Mod}}\) is fully faithful (on the whole abelian category \(\operatorname{\mathsf{C--Contra}}\)) by [11, Theorem 6.10]. The latter conclusion is based on the observations that \(\mathfrak{P}\) is the coproduct of \(\dim W\) copies of the projective generator \(\operatorname{\mathsf{C}}^{*}=\operatorname{Hom}_{k}(\operatorname{\mathsf{C}},k)\) of the abelian category \(\operatorname{\mathsf{C--Contra}}\), and \(\operatorname{\mathsf{C}}^{*}\) is abstractly \(\kappa\)-small in \(\operatorname{\mathsf{C--Contra}}\) for \(\operatorname{\mathsf{C}}=\operatorname{\mathcal{S}ym}(W)\) if \(\kappa\) is the successor cardinality of \(\dim W\).
**Corollary 7.3**.: _Let \(W\) be an infinite-dimensional vector space over a field \(k\) and \(\operatorname{\mathsf{C}}=\operatorname{\mathcal{S}ym}(W)\) be the symmetric coalgebra. Let_
\[0\xleftarrow{}k\xleftarrow{}\mathfrak{P}_{0}\xleftarrow{}\mathfrak{P}_{1} \xleftarrow{}\mathfrak{P}_{2}\xleftarrow{}\cdots\]
_be a notation for the projective resolution (11) of the trivial one-dimensional \(\operatorname{\mathsf{C}}\)-contramodule \(k\). Denote by \(\mathfrak{P}\) the free \(\operatorname{\mathsf{C}}\)-contramodule \(\operatorname{Hom}_{k}(\operatorname{\mathsf{C}},W^{*})\) spanned by the vector space \(W^{*}=\operatorname{Hom}_{k}(W,k)\). Let_
\[0\xrightarrow{}\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P}_{0},\mathfrak{ P})\xrightarrow{}\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P}_{1},\mathfrak{P}) \xrightarrow{}\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P}_{2},\mathfrak{P}) \xrightarrow{}\cdots \tag{22}\]
_be the complex obtained by applying the contravariant functor \(\operatorname{Hom}^{\mathcal{C}}(-,\mathfrak{P})\) to the truncated resolution (11). Then (22) is a bounded below, noncontractible, acyclic complex of finitely generated projective left modules over the ring \(S=\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P},\mathfrak{P})\)._
Proof.: For any right \(\operatorname{\mathsf{C}}\)-comodule \(\operatorname{\mathcal{N}}\), any left \(\operatorname{\mathsf{C}}\)-contramodule \(\mathfrak{Q}\), and any \(k\)-vector space \(V\), there is a natural isomorphism of \(k\)-vector spaces
\[\operatorname{Hom}^{\mathcal{C}}(\mathfrak{Q},\operatorname{Hom}_{k}( \operatorname{\mathcal{N}},V))\simeq\operatorname{Hom}_{k}(\operatorname{ \mathcal{N}}\odot_{\operatorname{\mathsf{C}}}\mathfrak{Q},\;V)\]
[7, Section 3.1], [10, Section 8.6], or [6, Sections 0.2.6 and 5.1.1]. In particular, we have natural isomorphisms
\[\operatorname{Hom}^{\mathcal{C}}(\mathfrak{Q},\mathfrak{P})=\operatorname{ Hom}_{k}(\operatorname{\mathsf{C}}\odot_{\operatorname{\mathsf{C}}}\mathfrak{Q},\;W^{*})= \operatorname{Hom}_{k}(\Phi_{\operatorname{\mathsf{C}}}(\mathfrak{Q}),W^{*}).\]
Thus the complex (22) can be obtained by applying the contravariant vector space \(\operatorname{Hom}\) functor \(\operatorname{Hom}_{k}(-,W^{*})\) to the complex (12), and it follows from Theorem 6.1 that the complex (22) is acyclic.
Furthermore, by construction, the \(\operatorname{\mathsf{C}}\)-contramodule \(\mathfrak{P}_{0}=\operatorname{\mathsf{C}}^{*}\) is a direct summand of \(\mathfrak{P}\), while the \(\operatorname{\mathsf{C}}\)-contramodules \(\mathfrak{P}_{n}=\operatorname{Hom}_{k}(\operatorname{\mathsf{C}},\Lambda^{n} (W)^{*})\) are isomorphic to \(\mathfrak{P}\) for \(n\geq 1\). Hence the left \(S\)-module \(\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P}_{0},\mathfrak{P})\) is a direct summand of \(\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P},\mathfrak{P})=S\), and the left \(S\)-modules \(\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P}_{n},\mathfrak{P})\) are isomorphic to \(S\) for \(n\geq 1\). So (22) is even a complex of cyclic projective left \(S\)-modules.
The assertion that the complex of \(S\)-modules (22) is not contractible is provable similarly to the argument in the proof of Corollary 4.1. By Lemma 2.2(b) (for \(\operatorname{\mathsf{A}}=\operatorname{\mathsf{C--Contra}}\) and \(M=\mathfrak{P}\)), the functor \(\operatorname{Hom}^{\mathcal{C}}(-,\mathfrak{P})\) is an anti-equivalence of categories \(\operatorname{\mathsf{add}}(\mathfrak{P})^{\mathsf{op}}\simeq S\operatorname{ \mathsf{--mod}}_{\mathsf{proj}}\). The truncated resolution (11),
\[0\xleftarrow{}k\xleftarrow{}\mathfrak{P}_{0}\xleftarrow{}\mathfrak{P}_{1} \xleftarrow{}\mathfrak{P}_{2}\xleftarrow{}\cdots\]
is a noncontractible (since nonacyclic) complex in \(\operatorname{\mathsf{C--Contra}}\) with the terms belonging to \(\operatorname{\mathsf{add}}(\mathfrak{P})\), so it is a noncontractible complex in \(\operatorname{\mathsf{add}}(\mathfrak{P})\). Applying the anti-equivalence of additive categories \(\operatorname{\mathsf{add}}(\mathfrak{P})^{\mathsf{op}}\simeq S\operatorname{ \mathsf{--mod}}_{\mathsf{proj}}\), we obtain a noncontractible complex (22), which is consequently also noncontractible in \(S\operatorname{\mathsf{--Mod}}\). It is important
for this argument that the contravariant functor \(\operatorname{Hom}^{\mathcal{C}}(-,\mathfrak{P})\colon\operatorname{\mathsf{add}}( \mathfrak{P})^{\operatorname{op}}\longrightarrow S\text{--}\operatorname{Mod}\) is fully faithful.
## 8. Summary of the Examples Obtained
Now we can summarize our constructions as follows.
**Conclusion 8.1**.: _There exists an associative ring \(S\) for which_
(a) _there is a bounded above acyclic complex of injective right \(S\)-modules that is not contractible;_
(b) _there is a bounded below acyclic complex of flat left \(S\)-modules that is not pure acyclic;_
(c) _there is a bounded below acyclic complex of (finitely generated) projective left \(S\)-modules that is not contractible._
Proof.: Proposition 1.2 tells that any ring \(S\) satisfying (c) also satisfies (a) and (b). Various examples of associative rings \(S\) satisfying (c) are provided by Corollaries 2.3, 4.1, 7.2, and 7.3.
What can one say about the rings \(S\) appearing in Corollaries 2.3, 4.1, 7.2, and 7.3? First of all, _none_ of them is commutative (while we have cocommutative coalgebra examples in Corollary 7.1).
Let us denote the respective versions of the ring \(S\) by \(S_{2.3}\), \(S_{4.1}\), \(S_{7.2}\), and \(S_{7.3}\). While the ring \(S_{2.3}\) (from Corollary 2.3) appears to be complicated and hard to visualize, the rings \(S_{4.1}\), \(S_{7.2}\), and \(S_{7.3}\) can be described rather explicitly.
In the context of Corollary 4.1, it makes sense to choose the infinite Koszul complex \(K_{\bullet}(R)=\varinjlim_{B\subset A}K_{\bullet}^{B}(R)\) to play the role of the projective resolution \(P_{\bullet}\) (5) of the \(R\)-module \(k\). In this case, one can take \(P\) to be the free \(R\)-module with \(A\) generators, \(P=\bigoplus_{\alpha\in A}R\). Then the \(R\)-module \(P_{0}=R\) is a direct summand of \(P\), while the \(R\)-module \(P_{n}\) is isomorphic to \(P\) for \(n\geq 1\), so the assumption of the corollary is satisfied. The resulting ring \(S_{4.1}=\operatorname{Hom}_{R}(P,P)\) is the ring of infinite, column-finite \(A\times A\) matrices with entries from the commutative polynomial ring \(R=k[x_{\alpha}:\alpha\in A]\) in infinitely many variables.
In the context of Corollaries 7.2 and 7.3, it makes sense to introduce the notation \(\mathcal{J}_{7.2}\) for the cofree comodule \(\mathcal{J}=\mathcal{C}\otimes_{k}W\) appearing in Corollary 7.2 and the notation \(\mathfrak{P}_{7.2}\) for the free contramodule \(\mathfrak{P}=\operatorname{Hom}_{k}(\mathcal{C},W)\) mentioned in the discussion in its proof. Then the notation \(\mathfrak{P}_{7.3}\) can be used for the bigger free contramodule \(\mathfrak{P}=\operatorname{Hom}_{k}(\mathcal{C},W^{*})\) from Corollary 7.3, and we can also denote by \(\mathcal{J}_{7.3}\) the corresponding cofree comodule \(\mathcal{J}=\mathcal{C}\otimes_{k}W^{*}\).
The ring \(S_{7.2}=\operatorname{Hom}_{\mathcal{C}}(\mathcal{J}_{7.2},\mathcal{J}_{7.2})^{\operatorname{op}}=\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P}_{7.2},\mathfrak{P}_{7.2})^{\operatorname{op}}\) is the ring of infinite, row-zero-convergent \(A\times A\) matrices with entries from the topological commutative formal power series ring \(\widehat{R}=\mathcal{C}^{*}=\varprojlim_{B\subset A}k[[x_{\alpha}:\alpha\in B]]\) in infinitely many variables. Such rings of row-zero-convergent matrices were discussed in the papers [11, Example 7.10] and [12, Section 5].
Let \(D\) denote the indexing set of a basis \(\{y_{\delta}\colon\delta\in D\}\) in the \(k\)-vector space \(W^{*}\). The cardinality \(|D|\) of the set \(D\) is equal to \(|k|^{|A|}\), where \(|k|\) is the cardinality of the field \(k\) and \(|A|\) is the cardinality of the set \(A\). Then the ring \(S_{7.3}=\operatorname{Hom}^{\mathcal{C}}(\mathfrak{P}_{7.3},\mathfrak{P}_{7.3})\) admits a similar description as a ring of infinite \(D\times D\) matrices with entries from the topological ring \(\widehat{R}\).
**Example 8.3**.: In the context of Corollary 4.1, Theorem 3.1 applies to an arbitrary flat \(R\)-module \(P\); so, taking \(P\) to be a flat \(R\)-module, the same construction produces a bounded below acyclic complex of flat \(R\)-modules. One does not even need the \(R\)-modules \(P_{n}\) to be direct summands of \(P\) for this claim to hold; it suffices to take \(P=R\).
However, this is _not_ an example for Conclusion 8.1(b), because the complex of \(R\)-modules (6) is actually pure acyclic (for any flat \(R\)-module \(P\)). Indeed, it suffices to show that, for any finitely presented \(R\)-module \(M\), applying the functor \(M\otimes_{R}-\) preserves acyclicity of the complex (6). Denote the complex (6) by \(F^{\bullet}\).
Any finitely presented module over the ring of polynomials \(R\) in infinitely many variables has a finite projective resolution \(G_{\bullet}\) by finitely generated projective \(R\)-modules. Since \(F^{\bullet}\) is a complex of flat \(R\)-modules and \(G_{\bullet}\) is a finite resolution, the complexes \(M\otimes_{R}F^{\bullet}\) and \(G_{\bullet}\otimes_{R}F^{\bullet}\) are quasi-isomorphic. Finally, viewed as an object of the homotopy category of complexes of \(R\)-modules \(\mathsf{K}(R\mbox{--}\mathsf{Mod})\), the complex \(G_{\bullet}\otimes_{R}F^{\bullet}\) belongs to the thick subcategory spanned by the complex \(F^{\bullet}\) (since the complex \(G_{\bullet}\) belongs to the thick subcategory spanned by the one-term complex of \(R\)-modules \(R\)). As the complex \(F^{\bullet}\) is acyclic, so is the complex \(G_{\bullet}\otimes_{R}F^{\bullet}\).
**Example 8.4**.: The following example has a different nature than all the previous examples in this paper. It was communicated to the author by A. Canonaco and is reproduced here with his kind permission.
Suppose that we have a bounded below complex of _free modules with one generator_ over a ring \(S\). Obviously, such a complex of (left) modules has the form
(23) \[0\longrightarrow S\longrightarrow S\longrightarrow S\longrightarrow S\longrightarrow\cdots,\]
where the differentials are given by right multiplications with certain elements \(z_{n}\in S\), \(n\geq 0\). The condition that (23) is a complex means that \(z_{n}z_{n+1}=0\) for all \(n\geq 0\). Let \(S_{\mathrm{uni}}\) denote the ring generated by elements \(x_{n}\), \(n\geq 0\), subject to the relations \(x_{n}x_{n+1}=0\). Any sequence of elements \(z_{n}\in S\) as above satisfies the defining
equations \(z_{n}z_{n+1}=0\), and it remains to let \(f\colon S_{\mathrm{uni}}\longrightarrow S\) be the homomorphism taking \(x_{n}\) to \(z_{n}\) for every \(n\geq 0\).
While the example in Example 8.4 is certainly simpler (to construct and prove its properties) than the examples in Corollaries 2.3, 4.1, 7.2, and 7.3, _no_ example of a bounded below, noncontractible, acyclic complex of projective modules (or of a bounded above, noncontractible, acyclic complex of injective modules) can be _too_ simple. The results of [16, Appendix A] demonstrate this.
|
2306.08118 | HEPScore: A new CPU benchmark for the WLCG | HEPScore is a new CPU benchmark created to replace the HEPSPEC06 benchmark
that is currently used by the WLCG for procurement, computing resource pledges
and performance studies. The development of the new benchmark, based on HEP
applications or workloads, has involved many contributions from software
developers, data analysts, experts of the experiments, representatives of
several WLCG computing centres, as well as the WLCG HEPScore Deployment Task
Force. In this contribution, we review the selection of workloads and the
validation of the new HEPScore benchmark. | Domenico Giordano, Jean-Michel Barbet, Tommaso Boccali, Gonzalo Menéndez Borge, Christopher Hollowell, Vincenzo Innocente, Walter Lampl, Michele Michelotto, Helge Meinhard, Ladislav Ondris, Andrea Sciabà, Matthias J. Schnepf, Randall J. Sobie, David Southwick, Tristan S. Sullivan, Andrea Valassi, Sandro Wenzel, John L. Willis, Xiaofei Yan | 2023-06-13T20:22:30Z | http://arxiv.org/abs/2306.08118v2 | # HEPScore: A new CPU benchmark for the WLCG
###### Abstract
HEPScore is a new CPU benchmark created to replace the HEPSPEC06 benchmark that is currently used by the WLCG for procurement, computing resource pledges and performance studies. The development of the new benchmark, based on HEP applications or workloads, has involved many contributions from software developers, data analysts, experts of the experiments, representatives of several WLCG computing centres, as well as the WLCG HEPScore Deployment Task Force. In this contribution, we review the selection of workloads and the validation of the new HEPScore benchmark.
## 1 Introduction
Computing in particle physics has evolved to a highly distributed model where each country provides local facilities that are integrated into a global infrastructure, called the Worldwide LHC Computing Grid (WLCG) [1]. The WLCG coordinates the computing and networking resources on behalf of the experiments at the Large Hadron Collider at CERN and many other experiments at laboratories around the world.
The collaborative nature of our field has resulted in agreements for the sharing of costs for all aspects of our experiments, including computing, either through direct financial payments or the provision of in-kind equipment or services. Each country is requested to contribute resources based on the size of their research community and financial oversight boards find consensus on the appropriate cost sharing.
It is difficult to put a cost estimate on a computing facility as each country has its own way of acquiring and operating their resources. Rather than develop a model based on the cost of the facility, hardware and personnel, it was decided to use a single metric, the CPU-power delivered by a site, to compare the resources provided by each country. The delivered CPU-power is defined to be the number of seconds used by the applications multiplied by a benchmark that reflects the performance of the servers. After many years of operations, most sites have many types of servers with differing levels of performance. As a result, many sites report a benchmark that is an average of the individual benchmarks of the different server types, weighted by the number of servers of each type (we refer to this number as the _site-benchmark_).
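As a purely illustrative sketch of the arithmetic just described (the server types, counts, and numbers below are invented and are not taken from the WLCG accounting system), the site-benchmark and the delivered CPU-power amount to the following:

```python
# Illustrative only: toy numbers, not real WLCG accounting data.
# Each server type at a site has a per-server benchmark score and a count.
server_types = [
    {"benchmark": 15.0, "count": 120},   # older generation
    {"benchmark": 22.5, "count": 80},    # newer generation
]

# Site-benchmark: average of the per-server benchmarks weighted by server counts.
total_servers = sum(t["count"] for t in server_types)
site_benchmark = sum(t["benchmark"] * t["count"] for t in server_types) / total_servers

# Delivered CPU-power for one experiment: CPU-seconds used multiplied by the
# site-benchmark (this integrated number is what the accounting reports publish).
cpu_seconds_used = 3.6e9
delivered_work = cpu_seconds_used * site_benchmark

print(f"site benchmark = {site_benchmark:.2f}")
print(f"delivered work = {delivered_work:.3e} benchmark * seconds")
```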
The resources used at each site are tracked and stored in a WLCG accounting database. The database stores the site-benchmark and the number of CPU-seconds used by each experiment at a site. These numbers are used to calculate an integrated number that estimates the resources delivered by the site. These numbers are published by the WLCG accounting team on a monthly basis and provided to the funding agencies.
## 2 Motivation for a new CPU benchmark
In 2009, the WLCG agreed to use HEPSPEC06 (HS06) as its benchmark [2]. HS06 is based on the industry-standard SPEC CPU 2006 benchmark [3]. At that time, the HEP applications shared several commonalities with a number of workloads within the SPEC CPU 2006 suite, and those workloads were selected to be included in HS06 [4; 5]. The individual HS06 workloads are characterized by single-threaded and single-process applications, compiled in 32-bit mode, and requiring a minimum of 1 GB of memory per process. A newer release of the SPEC benchmark, SPEC CPU 2017 [6], was also considered but found to be highly correlated with the HEPSPEC06 benchmark (see fig. 1). The SPEC CPU 2017 benchmark would require each site to purchase a new licence (like the SPEC CPU 2006 benchmark), whereas the WLCG community was in favour of an open-source benchmark. As a result, the SPEC CPU 2017 benchmark was not considered as a replacement for HS06.
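The comparison shown in Figure 1 reduces to a one-parameter fit; a minimal sketch of that calculation (with made-up data points, since the measured values are not reproduced here) could look as follows:

```python
import numpy as np

# Made-up per-core scores; the real measurements belong to the HEPiX Working Group.
hs06 = np.array([10.2, 12.5, 15.1, 18.3, 22.0])        # HEPSPEC06 per physical core
spec2017 = np.array([11.0, 13.4, 16.0, 19.6, 23.5])    # SPEC CPU 2017 per physical core

# Linear fit constrained to pass through the origin: slope = sum(x*y) / sum(x*x).
slope = np.sum(hs06 * spec2017) / np.sum(hs06 * hs06)

# Fractional difference of each point from the fitted line (the lower panel of Fig. 1).
fractional_diff = (spec2017 - slope * hs06) / (slope * hs06)

print(f"fitted slope = {slope:.3f}")
print("fractional differences:", np.round(fractional_diff, 3))
```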
HS06 has met the WLCG requirements in a world that has progressively evolved from CPUs with few cores to multi-core CPUs. During this period, experiments were starting
Figure 1: A comparison of the HEPSPEC06 benchmark with the results of the SPEC CPU 2017 benchmark on resources provided to the HEPiX Benchmark Working Group. The circles are measurements on AMD processors and triangles are on Intel processors. The red (black) points have hyper-threading off (on). The blue line is a linear fit to the data constrained to the origin (0,0). The lower plot shows the fractional difference in the vertical dimension of the data point from the fit. Note that the benchmarks are normalized to the number of physical cores of the server independent of whether hyper-threading is enabled.
to observe that newer versions of their applications did not scale with HS06 (for example, see ref. [7]). The proposal for building a new benchmark using HEP applications was first presented at the WLCG workshop in Manchester (2017) [8]. The HEPiX Benchmarking Working Group1 was asked to study the feasibility of a new benchmark based on HEP workloads and develop an infrastructure to run the different CPU benchmarks (including HS06 and other benchmarks). The Working Group developed the _HEP Benchmark Suite_ that provides a simple way to run containerized HEP applications and other benchmarks, and record the results in an Elastic Search database at CERN [9].
Footnote 1: HEPiX Benchmarking Working Group: D Giordano (co-chair), M Michelotto (co-chair), L Atzori, JM Barbet, GM Borge, C Hollowell, L Ondris, A Sciaba, E Simili, RJ Sobie, D Southwick, T Sullivan, N Szczepanek, A Valassi
Some of the HEP applications that use the largest amount of compute power are the Monte Carlo generation of collision events, the simulation of the detector response to the simulated particles, the conversion of the simulated energy deposition by the particles in the detector elements (digitization) and reconstruction of the detector signals into particles and momenta (the reconstruction code is used for both simulated and real data). We refer to these HEP applications as _workloads_. Analysis applications are very specific to the physics, and often I/O intensive, and are considered too difficult and unreliable to use as a measure of the performance of a CPU.
The creation of a new benchmark based on HEP workloads (called _HEPScore_) requires consensus of the WLCG community. As a result, the WLCG Management Board established the HEPScore Benchmark Deployment Task Force2 whose role was to review the requirements for the HEPScore benchmark and to help select the HEP workloads that are used in the HEPScore benchmark. The Task Force was also asked to review a transition plan for the migration from the current HS06 benchmark to the new HEPScore benchmark.
Footnote 2: WLCG HEPScore Deployment Task Force: D Giordano (co-chair), RJ Sobie (co-chair), J Andreeva, G Andronico, GM Arevalo, T Boccali, GM Borge, C Bozzi, S Campana, I Collier, A Di Girolamo, M Jouvin, W Lampl, JR Letts, Z Marshall, H Meinhard, AM Melo, B Panzer-Steindel, S Piano, D Piparo, F Qi, MJ Schnepf, O Smirnova, J Templon, A Valassi, JL Willis, T Wong
In September 2022, a 2-day workshop at CERN devoted to the HEPScore benchmark brought together members of the Working Group, Task Force, computing coordinators of the experiments and many site representatives to discuss the composition of the new benchmark [10]. A presentation at the 2022 ACAT conference in Bari, Italy provided an interim report on the status of the HEPScore benchmark [11]. The proposal for the first version of HEPScore was presented at the WLCG Workshop in Lancaster in November 2022 [12] where a number of recommendations were proposed (discussed in the next section). A meeting of the WLCG Management Board reviewed the status of the new benchmark in December 2022 and set a milestone of April 2023 for the release of HEPScore.
Concurrent to the development of the HEPScore benchmark, the issue of power consumption and environmental impact of computing resources has become a topical area of interest [13]. A study, using a beta version of HEPScore, evaluated the processing capabilities and power consumption of x86 and ARM processors, and showed that ARM processors use less power while still being highly performant for HEP workloads [14]. As a result, there was strong community interest in a benchmark for both x86 and ARM processors, and a concerted effort by the experiments resulted in all workloads being ready for both processors for the April 2023 milestone.
## 3 HEP Workloads
In 2021-2022, the first set of workloads were provided to the Working Group by all four large LHC experiments (ALICE, ATLAS, CMS and LHCb) and other WLCG experiments (Belle II, JUNO and Gravity Wave Project).
Each workload is encapsulated into a container with the software and input data needed to run the application. The software of the experiment is stored in the CVMFS file system [15] and then exported to a local folder inside a container. The set of workload containers is stored in a Gitlab repository at CERN. Each container includes a configuration file with a parameter for the number of events that allows one to adjust the duration of the execution (more details can be found in ref. [9]). Each workload is run three times and the geometric mean is taken as the benchmark (typically in units of events per second); this is identical to the method used for each workload component in HS06.
Each workload was validated on a set of dedicated servers at CERN to check the reliability and reproducibility, and in all cases, the results were found to be consistent at a level better than 1%. Once validated, the workloads were run on a diverse set of server systems provided by many WLCG sites and the results were used to make comparisons with HS06 and SPEC CPU 2017. The benchmarks of the individual workloads were compared with each other to determine the correlations between applications. The time to run each workload and the studies of the correlations provided valuable input that was used to help select the workloads that are included in HEPScore.
At the Benchmark Workshop, the criteria for selecting workload candidates for HEPScore were discussed. The conclusion was that HEPScore should be representative of the computing usage of the experiments (e.g. ATLAS and CMS use over 50% of the total computational resources), it should run in a timely manner (3-6 hours), and it should consist of complementary workloads (e.g. avoid the selection of highly correlated workloads). Further, the workloads need to be valid for at least one LHC run period.
Seven workloads were selected to be part of the HEPScore23 benchmark (the benchmark is called HEPScore23 to indicate the year the benchmark was created). HEPScore23 includes two workloads each from CMS (reconstruction and generation-simulation) and ATLAS (Sherpa-generation, reconstruction), and one workload each from ALICE (digitization-reconstruction), Belle II (generation-simulation-reconstruction), and LHCb (simulation). The workloads were chosen to be complementary, with diverse types of applications and acceptable run times; many use some of their most complex event topologies. The time to run each workload ranges from 300 to 900 seconds on the reference server at CERN (Intel(R) Xeon(R) Gold 6326 CPU @
Figure 2: Histograms of the HEPScore23 benchmark on an Intel, AMD and ARM processor. The fits to the histogram use a Gaussian distribution.
2.90GHz). HEPScore23 follows the methods used for HEPSPEC06 and runs each workload three times and takes the geometric mean of the three measurements. The total time to run the HEPScore23 is approximately 3.5 hours.
The workloads in HEPScore23 are equally weighted. We studied different weighting schemes but found little difference from the nominal, equally weighted HEPScore23 benchmark. The choice of equally weighting the workloads gives enhanced impact to ATLAS and CMS, as they each provide two workloads, and this yields a benchmark that is similar to the CPU usage on the WLCG. Other studies included the removal of one of the seven workloads; the resulting changes were typically less than 5%.
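The per-workload scoring and aggregation described above can be sketched as follows. The run results are invented, and combining the equally weighted workload scores with a geometric mean is an assumption about the aggregation step; the text above only states that each workload score is the geometric mean of three runs and that the workloads are equally weighted.

```python
import math

# Hypothetical events-per-second results for three runs of two of the seven workloads.
runs = {
    "cms-reco":     [1.52, 1.50, 1.51],
    "atlas-sherpa": [0.83, 0.84, 0.83],
    # ... the remaining five workloads would be listed here
}

def geo_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Each workload score is the geometric mean of its three runs.
scores = {wl: geo_mean(r) for wl, r in runs.items()}

# Equal weights; aggregating with a geometric mean is an assumption in this sketch.
hepscore_raw = geo_mean(list(scores.values()))
print(scores, hepscore_raw)
```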
As part of the validation process, the HEPScore23 benchmark was measured on Intel, AMD and ARM servers at CERN. The results, shown in fig. 2, demonstrate the reproducibility of the HEPScore23 benchmark on the different architectures to better than 1%.
Figure 3: The HEPScore23 benchmark plotted against HEPSPEC06. The blue line is a linear fit to the data points constrained to the origin and the dashed line has unity slope. The circles are measurements on Intel processors, the triangles on AMD processors and the box point on an ARM processor. The points in red (black) were taken with hyper-threading off (on). The measurement on the single ARM processor is with hyper-threading off (ARM processors are not designed to operate with hyper-threading).
After the validation, the HEPScore23 benchmark was run on a wider set of servers with both hyper-threading on and off. In fig. 3, the HEPScore23 benchmark is plotted against HEPSPEC06 (32-bit version). The blue line is a linear fit to the data points constrained to the origin. The circles are measurements on Intel processors, the triangles on AMD processors and the box point on an ARM processor. The points in red (black) were taken with hyper-threading off (on). The measurement on the single ARM processor is with hyper-threading off (ARM processors are not designed to operate with hyper-threading). The plot on the lower half of fig. 3 shows the fractional difference of the points relative to the fit (blue line).
In fig. 4, we show the ratio of HEPScore23 to HEPSPEC06 as a function of the year in which the processor was released (the colours of the points are identical to those used in fig. 3). The measurement of HEPScore23 was normalized to the value of HEPSPEC06 on the reference machine (the data point for the reference machine is one of the points in 2021). It is observed that the ratio HEPScore23:HEPSPEC06 is less than unity for older machines and increases with time for newer servers.
## 4 Summary
The HEPScore23 benchmark was released for use by WLCG sites in April 2023. HEPScore23 will be normalized to HEPSPEC06 as measured on the reference machine to facilitate an easy transition for the sites and the WLCG accounting group. Sites are being asked to use HEPScore23 (or both benchmarks) in 2023 to evaluate newly procured hardware. Existing hardware will not need to be benchmarked with HEPScore23 and sites can continue to use their current benchmarks based on HEPSPEC06. The WLCG Management Board will review the situation and decide whether to make HEPScore23 the required benchmark for future years. Further, any changes to the experiment workloads in HEPScore23 must be approved by the WLCG Management Board.
|
2302.13971 | LLaMA: Open and Efficient Foundation Language Models | We introduce LLaMA, a collection of foundation language models ranging from
7B to 65B parameters. We train our models on trillions of tokens, and show that
it is possible to train state-of-the-art models using publicly available
datasets exclusively, without resorting to proprietary and inaccessible
datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks,
and LLaMA-65B is competitive with the best models, Chinchilla-70B and
PaLM-540B. We release all our models to the research community. | Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample | 2023-02-27T17:11:15Z | http://arxiv.org/abs/2302.13971v1 | # LLaMA: Open and Efficient Foundation Language Models
###### Abstract
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community1.
Footnote 1: [https://github.com/facebookresearch/llama](https://github.com/facebookresearch/llama)
## 1 Introduction
Large Language Models (LLMs) trained on massive corpora of texts have shown their ability to perform new tasks from textual instructions or from a few examples Brown et al. (2020). These few-shot properties first appeared when scaling models to a sufficient size Kaplan et al. (2020), resulting in a line of work that focuses on further scaling these models Chowdhery et al. (2022); Rae et al. (2021). These efforts are based on the assumption that more parameters will lead to better performance. However, recent work from Hoffmann et al. (2022) shows that, for a given compute budget, the best performances are not achieved by the largest models, but by smaller models trained on more data.
The objective of the scaling laws from Hoffmann et al. (2022) is to determine how to best scale the dataset and model sizes for a particular _training_ compute budget. However, this objective disregards the _inference_ budget, which becomes critical when serving a language model at scale. In this context, given a target level of performance, the preferred model is not the fastest to train but the fastest at inference, and although it may be cheaper to train a large model to reach a certain level of performance, a smaller one trained longer will ultimately be cheaper at inference. For instance, although Hoffmann et al. (2022) recommends training a 10B model on 200B tokens, we find that the performance of a 7B model continues to improve even after 1T tokens.
The focus of this work is to train a series of language models that achieve the best possible performance at various inference budgets, by training on more tokens than what is typically used. The resulting models, called _LLaMA_, range from 7B to 65B parameters with competitive performance compared to the best existing LLMs. For instance, LLaMA-13B outperforms GPT-3 on most benchmarks, despite being 10\(\times\) smaller. We believe that this model will help democratize the access and study of LLMs, since it can be run on a single GPU. At the higher end of the scale, our 65B-parameter model is also competitive with the best large language models such as Chinchilla or PaLM-540B.
Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available data, making our work compatible with open-sourcing, while most existing models rely on data which is either not publicly available or undocumented (e.g. "Books - 2TB" or "Social media conversations"). There exist some exceptions, notably OPT Zhang et al. (2022), GPT-NeoX Black et al. (2022), BLOOM Scao et al. (2022) and GLM Zeng et al. (2022), but none that are competitive with PaLM-62B or Chinchilla.
In the rest of this paper, we present an overview of the modifications we made to the transformer architecture Vaswani et al. (2017), as well as our training method. We then report the performance of our models and compare with other LLMs on a set of standard benchmarks. Finally, we expose some of the biases and toxicity encoded in our models, using some of the most recent benchmarks from the responsible AI community.
## 2 Approach
Our training approach is similar to the methods described in previous work Brown et al. (2020); Chowdhery et al. (2022), and is inspired by the Chinchilla scaling laws Hoffmann et al. (2022). We train large transformers on a large quantity of textual data using a standard optimizer.
### Pre-training Data
Our training dataset is a mixture of several sources, reported in Table 1, that cover a diverse set of domains. For the most part, we reuse data sources that have been leveraged to train other LLMs, with the restriction of only using data that is publicly available, and compatible with open sourcing. This leads to the following mixture of data and the percentage they represent in the training set:
English CommonCrawl [67%].We preprocess five CommonCrawl dumps, ranging from 2017 to 2020, with the CCNet pipeline Wenzek et al. (2020). This process deduplicates the data at the line level, performs language identification with a fastText linear classifier to remove non-English pages and filters low quality content with an n-gram language model. In addition, we trained a linear model to classify pages used as references in Wikipedia _v.s._ randomly sampled pages, and discarded pages not classified as references.
C4 [15%].During exploratory experiments, we observed that using diverse pre-processed CommonCrawl datasets improves performance. We thus included the publicly available C4 dataset Raffel et al. (2020) in our data. The preprocessing of C4 also contains deduplication and language identification steps: the main difference with CCNet is the quality filtering, which mostly relies on heuristics such as presence of punctuation marks or the number of words and sentences in a webpage.
Github [4.5%].We use the public GitHub dataset available on Google BigQuery. We only kept projects that are distributed under the Apache, BSD and MIT licenses. Additionally, we filtered low quality files with heuristics based on the line length or proportion of alphanumeric characters, and removed boilerplate, such as headers, with regular expressions. Finally, we deduplicate the resulting dataset at the file level, with exact matches.
Wikipedia [4.5%].We add Wikipedia dumps from the June-August 2022 period, covering 20 languages, which use either the Latin or Cyrillic scripts: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. We process the data to remove hyperlinks, comments and other formatting boilerplate.
Gutenberg and Books3 [4.5%].We include two book corpora in our training dataset: the Gutenberg Project, which contains books that are in the public domain, and the Books3 section of ThePile Gao et al. (2020), a publicly available dataset for training large language models. We perform deduplication at the book level, removing books with more than 90% content overlap.
ArXiv [2.5%].We process arXiv Latex files to add scientific data to our dataset. Following Lewkowycz et al. (2022), we removed everything before the first section, as well as the bibliography. We also removed the comments from the.tex files, and inline-expanded definitions and macros written by users to increase consistency across papers.
Stack Exchange [2%].We include a dump of Stack Exchange, a website of high quality questions and answers that covers a diverse set of domains, ranging from computer science to chemistry. We kept the data from the 28 largest websites, removed the HTML tags from text and sorted the answers by score (from highest to lowest).
Tokenizer.We tokenize the data with the byte-pair encoding (BPE) algorithm Sennrich et al. (2015), using the implementation from SentencePiece Kudo and Richardson (2018). Notably, we split all numbers into individual digits, and fall back to bytes to decompose unknown UTF-8 characters.
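A minimal sketch of such a tokenizer configuration is given below, assuming a SentencePiece version that exposes the split_digits and byte_fallback training options; the corpus file, vocabulary size, and other settings are illustrative rather than the ones used for LLaMA.

```python
import sentencepiece as spm

# Train a BPE tokenizer that splits numbers into individual digits and falls back
# to bytes for unknown UTF-8 characters. File names and options are illustrative.
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="tokenizer",
    vocab_size=32000,
    model_type="bpe",
    split_digits=True,    # "123" -> "1", "2", "3"
    byte_fallback=True,   # unknown UTF-8 characters are decomposed into bytes
)

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
print(sp.encode("LLaMA was trained on 1.4T tokens", out_type=str))
```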
\begin{table}
\begin{tabular}{l r r r} \hline \hline Dataset & Sampling prop. & Epochs & Disk size \\ \hline CommonCrawl & 67.0\% & 1.10 & 3.3 TB \\ C4 & 15.0\% & 1.06 & 783 GB \\ Github & 4.5\% & 0.64 & 328 GB \\ Wikipedia & 4.5\% & 2.45 & 83 GB \\ Books & 4.5\% & 2.23 & 85 GB \\ ArXiv & 2.5\% & 1.06 & 92 GB \\ StackExchange & 2.0\% & 1.03 & 78 GB \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Pre-training data.** Data mixtures used for pre-training, for each subset we list the sampling proportion, number of epochs performed on the subset when training on 1.4T tokens, and disk size. The pre-training runs on 1T tokens have the same sampling proportion.
Overall, our entire training dataset contains roughly 1.4T tokens after tokenization. For most of our training data, each token is used only once during training, with the exception of the Wikipedia and Books domains, over which we perform approximately two epochs.
### Architecture
Following recent work on large language models, our network is based on the transformer architecture (Vaswani et al., 2017). We leverage various improvements that were subsequently proposed, and used in different models such as PaLM. Here are the main differences with the original architecture, and where we found the inspiration for each change (in brackets):
Pre-normalization [GPT3].To improve the training stability, we normalize the input of each transformer sub-layer, instead of normalizing the output. We use the RMSNorm normalizing function, introduced by Zhang and Sennrich (2019).
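A minimal PyTorch sketch of RMSNorm, applied to the sub-layer input as pre-normalization, is shown below; it is not the exact LLaMA implementation.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Root-mean-square layer normalization (Zhang and Sennrich, 2019)."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale by the reciprocal root-mean-square over the feature dimension.
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)

# Pre-normalization: the *input* of each sub-layer is normalized, e.g. x + attn(RMSNorm(x)).
```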
SwiGLU activation function [PaLM].We replace the ReLU non-linearity by the SwiGLU activation function, introduced by Shazeer (2020) to improve the performance. We use a dimension of \(\frac{2}{3}4d\) instead of \(4d\) as in PaLM.
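The SwiGLU feed-forward block can be sketched as follows; the exact rounding of the hidden dimension used in LLaMA is not specified here, so the int(...) choice is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """Feed-forward block with the SwiGLU activation (Shazeer, 2020)."""
    def __init__(self, dim: int):
        super().__init__()
        hidden = int(2 * 4 * dim / 3)                 # the 2/3 * 4d rule mentioned above
        self.w1 = nn.Linear(dim, hidden, bias=False)  # gate projection
        self.w3 = nn.Linear(dim, hidden, bias=False)  # value projection
        self.w2 = nn.Linear(hidden, dim, bias=False)  # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: silu(W1 x) multiplied elementwise with W3 x, then projected back.
        return self.w2(F.silu(self.w1(x)) * self.w3(x))
```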
Rotary Embeddings [GPTNeo].We remove the absolute positional embeddings, and instead, add rotary positional embeddings (RoPE), introduced by Su et al. (2021), at each layer of the network.
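A compact sketch of applying rotary embeddings to query or key tensors is given below; the base of 10000 and the split-in-half pairing of dimensions follow a common convention and are assumptions, not necessarily the exact LLaMA code.

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary positional embeddings to x of shape (seq_len, n_heads, head_dim)."""
    seq_len, _, head_dim = x.shape
    half = head_dim // 2
    # One rotation frequency per pair of dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, half, dtype=torch.float32) / half))
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]
    cos, sin = angles.cos()[:, None, :], angles.sin()[:, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```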
The details of the hyper-parameters for our different models are given in Table 2.
### Optimizer
Our models are trained using the AdamW optimizer (Loshchilov and Hutter, 2017), with the following hyper-parameters: \(\beta_{1}=0.9,\beta_{2}=0.95\). We use a cosine learning rate schedule, such that the final learning rate is equal to 10% of the maximal learning rate. We use a weight decay of \(0.1\) and gradient clipping of \(1.0\). We use \(2,000\) warmup steps, and vary the learning rate and batch size with the size of the model (see Table 2 for details).
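A sketch of this optimizer and schedule in PyTorch follows; the model, the total number of steps, and the linear shape of the warmup are placeholders and assumptions, while the betas, weight decay, clipping value, warmup length, and the decay to 10% of the maximal learning rate come from the description above.

```python
import math
import torch

model = torch.nn.Linear(10, 10)                  # stand-in for the transformer
max_lr, total_steps, warmup_steps = 3.0e-4, 100_000, 2_000

optimizer = torch.optim.AdamW(model.parameters(), lr=max_lr,
                              betas=(0.9, 0.95), weight_decay=0.1)

def lr_lambda(step: int) -> float:
    if step < warmup_steps:
        return step / max(1, warmup_steps)       # warmup (assumed linear)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return 0.1 + 0.9 * cosine                    # cosine decay down to 10% of max_lr

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# In the training loop, per step:
#   loss.backward()
#   torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # gradient clipping of 1.0
#   optimizer.step(); scheduler.step(); optimizer.zero_grad()
```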
### Efficient implementation
We make several optimizations to improve the training speed of our models. First, we use an efficient implementation of the causal multi-head attention to reduce memory usage and runtime. This implementation, available in the xformers library,2 is inspired by Rabe and Staats (2021) and uses the backward from Dao et al. (2022). This is achieved by not storing the attention weights and not computing the key/query scores that are masked due to the causal nature of the language modeling task.
Footnote 2: [https://github.com/facebookresearch/xformers](https://github.com/facebookresearch/xformers)
To further improve training efficiency, we reduced the amount of activations that are recomputed during the backward pass with checkpointing. More precisely, we save the activations that are expensive to compute, such as the outputs of linear layers. This is achieved by manually implementing the backward function for the transformer layers, instead of relying on the PyTorch autograd. To fully benefit from this
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline params & dimension & \(n\) heads & \(n\) layers & learning rate & batch size & \(n\) tokens \\ \hline
6.7B & 4096 & 32 & 32 & \(3.0e^{-4}\) & 4M & 1.0T \\
13.0B & 5120 & 40 & 40 & \(3.0e^{-4}\) & 4M & 1.0T \\
32.5B & 6656 & 52 & 60 & \(1.5e^{-4}\) & 4M & 1.4T \\
65.2B & 8192 & 64 & 80 & \(1.5e^{-4}\) & 4M & 1.4T \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Model sizes, architectures, and optimization hyper-parameters.**
Figure 1: **Training loss over train tokens for the 7B, 13B, 33B, and 65 models. LLaMA-33B and LLaMA-65B were trained on 1.4T tokens. The smaller models were trained on 1.0T tokens. All models are trained with a batch size of 4M tokens.**
optimization, we need to reduce the memory usage of the model by using model and sequence parallelism, as described by Korthikanti et al. (2022). Moreover, we also overlap the computation of activations and the communication between GPUs over the network (due to all_reduce operations) as much as possible.
When training a 65B-parameter model, our code processes around 380 tokens/sec/GPU on 2048 A100 GPUs with 80GB of RAM. This means that training over our dataset containing 1.4T tokens takes approximately 21 days.
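The 21-day figure follows directly from the quoted throughput, as the short check below shows.

```python
# Back-of-the-envelope check of the training-time estimate above.
tokens_per_sec_per_gpu = 380
n_gpus = 2048
total_tokens = 1.4e12

days = total_tokens / (tokens_per_sec_per_gpu * n_gpus) / 86400
print(f"{days:.1f} days")   # ~20.8 days, consistent with "approximately 21 days"
```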
## 3 Main results
Following previous work Brown et al. (2020), we consider zero-shot and few-shot tasks, and report results on a total of 20 benchmarks:
* **Zero-shot.** We provide a textual description of the task and a test example. The model either provides an answer using open-ended generation, or ranks the proposed answers.
* **Few-shot.** We provide a few examples of the task (between 1 and 64) and a test example. The model takes this text as input and generates the answer or ranks different options.
We compare LLaMA with other foundation models, namely the non-publicly available language models GPT-3 Brown et al. (2020), Gopher Rae et al. (2021), Chinchilla Hoffmann et al. (2022) and PaLM Chowdhery et al. (2022), as well as the open-sourced OPT models Zhang et al. (2022), GPT-J Wang and Komatsuzaki (2021), and GPT-Neo Black et al. (2022). In Section 4, we also briefly compare LLaMA with instruction-tuned models such as OPT-IML Iyer et al. (2022) and Flan-PaLM Chung et al. (2022).
We evaluate LLaMA on free-form generation tasks and multiple choice tasks. In the multiple choice tasks, the objective is to select the most appropriate completion among a set of given options, based on a provided context. We select the completion with the highest likelihood given the provided context. We follow Gao et al. (2021) and use the likelihood normalized by the number of characters in the completion, except for certain datasets (OpenBookQA, BoolQ), for which we follow Brown et al. (2020), and select a completion based on the likelihood normalized by the likelihood of the completion given "Answer:" as context: \(P(\texttt{completion}|\texttt{context})/P(\texttt{completion}|\texttt{``Answer:''})\).
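The two scoring rules described above can be sketched as follows; the helper loglikelihood(context, completion), returning the summed log-probability of the completion tokens given the context, is hypothetical and stands in for a model-specific call.

```python
# Sketch of the multiple-choice scoring rules described above.
# `loglikelihood(context, completion)` is a hypothetical helper returning the
# summed log-probability of the completion given the context.

def score_char_normalized(context: str, completion: str) -> float:
    # Default rule: likelihood normalized by the number of characters in the completion.
    return loglikelihood(context, completion) / len(completion)

def score_answer_normalized(context: str, completion: str) -> float:
    # Rule used for OpenBookQA and BoolQ, in log space:
    # log P(completion | context) - log P(completion | "Answer:")
    return loglikelihood(context, completion) - loglikelihood("Answer:", completion)

def pick(context: str, options: list, scorer=score_char_normalized) -> str:
    return max(options, key=lambda option: scorer(context, option))
```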
### Common Sense Reasoning
We consider eight standard common sense reasoning benchmarks: BoolQ Clark et al. (2019), PIQA Bisk et al. (2020), SIQA Sap et al. (2019),
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline & \multicolumn{2}{c}{0-shot} & 1-shot & 5-shot & 64-shot \\ \hline GPT-3 & 175B & 14.6 & 23.0 & - & 29.9 \\ Gopher & 280B & 10.1 & - & 24.5 & 28.2 \\ Chinchilla & 70B & 16.6 & - & 31.5 & 35.5 \\ \hline \multirow{3}{*}{PaLM} & 8B & 8.4 & 10.6 & - & 14.6 \\ & 62B & 18.1 & 26.5 & - & 27.6 \\ & 540B & 21.2 & 29.3 & - & 39.6 \\ \hline \multirow{3}{*}{LLaMA} & 7B & 16.8 & 18.7 & 22.0 & 26.1 \\ & 13B & 20.1 & 23.4 & 28.1 & 31.9 \\ \cline{1-1} & 33B & **24.9** & 28.3 & 32.9 & 36.0 \\ \cline{1-1} & 65B & 23.8 & **31.0** & **35.0** & **39.9** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **NaturalQuestions.** Exact match performance.
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline & \multicolumn{2}{c}{BoolQ} & PIQA & SIQA & HellaSwag & WinoGrande & ARC-e & ARC-c & OBQA \\ \hline GPT-3 & 175B & 60.5 & 81.0 & - & 78.9 & 70.2 & 68.8 & 51.4 & 57.6 \\ Gopher & 280B & 79.3 & 81.8 & 50.6 & 79.2 & 70.1 & - & - & - \\ Chinchilla & 70B & 83.7 & 81.8 & 51.3 & 80.8 & 74.9 & - & - & - \\ PaLM & 62B & 84.8 & 80.5 & - & 79.7 & 77.0 & 75.2 & 52.5 & 50.4 \\ PaLM-cont & 62B & 83.9 & 81.4 & - & 80.6 & 77.0 & - & - & - \\ PaLM & 540B & **88.0** & 82.3 & - & 83.4 & **81.1** & 76.6 & 53.0 & 53.4 \\ \hline \multirow{3}{*}{LLaMA} & 7B & 76.5 & 79.8 & 48.9 & 76.1 & 70.1 & 72.8 & 47.6 & 57.2 \\ & 13B & 78.1 & 80.1 & 50.4 & 79.2 & 73.0 & 74.8 & 52.7 & 56.4 \\ \cline{1-1} & 33B & 83.1 & 82.3 & 50.4 & 82.8 & 76.0 & **80.0** & **57.8** & 58.6 \\ \cline{1-1} & 65B & 85.3 & **82.8** & **52.3** & **84.2** & 77.0 & 78.9 & 56.0 & **60.2** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Zero-shot performance on Common Sense Reasoning tasks.**
HellaSwag (Zellers et al., 2019), WinoGrande Sakaguchi et al. (2021), ARC easy and challenge Clark et al. (2018) and OpenBookQA (Mihaylov et al., 2018). These datasets include Cloze and Winograd style tasks, as well as multiple choice question answering. We evaluate in the zero-shot setting as done in the language modeling community.
In Table 3, we compare with existing models of various sizes and report numbers from the corresponding papers. First, LLaMA-65B outperforms Chinchilla-70B on all reported benchmarks but BoolQ. Similarly, this model surpasses PaLM-540B everywhere but on BoolQ and WinoGrande. LLaMA-13B model also outperforms GPT-3 on most benchmarks despite being 10\(\times\) smaller.
### Closed-book Question Answering
We compare LLaMA to existing large language models on two closed-book question answering benchmarks: Natural Questions Kwiatkowski et al. (2019) and TriviaQA Joshi et al. (2017). For both benchmarks, we report exact match performance in a closed book setting, i.e., where the models do not have access to documents that contain evidence to answer the question. In Table 4, we report performance on NaturalQuestions, and in Table 5, we report on TriviaQA. On both benchmarks, LLaMA-65B achieves state-of-the-art performance in the zero-shot and few-shot settings. More importantly, the LLaMA-13B is also competitive on these benchmarks with GPT-3 and Chinchilla, despite being 5-10\(\times\) smaller. This model runs on a single V100 GPU during inference.
### Reading Comprehension
We evaluate our models on the RACE reading comprehension benchmark Lai et al. (2017). This dataset was collected from English reading comprehension exams designed for middle and high school Chinese students. We follow the evaluation setup from Brown et al. (2020) and report results in Table 6. On these benchmarks, LLaMA-65B is competitive with PaLM-540B, and LLaMA-13B outperforms GPT-3 by a few percent.
### Mathematical reasoning
We evaluate our models on two mathematical reasoning benchmarks: MATH Hendrycks et al. (2021) and GSM8k Cobbe et al. (2021). MATH is a dataset of 12K middle school and high school mathematics problems written in LaTeX. GSM8k is a set of middle school mathematical problems. In Table 7, we compare with PaLM and Minerva Lewkowycz et al. (2022). Minerva is a series of PaLM models finetuned on 38.5B tokens extracted from ArXiv and Math Web Pages, while neither PaLM nor LLaMA is finetuned on mathematical data. The numbers for PaLM and Minerva are taken from Lewkowycz et al. (2022), and we compare with and without maj1@k. maj1@k denotes evaluations where we generate \(k\) samples for each problem and perform majority voting Wang et al. (2022). On GSM8k, we observe that LLaMA-65B outperforms Minerva-62B, although it has not been fine-tuned on mathematical data.
### Code generation
We evaluate the ability of our models to write code from a natural language description on two benchmarks: HumanEval Chen et al. (2021) and MBPP Austin et al. (2021). For both tasks, the model receives a description of the program in a few sentences, as well as a few input-output examples. In HumanEval, it also receives a function signature, and the prompt is formatted as natural code with the textual description and tests in a
\begin{table}
\begin{tabular}{l r r r} \hline \hline & \multicolumn{2}{c}{RACE-middle} & \multicolumn{1}{c}{RACE-high} \\ \hline GPT-3 & 175B & 58.4 & 45.5 \\ \hline \multirow{3}{*}{PaLM} & 8B & 57.9 & 42.3 \\ & 62B & 64.3 & 47.5 \\ & 540B & **68.1** & 49.1 \\ \hline \multirow{3}{*}{LLaMA} & 7B & 61.1 & 46.9 \\ & 13B & 61.6 & 47.2 \\ \cline{1-1} & 33B & 64.1 & 48.3 \\ \cline{1-1} & 65B & 67.9 & **51.6** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Reading Comprehension.** Zero-shot accuracy.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline & & 0-shot & 1-shot & 5-shot & 64-shot \\ \hline Gopher & 280B & 43.5 & - & 57.0 & 57.2 \\ Chinchilla & 70B & 55.4 & - & 64.1 & 64.6 \\ \hline \multirow{3}{*}{LLaMA} & 7B & 50.0 & 53.4 & 56.3 & 57.6 \\ & 13B & 56.6 & 60.5 & 63.1 & 64.0 \\ \cline{1-1} & 33B & 65.1 & 67.9 & 69.9 & 70.4 \\ \cline{1-1} & 65B & **68.2** & **71.6** & **72.6** & **73.0** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **TriviaQA.** Zero-shot and few-shot exact match performance on the filtered dev set.
docstring. The model needs to generate a Python program that fits the description and satisfies the test cases. In Table 8, we compare the pass@1 scores of our models with existing language models that have not been finetuned on code, namely PaLM and LaMDA Thoppilan et al. (2022). PaLM and LLaMA were trained on datasets that contain a similar number of code tokens.
As shown in Table 8, for a similar number of parameters, LLaMA outperforms other general models such as LaMDA and PaLM, which are not trained or finetuned specifically for code. LLaMA with 13B parameters and more outperforms LaMDA 137B on both HumanEval and MBPP. LLaMA 65B also outperforms PaLM 62B, even when it is trained longer. The pass@1 results reported in this table were obtained by sampling with temperature 0.1. The pass@100 and pass@80 metrics were obtained with temperature 0.8. We use the same method as Chen et al. (2021) to obtain unbiased estimates of the pass@k.
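For reference, the unbiased pass@k estimator of Chen et al. (2021) can be computed as below; the example counts of generations and correct samples are invented.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from Chen et al. (2021): probability that at least
    one of k samples drawn without replacement from n generations (c correct) passes."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical example: 200 generations for a problem, 30 of which pass the tests.
print(pass_at_k(200, 30, 1), pass_at_k(200, 30, 100))
```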
It is possible to improve the performance on code by finetuning on code-specific tokens. For instance, PaLM-Coder Chowdhery et al. (2022) increases the pass@1 score of PaLM on HumanEval from 26.2% for PaLM to 36%. Other models trained specifically for code also perform better than general models on these tasks Chen et al. (2021); Nijkamp et al. (2022); Fried et al. (2022). Finetuning on code tokens is beyond the scope of this paper.
### Massive Multitask Language Understanding
The massive multitask language understanding benchmark, or MMLU, introduced by Hendrycks et al. (2020) consists of multiple choice questions covering various domains of knowledge, including humanities, STEM and social sciences. We evaluate our models in the 5-shot setting, using the examples provided by the benchmark, and report results in Table 9. On this benchmark, we observe that the LLaMA-65B is behind both Chinchilla-70B and PaLM-540B by a few percent on average, and across most domains. A potential explanation is that we have used a limited amount of books and academic papers in our pre-training data, i.e., ArXiv, Gutenberg and Books3, that sums up to only 177GB, while these models were trained on up to 2TB of books. This large quantity of books used by Gopher, Chinchilla and PaLM may also explain why Gopher outperforms GPT-3 on this benchmark, while it is comparable on other benchmarks.
### Evolution of performance during training
During training, we tracked the performance of our models on a few question answering and common sense benchmarks, and report them in Figure 2. On most benchmarks, the performance improves steadily, and correlates with the training perplexity of the model (see Figure 1). The exceptions are SIQA and WinoGrande. Most notably, on SIQA, we observe a lot of variance in performance,
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline & \multicolumn{3}{c}{MATH +maj1@k} & GSM8k +maj1@k \\ \hline \multirow{4}{*}{PaLM} & 8B & 1.5 & - & 4.1 & - \\ & 62B & 4.4 & - & 33.0 & - \\ & 540B & 8.8 & - & 56.5 & - \\ \hline \multirow{4}{*}{Minerva} & 8B & 14.1 & 25.4 & 16.2 & 28.4 \\ & 62B & 27.6 & 43.4 & 52.4 & 68.5 \\ & 540B & **33.6** & **50.3** & **68.5** & **78.5** \\ \hline \multirow{4}{*}{LLaMA} & 7B & 2.9 & 6.9 & 11.0 & 18.1 \\ & 13B & 3.9 & 8.8 & 17.8 & 29.3 \\ \cline{1-1} & 33B & 7.1 & 15.2 & 35.6 & 53.1 \\ \cline{1-1} & 65B & 10.6 & 20.5 & 50.9 & 69.7 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Model performance on quantitative reasoning datasets.** For majority voting, we use the same setup as Minerva, with \(k=256\) samples for MATH and \(k=100\) for GSM8k (Minerva 540B uses \(k=64\) for MATH and \(k=40\) for GSM8k). LLaMA-65B outperforms Minerva 62B on GSM8k, although it has not been fine-tuned on mathematical data.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline & Params & \multicolumn{2}{c}{HumanEval} & MBPP \\ pass@ & & @1 & @100 & @1 & @80 \\ \hline LaMDA & 137B & 14.0 & 47.3 & 14.8 & 62.4 \\ PaLM & 8B & 3.6\({}^{*}\) & 18.7\({}^{*}\) & 5.0\({}^{*}\) & 35.7\({}^{*}\) \\ PaLM & 62B & 15.9 & 46.3\({}^{*}\) & 21.4 & 63.2\({}^{*}\) \\ PaLM-cont & 62B & 23.7 & - & 31.2 & - \\ PaLM & 540B & **26.2** & 76.2 & 36.8 & 75.0 \\ \hline \multirow{4}{*}{LLaMA} & 7B & 10.5 & 36.5 & 17.7 & 56.2 \\ & 13B & 15.8 & 52.5 & 22.0 & 64.0 \\ \cline{1-1} & 33B & 21.7 & 70.7 & 30.2 & 73.4 \\ \cline{1-1} & 65B & 23.7 & **79.3** & **37.7** & **76.8** \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Model performance for code generation.** We report the pass@ score on HumanEval and MBPP. HumanEval generations are done in zero-shot and MBPP with 3-shot prompts similar to Austin et al. (2021). The values marked with \({}^{*}\) are read from figures in Chowdhery et al. (2022).
that may indicate that this benchmark is not reliable. On WinoGrande, the performance does not correlate as well with training perplexity: the LLaMA-33B and LLaMA-65B have similar performance during the training.
## 4 Instruction Finetuning
In this section, we show that briefly finetuning on instruction data rapidly leads to improvements on MMLU. Although the non-finetuned version of LLaMA-65B is already able to follow basic instructions, we observe that a very small amount of finetuning improves the performance on MMLU, and further improves the ability of the model to follow instructions. Since this is not the focus of this paper, we only conducted a single experiment following the same protocol as Chung et al. (2022) to train an instruct model, LLaMA-I.
In Table 10, we report the results of our instruct model LLaMA-I on MMLU and compare with existing instruction finetuned models of moderate sizes, namely, OPT-IML (Iyer et al., 2022) and the Flan-PaLM series (Chung et al., 2022). All the reported numbers are from the corresponding papers. Despite the simplicity of the instruction finetuning approach used here, we reach 68.9% on MMLU. LLaMA-I (65B) outperforms existing instruction finetuned models of moderate sizes on MMLU, but is still far from the state-of-the-art, that is 77.4 for GPT code-davinci-002 on MMLU (numbers taken from Iyer et al. (2022)). The details of the performance on MMLU on the 57 tasks can be found in Table 16 of the appendix.
## 5 Bias, Toxicity and Misinformation
Large language models have been shown to reproduce and amplify biases that exist in the training data (Sheng et al., 2019; Kurita et al., 2019), and to generate toxic or offensive content (Gehman et al., 2020). As our training dataset contains a large proportion of data from the Web, we believe that it is crucial to determine the potential for our models to generate such content. To understand the potential harm of LLaMA-65B, we evaluate on different benchmarks that measure toxic content production and stereotypes detection. While we have selected some of the standard benchmarks that are used by the language model community to indicate some of the issues with these models, these evaluations are not sufficient to fully understand the risks associated with these models.
\begin{table}
\begin{tabular}{l c c} \hline \hline OPT & 30B & 26.1 \\ GLM & 120B & 44.8 \\ PaLM & 62B & 55.1 \\ PaLM-cont & 62B & 62.8 \\ Chinchilla & 70B & 67.5 \\ LLaMA & 65B & 63.4 \\ \hline OPT-IML-Max & 30B & 43.2 \\ Flan-T5-XXL & 11B & 55.1 \\ Flan-PaLM & 62B & 59.6 \\ Flan-PaLM-cont & 62B & 66.1 \\ LLaMA-I & 65B & **68.9** \\ \hline \hline \end{tabular}
\end{table}
Table 10: **Instruction finetuning – MMLU (5-shot). Comparison of models of moderate size with and without instruction finetuning on MMLU.**
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Humanities} & STEM & Social Sciences & Other & Average \\ \hline GPT-NeoX & 20B & 29.8 & 34.9 & 33.7 & 37.7 & 33.6 \\ GPT-3 & 175B & 40.8 & 36.7 & 50.4 & 48.8 & 43.9 \\ Gopher & 280B & 56.2 & 47.4 & 71.9 & 66.1 & 60.0 \\ Chinchilla & 70B & 63.6 & 54.9 & 79.3 & **73.9** & 67.5 \\ \hline \multirow{4}{*}{PaLM} & 8B & 25.6 & 23.8 & 24.1 & 27.8 & 25.4 \\ & 62B & 59.5 & 41.9 & 62.7 & 55.8 & 53.7 \\ & 540B & **77.0** & **55.6** & **81.0** & 69.6 & **69.3** \\ \hline \multirow{4}{*}{LLaMA} & 7B & 34.0 & 30.5 & 38.3 & 38.1 & 35.1 \\ & 13B & 45.0 & 35.8 & 53.8 & 53.3 & 46.9 \\ \cline{1-1} & 33B & 55.8 & 46.0 & 66.7 & 63.4 & 57.8 \\ \cline{1-1} & 65B & 61.8 & 51.7 & 72.9 & 67.4 & 63.4 \\ \hline \hline \end{tabular}
\end{table}
Table 9: **Massive Multitask Language Understanding (MMLU). Five-shot accuracy.**
### RealToxicityPrompts
Language models can generate toxic language, e.g., insults, hate speech or threats. There is a very large range of toxic content that a model can generate, making a thorough evaluation challenging. Several recent works [22, 20] have considered the RealToxicityPrompts benchmark [1] as an indicator of how toxic their model is. RealToxicityPrompts consists of about \(100\)k prompts that the model must complete; then a toxicity score is automatically evaluated by making a request to PerspectiveAPI 3. We do not have control over the pipeline used by the third-party PerspectiveAPI, making comparison with previous models difficult.
Footnote 3: [https://perspectiveapi.com/](https://perspectiveapi.com/)
For each of the \(100\)k prompts, we greedily generate with our models, and measure their toxicity score. The score per prompt ranges from 0 (non-toxic) to 1 (toxic). In Table 11, we report our averaged score on the basic and respectful prompt categories of RealToxicityPrompts. These scores are "comparable" with what we observe in the literature (e.g., 0.087 for Chinchilla), but the methodologies differ between these works and ours (in terms of sampling strategy, number of prompts and time of the API). We observe that toxicity increases with the size of the model, especially for Respectful prompts. This was also observed in previous work [22], with the notable exception of Hoffmann et al. (2022), where they do not see a difference between Chinchilla and Gopher, despite different sizes. This could be explained by the fact that the larger model, Gopher, has worse performance than Chinchilla, suggesting that the relation between toxicity and model size may only apply within a model family.
\begin{table}
\begin{tabular}{l r r r} \hline \hline & & Basic & Respectful \\ \hline \multirow{4}{*}{LLaMA} & 7B & 0.106 & 0.081 \\ & 13B & 0.104 & 0.095 \\ & 33B & 0.107 & 0.087 \\ & 65B & 0.128 & 0.141 \\ \hline \hline \end{tabular}
\end{table}
Table 11: **RealToxicityPrompts.** We run a greedy decoder on the 100k prompts from this benchmark. The "respectful" versions are prompts starting with "Complete the following sentence in a polite, respectful, and unbiased manner:", and "Basic" is without it. Scores were obtained using the PerspectiveAPI, with higher score indicating more toxic generations.
Figure 2: **Evolution of performance on question answering and common sense reasoning during training.**
### CrowS-Pairs
We evaluate the biases in our model on the CrowS-Pairs dataset (Nangia et al., 2020). This dataset allows measuring biases in 9 categories: gender, religion, race/color, sexual orientation, age, nationality, disability, physical appearance and socioeconomic status. Each example is composed of a stereotype and an anti-stereotype; we measure the model's preference for the stereotypical sentence using the perplexity of both sentences in a zero-shot setting. Higher scores thus indicate higher bias. We compare with GPT-3 and OPT-175B in Table 12.
LLaMA compares slightly favorably to both models on average. Our model is particularly biased in the religion category (+10% compared to OPT-175B), followed by age and gender. We expect these biases to come from CommonCrawl despite multiple filtering steps.
### WinoGender
To further investigate the biases of our model on the gender category, we look at the WinoGender benchmark (Rudinger et al., 2018), a co-reference resolution dataset. WinoGender is made of Winograd schemas, and biases are evaluated by determining if a model's co-reference resolution performance is impacted by the gender of the pronoun.
More precisely, each sentence has three mentions: an "occupation", a "participant", and a "pronoun" where the pronoun is co-referencing either the occupation or participant. We prompt the model to determine the co-reference relation and measure if it does so correctly according to the context of the sentence. The goal is to reveal if societal biases associated with occupations have been captured by the model. For example, a sentence in the WinoGender dataset is "The nurse notified the patient that his shift would be ending in an hour.", which is followed by 'His' refers to. We then compare the perplexity of the continuations the nurse and the patient to perform co-reference resolution with the model. We evaluate the performance when using 3 pronouns: "her/her/she", "his/him/he" and "their/them/someone" (the different choices corresponding to the grammatical function of the pronoun).
In Table 13, we report the co-reference scores for the three different pronouns contained in the dataset. We observe that our model is significantly better at performing co-reference resolution for the "their/them/someone" pronouns than for the "her/her/she" and "his/him/he" pronouns. A similar observation was made in previous work (Rae et al., 2021; Hoffmann et al., 2022), and is likely indicative of gender bias. Indeed, in the case of the "her/her/she" and "his/him/he" pronouns, the model is probably using the majority gender of the occupation to perform co-reference resolution, instead of using the evidence of the sentence.
To further investigate this hypothesis, we look at the set of "gotcha" cases for the "her/her/she" and "his/him/he" pronouns in the WinoGender dataset. These cases correspond to sentences in which the pronoun does not match the majority gender of the occupation, and the occupation is the correct answer. In Table 13, we observe that our model, LLaMA-65B, makes more errors on the gotcha examples, clearly showing that it captures societal biases related to gender and occupation. The drop of performance exists for both "her/her/she" and "his/him/he" pronouns, which is indicative of biases regardless of gender.
### TruthfulQA
TruthfulQA (Lin et al., 2021) aims to measure the truthfulness of a model, i.e., its ability to identify when a claim is true. Lin et al. (2021) consider the definition of "true" in the sense of "literal truth about the real world", and not claims that are only true in the context of a belief system or tradition. This benchmark can evaluate the risks of a model to generate misinformation or false claims. The questions are written in diverse style, cover 38 categories and are designed to be adversarial.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & LLaMA & GPT3 & OPT \\ \hline Gender & 70.6 & **62.6** & 65.7 \\ Religion & 79.0 & 73.3 & **68.6** \\ Race/Color & **57.0** & 64.7 & 68.6 \\ Sexual orientation & 81.0 & **76.2** & 78.6 \\ Age & 70.1 & **64.4** & 67.8 \\ Nationality & 64.2 & **61.6** & 62.9 \\ Disability & **66.7** & 76.7 & 76.7 \\ Physical appearance & 77.8 & **74.6** & 76.2 \\ Socioeconomic status & **71.5** & 73.8 & 76.2 \\ \hline Average & **66.6** & 67.2 & 69.5 \\ \hline \hline \end{tabular}
\end{table}
Table 12: **CrowS-Pairs.** We compare the level of biases contained in LLaMA-65B with OPT-175B and GPT3-175B. Higher score indicates higher bias.
In Table 14, we report the performance of our models on both questions to measure truthful models and the intersection of truthful and informative. Compared to GPT-3, our model scores higher in both categories, but the rate of correct answers is still low, showing that our model is likely to hallucinate incorrect answers.
## 6 Carbon footprint
The training of our models has consumed a massive quantity of energy, responsible for the emission of carbon dioxide. We follow the recent literature on the subject and break down both the total energy consumption and the resulting carbon footprint in Table 15. We follow a formula from Wu et al. (2022) to estimate the Watt-hour, Wh, needed to train a model, as well as the tons of carbon emissions, tCO\({}_{2}\)eq. For the Wh, we use the formula:
\[\text{Wh}=\text{GPU-h}\times(\text{GPU power consumption})\times\text{PUE},\]
where we set the Power Usage Effectiveness (PUE) at \(1.1\). The resulting carbon emission depends on the location of the data center used to train the network. For instance, BLOOM uses a grid that emits 0.057 kg CO\({}_{2}\)eq/KWh leading to 27 tCO\({}_{2}\)eq and OPT a grid that emits 0.231 kg CO\({}_{2}\)eq/KWh, leading to 82 tCO\({}_{2}\)eq. In this study, we are interested in comparing the cost in carbon emission of training of these models if they were trained in the same data center. Hence, we do not take the location of data center in consideration, and use, instead, the US national average carbon intensity factor of 0.385 kg CO\({}_{2}\)eq/KWh. This leads to the following formula for the tons of carbon emissions:
\[\text{tCO}_{2}\text{eq}=\text{MWh}\times 0.385.\]
We apply the same formula to OPT and BLOOM for fair comparison. For OPT, we assume training required 34 days on 992 A100-80GB (see their logs4). Finally, we estimate that we used 2048 A100-80GB for a period of approximately 5 months to develop our models. This means that developing these models would have cost around 2,638 MWh under our assumptions, and a total emission of 1,015 tCO\({}_{2}\)eq. We hope that releasing these models will help to reduce future carbon emission since the training is already done, and some of the models are relatively small and can be run on a single GPU.
Footnote 4: [https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles](https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles)
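As a worked example, the two formulas above reproduce the LLaMA-65B row of Table 15.

```python
# Carbon-footprint arithmetic for LLaMA-65B, using the formulas above and the
# values quoted in Table 15.
gpu_hours = 1_022_362
gpu_power_w = 400          # thermal design power of an A100-80GB NVLink system
pue = 1.1
carbon_intensity = 0.385   # kg CO2eq per KWh, US national average

mwh = gpu_hours * gpu_power_w * pue / 1e6
tco2eq = mwh * carbon_intensity
print(f"{mwh:.0f} MWh, {tco2eq:.0f} tCO2eq")   # ~450 MWh and ~173 tCO2eq, matching the table
```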
## 7 Related work
Language models are probability distributions over sequences of words, tokens or characters Shannon (1948, 1951). This task, often framed as next token prediction, has long been considered a core problem in natural language processing Bahl et al. (1983); Brown et al. (1990). Ever since Turing (1950) proposed to measure machine intelligence by using language through the "imitation game", language modeling has been proposed as a benchmark to measure progress toward artificial intelligence Mahoney (1999).
Architecture.Traditionally, language models were based on \(n\)-gram count statistics Bahl et al. (1983), and various smoothing techniques were proposed to improve the estimation of rare events Katz (1987); Kneser and Ney (1995). In the past two decades, neural networks have been successfully applied to the language modelling task,
\begin{table}
\begin{tabular}{l r r r r} \hline \hline & & Truthful & Truthful*Inf \\ \hline \multirow{3}{*}{GPT-3} & 1.3B & 0.31 & 0.19 \\ & 6B & 0.22 & 0.19 \\ & 175B & 0.28 & 0.25 \\ \hline \multirow{3}{*}{LLaMA} & 7B & 0.33 & 0.29 \\ & 13B & 0.47 & 0.41 \\ \cline{1-1} & 33B & 0.52 & 0.48 \\ \cline{1-1} & 65B & 0.57 & 0.53 \\ \hline \hline \end{tabular}
\end{table}
Table 14: **TruthfulQA.** We report the fraction of truthful and truthful*informative answers, as scored by specially trained models via the OpenAI API. We follow the QA prompt style used in Ouyang et al. (2022), and report the performance of GPT-3 from the same paper.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline & 7B & 13B & 33B & 65B \\ \hline All & 66.0 & 64.7 & 69.0 & 77.5 \\ \hline her/her/she & 65.0 & 66.7 & 66.7 & 78.8 \\ his/him/he & 60.8 & 62.5 & 62.1 & 72.1 \\ their/them/someone & 72.1 & 65.0 & 78.3 & 81.7 \\ \hline her/her/she (_gotcha_) & 64.2 & 65.8 & 61.7 & 75.0 \\ his/him/he (_gotcha_) & 55.0 & 55.8 & 55.8 & 63.3 \\ \hline \hline \end{tabular}
\end{table}
Table 13: **WinoGender.** Co-reference resolution accuracy for the LLaMA models, for different pronouns ("her/her/she" and "his/him/he"). We observe that our models obtain better performance on "their/them/someone" pronouns than on "her/her/she" and "his/him/he", which is likely indicative of biases.
starting from feed forward models (Bengio et al., 2000), recurrent neural networks (Elman, 1990; Mikolov et al., 2010) and LSTMs (Hochreiter and Schmidhuber, 1997; Graves, 2013). More recently, transformer networks, based on self-attention, have led to important improvements, especially for capturing long range dependencies (Vaswani et al., 2017; Radford et al., 2018; Dai et al., 2019).
Scaling.There is a long history of scaling for language models, for both the model and dataset sizes. Brants et al. (2007) showed the benefits of using language models trained on 2 trillion tokens, resulting in 300 billion \(n\)-grams, on the quality of machine translation. While this work relied on a simple smoothing technique, called _Stupid Backoff_, Heafield et al. (2013) later showed how to scale Kneser-Ney smoothing to Web-scale data. This allowed training a 5-gram model on 975 billion tokens from CommonCrawl, resulting in a model with 500 billion \(n\)-grams (Buck et al., 2014). Chelba et al. (2013) introduced the _One Billion Word_ benchmark, a large scale training dataset to measure the progress of language models.
In the context of neural language models, Jozefowicz et al. (2016) obtained state-of-the-art results on the Billion Word benchmark by scaling LSTMs to 1 billion parameters. Later, scaling transformers led to improvements on many NLP tasks. Notable models include BERT (Devlin et al., 2018), GPT-2 (Radford et al., 2019), Megatron-LM (Shoeybi et al., 2019), and T5 (Raffel et al., 2020). A significant breakthrough was obtained with GPT-3 (Brown et al., 2020), a model with 175 billion parameters. This led to a series of _Large Language Models_, such as Jurassic-1 (Lieber et al., 2021), Megatron-Turing NLG (Smith et al., 2022), Gopher (Rae et al., 2021), Chinchilla (Hoffmann et al., 2022), PaLM (Chowdhery et al., 2022), OPT (Zhang et al., 2022), and GLM (Zeng et al., 2022). Hestness et al. (2017) and Rosenfeld et al. (2019) studied the impact of scaling on the performance of deep learning models, showing the existence of power laws between the model and dataset sizes and the performance of the system. Kaplan et al. (2020) derived power laws specifically for transformer based language models, which were later refined by Hoffmann et al. (2022), by adapting the learning rate schedule when scaling datasets. Finally, Wei et al. (2022) studied the effect of scaling on the abilities of large language models.
## 8 Conclusion
In this paper, we presented a series of language models that are released openly, and competitive with state-of-the-art foundation models. Most notably, LLaMA-13B outperforms GPT-3 while being more than 10\(\times\) smaller, and LLaMA-65B is competitive with Chinchilla-70B and PaLM-540B. Unlike previous studies, we show that it is possible to achieve state-of-the-art performance by training exclusively on publicly available data, without resorting to proprietary datasets. We hope that releasing these models to the research community will accelerate the development of large language models, and help efforts to improve their robustness and mitigate known issues such as toxicity and bias. Additionally, we observed, like Chung et al. (2022), that finetuning these models on instructions leads to promising results, and we plan to further investigate this in future work. Finally, we plan to release larger models trained on larger pretraining corpora in the future, since we have seen a constant improvement in performance as we were scaling.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & GPU Type & \begin{tabular}{c} GPU Power \\ consumption \\ \end{tabular} & GPU-hours & \begin{tabular}{c} Total power \\ consumption \\ \end{tabular} &
\begin{tabular}{c} Carbon emitted \\ (tCO\({}_{2}\)eq) \\ \end{tabular} \\ \hline OPT-175B & A100-80GB & 400W & 809,472 & 356 MWh & 137 \\ BLOOM-175B & A100-80GB & 400W & 1,082,880 & 475 MWh & 183 \\ \hline LLaMA-7B & A100-80GB & 400W & 82,432 & 36 MWh & 14 \\ LLaMA-13B & A100-80GB & 400W & 135,168 & 59 MWh & 23 \\ LLaMA-33B & A100-80GB & 400W & 530,432 & 233 MWh & 90 \\ LLaMA-65B & A100-80GB & 400W & 1,022,362 & 449 MWh & 173 \\ \hline \hline \end{tabular}
\end{table}
Table 15: **Carbon footprint of training different models in the same data center.** We follow Wu et al. (2022) to compute carbon emission of training OPT, BLOOM and our models in the same data center. For the power consumption of a A100-80GB, we take the thermal design power for NVLink systems, that is 400W. We take a PUE of 1.1 and a carbon intensity factor set at the national US average of 0.385 kg CO\({}_{2}\)e per KWh.
## Acknowledgements
We thank Daniel Haziza, Francisco Massa, Jeremy Reizenstein, Artem Korenev, and Patrick Labatut from the xformers team. We thank Susan Zhang and Stephen Roller for their support on data deduplication. We thank Luca Wehrstedt, Vegard Mella, and Pierre-Emmanuel Mazare for their support on training stability. We thank Shubho Sengupta, Kalyan Saladi, and all the AI infra team for their support. We thank Jane Yu for her input on evaluation. We thank Yongyi Hu for his help on data collection.
|
2305.18883 | About the aftershocks, the Omori law, and the Utsu formula | After the main shock of an earthquake, a stream of aftershocks that does not
subside for a long time is usually observed. Fusakichi Omori found that the
frequency of aftershocks decreases hyperbolically with time. It has recently
been observed that Omori's law can be viewed as a solution to a differential
equation describing the evolution of aftershocks. An alternative way of
describing is based on Utsu law, which states that the frequency of aftershocks
decreases with time according to a power law. The presented paper is polemical.
We discuss the issue of the applicability of each of the three alternative ways
of describing aftershocks. The Omori law has a limited scope. The law is valid
only in the so-called Omori epoch, after which the earthquake source undergoes
a bifurcation. In the Omori epoch, the Utsu law is also valid, but it does not
differ in this epoch from the Omori law. The general conclusion is that the
existence of the Omori epoch and the phenomenon of bifurcation exclude the
possibility of describing by a continuous smooth function. At the same time,
the differential evolution equation is applicable both before and after the
bifurcation point. Key words: earthquake, main shock, evolution equation,
deactivation factor, Omori epoch, bifurcation. | Anatol Guglielmi, Alexey Zavyalov, Oleg Zotov, Boris Klain | 2023-05-30T09:27:10Z | http://arxiv.org/abs/2305.18883v1 | # About the aftershocks, the Omori law, and the Utsu formula
###### Abstract
After the main shock of an earthquake, a stream of aftershocks that does not subside for a long time is usually observed. Fusakichi Omori found that the frequency of aftershocks decreases hyperbolically with time. It has recently been observed that Omori's law can be viewed as a solution to a differential equation describing the evolution of aftershocks. An alternative way of describing is based on Utsu law, which states that the frequency of aftershocks decreases with time according to a power law. The presented paper is polemical. We discuss the issue of the applicability of each of the three alternative ways of describing aftershocks. The Omori law has a limited scope. The law is valid only in the so-called Omori epoch, after which the earthquake source undergoes a bifurcation. In the Omori epoch, the Utsu law is also valid, but it does not differ in this epoch from the Omori law. The general conclusion is that the existence of the Omori epoch and the phenomenon of bifurcation exclude the possibility of describing by a continuous smooth function. At the same time, the differential evolution equation is applicable both before and after the bifurcation point.
earthquake, main shock, evolution equation, deactivation factor, Omori epoch, bifurcation.
## 1 Introduction
This year marks 100 years since the death of Fusakichi Omori [1, 2, 3, 4]. He made an outstanding contribution to the physics of earthquakes. In 1894, while still a young man, he
discovered that after a strong earthquake, the aftershock frequency \(n(t)\) decreases on average hyperbolically with time:
\[n(t)=\frac{k}{c+t}\,. \tag{1}\]
Here \(k>0\,,\ c>0\), and \(t\geq 0\)[5]. Not long ago [6], it was noticed that the Omori formula (1) is the general solution of the nonlinear differential equation
\[\frac{dn}{dt}+\sigma n^{2}=0\,. \tag{2}\]
Here \(\sigma\) is the deactivation coefficient of the source cooling down after the main shock of the earthquake. We see that the evolution equation (2) has a unique solution
\[n(t)=\frac{n_{0}}{1+\sigma n_{0}t}\,, \tag{3}\]
which coincides with the Omori formula (1) up to notation. Thus, formula (1) and equation (2) are equivalent methods for describing the evolution of aftershocks.
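The equivalence of the two descriptions is easy to check numerically. The following minimal sketch (the values of \(n_{0}\) and \(\sigma\) are illustrative, not taken from any catalog) integrates equation (2) and compares the result with the hyperbolic solution (3).

```python
# Minimal numerical check that n(t) = n0 / (1 + sigma * n0 * t) satisfies dn/dt + sigma * n^2 = 0.
import numpy as np
from scipy.integrate import solve_ivp

n0, sigma = 100.0, 0.02          # assumed initial frequency and deactivation coefficient
t = np.linspace(0.0, 30.0, 301)  # days after the main shock

closed_form = n0 / (1.0 + sigma * n0 * t)
numerical = solve_ivp(lambda t_, n: -sigma * n**2, (t[0], t[-1]), [n0],
                      t_eval=t, rtol=1e-9, atol=1e-12).y[0]

print(np.max(np.abs(closed_form - numerical)))  # tiny: the two descriptions agree
```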
Another way of describing aftershocks is widespread in the geophysical literature. Namely, instead of the Omori formula (1), a function
\[n(t)=\frac{k}{(c+t)^{p}} \tag{4}\]
is used in which \(p\) is a dimensionless additional parameter [7, 8, 9]. Formula (4) is often called the Utsu law (see, for example, [10] and references therein).
This paper is a polemical one. We will discuss a rather subtle issue regarding the scope of applicability of each of the three laws of aftershock evolution, (1), (2) and (4). In Section 2, we indicate the range of applicability of the hyperbolic law (1). In section 3, we describe the bifurcation phenomenon revealed using equation (2) and show that the bifurcation point limits the applicability region of (1). We present arguments in favor of the idea that formula (4) is inapplicable for describing aftershocks.
So, we consider the source as a dynamic system. Our phenomenological theory (2) contains a parameter \(\sigma\) that characterizes the system, and this parameter can, generally speaking, depend on time. The \(\sigma\left(t\right)\) dependence reflects the non-stationarity of the geological environment in the source. Non-stationarity arises due to the fact that after formation of the main rupture, the transition of rocks from one state of quasi-equilibrium to another begins. In other words, the source relaxes, which manifests itself in a series of aftershocks.
Let us introduce the so-called proper time of the source
\[\tau=\int_{0}^{t}\sigma\left(t^{\prime}\right)dt^{\prime}\,. \tag{5}\]
The master equation will take the form
\[\frac{dn}{d\tau}+n^{2}=0\,. \tag{6}\]
The general solution of this equation has the form
\[n\left(\tau\right)=\frac{n_{0}}{1+n_{0}\tau}\,. \tag{7}\]
Solution (7) preserves the hyperbolic structure of Omori's law (1), with the difference that time in a non-stationary source flows unevenly.
Let us pose the inverse problem of source physics: Calculate the deactivation coefficient from experimental data on the frequency of aftershocks. We introduce an auxiliary function \(g\left(t\right)=1/\,n\left(t\right)\). The solution of the inverse problem is
\[\sigma=\frac{d}{dt}\left\langle g\right\rangle, \tag{8}\]
where the angle brackets denote the operation of smoothing the auxiliary function [11].
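A minimal sketch of formula (8) is given below; the moving-average window and the synthetic test series are illustrative assumptions, not the smoothing procedure used in [11].

```python
# Sketch of the inverse problem (8): estimate the deactivation coefficient from
# an aftershock frequency series. Window length and test data are illustrative.
import numpy as np

def deactivation_coefficient(n, dt, window=7):
    """sigma(t) = d<g>/dt with g = 1/n and <.> a moving-average smoothing."""
    g = 1.0 / n
    kernel = np.ones(window) / window
    g_smooth = np.convolve(g, kernel, mode="same")   # <g>
    return np.gradient(g_smooth, dt)                 # d<g>/dt

# Synthetic test: data generated by the Omori law should give back a constant sigma.
dt, sigma_true, n0 = 1.0, 0.02, 100.0
t = np.arange(0.0, 60.0, dt)
n = n0 / (1.0 + sigma_true * n0 * t)
print(deactivation_coefficient(n, dt)[10:50].round(4))  # ~0.02 away from the edges
```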
The practical solution of the inverse problem made it possible to reveal the existence of the Omori epoch, during which the source deactivation coefficient remains unchanged [12, 13, 14, 15]. The duration of the Omori epoch varies from case to case from a few days to several months. Proper time in the Omori epoch is proportional to world time: \(\tau=\sigma t\).
Figure: Solution of the inverse problem for an event that occurred in Northern California on November 23, 1984 (magnitude of the main shock M = 6). Left panel: the dotted curve shows the variation in the deactivation factor, and the solid broken line shows the fitting function. Right panel: schematic of the jump in the time derivative of the deactivation coefficient; the dimension of \(\theta\), plotted along the vertical axis, is \([\,\theta\,]=1/\text{day}\).
Consider the figure [14, 15]. We see that evolution begins from the Omori epoch, with \(\sigma=\text{const}\) over 20 days. At the end of the Omori epoch, there is a jump in the value of \(\theta=d\sigma/dt\).
## 3 Discussion
The existence of the Omori epoch indicates that the Omori law (1) is applicable to describe the evolution of aftershocks. Its applicability, however, is limited in time. At the end of the Omori epoch, there is a jump in the time derivative of the function \(\sigma(t)\). Neither the hyperbolic law (1) nor the Utsu power law (4) describes the entire evolution of aftershocks as an integral process. The Utsu law is applicable in the Omori epoch, but at this stage of evolution it does not differ from the Omori law.
In [4, 16], the inverse problem was solved for eight aftershock series. Based on the analysis of solutions, the authors formulated a hypothesis that at the end of the Omori epoch, a bifurcation of the earthquake source occurs. The bifurcation phenomenon is well known in the theory of critical phenomena, in the theory of phase transitions, and in the theory of catastrophes [17, 18, 19]. Judging by the right side of the figure, we may be dealing with a first-order phase transition. However, it is difficult to talk about this at this stage of the study. We have yet to understand the mechanism of
bifurcation within the framework of one or another model of the source, considered as a dynamic system.
## 4 Conclusion
The general conclusion is that the existence of the Omori epoch and the phenomenon of source bifurcation exclude the possibility of describing the evolution of aftershocks by a continuous smooth function. Hence it follows that neither the Omori law nor the Utsu formula can be used to describe the aftershock flow as an integral process. Omori's law is applicable only in the Omori epoch. In this epoch, the Utsu power function is also applicable, but in this epoch it coincides with Omori hyperbolic formula. At the end of the Omori epoch, the time derivative of the deactivation coefficient experiences a jump (rupture), after which another phase of aftershock evolution begins. At the same time, the nonlinear differential equation (2) describes evolution both in the Omori epoch and after it.
**Acknowledgments:** We are deeply grateful to A.L. Buchachenko for his attention to this study and for his valuable comments. We are grateful to the compilers of the earthquakes catalogs of Southern ([https://scedc.caltech.edu](https://scedc.caltech.edu)) and Northern ([http://www.ncedc.org](http://www.ncedc.org)) California, data from which were used in our study. The work was carried out according to the plan of state assignments of IPE RAS.
|
2306.17038 | Comparison of Single- and Multi- Objective Optimization Quality for
Evolutionary Equation Discovery | Evolutionary differential equation discovery proved to be a tool to obtain
equations with less a priori assumptions than conventional approaches, such as
sparse symbolic regression over the complete possible terms library. The
equation discovery field contains two independent directions. The first one is
purely mathematical and concerns differentiation, the object of optimization
and its relation to the functional spaces and others. The second one is
dedicated purely to the optimizational problem statement. Both topics are worth
investigating to improve the algorithm's ability to handle experimental data a
more artificial intelligence way, without significant pre-processing and a
priori knowledge of their nature. In the paper, we consider the prevalence of
either single-objective optimization, which considers only the discrepancy
between selected terms in the equation, or multi-objective optimization, which
additionally takes into account the complexity of the obtained equation. The
proposed comparison approach is shown on classical model examples -- Burgers
equation, wave equation, and Korteweg - de Vries equation. | Mikhail Maslyaev, Alexander Hvatov | 2023-06-29T15:37:19Z | http://arxiv.org/abs/2306.17038v1 | # Comparison of Single- and Multi- Objective Optimization Quality for Evolutionary Equation Discovery
###### Abstract.
Evolutionary differential equation discovery proved to be a tool to obtain equations with less a priori assumptions than conventional approaches, such as sparse symbolic regression over the complete possible terms library. The equation discovery field contains two independent directions. The first one is purely mathematical and concerns differentiation, the object of optimization and its relation to the functional spaces and others. The second one is dedicated purely to the optimizaioal problem statement. Both topics are worth investigating to improve the algorithm's ability to handle experimental data a more artificial intelligence way, without significant pre-processing and a priori knowledge of their nature. In the paper, we consider the prevalence of either single-objective optimization, which considers only the discrepancy between selected terms in the equation, or multi-objective optimization, which additionally takes into account the complexity of the obtained equation. The proposed comparison approach is shown on classical model examples - Burgers equation, wave equation, and Korteweg - de Vries equation.
symbolic regression, dynamic system modeling, interpretable learning, differential equations, sparse regression
structure, so increased diversity of the population can benefit the resulting quality.
## 2. Algorithm Description
The data-driven differential equation identification operates on problems of selecting a model for the dynamics of the variable \(u=u(t,\mathbf{x})\) in a spatio-temporal domain \((0,T)\times\Omega\), that is implicitly described by the differential equation Eq. 1 with corresponding initial and boundary conditions. It can be assumed that the order of the unknown equation can be arbitrary, but rather low (usually of second or third order).
\[F(t,\mathbf{x},u,\frac{\partial u}{\partial t},\frac{\partial u}{\partial\mathbf{x}_{1}},\dots,\frac{\partial u}{\partial\mathbf{x}_{n}})=0 \tag{1}\]
Both multi-objective and single-objective approaches have the same core of "graph-like" representation of a differential equation (encoding) and similar evolutionary operators that will be described further.
### Differential equation representation
To represent the candidate differential equation, a computational graph structure is employed. A fixed three-layer graph is used to avoid the infeasible structures, linked to unconstrained graph construction and overtraining issues, that are present in symbolic regression. The lowest-level nodes contain tokens; the middle nodes and the root are multiplication and summation operations. The data-driven equations take the form of a linear combination of product terms, each represented by the multiplication of derivatives or other functions and a real-valued coefficient, as in Eq. 2.
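A minimal sketch of such an encoding is given below; the class and field names are assumptions for illustration, not the identifiers used in the actual framework.

```python
# Minimal sketch of the fixed three-level encoding: the root sums terms,
# each term multiplies tokens, and each token is a factor evaluated on the grid.
from dataclasses import dataclass, field

@dataclass
class Token:
    name: str                                    # e.g. "du/dx" or "u"
    params: dict = field(default_factory=dict)   # optional token parameters

@dataclass
class Term:
    tokens: list          # factors f_ij multiplied together
    coeff: float = 0.0    # alpha_i, filled in by the regression step

@dataclass
class Equation:
    terms: list           # summed at the root node

# Example individual encoding  du/dt + u * du/dx = 0
candidate = Equation(terms=[
    Term(tokens=[Token("du/dt")]),
    Term(tokens=[Token("u"), Token("du/dx")]),
])
```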
\[\begin{cases}F^{\prime}(t,\mathbf{x},u,\frac{\partial u}{\partial t},\frac{\partial u}{\partial x_{1}},\dots,\frac{\partial u}{\partial x_{n}})=\sum_{i}\alpha_{i}\prod_{j}f_{ij}=0\\ G^{\prime}(u)|_{\Gamma}=0\end{cases} \tag{2}\]
Here, the factors \(f_{ij}\) are selected from the user-defined set of elementary functions, named tokens. The problem of an equation search transforms into the task of detecting an optimal set of tokens to represent the dynamics of the variable \(u(t,\mathbf{x})\), and forming the equation by evaluating the coefficients \(\alpha=(\alpha_{1},\dots,\alpha_{m})\).
During the equation search, we operate with tensors of token values, evaluated on grids \(u_{\gamma}=u(t_{\gamma},\mathbf{x}_{\gamma})\) in the processed domain \((0,T)\bigtimes\Omega\).
Sparsity promotion in the equation operates by filtering out nominal terms with low predicting power and is implemented with LASSO regression. For each individual, a term (without loss of generality, we can assume that it is the \(m\)-th term) is marked to be a "right-hand side of the equation" for the purposes of term filtering and coefficient calculation. The terms \(T_{i}=\prod_{j}f_{ij}\) are paired with real-value coefficients obtained from the optimization subproblem of Eq. 3. Finally, the equation coefficients are detected by linear regression.
\[\alpha^{\prime}=\arg\min_{\alpha^{\prime}}(||\sum_{i,\ i\neq m}\alpha_{i}^{\prime}\prod_{j}f_{ij}-\prod_{j}f_{mj}||_{2}+\lambda||\alpha^{\prime}||_{1}) \tag{3}\]
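The sketch below illustrates this two-stage procedure with scikit-learn; the matrix layout, the threshold on the LASSO coefficients and the function name are assumptions made for the example.

```python
# Sketch of the term-filtering step (3): LASSO selects the active terms against the
# term chosen as the "right-hand side", then plain linear regression refits their
# coefficients. `term_values` is assumed to hold each term evaluated on the grid.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def fit_equation(term_values: np.ndarray, rhs_idx: int, lam: float = 1e-3):
    """term_values: (n_grid_points, n_terms) matrix of evaluated terms T_i."""
    lhs = np.delete(term_values, rhs_idx, axis=1)
    rhs = term_values[:, rhs_idx]

    sparse = Lasso(alpha=lam, fit_intercept=False).fit(lhs, rhs)
    active = np.abs(sparse.coef_) > 1e-8          # terms that survive the filtering

    final = LinearRegression(fit_intercept=False).fit(lhs[:, active], rhs)
    return active, final.coef_
```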
In the initialization of the algorithm equation graphs are randomly constructed for each individual from the sets of user-defined tokens with a number of assumptions about the structures of the "plausible equations".
### Mechanics of implemented evolutionary operators
To direct the search for the optimal equations, standard evolutionary operators of mutation and cross-over have been implemented. While the mechanics of single- and multi-objective optimization in the algorithm differ, they work similarly on the stage of applying equation structure-changing operators. With the graph-like encoding of candidate equations, the operators can be represented as changes, introduced into its subgraphs.
The algorithm's ability to explore new structures is provided by mutation operators, which operate by random token and term exchanges. The number of terms to change has no strict limits. For tokens with parameters \((p_{k+1},\dots,p_{n})\in\mathbb{R}^{n-k}\), such as a parametric representation of an unknown external dependent variable, the parameters are also optimized: the mutation is done with a random Gaussian increment.
In order to combine structural elements of better equations, the cross-over operator is implemented. The interactions between parent equations are held on a term-level basis. The sets of terms pairs from the parent equation are divided into three groups: terms identical in both equations, terms that are present in both equations but have different parameters or only a few tokens inside of them are different, and the unique ones. The cross-over occurs for the two latter groups. For the second group it manifests as the parameter exchange between parents: the new parameters are selected from the interval between the parents' values.
Cross-over between unique terms works as the complete exchange between them. The construction of exchange pairs between these tokens works entirely randomly.
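Illustrative, heavily simplified versions of the two parameter-level operators described above are sketched below; the function names and the mutation scale are assumptions.

```python
# Gaussian increment on token parameters (mutation) and a parameter cross-over
# that samples the child value from the interval between the parents' values.
import random

def mutate_token_params(params: dict, scale: float = 0.1) -> dict:
    return {k: v + random.gauss(0.0, scale) for k, v in params.items()}

def crossover_param(parent_a: float, parent_b: float) -> float:
    lo, hi = sorted((parent_a, parent_b))
    return random.uniform(lo, hi)   # new value between the parents' values
```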
### Optimization of equation quality metric
The selection of the optimized functional distinguishes multiple approaches to the differential equation search. First of all, a more trivial optimization problem can be stated as in Eq. 4, where we assume the identity of the equation operator \(F^{\prime}(\overline{u})=0\) to zero as in Eq. 2.
\[Q_{op}(F^{\prime}(u))=||F^{\prime}(\overline{u})||_{n}=||\sum_{i}\alpha_{i} \prod_{j}f_{ij}||_{n}\longrightarrow\min_{\alpha_{i}\ t_{ij}} \tag{4}\]
An example of a more complex optimized functional is the norm of a discrepancy between the input values of the modelled variable and the solution proposed by the algorithm differential equation, estimated on the same grid. Classical solution techniques can not be applied here due to the inability of a user to introduce the partitioning of the processed domain, form finite-difference schema without a priori knowledge of an equation, proposed by evolutionary algorithm. An automatic solving method for candidate equation (viewed as in Eq. 6) quality evaluation is introduced in (Birir and Bir, 2016) to work around this issue.
\[Q_{sol}(F^{\prime}(u))=||u-\overline{u}||_{n}\longrightarrow\min_{\alpha_{i}\ t_{ij}} \tag{5}\]
\[F^{\prime}(\overline{u})=0:F^{\prime}(\overline{u})=\sum_{i}\alpha_{i}\prod_{j} f_{ij}=0 \tag{6}\]
While both quality metrics Eq. 4 and Eq. 5 in ideal conditions provide decent convergence of the algorithm, in the case of the noisy data, the errors in derivative estimations can make differential operator discrepancy from the identity (as in problem in Eq. 4) an unreliable metric. Applying the automatic solving algorithm has high computational cost due to training a neural network to satisfy the discretized equation and boundary operators.
As the single-objective optimization method for the study, we have employed a simple evolutionary algorithm with a strategy that minimizes one of the aforementioned quality objective functions. Due to the purposes of experiments on synthetic noiseless data, the discrepancy-based approach has been adopted.
### Multi-objective optimization application
As we stated earlier, in addition to process representation, the conciseness is also a valuable for regulating the interpretability of the model. Thus the metric of this property can be naturally introduced as Eq. 7, with an adjustment of counting not the total number of active terms but the total number of tokens (\(k_{i}\) for \(i-th\) term).
\[C(F^{\prime}(u))=\#(F^{\prime})=\sum_{i}k_{i}*\mathbf{1}_{\alpha_{i}\neq 0} \tag{7}\]
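A direct transcription of metric (7) into code, assuming the coefficients and per-term token counts are available as plain lists:

```python
# Complexity metric (7): count the tokens of every term whose coefficient
# survived the sparsity filtering.
def complexity(coeffs, tokens_per_term):
    return sum(k for alpha, k in zip(coeffs, tokens_per_term) if alpha != 0)

print(complexity([0.0, 1.0, -0.04], [1, 1, 2]))  # -> 3
```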
In addition to evaluating the quality of the proposed solution from the point of view of equation simplicity, multi-objective optimization enables the detection of systems of differential equations by optimizing the modeling quality of each variable.
While there are many evolutionary multi-objective optimization algorithms, the MOEADD (Multi-objective evolutionary algorithm based on dominance and decomposition) algorithm (Golovolovolov et al., 2016) has proven to be an effective tool in applications of data-driven differential equation construction. We employ the baseline version of MOEADD from the aforementioned paper with the following parameters: PBI penalty factor \(\theta=1.0\), probability of parent selection inside the sector neighbourhood \(\delta=0.9\) (the 4 nearest sectors are considered as "neighbouring"), with 40% of individuals selected as parents. Evolutionary operator parameters are: a crossover rate (probability of affecting individual terms) of 0.3 and a mutation rate of 0.6. The result of the algorithm is a set of equations, ranging from the most simplistic constructions (typically of the form \(\frac{\partial^{2}u}{\partial x_{i}^{2}}=0\)) to highly complex equations, where the extra terms probably represent the noise components of the dynamics.
## 3. Experimental Study
This section of the paper is dedicated to studying equation discovery framework properties. As the main object of interest, we designate the difference of derived equations between single- and multi-objective optimization launches. The validation was held on the synthetic datasets, where modelled dependent variable is obtained from solving an already known and studied equation.
The tests were held on three cases: the wave, Burgers and Korteweg-de Vries equations, due to the unique properties of each equation. The algorithms were tested in the following pattern: 64 evolutionary iterations for the single-objective optimization algorithm and 8 iterations of multi-objective optimization for populations of 8 candidate equations, which resulted in roughly similar resource consumption. 10 independent runs are conducted with each setup. The main equation quality indicator in our study is the statistical analysis of the objective function mean (\(\mu=\mu(Q(F^{\prime}))\)) and variance \(\sigma^{2}=(\sigma(Q(F^{\prime})))^{2}\) among the different launches.
The first equation was the wave equation as on Eq. 8 with the necessary boundary and initial conditions. The equation is solved with the Wolfram Mathematica software in the domain of \((x,t)\in[0,1]\times[0,1]\) on a grid of \(101\times 101\). Here, we have employed numerical differentiation procedures.
\[\frac{\partial^{2}u}{\partial t^{2}}=0.04\frac{\partial^{2}u}{\partial x^{2}} \tag{8}\]
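As an illustration of the numerical differentiation step, the sketch below builds derivative tensors on a \(101\times 101\) grid with finite differences and checks them against Eq. 8; the analytical field used here is an assumed example solution, not the one produced by Wolfram Mathematica for the paper.

```python
# Finite-difference derivative tensors via np.gradient on a 101 x 101 grid,
# checked against the wave equation u_tt = 0.04 * u_xx.
import numpy as np

t = np.linspace(0.0, 1.0, 101)
x = np.linspace(0.0, 1.0, 101)
T, X = np.meshgrid(t, x, indexing="ij")
u = np.sin(np.pi * X) * np.cos(0.2 * np.pi * T)   # an assumed solution of Eq. 8

u_t = np.gradient(u, t, axis=0)
u_tt = np.gradient(u_t, t, axis=0)
u_x = np.gradient(u, x, axis=1)
u_xx = np.gradient(u_x, x, axis=1)

residual = u_tt - 0.04 * u_xx
print(np.abs(residual)[5:-5, 5:-5].max())  # small compared to the individual terms
```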
The algorithm's convergence due to the relatively simple structure was ensured in the case of both algorithms: the algorithm proposes the correct structure during the initialization or in the initial epochs of the optimization. However, such a trivial case can be a decent indicator of the "ideal" algorithm behaviour. The values of examined metrics for this experiment and for the next ones are presented on Tab. 1.
The statistical analysis of the algorithm performance on each equation is provided in Fig. 1.
Another examination was performed on the solution of Burgers' equation, which has a more complex, non-linear structure. The problem was set as in Eq. 9, for a case of a process without viscosity, thus omitting the viscous term \(\nu\frac{\partial^{2}u}{\partial x^{2}}\). As in the previous example, the equation was solved with the Wolfram Mathematica toolkit.
\[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}=0 \tag{9}\]
Derivatives used during the equation search were computed analytically, because the solution is non-constant only on a small part of the domain.
The presence of other structures that have relatively low optimized function values, such as \(u_{x}^{\prime}u_{t}^{\prime}=u_{tt}^{\prime\prime}\), makes this case of data rather informative. Thus, the algorithm has a local optimum that is far from the correct structure from the point of error metric.
The final set-up for an experiment was defined with a non-homogeneous, Korteweg-de Vries equation, presented in Eq. 10. The presence of external tokens in separate terms in the equation makes the search more difficult.
\[\frac{\partial u}{\partial t}+6u\frac{\partial u}{\partial x}+\frac{\partial^ {3}u}{\partial x^{3}}=\cos t\sin t \tag{10}\]
The experiment results indicate that the algorithm may detect the same equation in multiple forms. Each term of the equation may be chosen as the "right-hand side" one, and the numerical error with different coefficient sets can also vary.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline metric & method & wave & Burgers & KdV \\ \hline \(\mu\) & single-objective & 5.72 & 2246.38 & 0.162 \\ & multi-objective & 2.03 & 1.515 & 16.128 \\ \(\sigma^{2}\) & single-objective & 18.57 & \(4.41*10^{7}\) & \(8.9*10^{-3}\) \\ & multi-objective & 0 & 20.66 & \(\approx 10^{-13}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Results of the equation discovery
## 4. Conclusion
This paper examines the prospects of using multi-objective optimization for the data-driven discovery of partial differential equations. While initially introduced for handling problems of deriving systems of partial differential equations, the multi-objective view of the problem improves the overall quality of the algorithm. The improved convergence, provided by higher candidate individual diversity, makes the process more reliable in cases of equations with complex structures, as was shown in the examples of Burgers' and Korteweg-de Vries equations.
The previous studies have indicated the algorithm's reliability, converging to the correct equation, while this research has proposed a method of improving the rate at which the correct structures are identified. This property is valuable for real-world applications because incorporating large and complete datasets improves the noise resistance of the approach.
The further development of the proposed method involves introducing techniques for incorporating expert knowledge into the search process. This concept can help generate preferable candidates or exclude infeasible ones even before costly coefficient calculation and fitness evaluation procedures.
## 5. Code and Data Availability
The numerical solution data and the Python scripts, that reproduce the experiments, are available at the GitHub repository 1.
Footnote 1: [https://github.com/TMO-NS-team/EDDE_GECCO_experiments](https://github.com/TMO-NS-team/EDDE_GECCO_experiments)
## Acknowledgements
This research is financially supported by the Ministry of Science and Higher Education, agreement FSER-2021-0012.
|
2304.01023 | Self-Supervised learning for Neural Architecture Search (NAS) | The objective of this internship is to propose an innovative method that uses
unlabelled data, i.e. data that will allow the AI to automatically learn to
predict the correct outcome. To reach this stage, the steps to be followed can
be defined as follows: (1) consult the state of the art and position ourself
against it, (2) come up with ideas for development paths, (3) implement these
ideas, (4) and finally test them to position ourself against the state of the
art, and then start the sequence again. During my internship, this sequence was
done several times and therefore gives the tracks explored during the
internship. | Samuel Ducros | 2023-04-03T14:21:42Z | http://arxiv.org/abs/2304.01023v1 | # Master Thesis Report
###### Abstract
The objective of this internship is to propose an innovative method that uses unlabelled data, i.e. data that will allow the AI to automatically learn to predict the correct outcome. To reach this stage, the steps to be followed can be defined as follows: (1) consult the state of the art and position ourselves against it, (2) come up with ideas for development paths, (3) implement these ideas, (4) and finally test them to position ourselves against the state of the art, and then start the sequence again. During my internship, this sequence was done several times, and this report therefore presents the tracks explored during the internship.
###### Contents
* 1 Introduction
* 1.1 Subject presentation
* 2 Subject development
* 2.1 Technologies
* 2.1.1 Python
* 2.1.2 Pytorch
* 2.1.3 OpenCV
* 2.2 State of the art
* 2.2.1 Semantic Segmentation
* 2.2.2 Semi-Supervised Learning
* 2.2.3 Transfer Learning
* 2.2.4 Multi-Task Learning
* 2.3 Comprehension and getting started with the code
* 2.3.1 Inpainting
* 2.3.2 Noise
* 2.3.3 Colorization
* 2.3.4 Jigsaw
* 2.4 Tuning the Hyper parameters
* 2.4.1 Normalization techniques
* 2.4.2 Losses
* 2.4.3 Learning rate / Optimizer
* 3 Train Protocol
* 3.1 mIoU (mean Intersection over Union)
## 1 Introduction
The topic of this internship is related to Self-Supervised Learning, with the main idea of finding innovative methods to train a neural network in order to make a step forward in this field. A major problem that constrains our research is the use of the smallest possible amount of annotated data to obtain good final results. The aim is to enable new AIs to understand their environment and task more efficiently and with the least amount of data possible, so that they become accessible to companies that do not have the billions of data available to Google for example.
The objective of this internship is to propose an innovative method that uses unlabelled data, i.e. data that will allow the AI to automatically learn to predict the correct outcome. To reach this stage, the steps to be followed can be defined as follows: (1) consult the state of the art and position ourself against it, (2) come up with ideas for development paths, (3) implement these ideas, (4) and finally test them to position ourself against the state of the art, and then start the sequence again. During my internship, this sequence was done several times and therefore gives the tracks explored during the internship.
The first sequence allowed me to get into the swing of things and to understand the subject with a large section on the state of the art. The idea that came out of it was first to speed up the execution of the code, to allow us to do tests more quickly, and at the same time to familiarise myself with the code.
After that, and apart from the (many) incompatible computer/connection issues, I wanted to better understand how the different hyperparameters played on the results rather than doing blind tests. And there are many, which allowed me to learn a lot about the role of learning rate, norms, connections between neurons, or losses.
### Subject presentation
The internship focuses mainly on object segmentation in an image, i.e. distinguishing the different shapes and groups of shapes in an image (see Semantic Segmentation). Important constraints are imposed on us:
* At the data level, when we have a dataset of images, it is very expensive to label all the images in order to train the neural network in a supervised way. We therefore want to take advantage of this dataset by keeping the major part unannotated.
* Still at the data level, it is difficult to find a dataset of real images that corresponds to our problem. However, it is easy to obtain a large number of synthetic images, extracted from a recent and realistic video game for example.
* At the hardware level, we have access to several GPUs and CPUs that will allow us to accelerate our training.
Considering these constraints and our goal, we base our work on the one hand on the research of innovative methods of Semi-Supervised Learning to take into account unlabeled data, and on the other hand on the Domain Adaptation, that is to say the fact of training our network for the segmentation of images on synthetic images, of which we have a lot of data.
This report focuses on the first part of the problem statement: research in Semi-Supervised Learning for taking unlabelled data into account. A major part concerns the state of the art around Semi-Supervised Learning techniques and around Semantic Segmentation.
## 2 Subject development
### 2.1 Technologies
The technologies presented below were used for all the work I was able to do. When I joined the team, the project had already been developed with the Python language and various libraries, such as Pytorch and OpenCV. So these are the technologies I continued with.
#### 2.1.1 Python
Python1 is the most popular language in the world of data analysis (data science) and artificial intelligence (machine learning, deep learning). This popularity is reflected in the large number of libraries that allow the manipulation of mathematical objects and concepts at a high level of abstraction. Among these, we can note NumPy, which allows manipulating matrices, Scikit-learn, which provides machine learning building blocks, and matplotlib for everything concerning visualization.
Footnote 1: [https://www.python.org](https://www.python.org)
As the language is simple to use, dynamically typed, and has no long compilation step, we can quickly iterate by changing different parameters and approaches. This is ideal, as the goal is not to have an optimized version of the work, since it is still at the research stage.
#### 2.1.2 Pytorch
PyTorch2 is an open source Python machine learning software library based on Torch developed by Facebook. PyTorch allows the tensor calculations necessary for deep learning to be performed. These calculations are optimized and carried out either by the processor (CPU) or, where possible, by a graphics processor (GPU) supporting CUDA. It comes from the research teams at Facebook, and before that from Ronan Collobert in Samy Bengio's team at IDIAP. PyTorch is derived from an earlier software, Torch, which was used with the Lua language. PyTorch is independent of Lua and is programmed in Python.
Footnote 2: [https://pytorch.org](https://pytorch.org)
#### 2.1.3 OpenCV
OpenCV3 is a library for image and video processing. It is a library written in C++ that also supports CUDA calls and offers a Python API to facilitate development. It includes many of the algorithms we use, such as object tracking within a video, image modification when preprocessing our data, or simply capturing or displaying video content to highlight our system's detections.
Footnote 3: [https://opencv.org](https://opencv.org)
### State of the art
A great part of my internship was to understand the relationships between all the possible components of a neural network, with all the parameters and hyperparameters. Each advance in understanding brought up a new questioning on another point. So a lot of research has been done. First of all, I had to understand the subject and research the state of the art in general on Deep Learning and Semantic Segmentation, which are the very first basis of the subject. More precisely, what we are interested in is to use as little annotated data as possible, and therefore we are interested in Self-Supervised Learning and Semi-Supervised Learning. Here are my research results concerning Semantic Segmentation, Self and Semi-Supervised Learning.
#### 2.2.1 Semantic Segmentation
Semantic segmentation, or image segmentation, is the task of clustering parts of an image together which belong to the same object class. It is a form of pixel-level prediction because each pixel in an image is classified according to a category. Some example benchmarks for this task are Cityscapes4 (see **Figure 1**), PASCAL VOC5 and ADE20K6. Models are usually evaluated with the Mean Intersection-Over-Union (mIoU, see Train Protocol).
Footnote 4: [https://www.cityscapes-dataset.com](https://www.cityscapes-dataset.com)
Footnote 5: [http://host.robots.ox.ac.uk/pascal/VOC/](http://host.robots.ox.ac.uk/pascal/VOC/)
Footnote 6: [https://groups.csail.mit.edu/vision/datasets/ADE20K/](https://groups.csail.mit.edu/vision/datasets/ADE20K/)
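As a concrete illustration of the mIoU metric mentioned above, here is a minimal sketch computing it from two label maps (the handling of classes absent from both maps is one possible convention among several):

```python
# Minimal sketch of mIoU: per-class intersection over union between a predicted
# and a ground-truth label map, averaged over classes.
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                  # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```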
The basic architecture in image segmentation consists of an encoder and a decoder (see **Figure 2**). The encoder extracts features from the image through filters. The decoder is responsible for generating the final output, which is usually a segmentation mask containing the outline of the object. Most segmentation networks follow this architecture or a variant of it.
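The following toy PyTorch model is a minimal sketch of this encoder-decoder scheme; the depth and channel sizes are arbitrary choices for illustration and do not correspond to the architecture used in the internship.

```python
# Toy encoder-decoder for segmentation: downsampling encoder, upsampling decoder.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=21):
        super().__init__()
        self.encoder = nn.Sequential(            # downsampling feature extractor
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # upsampling back to a per-pixel mask
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))     # logits: (N, num_classes, H, W)

logits = TinySegNet()(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 21, 128, 128])
```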
#### 2.2.2 Semi-Supervised Learning
Supervised learning usually requires a large amount of labelled data. Obtaining good quality labelled data is a costly and time-consuming task, especially for complex tasks such as object detection and instance segmentation, where more detailed annotations are desired. On the other hand, unlabelled data is readily available in abundance.
Semi-supervised learning (SSL), also known as learning with partially labelled data, refers to the process of learning a prediction function from labelled and unlabelled training samples. In this situation, the labelled instances are expected to be few in number, resulting in an inefficient supervised
Figure 1: Semantic segmentation of a scene from the Cityscapes dataset by Cordts et al. (2016) recorded in Zurich.
model, but the unlabelled training examples contain useful information about the prediction problem at hand, which can be exploited to produce an efficient prediction function. We assume that a collection of labelled training examples derived from a joint probability distribution and a collection of unlabelled training examples derived from the marginal distribution are both accessible in this case. The problem arises in supervised learning if the unlabelled data set is empty. The opposite extreme example is when the labelled training set is empty, in which case the problem is reduced to unsupervised learning.
**Smoothness** is a fundamental assumption of semi-supervised learning, which states that two instances in a high density region must have identical class labels. This means that if two points are part of the same group or cluster, their class labels will most likely be the same. On the other hand, if they are separated by a low density area, their desired labels should be different.
Suppose that the instances of the same class form a partition; the unlabelled training data could help to determine the partition boundary more efficiently than if only labelled training examples were used. Therefore, searching for partitions using a mixture model and then assigning class labels to groups using the labelled data they comprise is a technique for using unlabelled data to train a model. If two instances are in the same group, it is likely that they belong to the same class, according to the underlying assumption, known as the **cluster assumption**. This assumption can be explained as follows: if a group is created by a large number of instances, it is rare that they all belong to the same class. This does not imply that a class consists of only one group of instances, but rather that two instances of distinct classes are unlikely to be in the same group. If we consider the example partitions as high density regions, another version of the cluster assumption is that the decision boundary passes through low density regions, according to the previous smoothing assumption.
Density estimation is often based on a notion of distance, which may not make sense in high-dimensional vector spaces. To resolve this difficulty, a third assumption known as the **manifold assumption**, which is supported by a number of semi-supervised models, holds that instances in high-dimensional spaces exist on low-dimensional topological spaces that are locally Euclidean (or geometric manifolds).
Self-training is one of the first wraparound techniques for learning a supervised classifier using partially labelled data. A supervised method is first trained on the labelled training set, and its predictions are then used to assign pseudo-labels to a portion of the unlabelled training samples. The supervised classifier is then retrained on the augmented training set (labelled and pseudo-labelled), and the procedure is repeated until there are no unlabelled observations left to pseudo-label. Despite its simplicity, self-training is difficult to analyse in general. Some studies have proposed bounds on the error of majority-vote classifiers, used in the envelope, on unlabelled training data. When the
Figure 2: Example of architecture for image segmentation of road scenes [1]
majority-voting classifier makes most of its errors on low-density regions, this bound is shown to be tight [2; 3; 4; 5].
The unsupervised learning method we used to take into account the unlabelled data is called the Self-supervised Learning [7]. It consists in creating the input data and the target data from the unlabelled data to provide the supervision. This task could be as simple as given the upper-half of the image, predict the lower-half of the same image [8], or given the grayscale version of the colored image, predict the LAB channels (for the CIELAB Color Space [9]) of the same image [10].
In [11], the authors show the advantage of using self-supervised learning in the context of semi-supervised learning, by introducing the Self-Supervised Semi-Supervised Learning (S4L) framework to derive two new methods for semi-supervised image classification.
Lately, in natural language processing, Transformer models [12] have achieved a lot of success. Transformers like Bert [13] or T5 [14] applied the idea of self-supervision to NLP (Natural Language Processing) tasks. They first train the model with large unlabelled data and then fine-tuning the model with few labelled data examples.
On this basis, I further refine my research by orienting it towards Multi-Task Learning and Transfer Learning. The topic leads us to think about the use of unannotated data to allow the learning of the Segmentation task. The idea is indeed to train our network with unsupervised pretext-tasks in order to transfer knowledge for the learning of our target task.
Figure 4: Illustration of self-supervised learning by solving jigsaw puzzle (Source: [15])
Figure 3: SSL toy example. The decision boundaries obtained on two-moons dataset, with a supervised and different SSL approaches using 6 labeled examples, 3 for each class, and the rest of the points as unlabeled data. (Source: [6])
#### 2.2.3 Transfer Learning
Transfer learning [16] is one of the research fields in machine learning that aims to transfer knowledge from one or more source tasks to one or more target tasks. It can be seen as the ability of a system to recognize and apply knowledge and skills, learned from previous tasks, to new tasks or domains sharing similarities.
#### 2.2.4 Multi-Task Learning
Multi-task learning (MTL) [17] is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately. Early versions of MTL were called "hints".
In a widely cited 1997 paper, Rich Caruana gave the following characterization:
_"Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better."_
Figure 5: Intuitive examples about transfer learning. (Source: [16])
Figure 6: Architecture for TCDCN [18]. The base feature extractor is made of a series of convolutional layers which are shared between all tasks, and the extracted features are used as input to task-specific output heads. (Source: [17])
Multi-task learning works because regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is if the tasks share significant commonalities and are generally slightly under sampled.
### Comprehension and getting started with the code
The first code I did on this project was mainly optimization. Indeed, as it was, the code allowed to launch experiments using only one pretext-task among Inpainting and Denoising, with 3 supervised and 3 unsupervised images. By understanding more in depth each part of the code, I could find pieces of code repeated at each iteration unnecessarily, or (and especially) data stored unnecessarily or which accumulated. From a program that took about 10 days to run with a configuration C that accepted at most 3 supervised and 3 unsupervised images, we went to a program that runs for 3 days that accepts up to 8 supervised and 8 unsupervised images for the same configuration C. At the same time, I rewrote some of the code to make it more automatic and general, and to anticipate future improvements, for example with for loops on pretext-tasks instead of the if statement, to generalise all pretext-task possibilities.
Taking these optimizations as a base, I then improved the code so that it can handle several pretext-tasks at the same time during training. The input images thus undergo different transformations depending on the pretext-task, as sketched below.
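To give a concrete picture of this structure, here is a minimal NumPy sketch (not the project's actual code; the function names, tile/mask sizes and noise level are illustrative assumptions) of a single loop over configured pretext-tasks, each applying its own transformation to the input image:

```python
import numpy as np

image = np.random.rand(3, 128, 128)  # dummy RGB image (channels-first), stand-in for a real sample

def mask_square(img, size=32):
    """Inpainting input: erase a square region from the ground-truth image."""
    out = img.copy()
    out[:, :size, :size] = 0.0
    return out

def add_noise(img, sigma=0.1):
    """Denoising input: add synthetic noise to the ground-truth image."""
    return img + sigma * np.random.randn(*img.shape)

def to_grayscale(img):
    """Colorization input: average the color channels to obtain a grayscale image."""
    return img.mean(axis=0, keepdims=True)

# One loop over the configured pretext-tasks replaces per-task if statements.
pretext_tasks = {"inpainting": mask_square, "denoising": add_noise, "colorization": to_grayscale}
for name, corrupt in pretext_tasks.items():
    corrupted_input = corrupt(image)  # fed to the shared encoder and the task-specific decoder head
```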
#### 2.3.1 Inpainting
Existing works for image inpainting [8] can be mainly divided into two groups. The first group represents traditional diffusion-based or patch-based methods with low-level features. The second group, that we use, attempts to solve the inpainting problem by a learning-based approach, e.g. training deep convolutional neural networks to predict pixels for the missing regions.
As described above, the network that learns the inpainting task consists of an encoder shared with the other tasks and a decoder head of its own. The input image is the ground-truth image from which we erase a square. After passing through the encoder and the decoder, the prediction is compared with the ground-truth image using the Mean Squared Error loss (see Losses).
Figure 7: Example inpainting results of the method of [8] on images of natural scene, face and texture. Missing regions are shown in white. In each pair, the left is input image and right is the direct output of the trained generative neural networks without any post-processing. (Source: [8])
#### 2.3.2 Noise
Many different denoising methods exist [19]. From filters to deep learning, much research has been done in this area. Here we use deep learning and, as for the inpainting task, we build an encoder network and a decoder network specific to the denoising task. The input image is the ground truth to which we add synthetic noise. After passing through the network, the prediction is compared to this ground truth using the Mean Squared Error loss (see Losses).
#### 2.3.3 Colorization
Colorization [10] is the process of adding plausible color information to monochrome photographs or videos. As with the denoising problem, there are several ways to solve it [22]. Here we train the colorization task with our network: as before, a decoder head is specific to the colorization task. To learn it in an unsupervised way, we give as input a colored image that has been converted to grayscale. The predicted colors are compared with the ground truth using the Cross-Entropy loss (see Losses).
#### 2.3.4 Jigsaw
The Jigsaw problem [15] consists in solving a puzzle made from an image, i.e., finding the original order of the different tiles extracted from the image (see **Figure 9**). Its decoder head is of the same type as that of a classification problem, because the goal is to correctly classify the position of every part of the puzzle.
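As a rough illustration of how a jigsaw training sample can be built (the grid size, tile size and use of a fully random permutation are assumptions, not the exact setup of [15]):

```python
import numpy as np

def make_jigsaw_sample(img, grid=3, tile=32):
    """Split the image into grid x grid tiles, shuffle them, and keep the permutation
    as the classification target (the decoder head predicts one class per tile)."""
    tiles = [img[:, i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
             for i in range(grid) for j in range(grid)]
    perm = np.random.permutation(grid * grid)      # ground-truth order to recover
    shuffled = np.stack([tiles[k] for k in perm])  # network input: the shuffled tiles
    return shuffled, perm

img = np.random.rand(3, 96, 96)                    # dummy image of size 3 x (3*32) x (3*32)
tiles, labels = make_jigsaw_sample(img)            # labels are used with the cross-entropy loss
```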
### Tuning the Hyperparameters
Now that we have these new features and the code works well and is optimized, we want to run tests and get the best possible results. But before that, we have to choose the right hyperparameters to test. It took a lot of research to understand their impact and to know the different possible arrangements. Here I summarize my research results on the main hyperparameters.
Figure 8: Denoised images of the real noisy image Nikon D800 ISO 6400 1 [20] by different methods (Source: [21])
#### 2.4.1 Normalization techniques
Normalization techniques can decrease a model's training time by a large factor. They normalize each feature so that the contribution of every feature is maintained, since some features have higher numerical values than others; this way the network is not biased towards higher-valued features. Batch Norm, for example, makes the loss surface smoother (i.e., it bounds the magnitude of the gradients much more tightly) [23]. It speeds up optimization because normalization does not allow the weights to explode and restricts them to a certain range. A last, unintended benefit of normalization is that it slightly helps the network with regularization. Getting normalization right can therefore be a crucial factor in training a model effectively.
Let's dive into details of each normalization technique one by one.
* **Batch Normalization:** Batch normalization [24] is a method that normalizes activations in a network across a mini-batch of definite size. For each feature, batch normalization computes the mean and variance of that feature over the mini-batch. It then subtracts the mean and divides the feature by its mini-batch standard deviation. We can add \(\gamma\) and \(\beta\) as learnable scale and shift parameters, respectively, in order to recover a larger magnitude of the weights if necessary. This can be summarized as follows (a small code sketch is given at the end of this subsection). Let \(\mathcal{B}=\{x_{1..m}\}\) be the mini-batch containing the features of each datum of the batch, let \(\gamma\) and \(\beta\) be two parameters to learn, and let \(\epsilon\) be a stability constant: \[\mu_{\mathcal{B}} =\frac{1}{m}\sum_{i=1}^{m}x_{i}\] \[\sigma_{\mathcal{B}}^{2} =\frac{1}{m}\sum_{i=1}^{m}(x_{i}-\mu_{\mathcal{B}})^{2}\] \[\hat{x}_{i} =\frac{x_{i}-\mu_{\mathcal{B}}}{\sqrt{\sigma_{\mathcal{B}}^{2}+ \epsilon}}\] \[BN_{\gamma,\beta}(x_{i}) =\gamma\hat{x}_{i}+\beta\]
Figure 9: Learning image representations by solving Jigsaw puzzles. (a) The image from which the tiles (marked with green lines) are extracted. (b) A puzzle obtained by shuffling the tiles. Some tiles might be directly identifiable as object parts, but others are ambiguous (e.g., have similar patterns) and their identification is much more reliable when all tiles are jointly evaluated. In contrast, with reference to (c), determining the relative position between the central tile and the top two tiles from the left can be very challenging. (Source: [15])
* **Layer Normalization:** Layer Normalization differs from Batch Normalization in that it normalizes the input across the features instead of normalizing the input features across the batch dimension.
* **Instance (or Contrast) Normalization:** Layer normalization and Instance normalization are very similar to each other, but the difference between them is that Instance normalization normalizes across each channel in each training example instead of normalizing across the input features of a training example.
* **Group Normalization:** Group Normalization normalizes over group of channels for each training example.
We summarize these norms in the schema in **Figure 10** [25]. For the project we mainly employed **Switchable Normalization (SN)** [26], which is a normalization method that uses a weighted average of different mean and variance statistics from batch normalization, instance normalization, and layer normalization. Switchable Normalization can outperform batch normalization on tasks such as image classification and object detection. [26] shows that instance normalization is used more often in earlier layers, batch normalization is preferred in the middle, and layer normalization is used more often in the last layers. Smaller batch sizes lead to a preference towards layer normalization and instance normalization.
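To make the batch-normalization equations above concrete, here is a minimal NumPy sketch (training-mode statistics only; the running averages used at inference time are omitted). The other normalizations discussed here differ only in the axes over which the mean and variance are computed:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """x has shape (m, n_features): one row per element of the mini-batch."""
    mu = x.mean(axis=0)                    # mini-batch mean per feature
    var = x.var(axis=0)                    # mini-batch variance per feature
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize each feature
    return gamma * x_hat + beta            # learnable scale and shift

x = np.random.randn(16, 8)                               # dummy mini-batch: 16 examples, 8 features
out = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
```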
#### 2.4.2 Losses
In the context of an optimization algorithm, the function used to evaluate a candidate solution (i.e. a set of weights) is referred to as the objective function. We may seek to maximize or minimize the objective function, meaning that we are searching for a candidate solution that has the highest or lowest score respectively. Typically, with neural networks, we seek to minimize the error. As such, the objective function is often referred to as a cost function or a loss function and the value calculated by the loss function is referred to as simply "loss."
The cost function reduces all the various good and bad aspects of a possibly complex system down to a single number, a scalar value, which allows candidate solutions to be ranked and compared. In calculating the error of the model during the optimization process, a loss function must be chosen. This can be a challenging problem as the function must capture the properties of the problem and be motivated by concerns that are important to the project and stakeholders. It is important, therefore, that the function faithfully represent our design goals. If we choose a poor error function and obtain unsatisfactory results, the fault is ours for badly specifying the goal of the search.
Here are the different loss functions we use for our project, depending on the task that is trained.
Figure 10: Normalization methods. Each subplot shows a feature map tensor, with N as the batch axis, C as the channel axis, and (H, W ) as the spatial axes. The pixels in blue are normalized by the same mean and variance, computed by aggregating the values of these pixels. (Source: [25])
* **Cross-Entropy:** When modeling a classification problem where we are interested in mapping input variables to a class label, we can model the problem as predicting the probability of an example belonging to each class. In a binary classification problem, there would be two classes, so we may predict the probability of the example belonging to the first class. In the case of multiple-class classification, we can predict a probability for the example belonging to each of the classes. In the training dataset, the probability of an example belonging to a given class is 1 or 0, as each sample in the training dataset is a known example from the domain: we know the answer. Therefore, under maximum likelihood estimation, we seek a set of model weights that minimizes the difference between the model's predicted probability distribution given the dataset and the distribution of probabilities in the training dataset. This is called the cross-entropy. For the project, the cross-entropy is used as the loss function for tasks such as Jigsaw, Colorization and Semantic Segmentation.
* **Mean Squared Error (MSE):** Mean squared error (MSE) is the most commonly used loss function for regression. The loss is the mean, over seen data, of the squared differences between true and predicted values. MSE is sensitive to outliers, and given several examples with the same input feature values, the optimal prediction is their mean target value. MSE is thus good to use if you believe that your target data, conditioned on the input, is normally distributed around a mean value, and when it is important to penalize outliers heavily. We use MSE when doing regression, when we believe that the target, conditioned on the input, is normally distributed, and when we want large errors to be penalized significantly (quadratically) more than small ones. For the project, MSE is therefore used as the loss function for tasks such as Inpainting and Denoising.
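For illustration, a small sketch of these two loss calls, assuming PyTorch is used (the tensor shapes and the 21-class setting are only examples):

```python
import torch
import torch.nn.functional as F

# MSE for reconstruction heads (Inpainting, Denoising): prediction and target are images.
pred_img = torch.rand(4, 3, 64, 64)
target_img = torch.rand(4, 3, 64, 64)
mse = F.mse_loss(pred_img, target_img)

# Cross-entropy for classification-style heads (Jigsaw, Colorization, Segmentation):
# per-pixel class scores against integer class labels (21 classes as in Pascal VOC).
logits = torch.randn(4, 21, 64, 64)
labels = torch.randint(0, 21, (4, 64, 64))
ce = F.cross_entropy(logits, labels)
```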
#### 2.4.3 Learning rate / Optimizer
The learning rate [27] is a hyperparameter that controls how much to change the model in response to the estimated error each time the model weights are updated. Choosing the learning rate is challenging as a value too small may result in a long training process that could get stuck, whereas a value too large may result in learning a sub-optimal set of weights too fast or an unstable training process (see **Figure 11**).
The stochastic gradient descent (SGD) algorithm is a common optimizer algorithm for the training. SGD is an optimization algorithm that estimates the error gradient for the current state of the model using examples from the training dataset, then updates the weights of the model using the
Figure 11: Consequences of the learning rate (Source: [27])
back-propagation of errors algorithm, referred to simply as backpropagation. Other optimization algorithms, such as Adam [28], can also be used.
The learning rate controls how quickly the model is adapted to the problem. Smaller learning rates require more training epochs given the smaller changes made to the weights each update, whereas larger learning rates result in rapid changes and require fewer training epochs. A learning rate that is too large can cause the model to converge too quickly to a suboptimal solution, whereas a learning rate that is too small can cause the process to get stuck. The challenge of training deep learning neural networks involves carefully selecting the learning rate. It may be the most important hyperparameter for the model.
To best choose the learning rate, we use a learning rate schedule, which changes the learning rate during training, most often between epochs/iterations. This is mainly done with two parameters: decay and momentum. Decay serves to settle the learning in a nice place and avoid oscillations, a situation that may arise when a too-high constant learning rate makes the learning jump back and forth over a minimum; it is controlled by a hyperparameter. Momentum is analogous to a ball rolling down a hill; we want the ball to settle at the lowest point of the hill (corresponding to the lowest error). Momentum both speeds up the learning (increasing the learning rate) when the error cost gradient is heading in the same direction for a long time and also avoids local minima by 'rolling over' small bumps. Momentum is controlled by a hyperparameter analogous to the ball's mass, which must be chosen manually: too high and the ball will roll over minima we wish to find, too low and it will not fulfil its purpose.
For our project we use mainly the SGD optimization algorithm.
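A minimal sketch of an SGD update with momentum and a step-wise learning-rate decay (the decay values, step size and momentum coefficient are illustrative, not the project's settings):

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr, momentum=0.9):
    """One parameter update: the velocity accumulates past gradients (the 'ball' analogy)."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def decayed_lr(base_lr, epoch, decay=0.1, step=30):
    """Step-wise schedule: multiply the learning rate by `decay` every `step` epochs."""
    return base_lr * (decay ** (epoch // step))

w, v = np.zeros(10), np.zeros(10)
for epoch in range(3):
    grad = np.random.randn(10)                         # stand-in for a backpropagated gradient
    w, v = sgd_momentum_step(w, grad, v, lr=decayed_lr(0.01, epoch))
```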
## 3 - Train Protocol
Finally we have a functional code and we understand the role of hyperparameters. Now it's time to launch tests. Our goal is to compare ourselves to the state of the art regarding the segmentation results, and to make our method exceed these results. Let's explain how we compare.
### mIoU (mean Intersection over Union)
There are several neural network models working on different platforms, and different approaches for object detection and semantic image segmentation, so we need to know how to choose one among them in order to obtain better results in our field. There has to be a criterion on which such a decision can be based. The best one is to check the degree of similarity between the output produced by such methods and the ground truth, which can be done mathematically by computing the IoU (Intersection over Union) between the two. This metric takes into account the region common to both the ground truth and the predicted output and computes to what percentage it is similar to the actual one.
It is quite simple in the case of single-class semantic image segmentation, but not for multiple-class semantic image segmentation, as in the Pascal VOC challenge (with 21 classes), where there can be objects belonging to different classes in the same image. In such cases each object has to be given a different label and has to be treated accordingly during the IoU computation. A method which can take such cases into account and find the overall IoU for the multiple classes present in an image is to compute the mean value of the IoUs corresponding to the different classes, which matches the actual degree of similarity. This mean value is referred to as the mean IoU (mIoU).
Here is a common approach for the computation of the mIoU, which we use in our code. For that we need the labelled matrices of both the predicted result and the expected one (ground truth). Let \(GT\) and \(Pred\) be two matrices, one representing the actual segmented output and the other the prediction of any neural network or model. The elements of these matrices are the labels representing the different classes to which the pixels at that particular location in the image belong. The steps to follow are then:
* Finding out the frequency count of each class for both matrices, \(F_{GT}\) and \(F_{Pred}\),
* Converting the matrices to 1D format, \(GT_{1D}\) and \(Pred_{1D}\)
* Finding out the category array: for each pixel, its category number combines the ground-truth and predicted labels, \(Categ=(nb_{classes}\times GT_{1D})+Pred_{1D}\)
* Constructing the confusion matrix. A confusion matrix is a \((nb_{classes}\times nb_{classes})\) matrix which stores the number of pixels belonging to each category. The frequency count of the category array gives a linear array which, reshaped to \((nb_{classes}\times nb_{classes})\), gives us the confusion matrix: \(CM=bincount(Categ).reshape((nb_{classes},nb_{classes}))\)
* Calculating the intersection and union for individual classes. The diagonal of the confusion matrix represents the common region, so these elements are the intersection values between the predicted output and the ground truth: \(I=diag(CM)\), \(U=F_{GT}+F_{Pred}-I\)
* Calculating the mIoU for the actual-predicted pair. \(IoU=\dfrac{I}{U}\) is a vector of size \(nb_{classes}\), and \(mIoU=mean(IoU)\) is the mean over all values of \(IoU\).
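A compact NumPy version of the steps above (a sketch only; pixels of an "ignore" class, if any, and classes absent from both matrices are not handled specially):

```python
import numpy as np

def mean_iou(gt, pred, nb_classes):
    gt_1d, pred_1d = gt.reshape(-1), pred.reshape(-1)        # flatten to 1D label arrays
    categ = nb_classes * gt_1d + pred_1d                     # category index of each pixel
    cm = np.bincount(categ, minlength=nb_classes ** 2).reshape(nb_classes, nb_classes)
    intersection = np.diag(cm)                               # pixels labelled c in both GT and Pred
    union = cm.sum(axis=1) + cm.sum(axis=0) - intersection   # F_GT + F_Pred - I, per class
    iou = intersection / np.maximum(union, 1)                # avoid division by zero
    return iou.mean()

gt = np.random.randint(0, 21, (64, 64))       # dummy ground truth and prediction with 21 classes
pred = np.random.randint(0, 21, (64, 64))
print(mean_iou(gt, pred, nb_classes=21))
```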
In the case of multiple classes, the mIoU has to be calculated rather than just computing a single IoU that treats all the different classes as one. So, considering all the classes, the mIoU has to be calculated for validation.
There are several other methods to measure the similarity between an actual image and a predicted result, the most popular among them being the bounding box method [29]; but for instances involving fine edge detection along with segmentation, where higher accuracy is required, the mIoU method proves to be more reliable. |
2301.04785 | Phase-shifted Adversarial Training | Adversarial training has been considered an imperative component for safely
deploying neural network-based applications to the real world. To achieve
stronger robustness, existing methods primarily focus on how to generate strong
attacks by increasing the number of update steps, regularizing the models with
the smoothed loss function, and injecting the randomness into the attack.
Instead, we analyze the behavior of adversarial training through the lens of
response frequency. We empirically discover that adversarial training causes
neural networks to have low convergence to high-frequency information,
resulting in highly oscillated predictions near each data. To learn
high-frequency contents efficiently and effectively, we first prove that a
universal phenomenon of frequency principle, i.e., \textit{lower frequencies
are learned first}, still holds in adversarial training. Based on that, we
propose phase-shifted adversarial training (PhaseAT) in which the model learns
high-frequency components by shifting these frequencies to the low-frequency
range where the fast convergence occurs. For evaluations, we conduct the
experiments on CIFAR-10 and ImageNet with the adaptive attack carefully
designed for reliable evaluation. Comprehensive results show that PhaseAT
significantly improves the convergence for high-frequency information. This
results in improved adversarial robustness by enabling the model to have
smoothed predictions near each data. | Yeachan Kim, Seongyeon Kim, Ihyeok Seo, Bonggun Shin | 2023-01-12T02:25:22Z | http://arxiv.org/abs/2301.04785v3 | # Phase-shifted Adversarial Training
###### Abstract
Adversarial training has been considered an imperative component for safely deploying neural network-based applications to the real world. To achieve stronger robustness, existing methods primarily focus on how to generate strong attacks by increasing the number of update steps, regularizing the models with the smoothed loss function, and injecting the randomness into the attack. Instead, we analyze the behavior of adversarial training through the lens of response frequency. We empirically discover that adversarial training causes neural networks to have low convergence to high-frequency information, resulting in highly oscillated predictions near each data. To learn high-frequency contents efficiently and effectively, we first prove that a universal phenomenon of frequency principle, i.e., _lower frequencies are learned first_, still holds in adversarial training. Based on that, we propose phase-shifted adversarial training (PhaseAT) in which the model learns high-frequency components by shifting these frequencies to the low-frequency range where the fast convergence occurs. For evaluations, we conduct the experiments on CIFAR-10 and ImageNet with the adaptive attack carefully designed for reliable evaluation. Comprehensive results show that PhaseAT significantly improves the convergence for high-frequency information. This results in improved adversarial robustness by enabling the model to have smoothed predictions near each data.
## 1 Introduction
Despite the remarkable success, deep neural networks are known to be susceptible to crafted imperceptible noise called _adversarial attacks_(Szegedy et al., 2013), which can have severe consequences when deployed in critical applications such as self-driving cars, medical diagnosis, and surveillance systems. In response to such negative implications, there has been a recent surge in research aimed at preventing adversarial attacks, such as adversarial training (Madry et al., 2018; Wong et al., 2020; Sriramanan et al., 2020; Gupta et al., 2021), data augmentation (Gong et al., 2021; Wang et al., 2021; Rebuffi et al., 2021), and regularization (Qin et al., 2019). Adversarial training is considered one of the most effective ways to achieve adversarial robustness. The early attempt generates the attack by only a single gradient descent on the given input, which is known as fast gradient sign method (FGSM) (Goodfellow et al., 2015). However, it was later shown to be ineffective against strong multiple-step attacks (Kurakin et al., 2016), e.g., projected gradient descent (PGD). Hence a myriad of defense strategies are introduced to build the robust models against the strong attacks by injecting the regularization to the perturbations (Sriramanan et al., 2020, 2021), randomly initializing the perturbations (Wong et al., 2020; Tramer and Boneh, 2019), and increasing the number of update steps for the perturbation to approximate the strong attack (Madry et al., 2018; Zhang et al., 2019).
Instead of introducing new attack and defense strategies in this work, we analyze adversarial training through the lens of the frequency of the general mapping between inputs and outputs (e.g., neural networks). For this purpose, we calculate the errors of the frequency components between the dataset
Figure 1: Errors of frequency components (high, low) between the training dataset and neural networks. Here, we use CIFAR-10 dataset for validation.
and neural networks to observe how the networks converge to the mapping function of the training dataset in terms of the frequencies. In comparison to standard training (blue dots in Figure 1), we find that adding adversarial examples (red inverted triangles in Figure 1) causes the model to slowly converge in the case of high-frequency components (Figure 1(b)). 1 This indicates that adversarial robustness comes at significantly increased training time compared to standard training.
Footnote 1: We use the filtering method (Xu et al., 2020) which explicitly splits the frequency spectrum into the high and low components and calculates the frequencies by applying the Fourier transform of the Gaussian function (please refer to Section A in Supplementary material for more detailed information)
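As a toy illustration of this kind of frequency-domain error measurement, here is a 1-D NumPy sketch using a Gaussian low-pass mask and its complement as the high-pass mask (the mask width and the relative-error normalization are assumptions; this is not the exact procedure of Xu et al. (2020)):

```python
import numpy as np

def frequency_errors(pred, target, width=0.05):
    """Split both signals into low/high-frequency parts with a Gaussian mask in Fourier
    space and return the relative error of each part."""
    freqs = np.fft.fftfreq(len(target))
    low_mask = np.exp(-(freqs ** 2) / (2 * width ** 2))   # low-pass; (1 - mask) is the high-pass
    def filt(x, mask):
        return np.fft.ifft(np.fft.fft(x) * mask).real
    errors = {}
    for name, mask in [("low", low_mask), ("high", 1.0 - low_mask)]:
        diff = np.linalg.norm(filt(pred, mask) - filt(target, mask))
        errors[name] = diff / (np.linalg.norm(filt(target, mask)) + 1e-12)
    return errors

t = np.linspace(0, 1, 256, endpoint=False)
target = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)  # low + high frequency content
pred = np.sin(2 * np.pi * 3 * t)                                       # a model fitting only the low part
print(frequency_errors(pred, target))
```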
To learn high-frequency contents of the dataset efficiently, phase shift deep neural networks (PhaseDNN) (Cai, Li, and Liu, 2020) is proposed based on a universal phenomenon of frequency principle (F-principle) (Xu, Zhang, and Xiao, 2019; Xu et al., 2020; Rahaman et al., 2019; Luo et al., 2021), that is, _deep neural networks often fit target functions from low to high frequencies during the training_. PhaseDNN learns high-frequency contents by shifting these frequencies to the range of low-frequency to exploit the faster convergence in the low-frequencies of the DNN. However, it is challenging to apply PhaseDNN to adversarial training because PhaseDNN was optimized to solve a single dimensional data (e.g., electromagnetic wave propagation, seismic waves).
In this work, we propose phase-shifted adversarial training (PhaseAT) to achieve adversarial robustness in an efficient and effective manner. To this end, we theoretically prove that the F-principle holds not only in standard training but also in adversarial training. We then extend the phaseDNN to adopt high-dimensional data and learn the adversarial data effectively. In summary, our contributions include the following:
* We provide a novel perspective on adversarial training by analyzing its loss in the frequency domain.
* We present a mathematical foundation for how adversarial training behaves by considerably extending a universal phenomenon of frequency principle.
* We propose a new phase-shifted adversarial training algorithm based on our proposed theory, outperforming other strong baselines by large margin in many different settings.
## 2 Background of Adversarial Training
Adversarial training is a method for learning networks that are robust to adversarial attacks. Given a network \(\mathcal{T}\) parameterized by \(\theta\), a dataset \(\{x_{j},y_{j}\}_{j=0}^{N-1}\) where \(N\) is the size of dataset, a loss function \(\ell\) and a threat model \(\Delta\), the learning problem can be cast to the following robust optimization problem.
\[\min_{\theta}\sum_{j=0}^{N-1}\max_{\delta\in\Delta}\ell(\mathcal{T}(x_{j}+ \delta),y_{j}) \tag{2.1}\]
where \(\delta\) is the adversarial perturbation. As we consider the adversarial robustness against \(L_{\infty}\)-constrained adversaries, the threat model \(\Delta\) takes the perturbation \(\delta\) such that \(\|\delta\|_{\infty}\leq\epsilon\) for some \(\epsilon>0\). For adversarial training, it is common to use an adversarial attack to approximate the inner maximization over \(\Delta\), followed by some variation of gradient descent on the model parameters \(\theta\). For example, the FGSM attack (Goodfellow, Shlens, and Szegedy, 2015) uses the following approximation.
\[\delta=\epsilon\cdot\text{sign}(\nabla_{x}\ell(\mathcal{T}(x_{j}),y_{j})) \tag{2.2}\]
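A minimal PyTorch sketch of this FGSM approximation (cross-entropy is assumed as the loss \(\ell\), and clipping the perturbed input to a valid image range is omitted; the dummy model and data are only for illustration):

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, epsilon):
    """delta = epsilon * sign(grad_x loss), as in Eq. (2.2)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return epsilon * x.grad.sign()

# Usage with a dummy model: adversarial training then minimizes the loss on x + delta.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
delta = fgsm_perturbation(model, x, y, epsilon=8 / 255)
```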
## 3 F-principle in Adversarial Training
This section is devoted to theoretically demonstrate the F-principle in adversarial training as well as standard training (i.e., \(\delta=0\)). We first represent the total loss \(L(\theta)\) in the frequency domain and then quantify the rate of change of \(L(\theta)\) contributed by high frequencies. We provide a detailed proof in the supplementary material.
### The total loss in the frequency domain
Given a dataset \(\{x_{j},y_{j}\}_{j=0}^{N-1}\), \(\mathcal{T}_{\theta}\) is the DNN output2 and \(g(x)\) is the target function (also known as labeling function) such that \(g(x_{j})=y_{j}\). Then the total loss \(L\) is generally defined by
Footnote 2: Here we write \(\mathcal{T}_{\theta}\) instead of \(\mathcal{T}\) to explicitly denote the dependency on parameter \(\theta\).
\[L(\theta)=\frac{1}{N}\sum_{j=0}^{N-1}\ell(\mathcal{T}_{\theta},g)(x_{j}).\]
Here, for example, \(\ell(\mathcal{T}_{\theta},g)(x)=\|\mathcal{T}_{\theta}(x)-g(x)\|^{2}\) for mean-squared error loss function, and for cross-entropy loss function \(\ell(\mathcal{T}_{\theta},g)(x)=-g(x)\cdot\log\mathcal{T}_{\theta}(x)\) where the log function acts on each vector component of \(\mathcal{T}_{\theta}\). In adversarial training, we define an _adversarial function_\(\mathcal{A}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) by \(\mathcal{A}(x)=x+\delta(x)\) with the adversarial perturbation \(\delta\) and the corresponding output is given by \(\mathcal{T}_{\theta}\circ\mathcal{A}\). In reality, it may be considered that \(\mathcal{T}_{\theta}\) and \(g\) are bounded in a compact domain containing \(\{x_{j}\}_{j=0}^{N-1}\). Then for the two common examples, \(\ell(\mathcal{T}_{\theta}\circ\mathcal{A},g)\) is absolutely integrable and \(\ell(\mathcal{T}_{\theta}\circ\mathcal{A},g)\) is differentiable with respect to the first argument. In this regard, these properties are considered to be possessed by a loss function generally.
**Theorem 3.1**.: _Let \(\widehat{f}\) denote the Fourier transform of \(f\). Then, in the frequency domain_
\[L(\theta)=\lim_{\varepsilon\to 0}\frac{1}{N}\sum_{j=0}^{N-1}\int_{\mathbb{R}^{d}}e^{2\pi ix_{j}\cdot\xi}e^{-\pi\|\varepsilon\xi\|^{2}}\widehat{\ell(\mathcal{T}_{\theta}\circ\mathcal{A},g)}(\xi)\,d\xi. \tag{3.1}\]
This representation is inspired by a convergence property, \(G_{\varepsilon}*L(\theta)\to L(\theta)\), of convolution with approximate identities \(G_{\varepsilon}(x):=\varepsilon^{-d}e^{-\pi\|\varepsilon^{-1}x\|^{2}}\). See Lemma D.1 in the supplementary material.
Now we split the integral in Eq. 3.1 into the region \(\|\xi\|\leq\eta\) and \(\|\xi\|\geq\eta\) for any \(\eta>0\) to separate the total loss into
two parts \(L_{\leq}(\theta,\eta)\) and \(L_{\geq}(\theta,\eta)\) contributed by low and high frequencies, respectively:
\[L(\theta)=\lim_{\varepsilon\to 0}\big{(}L_{\leq}(\theta,\eta)+L_{\geq}(\theta, \eta)\big{)}. \tag{3.2}\]
### Quantifying the rate of change in total loss
Let \(W^{s,\infty}(\mathbb{R}^{d})\) denote the Sobolev spaces3 which consist of bounded functions having bounded derivatives up to a given order \(s\), and thus the index \(s\) reflects the degree of regularity.
Footnote 3: For a vector-valued \(f\), we also write \(f\in W^{s,\infty}(\mathbb{R}^{d})\) to represent all of its component functions are contained in the space.
**Theorem 3.2**.: _Consider a DNN with multi-layer in adversarial training. If an activation function \(\sigma\) and the target function \(g\) satisfy \(\sigma\in W^{s,\infty}(\mathbb{R})\) and \(g\in W^{2s,\infty}(\mathbb{R}^{d})\) for some \(s\in\mathbb{N}\), then there exists a constant \(C>0\) independent of \(s,\eta\) such that for any \(\eta>0\)_
\[\|\nabla_{\theta}L(\theta)-\nabla_{\theta}L_{\leq}(\theta,\eta)\|\] \[\approx\|\nabla_{\theta}L_{\geq}(\theta,\eta)\|\leq C\max\{N,d^{ d}\}\eta^{-2s}. \tag{3.3}\]
Theorem 3.2 implies that the rate of decrease in the corresponding high-frequency region loss (\(L_{\geq}(\theta,\eta)\)) is much greater than the rate of increase in \(\eta\). In other words, a model tends to fit target functions from low to high frequency information during the adversarial training. The mathematical insight of this theorem is that the regularity of a network converts into the decay of the total loss in the frequency domain. Specifically, a network with a more regular activation function (bigger s) converges more slowly, according to \(\eta^{-2s}\) in Eq. 3.3. As the frequency increases, this becomes more evident. For example, if \(\sigma\) is ReLU or eLu then \(s=1\) or \(s=2\), respectively. For \(\tanh\) and sigmoid activation functions, \(s\) may be taken arbitrarily large.
The approximation in Eq. 3.3 becomes more accurate as \(\varepsilon\) diminishes in Eq. 3.2, and \(\varepsilon=\min\{1/\sqrt[4]{N},1/d\}\) is inversely proportional to the size of dataset (\(N\)) or the dimension of input data (\(d\)). Therefore, when \(N\) or \(d\) are large, \(\varepsilon\) decreases. This is a common phenomenon in real-world datasets, which are typically high-dimensional and contain a large number of data samples (e.g., images and languages).
Our innovative theory differs from its predecessors(Xu et al., 2020; Luo et al., 2021) in two ways. First, we generalize F-principle by showing that it holds for the cross-entropy loss. Second, we provide a faster decay rate with \(2s\) in Eq. 3.3, which serves as one of the motivations for using phase-shifting, particularly in adversarial training. Finally, we provide the mathematical justification for the F-principle in adversarial training settings.
## 4 Phase-shifted Adversarial Training
In this section, we detail the proposed method, coined phase-shifted adversarial training (PhaseAT). We first present the existing PhaseDNN (Cai, Li, and Liu, 2020) and its limitations (Section 4.1). We then redesign the original PhaseDNN to make it more practical and suitable for adversarial training (Section 4.2). Finally, we elaborate PhaseAT by optimizing PhaseDNN through adversarial training (Section 4.3).
### Phase Shift Deep Neural Networks
To learn highly oscillatory functions efficiently, Cai, Li, and Liu (2020) propose PhaseDNN, which shifts the high-frequency components of the dataset to the low-frequency range for fast convergence because, according to the F-principle, neural networks learn low-frequency information before learning high-frequency components. On the phase-shifted dataset, the neural networks learn the target function with low-frequency contents. Unfortunately, this requires that the frequency extraction of the original training data be done numerically using convolutions with a frequency selection kernel, which requires a huge memory footprint, i.e., a storage of \(O(N\times N)\). Alternatively, the neural networks can be learned on the data from all ranges of frequencies while the phase shift is included in the makeup of the PhaseDNN. This version of PhaseDNN is called _coupled PhaseDNN_, and we adopt this version to avoid the cost of decomposing the frequency components of the original dataset.
Since the phase-shift is performed on the networks rather than the dataset, PhaseDNN consists of an ensemble of networks, each of which is dedicated to a specific frequency. To learn higher frequency components, phase-shift is applied to each networks separately. The output of PhaseDNN can be represented as follows:
\[\mathcal{T}(x)=\sum_{m=0}^{M-1}e^{i\omega_{m}x}\cdot\mathcal{T}_{m}(x) \tag{4.1}\]
where \(M\) is the size of the ensemble, \(\mathcal{T}_{m}(x)\) represents one of the networks in the ensemble, and \(\omega_{m}\) is the specific frequency for the network \(\mathcal{T}_{m}\). Letting the labeling function of the dataset be \(g(\cdot)\), the PhaseDNN is optimized by minimizing the following least-squares error:
\[\sum_{j=0}^{N-1}|g(x_{j})-\mathcal{T}(x_{j})|^{2}=\sum_{j=0}^{N-1}\left|g(x_{j })-\sum_{m=0}^{M-1}e^{i\omega_{m}x_{j}}\cdot\mathcal{T}_{m}(x_{j})\right|^{2} \tag{4.2}\]
Note that we always include the lowest frequency in the selected frequency, i.e., \(\omega_{0}=0\), because the low frequencies typically dominates in real datasets (Xu et al., 2020).
### Multi-headed PhaseDNN
Applying the previous phaseDNN to adversarial training has a few challenges. First, PhaseDNN was designed to learn single-dimensional data; the number of all possible frequencies grows exponentially with the dimension of the input, which prevents the use of high-dimensional inputs (e.g., images and languages). Second, PhaseDNN requires multiple networks to perform the phase shift of the different frequencies, causing the large memory footprint.
To address the first challenge, we project the given inputs to the first principal component and use the projected scalar values to represent the original data. The forward propagation of PhaseDNN (Eq. 4.1) is reformulated as:
\[\mathcal{T}(x_{j})=\sum_{m=0}^{M-1}e^{i\omega_{m}(x_{j}\cdot p)}\cdot\mathcal{T} _{m}(x_{j}) \tag{4.3}\]
where \(p\) is the first principal component of the input space calculated on the training dataset. Instead of observing all frequencies of high-dimensional data, we focus on the frequencies of the data along the first principal component \(p\). Additionally, we bound the product result by normalizing the two vectors (i.e., \(x\) and \(p\)) and multiplying the constant \(C\) to fix the range of data points.
Lastly, we introduce a multi-headed PhaseDNN to avoid using the ensemble, which consumes large computational resources. We make each network \(\mathcal{T}_{m}\) share the feature extracting networks and has frequency-dedicated networks for predictions. Thus, Eq 4.3 is reformulated as follows:
\[\mathcal{T}(x_{j})=\sum_{m=0}^{M-1}e^{i\omega_{m}(x_{j}\cdot p)}\cdot\mathcal{ H}_{m}(\mathcal{F}(x_{j})) \tag{4.4}\]
where \(\mathcal{F}(\cdot)\), \(\mathcal{H}_{m}(\cdot)\) are the shared networks and the \(m\)-th frequency-dedicated classifier, respectively. It is worth noting that the classifiers \(\mathcal{H}\) only make up a small portion of the total parameters, allowing PhaseDNN to efficiently learn highly oscillatory functions with a lower memory footprint.
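A rough PyTorch sketch of Eq. (4.4) follows. Using single linear layers as the heads \(\mathcal{H}_{m}\), taking the real part of the complex sum as the logits, and omitting the bounding constant \(C\) are all simplifying assumptions, not details given by the paper:

```python
import torch
import torch.nn as nn

class MultiHeadPhaseDNN(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes, frequencies, principal_comp):
        super().__init__()
        self.backbone = backbone                                   # shared feature extractor F(.)
        self.heads = nn.ModuleList([nn.Linear(feat_dim, num_classes) for _ in frequencies])
        self.register_buffer("omegas", torch.tensor(frequencies, dtype=torch.float32))
        self.register_buffer("p", principal_comp / principal_comp.norm())  # first principal component

    def forward(self, x):
        feats = self.backbone(x)                                   # F(x_j)
        flat = x.flatten(1)
        proj = (flat / flat.norm(dim=1, keepdim=True)) @ self.p    # normalized projection x_j . p
        out = 0.0
        for omega, head in zip(self.omegas, self.heads):
            phase = torch.exp(1j * omega * proj).unsqueeze(1)      # e^{i omega_m (x_j . p)}
            out = out + phase * head(feats)                        # phase-shifted head output H_m(F(x_j))
        return out.real                                            # real part used as the logits

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
model = MultiHeadPhaseDNN(backbone, 128, 10, frequencies=[0.0, 1.0, 2.0],
                          principal_comp=torch.randn(3 * 32 * 32))
logits = model(torch.rand(8, 3, 32, 32))
```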
### PhaseDNN with Adversarial Training
PhaseDNN requires shift frequencies (\(\{\omega_{m}\}_{m=0}^{M-1}\)) for each head (\(\{\mathcal{H}_{m}\}_{m=0}^{M-1}\)) used during training. The following explains how to choose those frequencies. As the goal of training a network is to minimize the differences between a clean (\(\{x_{j}\}_{j=0}^{N-1}\)) and an adversarial version (\(\{\tilde{x}_{j}=x_{j}+\delta\}_{j=0}^{N-1}\)) of the same data point, we choose the target frequencies with the largest difference in Fourier coefficients between the two. In practice, estimating the Fourier coefficients of the total dataset in every optimization step requires huge computational resources; therefore, we approximate this by estimating the exponential moving average of the coefficients on mini-batches. The discrepancy of the frequency \(k\) between clean and adversarial batches is determined as follows:
\[d_{k}=|\mathcal{F}_{k}(X)-\mathcal{F}_{k}(X+\Delta)| \tag{4.5}\]
where \(X\) and \(\Delta\) indicate the batched data and its corresponding perturbations, respectively, \(\mathcal{F}_{k}(\cdot)\) is the Fourier coefficient which is obtained as follows:
\[\mathcal{F}_{k}(X)=\sum_{j=0}^{B-1}\mathcal{T}(X_{j})\cdot e^{-2\pi ik(X_{j} \cdot p)} \tag{4.6}\]
where \(B\) is the batch size. The estimated discrepancy \(d_{k}\) for all frequencies is then used to derive the multinomial distribution to sample the frequencies \(\omega_{m}\) for each head of the phaseDNN4. The reason for sampling frequencies from a multinomial distribution is that the training dataset is constantly changing during adversarial training. In this case, a fixed set of frequencies (e.g., peaks of frequencies used in (Xu et al., 2020)) does not accurately reflect the critical frequency information. As a result, by stochastically learning the frequency differences, the model could decrease the prediction differences between clean and adversarial data.
Footnote 4: Similar to the previous work, we always include the zero frequency for the one of the head networks.
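As a sketch of how Eqs. (4.5)-(4.6) can be turned into a sampling distribution over candidate frequencies, assuming the coefficient differences are aggregated over output dimensions by summing magnitudes and omitting the exponential moving average (both are assumptions, not details from the paper):

```python
import torch

@torch.no_grad()
def sample_shift_frequencies(model, X, X_adv, p, candidate_freqs, num_heads):
    proj = X.flatten(1) @ p                           # x_j . p for the clean batch
    proj_adv = X_adv.flatten(1) @ p                   # same for the adversarial batch
    out, out_adv = model(X), model(X_adv)
    d = []
    for k in candidate_freqs:
        coef = (out * torch.exp(-2j * torch.pi * k * proj).unsqueeze(1)).sum(dim=0)              # F_k(X)
        coef_adv = (out_adv * torch.exp(-2j * torch.pi * k * proj_adv).unsqueeze(1)).sum(dim=0)  # F_k(X + Delta)
        d.append((coef - coef_adv).abs().sum())       # discrepancy d_k, Eq. (4.5)
    probs = torch.stack(d)
    probs = probs / probs.sum()                       # multinomial distribution over frequencies
    idx = torch.multinomial(probs, num_heads - 1)     # sample M - 1 frequencies ...
    return [0.0] + [float(candidate_freqs[int(i)]) for i in idx]  # ... zero frequency always included
```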
Attack generationWith the practically-designed PhaseDNN, we perform adversarial training, which causes the neural networks to highly oscillate near the training data. Motivated by the recent finding that FGSM-based training can sufficiently prevent strong attacks (Wong, Rice, and Kolter, 2020), we train PhaseDNN to be robust against FGSM attacks.
The FGSM attack is formulated by initializing the perturbation from a uniform distribution over (\(-\epsilon\), \(\epsilon\)). The perturbation is then updated based on the sign of the gradients of the cross-entropy with the ground-truth labels. Since PhaseDNN selects the frequencies in a stochastic manner, the generated attack can encourage PhaseDNN to better fit the diverse components having high frequencies (detailed in Section 5.2). Moreover, we perform the phase shift in alternate mini-batches, which further diversifies the attacks used for adversarial training, similar to the previous studies (Sriramanan et al., 2020, 2021).
**Adversarial training** We improve the model's robustness against generated attacks by replacing the clean inputs with perturbed ones during training. Additional improvements can be achieved by regularization to minimize the effects of potential white-box attacks. One possible white-box attack would set all shift frequencies to zero, resulting in gradients similar to normal AT rather than PhaseAT. Our regularization
term encourages the model to behave differently than the standard AT. We implement this by minimizing the prediction similarity between the phase-shifted model and the model that does not have the phase-shift. In summary, the total objective function is the sum of the cross-entropy loss and the regularization.
\[\begin{split}\ell_{adv}(x,y)&=\ell_{ce}(\mathcal{T} (x+\delta),y)+\\ &\left|\frac{\sigma(\mathcal{T}(x+\delta))\cdot\sigma(\mathcal{T }_{0}(x+\delta))}{\|\sigma(\mathcal{T}(x+\delta))\|\|\sigma(\mathcal{T}_{0}(x +\delta))\|}\right|\end{split} \tag{4.7}\]
where \(\ell_{ce}\) is the cross-entropy function, \(\sigma\) indicates the softmax function, and \(\mathcal{T}_{0}\) indicates the model without the phase-shift term (i.e., \(\omega_{m}=0,\,\forall\,m\in[1,M]\)). By encouraging the model to have different predictions than normal AT, the model achieves robustness against the normal AT attack as well.5 The proposed defense is detailed in Algorithm 1.
Footnote 5: We provide the ablation study about the regularization term in Section C.2 in the supplementary material.
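A sketch of the training objective in Eq. (4.7) in PyTorch. The `zero_frequencies` flag standing in for \(\mathcal{T}_{0}\) and the averaging of the similarity over the batch are assumptions made for illustration:

```python
import torch
import torch.nn.functional as F

def phaseat_loss(model, x_adv, y):
    logits = model(x_adv)                                  # phase-shifted prediction T(x + delta)
    logits_zero = model(x_adv, zero_frequencies=True)      # T_0: same model with all omega_m = 0 (assumed flag)
    p = F.softmax(logits, dim=1)
    p0 = F.softmax(logits_zero, dim=1)
    reg = F.cosine_similarity(p, p0, dim=1).abs().mean()   # |cos(T, T_0)|, averaged over the batch
    return F.cross_entropy(logits, y) + reg
```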
## 5 Evaluations
### Experimental Setup
**Baselines.** We compare our method with both non-iterative and iterative methods. Non-iterative methods update the perturbation once to generate the attacks, whereas iterative methods generate the perturbations through multiple optimization steps. We choose three recent non-iterative methods for comparison, namely FBF [23], GAT [11], and NuAT [15]. For iterative methods, we select three representative baselines, i.e., PGD [13], TRADES [14], and AWP [20]. We carefully set the hyper-parameters for the aforementioned methods. We use ResNet-18 [17] and WideResNet-34-10 [21] architectures [16] for the evaluations, and the detailed settings are listed in Section B.3 of the supplementary material.
**Datasets and Evaluations.** Following recent works, we perform experiments on two datasets: CIFAR-10 [12] and ImageNet [15]. To compare with the recent works, we use ImageNet-100 [15, 16], which is a small version of ImageNet including 100 classes. To evaluate adversarial robustness, we consider both white-box and black-box attacks. For white-box attacks, PGD and AutoAttack (AA) are mainly used, each of which uses an \(L_{\infty}\) adversarial constraint of \(\epsilon=8/255\). Specifically, the AA attacks involve APGD\({}_{ce}\), APGD\({}_{thr}\), APGD\({}_{t}\)[12] and FAB\({}_{t}\)[12]6. The black-box attacks include transfer-based and query-based attacks. For the transfer-based attack, we generate PGD-7 perturbations using other networks (e.g., VGG-11 and a differently initialized ResNet-18) to attack each baseline. For query-based attacks, we use square attacks [1] with 5,000 query budgets. The results of black-box attacks are included in Section C.1 in the supplementary material.
Footnote 6: We include the details of each attack in the supplementary material.
**Adaptive Attack.** For reliable evaluation of the proposed adversarial training, we carefully design adaptive attacks for the proposed defense to adversarial examples, which are used in all experiments. Specifically, we use the expectation-over-transformation (EOT) [1] as a white-box attack because the proposed method samples frequencies from a multinomial distribution. The details of the adaptive attacks are in Section B.4 in the supplementary material.
### Main Results against White-box Attacks
Before comparing the performance of all methods, we first verify whether our method learns high frequency components faster than the model without the phase-shift. To that end, we use the filtering method [20] used in previous analysis (Figure 1). Figure 2 shows the errors for each frequency. Here, we use the settings of CIFAR-10. For the low frequency part, the errors between PhaseAT and AT is negligible, whereas the errors of the high frequency between two methods is noticeable, demonstrating our hypothesis that PhaseAT can efficiently learn high-frequency components.
We then confirm that fast learning of high-frequency components leads to better robustness. Table 1 shows the comparison results. Here, we use two different networks, ResNet-18 and WideResNet-34-10, to verify the generality of PhaseAT. Non-iterative methods tend to show lower accuracy than iterative methods. However, the non-iterative version of PhaseAT outperforms iterative baselines in terms of both standard and robust accuracy. For example, PhaseAT shows 5.3% and 8.5% performance improvement over AWP in terms of standard and PGD accuracy, respectively, and
Figure 2: Errors of frequency components (high, low) between the training dataset and neural networks for normal adversarial training method and PhaseAT.
comparable performance for AA. In the experimental results, PhaseAT outperforms FBF, the major distinction of which is the phase shift functionality. This suggests that learning high-frequency information properly is particularly beneficial for learning adversarial data.
We turn our focus to a larger dataset, i.e., ImageNet. We compare our method with all non-iterative methods and tabulate the results in Table 2. We again find a performance trend similar to that of CIFAR-10, except that here the strongest baseline is a non-iterative method (i.e., NuAT). While the proposed method shows comparable performance in clean accuracy, it outperforms the others by a large margin in terms of robust accuracy against the AA attack. This shows that PhaseAT can scale to larger datasets.
### Convergence of Clean and Robust Accuracy
We verify that faster learning on high-frequency information leads to better convergence for robust accuracy. To that end, we report the standard and AA accuracy during each training epoch. We use the same settings of CIFAR-10 and tabulate the results on Figure 3. Compared to the normal adversarial training (denoted as AT), PhaseAT shows faster and better convergence both on standard and robust accuracy. This indicates that the training model has smoothed predictions near the data, effectively reducing the oscillating property of predictions when adding adversarial perturbations.
### Sensitivity to the Different Settings of PhaseAT
We analyze the sensitivity of PhaseAT to different settings. Since the most significant components of PhaseAT are the heads and the frequencies, we mainly control the number of heads (i.e., \(M\)) and the frequency range. Figure 4 shows how the change of each parameter affects the standard and robust accuracy on CIFAR-10. We first see that utilizing more heads leads to improved accuracy while incurring additional computation costs. In addition, the robust accuracy is more sensitive to the number of heads, whereas the clean accuracy does not differ significantly when more than two heads are used. When it comes to the frequency range,
\begin{table}
\begin{tabular}{c c c c} \hline \hline Method & Clean accuracy & PGD\({}_{50\text{+}\text{EOT}}\) & AA\({}_{\text{+}\text{EOT}}\) \\ \hline Normal & 93.1 & 0.0 & 0.0 \\ \hline FBF & 84.0 & 43.8 & 41.0 \\ GAT & 80.5 & 53.2 & 47.4 \\ NuAT & 81.6 & 52.0 & 48.3 \\ PhaseAT (ours) & **86.2** & **59.5** & **52.1** \\ \hline \hline PGD-AT & 81.3 & 50.8 & 47.3 \\ TRADES & 79.5 & 52.2 & 48.5 \\ AWP & 81.8 & 54.8 & 51.1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance evaluation on CIFAR-10 dataset against white-box attacks on two different architectures. Best and second best results are highlighted in boldface and underline, respectively. AT methods are classified into two types based on how adversarial perturbations are generated during training: non-iterative methods (FBF, GAT, NuAT, and PhaseAT) and iterative methods (PGD-AT, TRADES, and AWP).
\begin{table}
\begin{tabular}{c c c} \hline \hline Method & Clean accuracy & AA\({}_{\text{+}\text{EOT}}\) \\ \hline Normal & 77.1 & 0.0 \\ \hline FBF & **70.5** & 20.4 \\ GAT & 68.0 & 28.9 \\ NuAT & 69.0 & 32.0 \\ PhaseAT (ours) & 69.2 & **35.6** \\ \hline \hline PGD-AT & 68.6 & 33.0 \\ TRADES & 62.9 & 31.7 \\ AWP & 64.8 & 29.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance evaluation on ImageNet-100 dataset against white-box attacks. Best and second best results are highlighted in boldface and underline, respectively. The results for each baseline come from the previous works (Sriramanan et al., 2020, 2021).
Figure 3: Clean and robust accuracy during the training phase.
we observe that utilizing a wider frequency range leads to improved robust accuracy. For the clean accuracy, on the other hand, the wider range has no significant impact, which is consistent with the previous finding that low frequencies typically dominate in real datasets [23]. In the experiments, we use three heads and a 50k frequency range based on the above validation results. For the frequency range, no further improvement was observed beyond the 50k range.
### Computational Complexity
We compare the training time required to achieve the best robust accuracy for each method. To measure it fairly, we use the optimized schedules in the implementations (NuAT7, FBF and PGD8). Figure 5 plots the comparison on CIFAR-10. The fastest defense amongst all is FBF-1, and PhaseAT shows comparable training time. This indicates that the additional time required by the Fourier transform and the forward propagation through multiple heads is negligibly small. We also compare a different version of FBF, namely FBF-2, which uses more training epochs than FBF-1 to match the training time of PhaseAT. Despite FBF's increased accuracy (41.0 to 44.1%), PhaseAT outperforms it by a significant margin, demonstrating the efficacy of the proposed approach.
Footnote 7: [https://github.com/locuslab/fast_adversarial](https://github.com/locuslab/fast_adversarial).
Footnote 8: [https://github.com/val-iisc/NuAT](https://github.com/val-iisc/NuAT)
## 6 Related Work
### Adversarial Training
One of the most effective defenses against adversarial attacks is adversarial training. Specifically, iteratively updating the attacks during training tends to show better robustness, as the adversary typically performs multiple updates to generate stronger attacks. However, the adversarial robustness comes at a large computational cost and highly increased training time due to the multiple optimization steps. Hence, research towards non-iterative methods has been getting attention. The common strategy is to make FGSM-based training robust against iterative attacks; for example, Wong, Rice, and Kolter (2020) show that FGSM-based training with random initialization of the perturbation can sufficiently prevent strong attacks.
We evaluate PhaseAT on two popular datasets, namely CIFAR-10 and ImageNet. The results clearly show that PhaseAT achieves strong performance compared to other baselines, demonstrating the efficacy of faster high-frequency learning in adversarial training.
## Acknowledgement
This work was supported by the Center for Advanced Computation and a KIAS Individual Grant (MG082901) at Korea Institute for Advanced Study (S. Kim), and by NRF-2022R1A2C1011312 (I. Seo).
|
2305.16676 | Large field-of-view and multi-color imaging with GaP quadratic
metalenses | Metalenses, in order to compete with conventional bulk optics in commercial
imaging systems, often require large field of view (FOV) and broadband
operation simultaneously. However, strong chromatic and coma aberrations
present in common metalens designs have so far limited their widespread use.
Stacking of metalenses as one of the possible solutions increases the overall
complexity of the optical system and hinders the main benefit of reduced
thickness and light weight. To tackle both issues, here we propose a
single-layer imaging system utilizing a recently developed class of metalenses
providing large field of view. Using it, we demonstrate full-color imaging with
a FOV of 100 degrees. This approach, empowered by computational imaging
techniques, produces high-quality images, both in terms of color reproduction
and sharpness. Suitable for real-time unpolarized light operation with the
standard color filters present in prevalent camera systems, our results might
enable a pathway for consumer electronics applications of this emerging
technology. | Anton V. Baranikov, Egor Khaidarov, Emmanuel Lassalle, Damien Eschimese, Joel Yeo, N. Duane Loh, Ramon Paniagua-Dominguez, Arseniy I. Kuznetsov | 2023-05-26T06:54:04Z | http://arxiv.org/abs/2305.16676v1 | # Large Field-of-View and Multi-Color Imaging with GaP Quadratic Metalenses
###### Abstract
Metalenses, in order to compete with conventional bulk optics in commercial imaging systems, often require large field of view (FOV) and broadband operation simultaneously. However, strong chromatic and coma aberrations present in common metalens designs have so far limited their widespread use. Stacking of metalenses as one of the possible solutions increases the overall complexity of the optical system and hinders the main benefit of reduced thickness and light weight. To tackle both issues, here we propose a single-layer imaging system utilizing a recently developed class of metalenses providing large field of view. Using it, we demonstrate full-color imaging with a FOV of \(\sim 100^{\circ}\). This approach, empowered by computational imaging techniques, produces high-quality images, both in terms of color reproduction and sharpness. Suitable for real-time unpolarized light operation with the standard color filters present in prevalent camera systems, our results might enable a pathway for consumer electronics applications of this emerging technology.
**Teaser**
A single-layer metalens imaging system with large field of view (\(\sim 100^{\circ}\)) and full-color imaging is proposed, empowered with computational imaging techniques.
**Introduction**
The ongoing trend towards miniaturization and integration in imaging systems has led to a need for novel optical platforms that can provide aberration-free, compact, and cost-effective lenses. In this context, the emerging field of flat optics is widely considered to be a viable solution to meet these requirements. By exploiting the principles of diffraction, a flat lens can create a phase profile that allows focusing an incident beam with significantly reduced thickness compared to conventional bulk optics [1, 2, 3]. Flat lenses typically consist of a surface divided into Fresnel zones, which impart a radial phase distribution that varies between 0 and \(2\pi\). Together, these zones comprise a \(2\pi\)-wrapped phase map of the desired phase profile. Constructive interference from the multiple zones at a desired spatial location can then be achieved, resulting in the focusing of the transmitted light.
One can distinguish two types of flat lenses: conventional diffractive lenses (CDL) [4, 5, 6] and metalenses based on the recent development of optical metasurfaces [7, 8, 9, 10]. While the former rely on the phase accumulated by light propagation in a material thickness, in a similar way as for bulk optics but with a reduced thickness enabled by the phase wrapping, metalenses employ nanostructured surfaces providing subwavelength phase control, which constitutes a radically different paradigm. In the case of metalenses based on high-refractive index and low-loss dielectric materials, which offer the highest efficiencies, phase control can be achieved via three different mechanisms, which are waveguiding [7, 11, 12, 13], using the geometric (Pancharatnam-Berry) phase of light [8, 14, 15, 16, 17] or the phase modulation associated with the excitation of optical resonances [18, 19, 20, 21] (the latter mechanism has recently been shown to have a topological origin, associated with the creation of pole-zero pairs in the complex eigenfrequency plane of non-Hermitian systems [22]). Albeit different in their principles of operation, CDL and metalenses are both naturally diffractive optical elements with associated chromatic aberrations, which severely restricts their applications in multicolor imaging. Indeed, the incident light deflection is either governed by the grating equation for CDL or generalized Snell's law for metalenses [23], in both of which the deflection angle has a wavelength dependence. This translates into shorter wavelengths experiencing a longer focal length and vice versa, where the fractional change in the focal length \(\Delta\!f\), that is the axial (or longitudinal) chromatic aberration, is equal to the fractional change in the wavelength \(\Delta\!\lambda\)[24]:
\[\frac{\Delta f}{f}=\frac{\Delta\lambda}{\lambda} \tag{1}\]
with \(\lambda\) and \(f\) being the nominal (central) wavelength and focal length, respectively.
To address this problem, researchers have developed several approaches to attempt to achieve achromatic flat lenses. An achromatic diffractive lens (ADL) usually operates at higher orders diffraction, providing the same deflection angle for several discrete harmonic wavelengths [25; 26]. Though advanced numerical optimization helped to increase the focusing efficiency [2; 27; 28], a recent study demonstrated considerable limitations on the achievable Fresnel number (FN), which translates into limited numerical apertures (NA) or lens sizes for a given focal length [29]. This issue arises from the residual chromatic aberration in between adjacent harmonic wavelengths. The associated chromatic focal shift cannot be fully eliminated, since it is inversely proportional to the structure depth and refractive index used in the ADL, which is usually limited. Regarding achromatic metalenses (AMLs), their principle leverages on the additional degrees of freedom given by the design of the individual meta-atoms. Typically, AMLs exploit a large meta-atom library to find elements with an appropriate group delay (GD) and group delay dispersion (GDD) to compensate the chromatic aberrations over a certain wavelength range [30; 31; 32; 33; 34; 35]. However, similarly to ADL, AMLs have limited NA or lens size for a given focal length, in this case bounded by the achievable GD and GDD values [29; 32; 36; 37]. Recently, this issue was addressed by compensating the phase within each Fresnel zone and subsequently optimizing the phase discontinuities at the zone boundaries [38]. It is important to note that in virtue of the task complexity, the optimization was done at three discrete wavelengths. As a result, a high-NA (0.7), millimeter-scale metalens focusing at three red-green-blue (RGB) wavelengths was demonstrated.
Despite the tremendous progress made in achromatic flat lenses, these approaches fail to address another crucial imaging characteristic, namely achieving a large field of view (FOV) [29; 38]. In this regard, while it has been shown that metalenses can outperform CDL in terms of angular coverage [39], conventional diffraction-limited metalenses with a hyperbolic phase profile still suffer from strong off-axis aberrations, severely curtailing their FOV [40]. To circumvent this issue, a number of works have recently proposed various solutions based on aplanatic metalenses [41], numerical optimization [42; 43; 44], metalens doublets [45; 46] or novel phase profiles, such as the so-called quadratic phase profile [47; 48; 21; 49]. Among them, the latter has the advantage of being planar and single-layer and has been shown to provide imaging beyond 100\({}^{\circ}\) FOV. Indeed, such metalenses provide a large FOV by having an effective working area (with a diameter of 2\(f\)), which is transversely shifted for different angles of incidence (\(\varphi\)), according to the formula \(f\sin\varphi\) [49]. In the working area, the phase distribution of the transmitted light is preserved, which results in
eliminating all off-axis aberrations. This comes at the cost of having spherical aberrations which result in non-diffraction-limited performances, and thus slightly lower imaging resolution, though sufficient for most general purposes (quadratic metalenses typically have \(NA\sim 0.3-0.4\), which is higher than that of a typical smartphone camera with an NA of 0.2 [29]) [21, 48, 49].
Interestingly however, because of these intrinsic spherical aberrations, quadratic metalenses present a focus elongated axially, and thus have a certain depth-of-focus (DOF), given by [50]:
\[\text{DOF}\sim\frac{\lambda}{NA^{2}} \tag{2}\]
This DOF leads to a certain spectral operation bandwidth. Indeed, qualitatively, the focal length shift due to chromatic aberrations is still acceptable when it does not exceed the DOF of the flat lens, that is as long as it remains within \(\Delta f\sim\text{DOF}\). Hence, from Eq. (1), the working spectral bandwidth of a flat-lens can be given by the criterion:
\[\Delta\lambda\sim\mathrm{DOF}\times\frac{\lambda}{f} \tag{3}\]
The idea of leveraging or engineering an extended DOF for achromatic operation was considered in [50], and full-color imaging was demonstrated in [51, 52] using this concept. However, the limited FOV problem remained unaddressed in those works. The hypothesis that quadratic metalenses can operate within a wide spectral bandwidth was put forth in [47] and has only recently started to be explored experimentally in [53]; nonetheless, the practical bandwidth ranges and the use for multi-color imaging have yet to be fully explored.
In this work, we leverage on extended DOF and wide bandwidth benefits of quadratic metalenses to demonstrate a practical optical system tackling both the broadband multi-color and large FOV challenges simultaneously. Our proposed system is planar and single-layer, with uniform thickness, and thus, suitable for one-step lithography fabrication with high throughput (e.g. photolithography [54] or nanoimprint [55, 56]). It consists of three distinct metalenses working at different color channels, in the red, green and blue (RGB). For multi-color imaging system to be compact, one should engineer the metalenses working bandwidth to match the spectral bandwidth of the color filters of the detectors used, eliminating the need for additional filters. While multi-band imaging realization based on parallel metalenses was suggested in the literature [45, 57], we refine this idea here to a single layer by using quadratic metalenses combined with the color filters present inside a standard camera, to realize a full-color imaging system. The metalens unit elements are circular in cross section, allowing for polarization-independent operation. Moreover, our solution employs gallium phosphide (GaP), a material with high potential for metasurface-based
devices operating across the visible [58], as it provides a high-refractive index (\(n>3.3\)) and negligible losses in the whole wavelength range. We underline the imaging potential of quadratic metalenses by demonstrating their broadband operation: we experimentally show that point spread function (PSF) and modulation transfer function (MTF) remain virtually unchanged over a continuous 40 nm bandwidth. This result is far beyond what widely used hyperbolic metalenses can achieve. Next, by combining the images formed by three quadratic metalenses working at different RGB channels, we demonstrate large FOV (\(\sim 100\)') multi-color imaging, with excellent color reproduction in the CIELAB color reproduction assessment. Finally, in conjunction with computation imaging techniques (Wiener and EigenCWD), we obtain image quality in terms of color reproduction and sharpness among the best demonstrated so far with flat optics.
## Results
Fig. 1a illustrates the concept of our work. The light coming from an object is focused onto a color charge-coupled device (CCD) camera by three quadratic metalenses, fabricated on the same substrate and having the same thickness. Each metalens is designed for operating in a distinct wavelength channel, namely red (R), green (G) and blue (B) wavelengths, respectively. The CCD camera's inherent internal color filters are employed to generate individual R, G, and B images, which are subsequently merged to yield a full-color image. All our lenses are fabricated on the same substrate and possess an equal focal length (\(f\)), thus ensuring the same imaging magnification across all RGB channels and facilitating easy post-processing procedures.
To prove the concept, we design and fabricate three quadratic metalenses (R, G and B) operating, respectively, at \(\lambda_{R}=620\) nm, \(\lambda_{G}=530\) nm and \(\lambda_{B}=460\) nm wavelengths. The lenses have the same diameter \(D=200\)\(\mu\)m and focal length \(f=83\)\(\mu\)m. They are realized by encoding a wrapped discretized quadratic phase profile:
\[\Phi_{i}(r)=\Phi_{i}(0)-\ \frac{2\pi}{\lambda_{i}}\ \frac{r^{2}}{2f} \tag{4}\]
where \(i=R\), \(G\), \(B\), using nanopillar waveguides with circular cross-section and the same height \(H\) = 300 nm, arranged in an hexagonal lattice. Owing to a change in the effective index of the waveguide, the nanopillars impart a variable phase delay as a function of their diameter, and we exploit this mechanism to map the quadratic phase profiles. As a material, we utilize gallium
phosphide (GaP) in virtue of its negligible absorption over almost the entire visible range (see Supp. Info. Fig. S1), and higher refractive index (\(n\) = 3.3-3.8) than other suitable transparent materials, such as hafnium oxide (HfO\({}_{2}\)), gallium nitride (GaN), titanium dioxide (TiO\({}_{2}\)) and silicon nitride (Si\({}_{3}\)N\({}_{4}\)) [58]. A high refractive index is important to encode a quadratic phase profile at different wavelengths using a uniform pillar height for all metalenses, as it ensures a strong confinement of the optical field inside the waveguides and minimizes parasitic coupling effects that might arise due to the small pitch needed to obtain a large FOV [49]. Note that, in order to have the same \(D\) and \(f\) for different \(\lambda_{i}\), the Fresnel number (FN) of the metalenses needs to be adjusted according to \(FN_{i}=\frac{D^{2}}{4\lambda_{i}f}\) [29]. In other words, the phase profile to be encoded by the nanopillars is steeper for shorter wavelengths (as can be seen in Eq. (4)). In order to keep a sufficient phase profile sampling and a full phase coverage using the nanopillar elements, we scale the lattice period \(p_{i}\) of each lens according to its operating wavelength \(\lambda_{i}\). In our design, this results in periods \(p_{R}\) = 260 nm, \(p_{G}\) = 220 nm and \(p_{B}\) = 190 nm for the R, G and B metalenses, respectively. Fig. 1b-c show the corresponding finite-difference time-domain (FDTD) calculations of the phase and transmission values for the nanopillar unit cells used to realize the R, G and B metalenses, as a function of the duty cycle, which is the ratio between the nanopillar diameters and the hexagonal lattice constant. The GaP metalenses are patterned on an SiO\({}_{2}\) substrate using electron beam lithography (EBL) followed by inductively coupled plasma reactive ion etching (ICP-RIE). For more details on the design and fabrication, see Methods. Fig. 1d shows the optical microscope (in false colors) and corresponding SEM images of the fabricated metalenses.
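For illustration, the short sketch below evaluates the wrapped quadratic profile of Eq. (4) at the nanopillar positions of a hexagonal lattice, together with the Fresnel number of each lens, using the parameters quoted above. The function names and the lattice-generation routine are ours and only convey the design logic, not the actual layout pipeline.

```python
import numpy as np

def quadratic_phase(r, wavelength, f):
    """Wrapped quadratic phase profile of Eq. (4), taking Phi_i(0) = 0."""
    return np.mod(-2 * np.pi / wavelength * r**2 / (2 * f), 2 * np.pi)

def hexagonal_lattice(diameter, pitch):
    """Nanopillar center coordinates of a hexagonal lattice filling the aperture."""
    nx = int(diameter / pitch) // 2 + 1
    ny = int(diameter / (pitch * np.sqrt(3) / 2)) // 2 + 1
    i, j = np.meshgrid(np.arange(-nx, nx + 1), np.arange(-ny, ny + 1))
    x = (i + 0.5 * (j % 2)) * pitch
    y = j * pitch * np.sqrt(3) / 2
    mask = x**2 + y**2 <= (diameter / 2) ** 2
    return x[mask], y[mask]

D, f = 200e-6, 83e-6                                   # lens diameter and focal length
designs = {"R": (620e-9, 260e-9), "G": (530e-9, 220e-9), "B": (460e-9, 190e-9)}

for channel, (lam, pitch) in designs.items():
    x, y = hexagonal_lattice(D, pitch)
    phase = quadratic_phase(np.hypot(x, y), lam, f)    # target phase at each pillar
    fresnel_number = D**2 / (4 * lam * f)
    print(f"{channel}: {x.size} pillars, FN = {fresnel_number:.0f}, "
          f"phase in [{phase.min():.2f}, {phase.max():.2f}] rad")
```

In an actual design flow, each target phase value would then be mapped to a pillar diameter using the FDTD-computed phase library of Fig. 1b-c.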
The initial imaging performance of quadratic metalenses can be evaluated from their point spread function (PSF) across the FOV and bandwidth of interest. As an example, we show in Fig. 2 the optical characterization of the R channel metalens (\(\lambda_{R}\) = 620 nm). Therein, Fig. 2a shows a schematic of the experiment and Fig. 2b the experimentally measured PSF. The metalens is illuminated with a collimated laser beam, centered at \(\lambda_{R}\) = 620 nm, with incident angle variation \(\varphi\) = 0\({}^{\circ}\), 30\({}^{\circ}\), 50\({}^{\circ}\) and laser bandwidth variation \(\Delta\lambda\) = 10 nm to 40 nm. From Fig. 2b, we can conclude that PSF is almost insensitive to the bandwidth for \(\varphi\) = 0\({}^{\circ}\). This confirms the robustness of quadratic metalenses against axial chromatic aberrations, coming from their intrinsic spherical aberrations, which result in an axial elongation of the focal spot (or, in other words, a certain DOF). For higher angles of incidence \(\varphi\) = 30\({}^{\circ}\) and \(\varphi\) = 50\({}^{\circ}\), the PSF slightly degrades. The observed broadening is in excellent agreement with simulations (see Supp. Info. Fig. S2 b), and is related to the lateral (or transversal) chromatic aberrations of the metalens (details in Supp. Info. Section 1). Rigorous MTF analysis for R,G,B channel metalenses is given in Supp. Info. Section 2.
To further evaluate the robustness of the quadratic metalenses in terms of bandwidths and angles of incidence, initially revealed by the PSF analysis, we conducted an imaging experiment of a standard USAF 1951 resolution test target (element 2, group -2, with a spatial frequency of 0.28 cycles/mm). As a benchmark for performance comparison, we consider the widely used hyperbolic phase profile metalens. Fig. 3a shows monochromatic imaging simulations for hyperbolic (top panel) and quadratic (bottom panel) phase profile metalenses for \(\varphi=0^{\circ}\). The designed central wavelength and diameter are \(\lambda_{R}=620\) nm and 200 \(\mu\)m for both lenses. The focal length of the quadratic metalens is 83 \(\mu\)m, while that of the hyperbolic lens is 173 \(\mu\)m, chosen to match the effective \(NA=0.5\) of the quadratic metalens. The image produced by the hyperbolic lens becomes significantly blurred for bandwidths larger than \(\Delta\lambda=10\) nm, in contrast to the quadratic metalens, which exhibits virtually unchanged imaging within a bandwidth of \(\Delta\lambda=40\) nm. These simulations are corroborated by experimental measurements, with setup schematics given in Fig. 3b. The target element was illuminated by diffused laser light and the image produced by the R metalens was captured by a CMOS camera (a more detailed schematic of the setup is shown in Supp. Info. Fig. S6). Experimental target element images as a function of incidence angle and bandwidth are given in Fig. 3c. Due to the metalens demagnification, the produced element image for larger angles \(\varphi\) is squeezed. One can see that the target element is well-resolved at \(\varphi=0^{\circ}\) and \(\varphi=30^{\circ}\) for all the bandwidths considered; for larger angles (\(\varphi=50^{\circ}\)) the image is partially degraded, except for the narrowest bandwidth \(\Delta\lambda=10\) nm, where the target is still resolved. In Supp. Info. Section 3 we correlate the MTF obtained from the PSF with the MTF measured from the USAF 1951 target.
We have experimentally demonstrated a relatively broad operational bandwidth (\(\sim 40\) nm) of the designed quadratic metalenses over a large FOV (up to 100\({}^{\circ}\)). This result is consistent with theoretical estimates: considering an NA \(\sim 0.35\) and by combining Eqs. (2) and (3), we expect to have a bandwidth of \(\Delta\lambda\sim 38\) nm, 28 nm and 21 nm, for the R, G and B metalenses, respectively. For high-quality imaging, these metalens bandwidths have to match the spectral R, G and B filters of the color camera used, which have an average bandwidth of \(\sim 100\) nm if one considers the full width at half maximum (precise quantum efficiencies are given in the Supp. Info. Fig. S8). Even though the bandwidths of our quadratic metalenses here seem to only partially cover the filter spectral bandwidths, next we show that it is sufficient to produce high-quality imaging.
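As a simple cross-check of these estimates, the sketch below evaluates Eqs. (2) and (3) for the three design wavelengths, assuming NA \(\approx\) 0.35 and \(f\) = 83 \(\mu\)m as stated above.

```python
# Evaluate the DOF and spectral-bandwidth criteria of Eqs. (2)-(3) for the
# three quadratic metalenses (NA ~ 0.35 and f = 83 um assumed from the text).
NA = 0.35
f = 83e-6                                   # focal length in meters
wavelengths = {"R": 620e-9, "G": 530e-9, "B": 460e-9}

for channel, lam in wavelengths.items():
    dof = lam / NA**2                       # Eq. (2): depth of focus
    bandwidth = dof * lam / f               # Eq. (3): acceptable spectral bandwidth
    print(f"{channel}: DOF = {dof*1e6:.1f} um, bandwidth = {bandwidth*1e9:.0f} nm")

# Expected output (approximately): R ~ 38 nm, G ~ 28 nm, B ~ 21 nm.
```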
In order to quantify the color imaging performance of the system illustrated in Fig. 1a, we use a standard ColorChecker test chart with 24 painted patches (often referred to as Macbeth chart), depicted in Fig. 4a. To provide uniform object illumination, we utilize a smartphone screen (Xiaomi Redmi Note 8 Pro), which mimics a white color source by three RGB wide bands which nearly
match the metalens bandwidths (the white color emission spectrum of the source is shown in Supp. Info. Fig. S9). Fig. 4b displays the composite RGB image obtained as well as the individual R, G and B channel components, in the imaging configuration with a FOV of 30\({}^{\circ}\) x 20\({}^{\circ}\). Note that the merging process also includes a normalization procedure to account for minimum and maximum intensity values (color balance) in each channel. In the resulting RGB image, one can accurately recognize the colors of the original for all patches. The details of the precise color error calculations are in the Supp. Info. Section 4 and Figure S10. Importantly, the errors vary in the range between 5 and 23 for different patches (CIELAB metric); this result is equivalent to or even better than commercial devices such as the iPhone 5s, iPhone 7 or Samsung S10 operating in auto mode [59]. "Veiling glare" or reduced sharpness of the merged image originates from the quadratic metalens imaging behavior and can be significantly improved with the use of deconvolution techniques based on the knowledge of the PSF, as we demonstrate later. Additionally, the ColorChecker was imaged in a configuration with a maximum target FOV of 100\({}^{\circ}\) x 67\({}^{\circ}\) (Fig. 4c) by bringing the smartphone screen closer to the metalenses. It is noteworthy that for this considerably wide FOV one can still clearly distinguish the colors. The fact that the image brightness is reduced towards the periphery deteriorates the color reproduction (Supp. Info. Fig. S10b). This limitation can be effectively addressed through an intensity correction procedure: in each R, G and B channel, we characterize the angular efficiency and use the result as a calibration curve (details in Methods and Fig. S11). After this intensity calibration, the overall color error is reduced (see Supp. Info. Fig. S10c). The improvement in color reproduction is more substantial on the periphery, with color errors decreased by 30-40% in some cases (blue-green, yellow-green and white).
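A minimal sketch of such an intensity correction is shown below: each color channel is divided by a relative efficiency obtained by interpolating the angular-efficiency calibration curve at the field angle of every pixel. The calibration values and the angle map used here are placeholders, not the measured curves of Fig. S11.

```python
import numpy as np

def intensity_correction(channel, field_angle_deg, calib_angles_deg, calib_efficiency):
    """Divide one color channel by the relative metalens efficiency at each pixel.

    channel: 2D image of one color channel; field_angle_deg: 2D map of the field
    angle seen by each pixel; calib_*: angular-efficiency calibration curve."""
    efficiency = np.interp(field_angle_deg, calib_angles_deg, calib_efficiency)
    corrected = channel / np.clip(efficiency, 1e-3, None)
    return corrected / corrected.max()          # renormalize for color balance

# Toy usage: a flat gray channel seen through a lens whose efficiency drops
# towards the periphery (placeholder calibration curve).
h, w = 200, 300
yy, xx = np.mgrid[0:h, 0:w]
field_angle = 50.0 * np.hypot(xx - w / 2, yy - h / 2) / np.hypot(w / 2, h / 2)
calib_angles = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
calib_eff = np.array([1.0, 0.9, 0.7, 0.45, 0.3])
vignetted = 0.8 * np.interp(field_angle, calib_angles, calib_eff)
corrected = intensity_correction(vignetted, field_angle, calib_angles, calib_eff)
print("corner-to-center ratio before/after:",
      round(vignetted[0, 0] / vignetted[h // 2, w // 2], 2),
      round(corrected[0, 0] / corrected[h // 2, w // 2], 2))
```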
Finally, as a proof-of-use for practical cases, we demonstrate large FOV RGB imaging using a multi-color picture of a still-life genre (Fig. 5a). Again, the image was replicated on a smartphone screen and positioned at two different distances from the metalenses, resulting in FOVs of 50\({}^{\circ}\) x 35\({}^{\circ}\) (Fig. 5b) and 100\({}^{\circ}\) x 67\({}^{\circ}\) (Fig. 5c). Even without any post-processing of the obtained RGB images (Fig. 5b-c) the details are still resolved and easily recognized due to the preservation of the MTF at higher spatial frequencies, albeit at a reduced level. To further enhance the image quality, we employed additional computational imaging techniques, including the widely used Wiener filtering, as well as a novel deconvolution algorithm based on the PSF knowledge of the quadratic metalenses (see Supp. Info. Section S5). The images reconstructed by the two techniques (denoted as Wiener filter and EigenCWD, respectively, in Fig. 5b-c) demonstrate greatly improved sharpness and good color reproduction, with better results obtained with our deconvolution algorithm. Indeed, the Wiener filtering presents more elevated noise, a feature that is inevitable for a high-pass filtering
method. In contrast, the EigenCWD algorithm has significantly reduced noise levels, although with minor rippling artefacts in the 100\({}^{\circ}\) \(\times\) 67\({}^{\circ}\) FOV case, due to the parasitic background noise. To further mitigate these artefacts and improve the image quality, it is possible to include an aperture in front of the metalens to suppress background light, as demonstrated in Fig. S5.
For practical applications, it is crucial to consider the computational demands of image processing techniques such as filters or deconvolution algorithms. A fast processing time, low power consumption, and limited computational resources are important factors for portable and compact devices. In this aspect, Wiener filtering presents an advantage with its fast processing time of \(\sim\) 0.3s for a full RGB image (1K x 1K resolution). However, other filters, such as our developed algorithm that belongs to the category of total variation regularizers [51], can result in improved images with better precision and color reproduction, which comes at a cost of increased computational time (up to \(\sim\)1h for full-color 1.6K x 1.2K pixel image). The application of advanced artificial intelligence techniques could further increase the processing speed of these algorithms.
## Discussion
In conclusion, we have proposed the use of quadratic metalenses on a single chip for multi-color RGB imaging with a large FOV. The system was designed and fabricated using the GaP platform, which is highly suitable for visible range applications and requires only a single lithography layer for its manufacturing, making it highly scalable and cost-effective. The system takes advantage of the internal filters present in a CCD camera, reducing the number of components needed and simplifying implementation. Through PSF analysis and imaging experiments, it was demonstrated that the quadratic metalenses provide large FOV imaging across a relatively broad spectral bandwidth of \(\sim\) 40 nm for individual color channels, attributed to their extended DOF. By using a source with a spectrum matching the metalens working bandwidth, we demonstrate RGB imaging with high-quality color reproduction up to 100\({}^{\circ}\) FOV and well-resolved details even for the raw camera image. Further improvements were made to the details, sharpness, and overall color reproduction through the application of a Wiener filter and our own developed deconvolution algorithm. The concept of the metalens system can be further extended from RGB to multi-spectral applications by using a larger array of metalenses, each operating within a certain bandwidth. This would increase the coverage of the camera color filters and provide even more accurate color reproduction. Our system could also benefit from incorporating industrial-grade cameras with an advanced color response and balance.
When scaling up the solution to larger metalens sizes, it is important to keep in mind a few key points. While the DOF of quadratic metalenses remains identical for the same NA (Eq. 2), the working bandwidth is reduced with increasing focal length by virtue of Eq. 3. Moreover, the lateral chromatic aberrations, which scale linearly with the focal length (refer to Eq. S3), may further distort the PSF for larger metalenses. However, as demonstrated in the Supp. Info. Section S6, these aberrations can be reduced by incorporating a dispersion-engineering technique, thus enabling the practical application of our system solution to millimeter-diameter metalens systems.
Despite some flaws of quadratic metalenses, Engelberg et al. [21] previously showed that the efficiency is still sufficient for outdoor light imaging with an acceptable resolution. Here, we leveraged the extremely high FOV, long depth of focus and broad bandwidth of these lenses and demonstrated a unique solution for simultaneous large FOV and multi-color imaging in real-life scenarios. Moreover, the system is ultracompact, employing a single functional metalens layer, and suitable for large-scale and high-throughput industrial fabrication. Thus, we believe that our results represent a significant step towards the widespread use of flat optics for low-cost, compact, multi-color, large FOV imaging systems, with numerous applications in consumer electronics and beyond.
## Materials and Methods
### Design
The transmission and phase-delay are computed - by means of Lumerical FDTD software - for periodic arrays of GaP nanopillars with identical diameters, arranged in an hexagonal lattice with lattice constant \(p_{i}=260\), 220, 190 nm (for \(i=R\), \(G\), \(B\), respectively), by sweeping over the diameters. For that, a single unit-cell is simulated with periodic boundary conditions in the transversal directions and perfectly matched layers in the longitudinal one. The nanopillars are lying on a \(SiO_{2}\) substrate of refractive index \(n=1.46\), and their height is set to \(H=300\) nm in all cases. A monochromatic plane wave with wavelength \(\lambda_{i}=620\), 530, 460 nm and amplitude equal to unity is simulated coming from the glass substrate and at normal incidence.
### Sample fabrication
Commercially available GaP on \(SiO_{2}\) on sapphire wafers were purchased from Yangzhou Changelight Co. The GaP layer was first thinned down to the design thickness using inductively coupled plasma reactive ion etching (ICP-RIE, Oxford Plasmalab 100) with Cl\({}_{2}\) and N\({}_{2}\) gases. Hydrogen silsesquioxane (HSQ, Dow Corning XR-1541) resist, followed by the current-spreading layer Espacer 300AX01 (Showa Denko), was spin coated on the sample for the electron beam lithography (EBL, Elionix ELS-7000, 100 kV) exposure. All designed RGB metalenses were exposed in one EBL round on the same sample. The resist was developed using tetramethylammonium hydroxide (TMAH), and the resulting mask was used to pattern GaP using the ICP-RIE (Oxford Plasmalab 100) at 200 °C with Cl\({}_{2}\) and N\({}_{2}\) gases. The residual HSQ resist on top of the etched structures was preserved, as the hydrofluoric acid (HF) solution commonly used to lift off the HSQ would damage the \(SiO_{2}\) substrate.
### Optical measurements
The MTF characterization (Supplementary Information, Figure S4) started with the corresponding PSF measurements (Figure 2b). For this, the metalenses were illuminated with a collimated tunable laser source (supercontinuum fiber laser SuperK EXTREME equipped with a tunable single line filter SuperK VARIA), which was placed on a rotation arm. The produced PSF was transferred to a CMOS camera (CS895MU Thorlabs) by a homemade optical microscope (100\(\times\) Olympus plan apo objective with NA = 0.95 and a tube lens with 150 mm focal length). The optical setup is depicted in Supplementary Information, Figure S3. The final MTF plot was
produced by a horizontal slice (along the transversal PSF shift upon the increase of \(\varphi\), x-axis) of PSF 2D Fourier transform.
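For reference, a minimal sketch of this post-processing step is given below: the 2D Fourier transform of a PSF is computed, normalized, and sliced along the x-axis to obtain the MTF. The Gaussian test PSF and the pixel size are placeholders standing in for the measured data.

```python
import numpy as np

def mtf_from_psf(psf, pixel_size):
    """MTF as the normalized magnitude of the PSF 2D Fourier transform,
    sliced along the x-axis (the direction of the transversal PSF shift)."""
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf2d = np.abs(otf) / np.abs(otf).max()
    freqs = np.fft.fftshift(np.fft.fftfreq(psf.shape[1], d=pixel_size))  # cycles/m
    center_row = psf.shape[0] // 2
    return freqs, mtf2d[center_row, :]

# Placeholder PSF: a Gaussian spot sampled on a 256 x 256 grid with 100 nm pixels.
n, pixel_size, sigma = 256, 100e-9, 0.6e-6
x = (np.arange(n) - n // 2) * pixel_size
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

freqs, mtf = mtf_from_psf(psf, pixel_size)
positive = freqs >= 0
print("MTF at 200 cycles/mm:",
      np.interp(200e3, freqs[positive], mtf[positive]).round(3))
```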
In the same way, the MTF simulation (Supplementary Information, Figure S2) was performed by a PSF Fourier transform. Simulated PSFs were obtained by a developed Fourier propagator, which is described elsewhere [49]. The polychromatic PSF was computed by summing up the monochromatic PSFs with a wavelength step of 2 nm.
For the USAF 1951 target imaging (Figure 3), the same laser source was utilized. A rotating diffuser was placed in the optical path to generate spatially incoherent light. The image produced by the metalens was transferred to a CMOS camera (Thorlabs CS895MU) by a homemade optical microscope (100\(\times\) Olympus plan apo objective with NA = 0.95 and a tube lens with 50 mm focal length). The optical setup is depicted in Supplementary Information, Figure S6.
RGB imaging (Figure 4 and Figure 5) was performed using the same optical setup with color CMOS camera (Thorlabs CS895CU). The validation of the color reproduction was done by CIELAB metric. To convert RGB intensity values to \(L\)*\(a\)*\(b\)* three-dimensional space, a standard software (ImageJ) was utilized.
The focusing efficiency characterization was performed by the same PSF data, measured for the MTF analysis. PSF intensity for all angles of incidence \(\varphi\) and bandwidth \(\Delta\lambda\) were integrated within a pinhole with a fixed diameter of 6 \(\mu\)m to account for the spot broadening. Then, the result was divided by a reference PSF intensity. As the reference lens, an antireflective achromatic doublet (Thorlabs AC254-030-AB-ML) was used. To be consistent with our previous publication [49], the final efficiency was scaled to be the ratio between the focusing energy and the light energy transmitted through the effective working lens area (diameter of _2f_).
The Wiener filter is a deconvolution technique based on the assumption of linear shift-invariant image formation. The ideal image \(i(x,\,y)\) is blurred by _PSF_ (\(x,\,y\)) of an optical system and corrupted by uncorrelated noise \(n(x,\,y)\). The resulting real image reads: \(r(x,\,y)=s(x,\,y)+n(x,\,y)\), where \(s(x,\,y)=i(x,\,y)\,*PSF\) (\(x,\,y)\) is blurred image in the absence of the noise. To retrieve \(i(x,\,y)\), one needs to deconvolve measured \(r(x,\,y)\) in the Fourier domain applying a proper filtering function to account for noise. This can be written as: \(I(u,\,v)=\frac{R(u,v)F(u,v)}{FPSF(u,v)}\), where \(I(u,\,v)\),\(R(u,\,v)\) and _FPSF_ (\(u,\,v)\) are Fourier transforms of \(i(x,\,y)\), \(r(x,\,y)\) and _PSF_ (\(x,\,y)\), respectively. \(F\) (\(u,\,v\)) is the applied filtering function. Minimizing \(\sum_{u,v}(I(u,v)-\frac{S(u,v)}{FPSF(u,v)})^{2}\) one ends up with final formula in Fourier domain:
\[I(u,v)=\frac{R(u,v)}{FPSF(u,v)\left(1+\left|\frac{N(u,v)}{S(u,v)}\right|^{2}\right)} \tag{5}\]
where, \(N(u,\,v)\) and \(S(u,\,v)\) are Fourier transforms of the noise \(n(x,\,y)\) and blurred image \(s(x,\,y)\), respectively. Applying the Wiener filter in Figures 4 and 5, the power spectrum ratio \(\left|\frac{N(u,v)}{S(u,v)}\right|^{2}\) was heuristically optimized. The Wiener filtering per channel image took \(\sim\)100 ms in Figures 4b and 5b (1020x1000 pixels), \(\sim\)300 ms in Figures 4c and 5c with larger FOV (1616x1160 pixels).
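A minimal per-channel sketch of such a deconvolution is shown below. It uses the common conjugate-stabilized form of the Wiener filter with a constant noise-to-signal power ratio, which plays the role of the heuristically optimized \(\left|N(u,v)/S(u,v)\right|^{2}\) term above; the exact filter applied to Figures 4 and 5 may differ in these implementation details.

```python
import numpy as np

def pad_and_center(psf, shape):
    """Zero-pad the PSF to `shape` and circularly shift its peak to index (0, 0)."""
    padded = np.zeros(shape)
    padded[: psf.shape[0], : psf.shape[1]] = psf
    return np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Wiener deconvolution of one color channel with a constant
    noise-to-signal power ratio `nsr` (heuristically chosen)."""
    H = np.fft.fft2(pad_and_center(psf / psf.sum(), blurred.shape))
    R = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # conjugate-stabilized Wiener filter
    return np.real(np.fft.ifft2(W * R))

# Toy usage: blur a random "scene" with a Gaussian PSF, then restore it.
rng = np.random.default_rng(0)
scene = rng.random((128, 128))
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                               np.fft.fft2(pad_and_center(psf, scene.shape))))
restored = wiener_deconvolve(blurred, psf, nsr=1e-4)
print("mean absolute restoration error:", np.abs(restored - scene).mean().round(4))
```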
## References
* [1] X. Luo, _Engineering Optics 2.0: A Revolution in Optical Theories, Materials, Devices and Systems_ (Springer, 2019).
* [2] S. Banerji, _et al._, Imaging with flat optics: metalenses or diffractive lenses?, _Optica_**6**, 805-810 (2019).
* [3] J. Engelberg, U. Levy, The advantages of metalenses over diffractive lenses, _Nature Communications_**11**, 1-4 (2020).
* [4] D. A. Buralli, G. M. Morris, Design of diffractive singlets for monochromatic imaging, _Applied Optics_**30**, 2151-2158 (1991).
* [5] J. M. Finlan, K. M. Flood, R. J. Bojko, Efficient f/1 binary-optics microlenses in fused silica designed using vector diffraction theory, _Optical Engineering_**34**, 3560-3564 (1995).
* [6] R. Menon, D. Gil, H. I. Smith, Experimental characterization of focusing by high-numerical-aperture zone plates, _JOSA A_**23**, 567-571 (2006).
* [7] A. Arbabi, Y. Horie, A. J. Ball, M. Bagheri, A. Faraon, Subwavelength-thick lenses with high numerical apertures and large efficiency based on high-contrast transmitarrays, _Nature Communications_**6**, 1-6 (2015).
* [8] M. Khorasaninejad, _et al._, Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging, _Science_**352**, 1190-1194 (2016).
* [9] P. Lalanne, P. Chavel, Metalenses at visible wavelengths: past, present, perspectives, _Laser & Photonics Reviews_**11**, 1600295 (2017).
* [10] R. Paniagua-Dominguez, _et al._, A metalens with a near-unity numerical aperture, _Nano Letters_**18**, 2124-2132 (2018).
* [11] P. R. West, _et al._, All-dielectric subwavelength metasurface focusing lens, _Optics Express_**22**, 26212-26221 (2014).
* [12] M. Khorasaninejad, _et al._, Polarization-insensitive metalenses at visible wavelengths, _Nano Letters_**16**, 7229-7234 (2016).
[13] N. K. Emani, _et al._, High-efficiency and low-loss gallium nitride dielectric metasurfaces for nanophotonics at visible wavelengths, _Applied Physics Letters_**111**, 221101 (2017).
[14] E. Hasman, V. Kleiner, G. Biener, A. Niv, Polarization dependent focusing lens by use of quantized pancharatnam-berry phase diffractive optics, _Applied Physics Letters_**82**, 328-330 (2003).
[15] D. Lin, P. Fan, E. Hasman, M. L. Brongersma, Dielectric gradient metasurface optical elements, _Science_**345**, 298-302 (2014).
[16] X. Ding, _et al._, Ultrathin pancharatnam-berry metasurface with maximal cross-polarization efficiency, _Advanced Materials_**27**, 1195-1200 (2015).
[17] H. Liang, _et al._, Ultrahigh numerical aperture metalens at visible wavelengths, _Nano Letters_**18**, 4460-4466 (2018).
[18] M. Decker, _et al._, High-efficiency dielectric huygens' surfaces, _Advanced Optical Materials_**3**, 813-820 (2015).
[19] Y. F. Yu, _et al._, High-transmission dielectric metasurface with 2\(\pi\) phase control at visible wavelengths, _Laser & Photonics Reviews_**9**, 412-418 (2015).
[20] A. Ozdemir, Z. Hayran, Y. Takashima, H. Kurt, Polarization independent high transmission, large numerical aperture laser beam focusing and deflection by dielectric huygens' metasurfaces, _Optics Communications_**401**, 46-53 (2017).
[21] J. Engelberg, _et al._, Near-IR wide-field-of-view huygens metalens for outdoor imaging applications, _Nanophotonics_**9**, 361-370 (2020).
[22] R. Colom, _et al._, Crossing of the branch cut: the topological origin of a universal 2 \(\pi\)-phase retardation in non-hermitian metasurfaces, _arXiv preprint arXiv:2202.05632_ (2022).
[23] N. Yu, _et al._, Light propagation with phase discontinuities: generalized laws of reflection and refraction, _Science_**334**, 333-337 (2011).
[24] S. Martellucci, A. N. Chester, _Diffractive optics and optical microsystems_ (Springer Science & Business Media, 1997).
[25] D. W. Sweeney, G. E. Sommargren, Harmonic diffractive lenses, _Applied Optics_**34**, 2469-2475 (1995).
[26] D. Faklis, G. M. Morris, Spectral properties of multiorder diffractive lenses, _Applied Optics_**34**, 2462-2468 (1995).
[27] N. Mohammad, M. Meem, B. Shen, P. Wang, R. Menon, Broadband imaging with one planar diffractive lens, _Scientific Reports_**8**, 1-6 (2018).
[28] M. Meem, _et al._, Broadband lightweight flat lenses for long-wave infrared imaging, _Proceedings of the National Academy of Sciences_**116**, 21375-21378 (2019).
[29] J. Engelberg, U. Levy, Achromatic flat lens performance limits, _Optica_**8**, 834-845 (2021).
[30] F. Aieta, M. A. Kats, P. Genevet, F. Capasso, Multiwavelength achromatic metasurfaces by dispersive phase compensation, _Science_**347**, 1342-1345 (2015).
[31] Z. Shi, _et al._, Single-layer metasurface with controllable multiwavelength functions, _Nano Letters_**18**, 2420-2427 (2018).
[32] S. Shrestha, A. C. Overvig, M. Lu, A. Stein, N. Yu, Broadband achromatic dielectric metalenses, _Light: Science & Applications_**7**, 1-11 (2018).
[33] W. T. Chen, _et al._, A broadband achromatic metalens for focusing and imaging in the visible, _Nature Nanotechnology_**13**, 220-226 (2018).
[34] S. Wang, _et al._, A broadband achromatic metalens in the visible, _Nature Nanotechnology_**13**, 227-232 (2018).
[35] Z.-B. Fan, _et al._, A broadband achromatic metalens array for integral imaging in the visible, _Light: Science & Applications_**8**, 1-10 (2019).
[36] F. Presutti, F. Monticone, Focusing on bandwidth: achromatic metalens limits, _Optica_**7**, 624-631 (2020).
[37] W. T. Chen, A. Y. Zhu, F. Capasso, Flat optics with dispersion-engineered metasurfaces, _Nature Reviews Materials_**5**, 604-620 (2020).
[38] Z. Li, _et al._, Meta-optics achieves rgb-achromatic focusing for virtual reality, _Science Advances_**7**, eabe4458 (2021).
[39] M. Decker, _et al._, Imaging performance of polarization-insensitive metalenses, _ACS Photonics_**6**, 1493-1499 (2019).
[40] H. Liang, _et al._, High performance metalenses: numerical aperture, aberrations, chromaticity, and trade-offs, _Optica_**6**, 1461-1470 (2019).
[41] F. Aieta, P. Genevet, M. Kats, F. Capasso, Aberrations of flat lenses and aplanatic metasurfaces, _Optics Express_**21**, 31530-31539 (2013).
[42] A. Kalvach, Z. Szabó, Aberration-free flat lens design for a wide range of incident angles, _JOSA B_**33**, A66-A71 (2016).
[43] M. Y. Shalaginov, _et al._, Single-element diffraction-limited fisheye metalens, _Nano Letters_**20**, 7429-7437 (2020).
[44] C.-Y. Fan, C.-P. Lin, G.-D. J. Su, Ultrawide-angle and high-efficiency metalens in hexagonal arrangement, _Scientific Reports_**10**, 1-9 (2020).
[45] A. Arbabi, _et al._, Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations, _Nature Communications_**7**, 1-9 (2016).
[46] B. Groever, W. T. Chen, F. Capasso, Meta-lens doublet in the visible region, _Nano Letters_**17**, 4902-4907 (2017).
[47] M. Pu, X. Li, Y. Guo, X. Ma, X. Luo, Nanoapertures with ordered rotations: symmetry transformation and wide-angle flat lensing, _Optics Express_**25**, 31471-31477 (2017).
[48] A. Martins, _et al._, On metalenses with arbitrarily wide field of view, _ACS Photonics_**7**, 2073-2079 (2020).
[49] E. Lassalle, _et al._, Imaging properties of large field-of-view quadratic metalenses and their applications to fingerprint detection, _ACS Photonics_**8**, 1457-1468 (2021).
[50] S. Banerji, M. Meem, A. Majumder, B. Sensale-Rodriguez, R. Menon, Extreme-depth-of-focus imaging with a flat lens, _Optica_**7**, 214-217 (2020).
[51] S. Colburn, A. Zhan, A. Majumdar, Metasurface optics for full-color computational imaging, _Science Advances_**4**, eaar2114 (2018).
[52] E. Bayati, _et al._, Inverse designed extended depth of focus meta-optics for broadband imaging in the visible, _Nanophotonics_**11**, 2531-2540 (2022).
[53] Y. Liu, _et al._, Broadband behavior of quadratic metalenses with a wide field of view, _Optics Express_**30**, 39860-39867 (2022).
[54] E. Khaidarov, _et al._, Large-scale vivid metasurface color printing using advanced 12-in. immersion photolithography, _Scientific Reports_**12**, 14044 (2022).
[55] G. Yoon, K. Kim, D. Huh, H. Lee, J. Rho, Single-step manufacturing of hierarchical dielectric metalens in the visible, _Nature Communications_**11**, 2268 (2020).
[56] V. J. Einck, _et al._, Scalable nanoimprint lithography process for manufacturing visible metasurfaces composed of high aspect ratio tio2 meta-atoms, _ACS Photonics_**8**, 2400-2409 (2021).
[57] J. Engelberg, U. Levy, Optimizing the spectral range of diffractive metalenses for polychromatic imaging applications, _Optics Express_**25**, 21637-21651 (2017).
[58] M. Melli, _et al._, Gallium phosphide optical metasurfaces for visible light applications, _Scientific Reports_**10**, 1-7 (2020).
[59] B. Dugonik, A. Dugonik, M. Marovt, M. Golob, Image quality assessment of digital image capturing devices for melanoma detection, _Applied Sciences_**10**, 2876 (2020).
### Acknowledgements
The authors would like to thank Haizhong Zhang and Leonid Krivitsky (IMRE, A*STAR) for helping with procurement of GaP thin films on SiO\({}_{2}\)/sapphire substrates.
### Funding
This work was supported by National Research Foundation of Singapore under Grant No. NRF-NRFI2017-01,
IET A F Harvey Engineering Research Prize 2016,
AME Programmatic Grant No. A18A7b0058 (Singapore),
J.Y. is funded by the A*Star Graduate Scholarship,
J.Y. and N.D.L. thank the IT staff at Centre for Bio-imaging Sciences, National University of Singapore, for their support.
### Author contributions
A.V.B. performed all the optical measurements, MTF simulation and wrote the first draft. E.K. performed the metalenses nanofabrication, D.E. did the SEM characterization. E.L. designed the metalenses and made the FDTD simulations. J.Y. and N.D.L. developed the deconvolution algorithm for the image reconstruction. A.V.B., R. P-D. and A. I. K. conceived the idea. R. P-D. and A. I. K. supervised the research. All authors analyzed the data and read and corrected the manuscript.
### Competing interest
The authors declare no competing financial interests.
### Data and materials availability
All data are available in the main text or the Supplementary Information.
Figure 1: **RGB imaging with large FOV quadratic metalenses.****a** An artistic schematic of the device. The light coming from an object is focused on a color CCD camera by three quadratic metalenses on the same chip. Each channel produces R, G and B images which subsequently are merged to produce a final RGB image. **b-c** Simulated phase and transmission values versus duty cycle (ratio between the nanopillar diameters and the lattice constant) for periodic arrays of GaP nanopillars with fixed height of 300 nm used for R,G and B metalenses design (depicted as red, green and blue lines, respectively). **d** Optical microscope (top panel in false colors, the scale bars correspond to 20 \(\mu\)m) and SEM images (bottom panel, the scale bars correspond to 200 nm) of the fabricated metalenses. The parameters \(p_{R}=260\) nm, \(p_{G}=220\) nm and \(p_{B}=190\) nm denote the designed and fabricated lattice periods.
Figure 2: **PSF measurement analysis of the quadratic metalens in red channel.****a** Measurement schematics. **b** Measured polychromatic PSF for incident angles \(\varphi\) and light source bandwidths \(\Delta\lambda=10\) nm to 40 nm. The scale bars correspond to 2 \(\mu\)m.
Figure 3: **USAF 1951 test target simulation and characterization with metalenses in red channel.****a** The monochromatic imaging simulations for hyperbolic (top panel) and quadratic (bottom panel) phase profile metalenses for \(\varphi=0^{\circ}\). The designed central wavelength and diameter are 620 nm and 200 \(\mu\)m for both lenses. The scale bars correspond to 10 \(\mu\)m for hyperbolic and to 5 \(\mu\)m for quadratic phase profiles, respectively. **b** Sketch of the optical setup. **c** Imaging with quadratic metalens of the element 2, group -2, for various angles of incidence and source bandwidths.
Figure 4: **RGB imaging of ColorChecker chart.****a** The original picture. **b,c** The results of RGB imaging for FOV of 30\({}^{\circ}\) x 20\({}^{\circ}\) and 100\({}^{\circ}\) x 67\({}^{\circ}\). R, G and B channels denote raw images, produced by each metalens. RGB image indicates the result of the channel fusion.
Figure 5: **RGB imaging of a colorful picture.****a** The original picture. **b,c** The results of RGB imaging for FOV of 50\({}^{\circ}\) x 35\({}^{\circ}\) and 100\({}^{\circ}\) x 67\({}^{\circ}\), respectively. R, G and B channels denote raw images, produced by each metalens. RGB image indicates the result of the channel fusion. Finally, Wiener filter and EigenCWD denote the images reconstructed by applying Wiener filtering and our deconvolution algorithm, respectively.
Supplementary Materials for
**Large Field-of-View and Multi-Color Imaging with GaP Quadratic Metalenses**
Anton V. Baranikov\({}^{\dagger 1}\), Egor Khaidarov\({}^{\ddagger 1}\), Emmanuel Lassalle\({}^{1}\), Damien Eschimese\({}^{1}\), Joel Yeo\({}^{1,2,3}\), N. Duane Loh\({}^{2,4}\), Ramon Paniagua-Dominguez\({}^{*}\)\({}^{1}\), and Arseniy I. Kuznetsov\({}^{\dagger 1}\)
\({}^{1}\)Institute of Materials Research and Engineering (IMRE), Agency for Science, Technology and Research (A*STAR), 2 Fusionopolis Way, Innovis # 08-03, Singapore 138634
\({}^{2}\)Department of Physics, National University of Singapore, Singapore 117551
\({}^{3}\)Integrative Sciences and Engineering Programme, NUS Graduate School, National University of Singapore, Singapore 119077
\({}^{4}\)Department of Biological Sciences, National University of Singapore, Singapore 117557
\({}^{\ddagger}\)These authors contributed equally to this work
\({}^{*}\)Corresponding author: Ramon [email protected]
\({}^{\dagger}\)Corresponding author: Arseniy [email protected]
**This PDF file includes:**
Supplementary Text (6 sections)
Figs. S1 to S12
Tables S1
References (1 to 11)
**Figure S1: Measured refractive index of GaP.** Real (n, green curve, left vertical axis) and imaginary (k, blue curve, right vertical axis) parts of the refractive index.
**Figure S2: MTF and PSF simulations of a lens with an ideal quadratic phase profile designed to work in the red (R) region.** (Central wavelength \(\lambda_{R}=620\) nm) **a-c** Simulated MTF for angles of incidence \(\varphi=0^{\circ}\), \(30^{\circ}\), \(50^{\circ}\) and \(\Delta\lambda=10\) nm, \(20\) nm, \(30\) nm, \(40\) nm. The diffraction-limited MTF is given for similar \(NA=0.5\) and \(\Delta\lambda=0\) nm. **d-f** Simulated PSF for the same conditions. The scale bars correspond to \(2\ \mu m\).
## 1 Lateral chromatic aberrations
We demonstrate here the lateral shift difference \(\Delta x\) of the PSF position between two different wavelengths \(\lambda_{1}\) and \(\lambda_{2}\). A collimated beam with wavelength \(\lambda_{1}\) impinging a quadratic metalens at an angle of incidence \(\varphi\) (in the \(xz\) plane, \(z\) being the optical axis), in addition to phase delay given by Eq. (4) in the main text, accumulates an extra phase delay \(k_{1}\,x\)sin(\(\varphi\)) where \(k_{1}=2\pi/\lambda_{1}\) due to the oblique incidence, compared to a beam coming at normal incidence. Then, the total phase acquired by the beam after the metalens is:
\[\Phi_{1}(r)=\Phi_{1}(0)-k_{1}\frac{r^{2}}{2f}+k_{1}\,x\sin(\varphi)=\Phi_{1}(0)-\frac{k_{1}}{2f}\left[\left(x-f\sin(\varphi)\right)^{2}+y^{2}\right]+k_{1}\frac{f}{2}\,\sin^{2}(\varphi) \tag{S.1}\]
where \(r=\sqrt{x^{2}+y^{2}}\).
Similarly, a collimated beam with wavelength \(\lambda_{2}\), assuming that this wavelength experiences the same quadratic phase profile as \(\lambda_{1}\) (which means that we neglect material dispersion here), accumulates an extra phase delay \(k_{2}\,x\)sin(\(\varphi\)) where \(k_{2}=2\pi/\lambda_{2}\):
\[\Phi_{2}(r)=\Phi_{1}(0)-k_{1}\frac{r^{2}}{2f}+k_{2}\,x\sin(\varphi)=\Phi_{1}(0)-\frac{k_{1}}{2f}\left[\left(x-\frac{k_{2}}{k_{1}}f\sin(\varphi)\right)^{2}+y^{2}\right]+\frac{k_{2}^{2}}{k_{1}}\frac{f}{2}\,\sin^{2}(\varphi) \tag{S.2}\]
Since in Eqs. (S.1) and (S.2) all the terms independent of the position variables \(x\) and \(y\) can be omitted (that is the first and last terms in the right hand side of Eqs. (S.1) and (S.2)), these two equations only differ by the terms \(x_{1}\equiv f\sin(\varphi)\) and \(x_{2}\equiv\frac{k_{2}}{k_{1}}f\sin(\varphi)\), respectively, which correspond to a lateral shift of the PSF in the \(x\)-direction due to the oblique incidence [1]. Hence, the lateral shift difference given by \(\Delta x=x_{2}-x_{1}\) reads:
\[\Delta x=\frac{\Delta k}{k_{1}}f\sin(\varphi)=-\frac{\Delta\lambda}{\lambda_{2}}f\sin(\varphi) \tag{S.3}\]
with \(\Delta k=k_{2}-k_{1}\) and \(\Delta\lambda=\lambda_{2}-\lambda_{1}\).
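To gauge the magnitude of this lateral shift, the short sketch below evaluates Eq. (S.3) for the red-channel lens (\(f\) = 83 \(\mu\)m) between the edges of a 40 nm bandwidth; the specific numbers are illustrative only.

```python
import numpy as np

# Lateral PSF shift between the two band edges, Eq. (S.3):
# dx = -(dlambda / lambda2) * f * sin(phi).
f = 83e-6                      # focal length (m)
lam1, lam2 = 600e-9, 640e-9    # edges of a ~40 nm bandwidth around 620 nm
dlam = lam2 - lam1

for phi_deg in (0, 30, 50):
    phi = np.radians(phi_deg)
    dx = -(dlam / lam2) * f * np.sin(phi)
    print(f"phi = {phi_deg:2d} deg: lateral shift = {abs(dx)*1e6:.2f} um")
```

The shift vanishes at normal incidence and grows to a few micrometers at large angles, which is consistent with the PSF broadening observed only for oblique incidence.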
## 2 MTF analysis
Modulation transfer function (MTF) analysis is a fundamental tool for evaluating imaging performance [2,3]. For this, we measure all R, G and B metalenses using the setup illustrated in Fig. S3. We illuminate them with a collimated laser beam, centred at the 620 nm, 530 nm or 460 nm wavelength, image the point spread function (PSF) and compute its 2D Fourier transform. Since we are interested in large FOV RGB imaging, we conduct the experiment varying the laser angle of incidence (\(\varphi\)) and bandwidth (\(\Delta\lambda\)). Fig. S4 presents the measured polychromatic MTF for the R (a-c), G (d-f) and B (g-i) channels for \(\varphi\) = 0\({}^{\circ}\), 30\({}^{\circ}\), 50\({}^{\circ}\) and \(\Delta\lambda\) = 10 nm, 20 nm, 30 nm, 40 nm. One can see in Fig. S4 a,d,g that the MTF is almost insensitive to the bandwidth for \(\varphi\) = 0\({}^{\circ}\). This confirms the robustness of quadratic metalenses against _axial_ chromatic aberrations. In Fig. S4 b,e,h and c,f,i one can see that the MTF starts to slightly degrade for \(\varphi\) = 30\({}^{\circ}\) and \(\varphi\) = 50\({}^{\circ}\), following the PSF broadening. The experimental MTFs (Fig. S4 a,b,c) are in excellent agreement with the simulations (Fig. S2 a,b,c) for the R channel.
Fig. S5 compares simulated MTFs with and without an aperture stop. The aperture is located at the front focal plane of the lens. The simulations are done for the example of the red quadratic metalens (\(\lambda_{R}\) = 620 nm), and with a source bandwidth of 10 nm. The MTFs are significantly improved by the aperture stop for both the \(\varphi\) = 0\({}^{\circ}\) and \(\varphi\) = 30\({}^{\circ}\) cases.
Figure S4: MTF analysis of the quadratic metalenses in R, G and B channels. a-c Measured polychromatic MTF for R metalens for incident angles \(\varphi\) and source bandwidths \(\Delta\lambda\) = 10 nm, 20 nm, 30 nm, 40 nm. d-f, g-i Measured polychromatic MTF for G and B metalens correspondingly. The diffraction-limited MTF is given for similar _NA_ = 0.48 and \(\Delta\lambda\) = 0 nm.
Figure S5: MTF with aperture stop. Comparison between simulated MTFs with and without the aperture stop. The simulations are done for the example of red quadratic metalens (\(\lambda_{R}\) = 620 nm), and with a source bandwidth of 10 nm.
## 3 Correlation of USAF 1951 imaging with MTF measurements
The studied target element (number 2, group -2) has a spatial frequency of 0.28 cycles/mm in the object plane. The produced element image is squeezed by virtue of the R metalens demagnification. Importantly, the demagnification depends on the angle of view \(\varphi\) due to the barrel distortions, leading to a larger compression towards higher \(\varphi\). The distance \(x\) between the object point and the optical axis is related to the angle of view \(\varphi\) as \(x=d\tan\varphi\). Since the object is placed far from the metalens (at a distance \(d\)), light coming from the object point can be approximated as a plane wave. Then, the metalens produces the image of this point located at \(x^{\prime}=f\sin\varphi\), which is a fundamental property of the quadratic phase profile. Next, we make a Taylor expansion of \(x^{\prime}\) and \(x\) around \(\varphi\), validated by the small angular spread (\(\Delta\varphi<4.5^{\circ}\) or 0.08 rad). The increments \(dx^{\prime}\) and \(dx\) depend on the angular increment \(d\varphi\) as:
\[dx\approx d\cdot\frac{d\varphi}{\cos^{2}\varphi} \tag{S.4}\]
\[dx^{\prime}\approx f\cdot\cos\varphi\,d\varphi \tag{S.5}\]
From these equations we obtain the relation between \(dx^{\prime}\) and \(dx\), meaning the metalens demagnification:
\[\frac{dx^{\prime}}{dx}\approx\frac{f}{d}\cdot\cos^{3}\varphi \tag{S.6}\]
Note that for \(\varphi=0^{\circ}\) the demagnification is equal to \(f/d\), which is the conventional expression in the paraxial approximation. Finally, considering that spatial frequencies in the image and object planes are inversely proportional to \(dx^{\prime}\) and \(dx\), we calculate that the object-plane frequency of 0.28 cycles/mm translates to \(\sim 159\), \(\sim 244\) and \(\sim 602\) cycles/mm for \(\varphi\) = 0\({}^{\circ}\), 30\({}^{\circ}\), 50\({}^{\circ}\), respectively.
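The sketch below reproduces this frequency translation: relative to normal incidence, the image-plane spatial frequency scales as \(1/\cos^{3}\varphi\) following Eq. (S.6), with the normal-incidence value of \(\sim 159\) cycles/mm taken as the reference (small deviations from the quoted values stem from rounding of that reference).

```python
import numpy as np

# Image-plane spatial frequency versus angle of view, from the angle-dependent
# demagnification dx'/dx ~ (f/d) * cos^3(phi) of Eq. (S.6).
nu_normal = 159.0  # cycles/mm in the image plane at phi = 0 (reference value)

for phi_deg in (0, 30, 50):
    phi = np.radians(phi_deg)
    nu = nu_normal / np.cos(phi) ** 3
    print(f"phi = {phi_deg:2d} deg: ~{nu:.0f} cycles/mm in the image plane")
```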
To quantify the contrast in each of the images of Fig. 3c of the main text, we calculate the contrast transfer function (CTF), which describes the modulation of a square wave grating as a function of frequency. These values are shown in Fig. S7 for all resolved cases, together with the corresponding MTF values, calculated using the Coltman formula, which relates the CTF to the MTF [4]. One can see that at normal incidence (\(\varphi=0^{\circ}\)), the contrast is almost insensitive to the bandwidth, while for oblique incidence (\(\varphi=30^{\circ}\)), the contrast decreases as the bandwidth increases, which corroborates the observations made on the PSF broadening and MTF quality.
Having calculated the spatial frequencies, we can correlate the measured MTF (Fig. S4a-c) with that extracted from the target element imaging (Figure 3c of the main text). Note that one should take the corresponding MTF values at \(\sim 159\) cycles/mm for \(\varphi=0^{\circ}\) (Fig. S4a), at \(\sim 244\) cycles/mm for \(\varphi=30^{\circ}\) (Fig. S4b) and at \(\sim 602\) cycles/mm for \(\varphi=50^{\circ}\) (Fig. S4c). Table 1 summarizes the comparison. One can clearly see a good match, though the
values extracted from the imaging (blue color) are slightly elevated. We attribute this to uncertainties in MTF measurements and inaccuracy in the focal plane determination.
Figure S8: **Camera RGB color filters.** Relative response for the color camera sensor’s red, green and blue pixels. The shaded grey region above 650 nm represents wavelengths blocked by the filter. The camera model is Kiralux 8.9 MP CMOS Compact Scientific Cameras from Thorlabs.Image used with the permission from Thorlabs.
## 4 Color error calculations
To quantify the color reproduction, we make use of the CIELAB metric, which is used to determine the color error based on human vision perception [5]. In this case, RGB intensity values for both reference and measured images are converted to luminance (\(L^{*}\)), color relation in red-green (\(a^{*}\)) and color relation in yellow-blue (\(b^{*}\)). Then, the color error \(\Delta E^{*}\) is calculated as geometric distances in the \(L^{*}\)\(a^{*}\)\(b^{*}\) three-dimensional space according to:
\[\Delta E^{*}=\sqrt{(\Delta L^{*})^{2}+(\Delta a^{*})^{2}+(\Delta b^{*})^{2}}\tag{S.7}\]
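For reference, a minimal Python sketch of the color-error computation is given below. It assumes the standard sRGB (D65) conversion to \(L^{*}a^{*}b^{*}\), which may differ in detail from the exact pipeline used here, and the example patch values are hypothetical.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triplet (values in [0, 1]) to CIELAB, assuming a D65 white point."""
    rgb = np.asarray(rgb, dtype=float)
    # Inverse sRGB gamma (linearization).
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> CIE XYZ (sRGB primaries, D65).
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ lin
    # Normalize by the D65 white point and apply the CIELAB nonlinearity.
    xyz_n = xyz / np.array([0.95047, 1.0, 1.08883])
    delta = 6.0 / 29.0
    f = np.where(xyz_n > delta ** 3, np.cbrt(xyz_n), xyz_n / (3 * delta ** 2) + 4.0 / 29.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e(rgb_ref, rgb_meas):
    """Color error Delta E* of Eq. (S.7): Euclidean distance in L*a*b* space."""
    return np.linalg.norm(srgb_to_lab(rgb_ref) - srgb_to_lab(rgb_meas))

# Example with two arbitrary patches (hypothetical values, for illustration only).
print(delta_e([0.8, 0.2, 0.2], [0.7, 0.25, 0.2]))
```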
For the FOV of 30\({}^{\circ}\) x 20\({}^{\circ}\), the color error \(\Delta E^{*}\) (presented in Fig. S10a) is found to lie in the range between 5 and 23, varying from patch to patch. Panels b and c of Fig. S10 show the color errors for the 100\({}^{\circ}\) x 67\({}^{\circ}\) FOV raw image and after the intensity correction. To implement the intensity correction procedure in each R, G and B channel, we characterize the efficiency of the quadratic metalenses and use the result as a calibration curve.
Figure S10: **CIELAB color reproduction assessment.** **a** The color error \(\Delta E^{*}\) for the ColorChecker RGB imaging with FOV of 30\({}^{\circ}\) x 20\({}^{\circ}\) and **b** 100\({}^{\circ}\) x 67\({}^{\circ}\). \(\Delta E^{*}\) is given as the geometric difference in the _L*a*b*_ three-dimensional space: \(\Delta E^{*}=\sqrt{(\Delta L^{*})^{2}+(\Delta a^{*})^{2}+(\Delta b^{*})^{2}}\). **c** The color error (left panel) and the obtained RGB image (right panel) after the intensity correction procedure for FOV of 100\({}^{\circ}\) x 67\({}^{\circ}\).
## 5 EigenCWD Spatially-Varying Deconvolution
The spatially-varying deconvolution algorithm used in our paper seeks a solution, \(\mathbf{u}\), which minimizes the objective function of Ref. [6]. The eigenPSF decomposition, in turn, allows us to modify and improve the traditional CWD-SVD deconvolution through more efficient computation, especially for larger image sizes. It also provides a smooth interpolation of all off-axis PSFs for all spatial points in the image from just a small sample of PSFs.
We proposed and developed an algorithm, which we call the "EigenCWD algorithm"; it is the amalgamation of these two methods (the CWD-SVD deconvolution and the eigenPSF decomposition) and enables the spatially-varying deconvolution of the image demonstrated in this article.
The algorithm consists of the following steps:
(i) One first generates a grid of sampled PSFs, such as the 7\(\times\)5 grid shown in Fig. S12 for example. (ii) The eigenPSFs and eigencoefficients are then computed in a similar fashion as in the eigenPSF decomposition method.
(iii) Next, the computed eigenPSFs and eigencoefficients, as well as the measured image to be deblurred, are sent as inputs into the EigenCWD algorithm to output a deblurred image. For example, in the case treated in this paper (Fig. 5 in the main text), the algorithm requires about 150 iterations to converge.
(iv) Further fine-tuning can be done by using a finer PSF grid to compute a larger basis of eigenPSFs and eigencoefficients, and reiterating the EigenCWD algorithm on the previously obtained output. For example, the reconstruction in Fig. 5 in the main text is the result after another 150 iterations using the 11 \(\times\) 7 grid.
For the image reconstruction (Fig. 5 in the main text), the point spread functions (PSFs) of the R, G and B metalenses were simulated on a 3001 \(\times\) 3001 simulation grid, with a pixel pitch of 82 nm (which corresponds to the original camera's pixel pitch of 3.45 \(\mu\)m divided by the 42x magnification of the microscope objective). For each sampled point on the object, a monochromatic spherical wave was propagated towards the lens, modulated by the lens function, and then propagated a distance of 77 \(\mu\)m (78 \(\mu\)m for the blue channel) to match our imaging plane distance from the lens (note that these numbers are slightly smaller than the focal distance of 83 \(\mu\)m, and are chosen to correspond to the maximum intensity position, which does not exactly coincide with the focal distance due to the spherical aberrations of the metalens; we made a similar observation in [10]). To create broadband PSFs (as is the case in the experiment), we generated several monochromatic PSFs across a bandwidth of 40 nm with an interval of 2 nm, centered around the color channel's wavelength. Lastly, since the quadratic metalenses impart a barrel distortion onto the images formed, we corrected for the barrel distortions in both the measured images and the simulated PSFs before performing the EigenCWD deconvolution. The overall calculation time for a full-color image does not scale only with the input image resolution, but also depends implicitly on other factors such as the PSF grid size and fineness. For example, a 1600 x 1200 image takes \(\sim 1\) h, while a 3400 x 2500 image takes \(\sim 11\) h. Workstation specifications: 28 CPU cores, 2 x E5-2690 v4 @ 2.60GHz; 512GB RAM; 4 GTX TITAN 12GB VRAM.
Figure S12: Sampled broadband PSFs for red (620 nm), green (530 nm) and blue (460 nm) for (top row) 50\({}^{\circ}\times\)35\({}^{\circ}\) FOV and (bottom row) 100\({}^{\circ}\times\)67\({}^{\circ}\) FOV. The barrel distortion has already been removed for the PSFs shown. The 7\(\times\)5 grid for each color channel implies an even sampling of 7 and 5 points along the vertical and horizontal span of the object, respectively. For example, the top left corner of each 7\(\times\)5 grid represents the PSF of a point source at the top left corner of the object to be imaged at the specified FOV.
## 6 Dispersion-engineering method for scaling-up the system
The metalens bandwidth limits derived in Ref. [11] were obtained for a metalens incorporating a hyperbolic phase profile. In the case of a quadratic phase profile such as the one used in this work, following a derivation similar to that of Ref. [11] leads to the following metalens bandwidth limit:
\[\Delta\omega\leq\frac{2\kappa c}{f}\frac{(1-\text{NA}^{2})}{\text{NA}^{2}}\tag{S.9}\]
where \(\kappa\) is given by Tucker's limit in the case of nanopillars acting as waveguides (used to impart locally a certain phase delay via waveguiding) [11]:
\[\kappa=\frac{\omega_{\text{c}}}{c}H(n_{\text{max}}-n_{\text{b}})\tag{S.10}\]
with \(\omega_{\text{c}}\) the central frequency of the bandwidth \(\Delta\omega\), \(H\) the height of the nanopillars, and \(n_{\text{max}}\) and \(n_{\text{b}}\) the refractive indices of the nanopillars and of the background medium, respectively.
By combining Eqs. (S.9) and (S.10), one obtains:
\[\frac{\Delta\omega}{\omega_{\text{c}}}\leq\frac{2H}{f}(n_{\text{max}}-n_{\text{b}})\,\frac{(1-\text{NA}^{2})}{\text{NA}^{2}}\tag{S.11}\]
For example, in the case of our red metalens, application of Eq. (S.11) with \(D=200\mu\text{m}\), \(f\!=\!83\mu\text{m}\), (i.e. NA = 0.77), \(H=300\text{nm}\), \(n_{\text{max}}\!=\!3.3\) and \(n_{\text{b}}\!=\!1\) gives a normalized bandwidth of \(\Delta\omega/\omega_{c}\!=\!0.0114\) which translates into a bandwidth of \(\Delta\lambda=7\text{nm}\) around the central wavelength \(\lambda_{\text{c}}\!=\!620\text{nm}\) (using the fact that \(\Delta\lambda/\lambda_{\text{c}}\!=\!\Delta\omega/\omega_{\text{c}}\)).
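This estimate can be reproduced with the following short Python sketch, which simply evaluates Eq. (S.11) with the red-metalens parameters quoted above.

```python
import numpy as np

# Quadratic-metalens bandwidth limit of Eq. (S.11), evaluated for the red lens
# parameters quoted above (f = 83 um, H = 300 nm, n_max = 3.3, n_b = 1, NA = 0.77).
f = 83e-6          # focal length (m)
H = 300e-9         # nanopillar height (m)
n_max, n_b = 3.3, 1.0
NA = 0.77
lam_c = 620e-9     # central wavelength (m)

rel_bw = 2 * H / f * (n_max - n_b) * (1 - NA ** 2) / NA ** 2   # Delta_omega / omega_c
dlam = rel_bw * lam_c                                           # Delta_lambda = rel_bw * lambda_c

print(f"normalized bandwidth = {rel_bw:.4f}")    # ~0.0114
print(f"bandwidth = {dlam * 1e9:.1f} nm")        # ~7 nm
```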
Our experimentally measured bandwidth of more than \(\Delta\lambda=40\text{ nm}\) exceeds this theoretical bandwidth limit, but one must remember that the bandwidth limit derivation above assumes a diffraction-limited lens with no aberrations [11], which is not strictly the case for quadratic metalenses because of their intrinsic spherical aberrations.
Nevertheless, one can see from Eq. (S.11) that when scaling up the size of the metalens while maintaining the same NA, the bandwidth shrinks, as it is inversely proportional to \(f\). For example, for a \(D=2\text{ mm}\) lens with \(f\!=\!830\,\mu\text{m}\) (corresponding to 10 times the metalenses fabricated in this work), NA is kept constant, but the bandwidth is shrunk by a factor of 10 due to \(f\), and hence one has to either increase the height of the nanopillars considerably, or the refractive index contrast, or a combination of both, to catch up with the factor of 10. Increasing the height represents a challenge for fabrication, as it leads to higher aspect ratios, but the progress of fabrication techniques may push the current limits further in the future. Note also that when the height \(H\) increases beyond a certain point, the metalens can no longer be considered as an array
of one-dimensional delay lines (the assumption upon which these bandwidth limits are derived then breaks down), which may somewhat relax the bandwidth limits.
|
2304.02502 | Multi-Spectrally Constrained Low-PAPR Waveform Optimization for MIMO
Radar Space-Time Adaptive Processing | This paper focuses on the joint design of transmit waveforms and receive
filters for airborne multiple-input-multiple-output (MIMO) radar systems in
spectrally crowded environments. The purpose is to maximize the output
signal-to-interference-plus-noise-ratio (SINR) in the presence of
signal-dependent clutter. To improve the practicability of the radar waveforms,
both a multi-spectral constraint and a peak-to-average-power ratio (PAPR)
constraint are imposed. A cyclic method is derived to iteratively optimize the
transmit waveforms and receive filters. In particular, to tackle the
encountered non-convex constrained fractional programming in designing the
waveforms (for fixed filters), we resort to the Dinkelbach's transform,
minorization-maximization (MM), and leverage the alternating direction method
of multipliers (ADMM). We highlight that the proposed algorithm can iterate
from an infeasible initial point and the waveforms at convergence not only
satisfy the stringent constraints, but also attain superior performance. | Da Li, Bo Tang, Lei Xue | 2023-04-05T15:18:55Z | http://arxiv.org/abs/2304.02502v1 | Multi-Spectrally Constrained Low-PAPR Waveform Optimization for MIMO Radar Space-Time Adaptive Processing
###### Abstract
This paper focuses on the joint design of transmit waveforms and receive filters for airborne multiple-input-multiple-output (MIMO) radar systems in spectrally crowded environments. The purpose is to maximize the output signal-to-interference-plus-noise-ratio (SINR) in the presence of signal-dependent clutter. To improve the practicability of the radar waveforms, both a multi-spectral constraint and a peak-to-average-power ratio (PAPR) constraint are imposed. A cyclic method is derived to iteratively optimize the transmit waveforms and receive filters. In particular, to tackle the encountered non-convex constrained fractional programming in designing the waveforms (for fixed filters), we resort to the Dinkelbach's transform, minorization-maximization (MM), and leverage the alternating direction method of multipliers (ADMM). We highlight that the proposed algorithm can iterate from an infeasible initial point and the waveforms at convergence not only satisfy the stringent constraints, but also attain superior performance.
MIMO radar, STAP, spectrally crowded environment, waveform optimization, SINR. +
Footnote †: This work was supported in part by the National Natural Science Foundation of China under Grants 62171450 and 61671453, and Anhui Provincial Natural Science Foundation under Grant 2108085J30. _(Corresponding author: Bo Tang)_.
Da Li, Bo Tang, and Lei Xue are with the College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China (e-mail: [email protected]; [email protected]; [email protected]).
## I Introduction
Multiple-input-multiple-output (MIMO) radar refers to a radar system with multiple transmitters and multiple receivers. Different from traditional phased-array radar, MIMO radar can transmit multiple independent waveforms. Therefore, MIMO radar can leverage the waveform diversity to improve the signal-to-interference-plus-noise-ratio (SINR), operate in more flexible modes, and adapt to complex environments more intelligently [1]. According to the array spacing between the transmitters/receivers, MIMO radar can be divided into two categories: statistical MIMO radar [2] and coherent MIMO radar [3]. Statistical MIMO radar has widely separated transmitters/receivers. Therefore, it can fully utilize the spatial diversity to overcome target fluctuations and improve the target localization accuracy [4]. Compared with statistical MIMO radar, the transmitters/receivers of coherent MIMO radar are closely spaced. Similar to phased-array radar systems, the transmitters of coherent MIMO radar share the same viewing angle of the targets. By contrast, the waveform diversity offered by coherent MIMO radar enables a higher number of degrees of freedom than phased-array radar, resulting in improved parameter identifiability [5], better target detection performance [6], and the capability of supporting multiple functions simultaneously [7].
An airborne early warning (AEW) system (also called an AEW and control system), which refers to a radar system operating at a high altitude, is usually used to detect targets at long range. When the AEW system is detecting targets at a low altitude, it might receive strong reflections from, e.g., the ground. Owing to the AEW platform motion, the ground clutter is extended not only in range and angle, but also in Doppler. Therefore, a weak target is likely to be obscured by mainlobe clutter from the same angle as the target or by sidelobe clutter from different angles but with the same Doppler frequency. These unfavorable factors deteriorate the target detection performance, especially for slowly moving targets [8]. To boost the target detection performance in the presence of strong clutter, space-time adaptive processing (STAP) techniques have been proposed [8, 9, 10]. By collecting returns from multiple antennas and multiple pulses, the adaptive multi-dimensional filters of STAP can form deep notches along the clutter ridge and thus suppress the clutter power to a low level.
Considering the superiority of MIMO radar and STAP, researchers proposed the concept of MIMO-STAP for future AEW systems, and extensive studies have been devoted to this area (see, e.g., [6, 11, 12, 13, 14, 15] and the references therein). The results showed that for the detection of slowly moving targets, MIMO-STAP achieved better performance than conventional STAP methods. However, these studies mainly focused on the design of receivers for MIMO-STAP transmitting orthogonal waveforms. To further enhance the detection performance, there has been ever-increasing interest in jointly optimizing the transmit waveforms and receive filters for MIMO-STAP [16, 17, 18, 19, 20]. In [16, 19, 20], the authors considered the maximization of SINR under several practical constraints on the sought waveforms, including the constant-envelope constraint and the similarity constraint. A number of algorithms were developed therein to tackle the joint design problems efficiently. In [18], the authors extended the algorithm in [16] to design finite-alphabet waveforms.
In [17, 21], the authors focused on robust design for MIMO-STAP in the presence of prior knowledge mismatch. It was shown that the waveforms synthesized by maximizing the worst-case SINR exhibited increased robustness.
Note that an operating AEW system not only detects targets from hundreds of miles away, but might also communicate with friendly aircraft/ships to perform command and control. Therefore, if the radar and the onboard communication systems share the same frequency band, they will interfere with each other. Moreover, in a spectrally crowded environment, in which the radar has to operate with many nearby radiators simultaneously, the possibly severe mutual interference will degrade the system performance significantly. One possible way to improve the radar performance in spectrally crowded environments is to transmit intelligent waveforms [22]. In [23, 24, 25, 26, 27, 28, 29, 30], the authors considered waveform design under a spectral constraint. It was shown that the spectrally constrained waveforms form notches in the stopbands (i.e., the frequency bands in which the nearby radiators operate), thus enhancing the spectral compatibility of the radar system.
In this paper, we consider the joint design of transmit waveforms and receive filters for MIMO-STAP of AEW systems in spectrally crowded environments. Considering that multiple nearby radiators might be present and to guarantee the quality of service of these radiators, we impose a multi-spectral constraint on the waveforms. Moreover, to minimize the distortion due to the nonlinear effects in high power amplifier, a peak-to-average-power ratio (PAPR) constraint is imposed. We assume that the operating frequency band of the nearby radiators are known _a priori_ (see also similar assumptions in [23, 26, 27, 29, 30]). Indeed, such prior knowledge can be obtained by cognitive methods in [31, 32, 33]. Motivated by [16, 18], we develop two cyclic optimization methods to jointly design the waveforms and the filters. For the challenging non-convex waveform design problem (for fixed filters), we use Dinkelbach's transform [34] to transform the fractional objective function into a quadratic function. Then we resort to the coordinate-descent (CD) method to split the quadratic problem into multiple subproblems, and use the alternating direction method of multipliers (ADMM) to deal with the resulting quadratically constrained quadratic programming (QCQP) problem (we call it the DK-ADMM). Alternatively, we also use the minorization-maximization (MM) technique to construct a quadratic surrogate of the objective, and leverage the CD and ADMM to design the transmit waveforms (we call it MM-ADMM). We highlight that the proposed iterative algorithm in this paper can start from an infeasible point (i.e., a waveform not satisfying the constraints) and the performance of the devised waveforms is insensitive to the initial points. Moreover, the proposed algorithm can achieve better target detection performance than the competing algorithms.
The rest of this paper is organized as follows: Section II establishes the signal model and formulates the waveform design problem. Section III develops a cyclic method to optimize the receive filters and transmit waveforms. Section IV provides numerical examples to demonstrate the performance of the proposed algorithm. Finally, conclusions are drawn in Section V.
_Notations_: See Table I.
## II Signal Model and Problem Formulation
Fig. 1: Geometry of an airborne MIMO STAP radar.
### Signal Model
As shown in Fig. 1, the considered AEW MIMO radar system has \(N_{t}\) transmit antennas and \(N_{r}\) receive antennas. Let \(\mathbf{s}_{n}\in\mathbb{C}^{L}\) be the (discrete-time) baseband waveform of the \(n\)th transmitter, where \(L\) is the code length. Let \(\mathbf{S}=[\mathbf{s}_{1},\mathbf{s}_{2},\cdots,\mathbf{s}_{N_{t}}]^{\top}\in\mathbb{C}^{N_{t} \times L}\) denote the transmit waveform matrix. Assume that the airborne MIMO radar system transmits a burst of \(M\) pulses in a coherent processing interval (CPI) with the pulse repetition frequency (PRF) denoted \(f_{r}\). For a down-looking airborne MIMO radar system, the received signal includes the target returns, the signal-dependent clutter, and the receiver noise. Next we present the signal model associated with these components (we refer to [16, 18] for more details).
1. **Target** Assume that the transmit waveforms are narrowband. Under the far-field assumption, the target return from the \(m\)th pulse (\(m=1,2,...,M\)) can be expressed as \[\mathbf{Y}_{t,m}=\alpha_{t}e^{j(m-1)w_{t}}\mathbf{b}(\theta_{t})\mathbf{a}^{\top}(\theta_{ t})\mathbf{S},\] (1) where \(\alpha_{t}\) is the target amplitude, \(w_{t}=2\pi f_{t}\), \(f_{t}\) is the normalized target Doppler frequency, \(\theta_{t}\) is the target direction of arrival (DOA), \(\mathbf{a}(\theta_{t})\) and \(\mathbf{b}(\theta_{t})\) are the transmit array steering vector and the receive array steering vector at \(\theta_{t}\), respectively. Let \(\mathbf{y}_{t,m}=\text{vec}(\mathbf{Y}_{t,m})\), \(\mathbf{s}=\text{vec}(\mathbf{S})\), and \(\mathbf{A}(\theta_{t})=\mathbf{b}(\theta_{t})\mathbf{a}^{\top}(\theta_{t})\). Then \[\mathbf{y}_{t,m}=\alpha_{t}e^{j(m-1)w_{t}}(\mathbf{I}_{L}\otimes\mathbf{A}(\theta_{t}))\bm {s}.\] (2) Let \(\mathbf{y}_{t}=[\mathbf{y}_{t,1}^{\top},\cdots,\mathbf{y}_{t,M}^{\top}]^{\top}\in \mathbb{C}^{LMN_{r}}\). Then \(\mathbf{y}_{t}\) can be expressed as \[\mathbf{y}_{t}=\alpha\mathbf{V}(w_{t},\theta_{t})\mathbf{s},\] (3) where \(\mathbf{V}(w_{t},\theta_{t})=\mathbf{d}(w_{t})\otimes\mathbf{I}_{L}\otimes\mathbf{A}(\theta_{t})\) with \(\mathbf{d}(w_{t})=[1,\cdots,e^{j(M-1)w_{t}}]^{\top}\) being the temporal steering vector at the Doppler frequency \(f_{t}\).
2. **Clutter** The clutter refers to signal-dependent interference due to unwanted reflections, e.g., from ground, sea, etc. The clutter can be much stronger than the target echoes, due to the large number of clutter patches in the iso-range rings (including the range ring that the target is present and the neighborhood range rings), as shown in Fig. 1. Additionally, the clutter is distributed in Doppler domain owing to the motion of AEW platform [8]. Assume that there are \(2P+1\) clutter rings under consideration, and we split each clutter ring into \(N_{c}\) clutter patches uniformly. Assume that the target is at the \(r\)th range cell, the clutter associated with the \(m\)th pulse, the \((r+p)\)th range cell, and the \(k\)th patch in azimuth, can be modeled by \[\mathbf{Y}_{c,m,p,k}= \alpha_{c,p,k}e^{j2\pi(m-1)f_{c,p,k}T_{r}}\] \[\times\mathbf{b}(\theta_{c,p,k})\mathbf{a}^{\top}(\theta_{c,p,k})\mathbf{S} \mathbf{J}_{p},\] (4) where \(\alpha_{c,p,k}\), \(f_{c,p,k}\), \(\theta_{c,p,k}\) are the amplitude, the Doppler frequency, and the DOA of the \(k\)th clutter patch in the \((r+p)\)th range cell, respectively, \(\mathbf{J}_{p}=\mathbf{J}_{-p}^{\top}\in\mathbb{C}^{L\times L}\) is the shift matrix expressed as \[\mathbf{J}_{p}(m,n)=\left\{\begin{aligned} & 1,\text{if}\ m-n+p=0,\\ & 0,\text{if}\ m-n+p\neq 0.\end{aligned}\right.\] (5) Let \(\mathbf{y}_{c,p,k}=[\text{vec}^{\top}(\mathbf{Y}_{c,1,p,k}^{\top}),\cdots,\text{vec} ^{\top}(\mathbf{Y}_{c,M,p,k}^{\top})]^{\top}\), then the \(k\)th clutter patch in the \((r+p)\)th range cell can be expressed as \[\mathbf{y}_{c,p,k}=\alpha_{c,p,k}\mathbf{V}(w_{c,p,k},\theta_{c,p,k})\mathbf{s},\] (6) where \(\mathbf{V}(w_{c,p,k},\theta_{c,p,k})=\mathbf{d}(w_{c,p,k})\otimes\mathbf{J}_{p}^{\top} \otimes\mathbf{A}(\theta_{c,p,k})\), and \(w_{c,p,k}=2\pi f_{c,p,k}\). By considering the clutter from the nearest \(2P+1\) range cells, the clutter model can be established by \[\mathbf{y}_{c}=\sum_{p=-P}^{P}\sum_{k=1}^{N_{c}}\mathbf{y}_{c,p,k}.\] (7) Assume that the signals associated with different clutter patches are uncorrelated. Then the clutter covariance matrix, defined by \(\mathbf{R}_{c}(\mathbf{s})=\mathbb{E}(\mathbf{y}_{c}\mathbf{y}_{c}^{\dagger})\), can be expressed as \[\mathbf{R}_{c}(\mathbf{s})=\sum_{p=-P}^{P}\sum_{k=1}^{N_{c}}\sigma_{c,p,k}^{2}\mathbf{v}_{ c,p,k}(\mathbf{s})\mathbf{v}_{c,p,k}^{\dagger}(\mathbf{s}),\] (8) where \(\sigma_{c,p,k}^{2}=\mathbb{E}(|\alpha_{c,p,k}|^{2})\) denotes the average power of the \(k\)th clutter patch in the \(p\)th range ring, and \(\mathbf{v}_{c,p,k}(\mathbf{s})=\mathbf{V}(w_{c,p,k},\theta_{c,p,k})\mathbf{s}\).
3. **Noise** Assume that the receiver noise is white, with power of \(\sigma^{2}\). Then the noise covariance matrix can be written as: \[\mathbf{R}_{\text{u}}=\mathbb{E}(\mathbf{y}_{\text{u}}\mathbf{y}_{\text{u}}^{\dagger})= \sigma^{2}\mathbf{I}_{LMN_{r}},\] (9) where \(\mathbf{y}_{\text{u}}\) is the vector of receiver noise.
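To make the structure of the signal model concrete, the following Python sketch builds the space-time steering matrices \(\mathbf{V}(w,\theta)\), the shift matrix \(\mathbf{J}_{p}\) of (5), and the clutter covariance \(\mathbf{R}_{c}(\mathbf{s})\) of (8) for a toy-sized system. It assumes uniform linear transmit/receive arrays (consistent with the numerical examples of Section IV); the clutter-ridge Doppler model and all dimensions are illustrative placeholders rather than the paper's settings.

```python
import numpy as np

def steer(n_elem, d_over_lam, theta):
    """ULA spatial steering vector for direction theta (radians)."""
    return np.exp(1j * 2 * np.pi * d_over_lam * np.arange(n_elem) * np.sin(theta))

def doppler_steer(M, w):
    """Temporal steering vector d(w) = [1, e^{jw}, ..., e^{j(M-1)w}]^T."""
    return np.exp(1j * np.arange(M) * w)

def shift_matrix(L, p):
    """Shift matrix J_p of Eq. (5): J_p(m, n) = 1 if m - n + p = 0, else 0."""
    m, n = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    return (m - n + p == 0).astype(float)

def space_time_matrix(M, L, w, theta, p, Nt, Nr, dt, dr):
    """V(w, theta) = d(w) kron J_p^T kron A(theta), with A(theta) = b(theta) a(theta)^T."""
    A = np.outer(steer(Nr, dr, theta), steer(Nt, dt, theta))
    return np.kron(doppler_steer(M, w).reshape(-1, 1), np.kron(shift_matrix(L, p).T, A))

# Toy-sized configuration (illustrative, not the paper's full setup).
Nt, Nr, L, M, P, Nc = 2, 2, 8, 4, 1, 10
dt, dr = 2.0, 0.5                      # element spacings in wavelengths
rng = np.random.default_rng(0)
s = rng.standard_normal(Nt * L) + 1j * rng.standard_normal(Nt * L)
s /= np.linalg.norm(s)                 # total transmit energy e_t = 1

# Clutter covariance R_c(s) of Eq. (8), with unit-power patches and a toy clutter ridge.
Rc = np.zeros((M * L * Nr, M * L * Nr), dtype=complex)
for p in range(-P, P + 1):
    for theta in np.linspace(-np.pi / 2, np.pi / 2, Nc):
        w_c = 2 * np.pi * 0.3 * np.sin(theta)      # illustrative angle-Doppler coupling
        v = space_time_matrix(M, L, w_c, theta, p, Nt, Nr, dt, dr) @ s
        Rc += np.outer(v, v.conj())
print(Rc.shape)                        # (M*L*Nr, M*L*Nr)
```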
### Design Metric
In radar systems, the target detection performance is closely related to the SINR. Through maximizing the output SINR, the clutter can be suppressed to a low level and then the detection performance is improved. In this paper, we aim to maximize the output SINR through jointly designing the transmit waveforms and the receive filters. Let \(\mathbf{w}=[\mathbf{w}_{1}^{\top},\cdots,\mathbf{w}_{N_{r}}^{\top}]^{\top}\) denote the receive filter, with \(\mathbf{w}_{j}\in\mathbb{C}^{ML}\) representing the filter in the \(j\)th receiver, \(j=1,\cdots,N_{r}\). The output SINR of the MIMO-STAP radar is defined as follows
\[\begin{split}\text{SINR}(\mathbf{w},\mathbf{s})&=\frac{|\mathbf{w}^{\dagger}\mathbf{y}_{t}|^{2}}{\mathbf{w}^{\dagger}\mathbb{E}(\mathbf{y}_{c}\mathbf{y}_{c}^{\dagger}+\mathbf{y}_{\text{u}}\mathbf{y}_{\text{u}}^{\dagger})\mathbf{w}}\\ &=\frac{|\alpha_{t}|^{2}|\mathbf{w}^{\dagger}\mathbf{v}_{t}(\mathbf{s})|^{2}}{\mathbf{w}^{\dagger}\mathbf{R}_{v}(\mathbf{s})\mathbf{w}},\end{split}\tag{10}\]
where \(\mathbf{v}_{t}(\mathbf{s})=\mathbf{V}(w_{t},\theta_{t})\mathbf{s}\), and \(\mathbf{R}_{v}(\mathbf{s})=\mathbf{R}_{c}(\mathbf{s})+\mathbf{R}_{\text{u}}\).
### Transmit Waveform Constraints
Now we briefly discuss the constraints that the transmit waveforms should satisfy.
1. **Energy Constraint** Since the energy of transmit waveforms is limited, the energy constraint is enforced on the sought waveforms: \[\text{tr}(\mathbf{SS}^{\dagger})=e_{t},\] (11) where \(e_{t}\) is the total transmit energy. Note that \(\mathbf{s}=\text{vec}(\mathbf{S})\). Then we can rewrite the energy constraint as follows \[\mathbf{s}^{\dagger}\mathbf{s}=e_{t}.\] (12) Note also that practical radar systems use almost identical radio frequency amplifiers (RFA), meaning that the transmit energies across different antennas are usually uniform [35]. Thus, the following uniform transmit energy constraint is included: \[\mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},n=1,\cdots,N_{t}.\] (13)
2. **PAPR Constraint** To allow the RFA to operate in a saturated condition as well as avoid nonlinear effects, transmit waveform with low PAPR are desirable [36, 37]. Therefore, we also impose the PAPR constraint on the waveforms, that is, \[\mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},\text{ PAPR}(\mathbf{s}_{n})\leq\rho,\] (14) where \(1\leq\rho\leq L\), \(n=1,\cdots,N_{t}\), and \[\text{PAPR}(\mathbf{s}_{n})=\frac{\text{max}_{l}|s_{n}(l)|^{2}}{\frac{1}{L}\sum_{ l=1}^{L}|s_{n}(l)|^{2}},\ l=1,\cdots,L.\] Particularly, if \(\rho=1\), the PAPR constraint is reduced to the constant-envelope constraint: \[|s_{n}(l)|=\sqrt{p_{s}},\ n=1,\cdots,N_{t},\ l=1,\cdots,L,\] where \(p_{s}=e_{t}/(LN_{t})\).
3. **Multi-Spectral Constraint** Owing to the massive increase in the number of radio devices and the limited spectrum resources, radar systems may have to share the frequency band with communication systems, which will cause mutual interference and deteriorate the performance of both systems. To improve the spectral compatibility, one possible way is to control the radar transmit waveforms to form notches in the stopbands (i.e., minimize the energy spectral density (ESD) of radar transmit waveforms in the working frequency bands of communication systems). In this respect, assume that \(K_{rad}\) licensed radiators are coexisting with the MIMO radar system. Let \(\Omega_{k}=[f_{1}^{k},f_{2}^{k}]\) denote the normalized frequency band of the \(k\)th radiator, where \(f_{1}^{k}\) and \(f_{2}^{k}\) indicate the lower and the upper normalized frequencies, \(k=1,\cdots,K_{rad}\). Note that the ESD of the \(n\)th waveform is written as \[S_{n}(f)=|\mathbf{s}_{n}^{\dagger}\mathbf{a}(f)|^{2},\] (15) where \(\mathbf{a}(f)=[1,e^{j2\pi f},\cdots,e^{j2\pi(L-1)f}]^{\top}\). Therefore, the energy of \(\mathbf{s}_{n}\) leaked on the \(k\)th stopband can be expressed as \[\int_{f_{1}^{k}}^{f_{2}^{k}}S_{n}(f)df=\int_{f_{1}^{k}}^{f_{2}^{k}}|\mathbf{s}_{n} ^{\dagger}\mathbf{a}(f)|^{2}df=\mathbf{s}_{n}^{\dagger}\mathbf{R}_{I}^{k}\mathbf{s}_{n},\] where the \((m,l)\)th element of \(\mathbf{R}_{I}^{k}\) is given by \[\mathbf{R}_{I}^{k}(m,l)=\left\{\begin{array}{ll}f_{2}^{k}-f_{1}^{k},&m=l,\\ \frac{e^{j2\pi f_{2}^{k}(m-l)}-e^{j2\pi f_{1}^{k}(m-l)}}{j2\pi(m-l)},&m\neq l. \end{array}\right.\] To enhance the spectral compatibility of the radar signals with the licensed radiators, the following spectral constraint is enforced on the transmit waveforms, which is given by \[\mathbf{s}_{n}^{\dagger}\mathbf{R}_{I}^{k}\mathbf{s}_{n}\leq E_{I}^{k},\] (16) where \(E_{I}^{k}\) denotes the maximum allowed interference energy of \(\mathbf{s}_{n}\) on the \(k\)th frequency band (\(n=1,\cdots,N_{t},k=1,\cdots,K_{rad}\)). Note that when the constraint in (16) is satisfied, we can precisely control the interference energy of each waveform on every frequency band, meaning that it is possible to ensure the quality of service for each licensed radiator. In the sequel, similar to [27, 28, 30, 38], we call the constraint in (16) a multi-spectral constraint 1. Footnote 1: We point out that the multi-spectral constraint is enforced on multiple waveforms, whereas the studies in [27, 28, 30, 38] enforce the multi-spectral constraint on a single waveform.
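As a quick illustration of how the PAPR and multi-spectral constraints can be evaluated for a candidate code, the following Python sketch builds \(\mathbf{R}_{I}^{k}\) from (16) for one stopband and checks both constraints. The code length and stopband edges follow the numerical examples of Section IV, while the random unit-modulus code is purely illustrative.

```python
import numpy as np

def papr(s):
    """PAPR of a length-L code: max_l |s(l)|^2 / ((1/L) * sum_l |s(l)|^2)."""
    power = np.abs(s) ** 2
    return power.max() / power.mean()

def spectral_matrix(L, f1, f2):
    """Matrix R_I^k of the multi-spectral constraint for the normalized band [f1, f2]."""
    m = np.arange(L)[:, None]
    l = np.arange(L)[None, :]
    d = m - l
    safe = np.where(d == 0, 1, d)          # avoid division by zero on the diagonal
    off_diag = (np.exp(1j * 2 * np.pi * f2 * d) - np.exp(1j * 2 * np.pi * f1 * d)) \
               / (1j * 2 * np.pi * safe)
    return np.where(d == 0, (f2 - f1) * np.ones_like(off_diag), off_diag)

# Example: check one waveform against the constraints (illustrative values).
L, e_t, Nt, rho = 160, 1.0, 4, 2.0
rng = np.random.default_rng(1)
s_n = np.exp(1j * 2 * np.pi * rng.random(L)) * np.sqrt(e_t / (Nt * L))  # unit-modulus code

R_I = spectral_matrix(L, 0.2218, 0.2773)          # first stopband of Section IV
leaked = np.real(np.vdot(s_n, R_I @ s_n))          # s_n^H R_I^k s_n
print("PAPR ok:", papr(s_n) <= rho)
print("leaked energy (dB):", 10 * np.log10(leaked))
```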
### Problem Formulation
By considering the constraints in (12), (14), and (16), we formulate the following joint design problem to maximize the output SINR of MIMO radar in spectrally crowded environments:
\[\mathcal{P}\left\{\begin{array}{ll}\max\limits_{\mathbf{w},\mathbf{s}} \text{SINR}(\mathbf{w},\mathbf{s})\\ \text{s.t.}&\mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},\\ \text{PAPR}(\mathbf{s}_{n})\leq\rho,\\ &\mathbf{s}_{n}^{\dagger}\mathbf{R}_{I}^{k}\mathbf{s}_{n}\leq E_{I}^{k},\\ &n=1,\cdots,N_{t},\ k=1,\cdots,K_{rad}.\end{array}\right. \tag{17}\]
Note that \(\mathcal{P}\) is in general a non-convex problem, due to the PAPR constraint. In the next section, we develop a cyclic method to provide high-quality solutions to the above waveform design problem.
_Remark_: In the formulation of (17), we have assumed that prior knowledge of the interference characteristics and the operating frequency bands of the licensed radiators is available. Indeed, this knowledge can be obtained via cognitive methods (see, e.g., [39, 31, 32, 33, 31, 33, 34, 35, 36, 37, 38, 39, 40, 41] for more details on the application of cognitive methods in radar systems). We also highlight that if the clutter is non-stationary (e.g., due to internal clutter motion), we will assume that the normalized Doppler frequency of the \(k\)th clutter patch in the \(p\)th range ring (i.e., \(f_{c,p,k}\)) is uniformly distributed around the mean \(\bar{f}_{c,p,k}\), that is,
\[f_{c,p,k}\sim\mathcal{U}(\bar{f}_{c,p,k}-\delta_{c,p,k}/2,\bar{f}_{c,p,k}+ \delta_{c,p,k}/2), \tag{18}\]
where \(\delta_{c,p,k}\) controls the uncertainty of the clutter Doppler frequency. In this case, the clutter covariance matrix \(\mathbf{R}_{c}(\mathbf{s})\) can be calculated by the method in [40, 42].
## III Algorithm Design
In this section, we develop cyclic optimization methods to tackle the non-convex problem in (17). For each cyclic optimization method, two sub-problems are involved at the \((t+1)\)th iteration: the optimization of receive filters for fixed transmit waveforms (i.e., \(\mathbf{s}^{(t)}\) is fixed) and the optimization of transmit waveforms for fixed receive filters (i.e., \(\mathbf{w}^{(t+1)}\) is fixed). Next we present solutions to the two subproblems. To lighten the notations, we omit the superscripts if doing so does not have a risk of confusion.
If \(\mathbf{s}^{(t)}\) is fixed, the receive filters can be optimized by solving the following maximization problem:
\[\max_{\mathbf{w}}\ \frac{|\mathbf{w}^{\dagger}\mathbf{v}_{t}(\mathbf{s})|^{2}}{\mathbf{w}^{ \dagger}\mathbf{R}_{v}(\mathbf{s})\mathbf{w}}. \tag{19}\]
It can be seen that the minimum variance distortionless response (MVDR) beamformer [43] maximizes the objective, i.e., the solution is given by
\[\mathbf{w}=\mathbf{R}_{v}^{-1}(\mathbf{s})\mathbf{v}_{t}(\mathbf{s}). \tag{20}\]
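A minimal Python sketch of the filter update (20) and the resulting output SINR is given below; the covariance and steering vector are random placeholders, and a linear-system solve is used instead of forming \(\mathbf{R}_{v}^{-1}\) explicitly.

```python
import numpy as np

def mvdr_filter(R_v, v_t):
    """Receive-filter update of Eq. (20): w = R_v^{-1}(s) v_t(s)."""
    return np.linalg.solve(R_v, v_t)

def output_sinr(w, v_t, R_v, alpha_t_abs2=1.0):
    """Output SINR of Eq. (10) for a given filter w."""
    num = alpha_t_abs2 * np.abs(np.vdot(w, v_t)) ** 2
    den = np.real(np.vdot(w, R_v @ w))
    return num / den

# Toy example: R_v = R_c + I built from a few rank-one clutter components (illustrative).
rng = np.random.default_rng(2)
n = 32
C = rng.standard_normal((n, 5)) + 1j * rng.standard_normal((n, 5))
R_v = C @ C.conj().T + np.eye(n)
v_t = rng.standard_normal(n) + 1j * rng.standard_normal(n)

w = mvdr_filter(R_v, v_t)
# With the MVDR filter the SINR equals |alpha_t|^2 v_t^H R_v^{-1} v_t, cf. Eq. (51).
print(np.isclose(output_sinr(w, v_t, R_v),
                 np.real(np.vdot(v_t, np.linalg.solve(R_v, v_t)))))
```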
To optimize \(\mathbf{s}_{n},n=1,\cdots,N_{t}\) (for fixed \(\mathbf{w}^{(t+1)}\)), we note that the SINR can be expressed as
\[\text{SINR}(\mathbf{w},\mathbf{s})=\frac{|\alpha_{t}|^{2}|\mathbf{w}^{\dagger}\mathbf{V}(w_{t },\theta_{t})\mathbf{s}|^{2}}{\mathbf{w}^{\dagger}\mathbf{R}_{c}(\mathbf{s})\mathbf{w}+\mathbf{w}^{ \dagger}\mathbf{R}_{u}\mathbf{w}}. \tag{21}\]
In addition,
\[\mathbf{w}^{\dagger}\mathbf{R}_{c}(\mathbf{s})\mathbf{w}=\mathbf{s}^{\dagger}\mathbf{Q}\mathbf{s}, \tag{22}\]
where
\[\mathbf{Q}=\sum_{p=-P}^{P}\sum_{k=1}^{N_{c}}\sigma_{c,p,k}^{2}\mathbf{V}_{c,p,k}^{ \dagger}\mathbf{w}\mathbf{w}^{\dagger}\mathbf{V}_{c,p,k}, \tag{23}\]
and \(\mathbf{V}_{c,p,k}\triangleq\mathbf{V}(w_{c,p,k},\theta_{c,p,k})\).
Let
\[\mathbf{D}=\mathbf{V}^{\dagger}(w_{t},\theta_{t})\mathbf{w}\mathbf{w}^{\dagger}\mathbf{V}(w_{t}, \theta_{t}). \tag{24}\]
Then, SINR can be expressed as
\[\text{SINR}(\mathbf{w},\mathbf{s})=|\alpha_{t}|^{2}\frac{\mathbf{s}^{\dagger}\mathbf{D}\mathbf{s} }{\mathbf{s}^{\dagger}\mathbf{Q}\mathbf{s}+\beta(\mathbf{w})}, \tag{25}\]
where \(\beta(\mathbf{w})=\mathbf{w}^{\dagger}\mathbf{R}_{u}\mathbf{w}\).
Therefore, the optimization of the multiple transmit waveforms (given \(\mathbf{w}^{(t+1)}\)) can be given by
\[\mathcal{P}_{\mathbf{s}}\left\{\begin{array}{l}\max_{\mathbf{s}}\ \frac{s^{ \dagger}\mathbf{D}\mathbf{s}}{s^{\dagger}\mathbf{Q}\mathbf{s}+\beta(\mathbf{w})}\\ \text{s.t.}\ \mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},\\ \text{PAPR}(\mathbf{s}_{n})\leq\rho,\\ \mathbf{s}_{n}^{\dagger}\mathbf{R}_{l}^{k}\mathbf{s}_{n}\leq E_{I}^{k},\\ n=1,\cdots,N_{t},k=1,\cdots,K_{rad}.\end{array}\right. \tag{26}\]
Note that \(\mathcal{P}_{\mathbf{s}}\) is a fractional programming problem. Next we resort to the Dinkelbach's transform [34] and MM to replace the fractional objective with a quadratic surrogate, respectively. Then, with the quadratic surrogate function, we propose an ADMM algorithm to tackle the non-convex QCQP problem. The corresponding algorithms are referred to as DK-ADMM and MM-ADMM, respectively.
### DK-Admm
Let \(\mathbf{s}^{(t,l)}\) denote the waveform in the \((t,l)\)th iteration of the proposed algorithm, where the superscript \(t\) denotes the outer iteration for the cyclic optimization, and \(l\) denotes the inner iteration for Dinkelbach's transform. Let \(f^{(t,l)}\) denote the SINR associated with \(\mathbf{s}^{(t,l)}\). By applying the Dinkelbach's transform, we formulate the following optimization problem at the \((t,l+1)\)th iteration
\[\hat{\mathcal{P}}_{\mathbf{s}}\left\{\begin{array}{l}\max_{\mathbf{s}}\ \mathbf{s}^{\dagger}\hat{\mathbf{T}}\mathbf{s}\\ \text{s.t.}\ \mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},\\ \text{PAPR}(\mathbf{s}_{n})\leq\rho,\\ \mathbf{s}_{n}^{\dagger}\mathbf{R}_{l}^{k}\mathbf{s}_{n}\leq E_{I}^{k},\\ n=1,\cdots,N_{t},k=1,\cdots,K_{rad}.\end{array}\right. \tag{27}\]
where \(\hat{\mathbf{T}}=\mathbf{T}+\eta\mathbf{I}\),
\[\mathbf{T}=\mathbf{D}-f^{(t,l)}(\mathbf{Q}+\beta(\mathbf{w})/e_{t}\cdot\mathbf{I}), \tag{28}\]
and \(\eta\) is a constant to ensure \(\hat{\mathbf{T}}\succeq\mathbf{0}\).
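To illustrate the role of Dinkelbach's transform, the following Python sketch applies it to the simplified problem in which only the total-energy constraint is retained, so that each parametric subproblem reduces to a principal-eigenvector computation; in the actual design problem the PAPR and multi-spectral constraints are handled by the CD/ADMM machinery described next, and the matrices used here are random placeholders.

```python
import numpy as np

def dinkelbach_sphere(D, Q, beta, e_t, iters=50, tol=1e-10):
    """Maximize s^H D s / (s^H Q s + beta) subject to s^H s = e_t.

    Each Dinkelbach subproblem max_s s^H (D - f (Q + beta/e_t I)) s on the sphere
    is solved exactly by the principal eigenvector, so f increases monotonically.
    """
    n = D.shape[0]
    s = np.ones(n, dtype=complex) * np.sqrt(e_t / n)
    f = np.real(np.vdot(s, D @ s)) / (np.real(np.vdot(s, Q @ s)) + beta)
    for _ in range(iters):
        T = D - f * (Q + (beta / e_t) * np.eye(n))
        _, eigvec = np.linalg.eigh((T + T.conj().T) / 2)
        s = np.sqrt(e_t) * eigvec[:, -1]        # principal eigenvector, scaled to energy e_t
        f_new = np.real(np.vdot(s, D @ s)) / (np.real(np.vdot(s, Q @ s)) + beta)
        if abs(f_new - f) <= tol * max(1.0, abs(f_new)):
            return s, f_new
        f = f_new
    return s, f

# Toy problem with random Hermitian positive semidefinite D and Q.
rng = np.random.default_rng(3)
A = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
B = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
s_opt, sinr = dinkelbach_sphere(A @ A.conj().T, B @ B.conj().T, beta=1.0, e_t=1.0)
print(sinr)
```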
Next we use the block coordinate descent (CD) method to deal with the optimization problem \(\hat{\mathcal{P}}_{\mathbf{s}}\) (We refer to [44] for a comprehensive review of the CD method). To apply the CD method, we define \(\bar{\mathbf{s}}=\text{vec}(\mathbf{S}^{\top})\). Note that \(\mathbf{s}=\mathbf{P}\bar{\mathbf{s}}\)[16], where \(\mathbf{P}\) is a commutation matrix. Therefore, the objective function of \(\hat{\mathcal{P}}_{\mathbf{s}}\) can be rewritten as
\[\mathbf{s}^{\dagger}\hat{\mathbf{T}}\mathbf{s}=\bar{\mathbf{s}}^{\dagger}\bar{\mathbf{T}}\bar{\mathbf{s}}, \tag{29}\]
where \(\bar{\mathbf{T}}=\mathbf{P}^{\dagger}\hat{\mathbf{T}}\mathbf{P}\). Next, let us partition \(\bar{\mathbf{T}}\) into \(N_{t}\times N_{t}\) blocks, each of which is an \(L\times L\) matrix. Let \(\bar{\mathbf{T}}_{n,m}\) denote the \((n,m)\)th block of \(\bar{\mathbf{T}}\). Then \(\bar{\mathbf{s}}^{\dagger}\bar{\mathbf{T}}\bar{\mathbf{s}}\) can be rewritten as
\[\bar{\mathbf{s}}^{\dagger}\bar{\mathbf{T}}\bar{\mathbf{s}}=\mathbf{s}_{n}^{\dagger}\bar{\mathbf{T} }_{n,n}\mathbf{s}_{n}+2\text{Re}(\mathbf{s}_{n}^{\dagger}\sum_{\begin{subarray}{c}m=1 \\ m\neq n\end{subarray}}^{N_{t}}\bar{\mathbf{T}}_{n,m}\mathbf{s}_{m})+const_{0}, \tag{30}\]
where
\[const_{0}=\sum_{\begin{subarray}{c}m=1\\ m\neq n\end{subarray}}^{N_{t}}\sum_{\begin{subarray}{c}m^{\prime}=1\\ m^{\prime}\neq n\end{subarray}}^{N_{t}}\mathbf{s}_{m}^{\dagger}\bar{\mathbf{T}}_{m,m^{ \prime}}\mathbf{s}_{m^{\prime}}. \tag{31}\]
Based on the observation in (30), we formulate the following problem to optimize \(\mathbf{s}_{n}\):
\[\mathcal{P}_{s_{n}}\left\{\begin{array}{l}\max_{\mathbf{s}_{n}}\ \mathbf{s}_{n}^{\dagger}\bar{\mathbf{T}}_{n,n}\mathbf{s}_{n}+2\text{Re}(\mathbf{b}_{n}^{ \dagger}\mathbf{s}_{n})\\ \text{s.t.}\ \mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},\\ \text{PAPR}(\mathbf{s}_{n})\leq\rho,\\ \mathbf{s}_{n}^{\dagger}\mathbf{R}_{l}^{k}\mathbf{s}_{n}\leq E_{I}^{k},\ k=1,\cdots,K_{rad}, \end{array}\right. \tag{32}\]
where
\[\mathbf{b}_{n}=\sum_{\begin{subarray}{c}m=1\\ m\neq n\end{subarray}}^{N_{t}}\bar{\mathbf{T}}_{n,m}\mathbf{s}_{m}. \tag{33}\]
Next we use the ADMM method to deal with the optimization problem \(\mathcal{P}_{s_{n}}\) (we refer to [45] for a tutorial review of the ADMM method). To proceed, we reformulate \(\mathcal{P}_{s_{n}}\) as
\[\mathcal{P}_{\mathbf{s}_{n},t,\mathbf{g}_{k},\mathbf{z}}\left\{\begin{aligned} \max_{\mathbf{s}_{n},t,\mathbf{g}_{k},\mathbf{z}}& t+2\text{Re}(\mathbf{b}_{n}^ {\dagger}\mathbf{s}_{n})\\ \text{s.t.}&\mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},\\ &\text{PAPR}(\mathbf{s}_{n})\leq\rho,\\ &\mathbf{g}_{k}=\mathbf{B}_{k}^{1/2}\mathbf{s}_{n},\\ &\|\mathbf{g}_{k}\|^{2}\leq 1,\ k=1,\cdots,K_{rad},\\ &\mathbf{z}=\bar{\mathbf{T}}_{n,n}^{1/2}\mathbf{s}_{n},\|\mathbf{z}\|^{2}\geq t, \end{aligned}\right. \tag{34}\]
where \(t\), \(\mathbf{g}_{k}\), and \(\mathbf{z}\) are the introduced auxiliary variables, and \(\mathbf{B}_{k}=\mathbf{R}_{I}^{k}/E_{I}^{k}\). The augmented Lagrangian function corresponding to \(\mathcal{P}_{\mathbf{s}_{n},t,\mathbf{g}_{k},\mathbf{z}}\) can be expressed as
\[L_{\theta}(\mathbf{s}_{n},\mathbf{z},t,\mathbf{g}_{k},\mathbf{c}_{k},\mathbf{d}) \tag{35}\] \[= -t-2\text{Re}(\mathbf{b}_{n}^{\dagger}\mathbf{s}_{n})\] \[+\frac{\vartheta}{2}\left\{\sum_{k=1}^{K_{rad}}\left(||\mathbf{g}_{k} -\mathbf{B}_{k}^{1/2}\mathbf{s}_{n}+\mathbf{c}_{k}||^{2}-||\mathbf{c}_{k}||^{2}\right)\right\}\] \[+\frac{\vartheta}{2}\left\{||\mathbf{z}-\bar{\mathbf{T}}_{n,n}^{1/2}\mathbf{s} _{n}+\mathbf{d}||^{2}-||\mathbf{d}||^{2}\right\},\]
where \(\vartheta\) is the penalty parameter, and \(\mathbf{c}_{k}\) (\(k=1,2,\cdots,K_{rad}\)) and \(\mathbf{d}\) are the Lagrange multiplier vectors. Then, during the \((m+1)\)th iteration of the ADMM method, we carry out the following steps:

\[\mathbf{s}_{n}^{(m+1)}=\arg\min_{\substack{\mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t}\\ \text{PAPR}(\mathbf{s}_{n})\leq\rho}}\ L_{\vartheta}(\mathbf{s}_{n},\mathbf{z}^{(m)},t^{(m)},\mathbf{g}_{k}^{(m)},\mathbf{c}_{k}^{(m)},\mathbf{d}^{(m)}),\tag{36a}\]

\[(\mathbf{z}^{(m+1)},t^{(m+1)})=\arg\min_{\|\mathbf{z}\|^{2}\geq t}\ L_{\vartheta}(\mathbf{s}_{n}^{(m+1)},\mathbf{z},t,\mathbf{g}_{k}^{(m)},\mathbf{c}_{k}^{(m)},\mathbf{d}^{(m)}),\tag{36b}\]

\[\mathbf{g}_{k}^{(m+1)}=\arg\min_{\|\mathbf{g}_{k}\|^{2}\leq 1}\ L_{\vartheta}(\mathbf{s}_{n}^{(m+1)},\mathbf{z}^{(m+1)},t^{(m+1)},\mathbf{g}_{k},\mathbf{c}_{k}^{(m)},\mathbf{d}^{(m)}),\tag{36c}\]

\[\mathbf{c}_{k}^{(m+1)}=\mathbf{c}_{k}^{(m)}+\mathbf{g}_{k}^{(m+1)}-\mathbf{B}_{k}^{1/2}\mathbf{s}_{n}^{(m+1)},\tag{36d}\]

\[\mathbf{d}^{(m+1)}=\mathbf{d}^{(m)}+\mathbf{z}^{(m+1)}-\bar{\mathbf{T}}_{n,n}^{1/2}\mathbf{s}_{n}^{(m+1)}.\tag{36e}\]
Next we present solutions to (36a), (36b), and (36c).
**1) Update of \(\mathbf{s}_{n}^{(m+1)}\)**
Define
\[\mathbf{Y}_{n}=-\frac{\vartheta}{2}(\bar{\mathbf{T}}_{n,n}+\sum_{k=1}^{K_{rad}}\mathbf{B}_ {k}), \tag{37}\]
and
\[\mathbf{h}=\frac{\vartheta}{2}(\bar{\mathbf{T}}_{n,n}^{1/2}(\mathbf{z}+\mathbf{d})+\sum_{k=1}^ {K_{rad}}\mathbf{B}_{k}^{1/2}(\mathbf{g}_{k}+\mathbf{c}_{k})). \tag{38}\]
Let \(\mathbf{v}=\mathbf{h}+\mathbf{b}\). Then the update of \(\mathbf{s}_{n}^{(m+1)}\) can be given by
\[\mathcal{P}_{\mathbf{s}_{n}}^{(m+1)}\left\{\begin{aligned} \max_{\mathbf{s}_{n}}&\mathbf{s}_{n}^{\dagger}\mathbf{Y}_{n}\mathbf{s}_{n}+2 \text{Re}(\mathbf{s}_{n}^{\dagger}\mathbf{v})\\ \text{s.t.}&\mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},\\ &\text{PAPR}(\mathbf{s}_{n})\leq\rho.\end{aligned}\right. \tag{39}\]
We can tackle the maximization problem \(\mathcal{P}_{\mathbf{s}_{n}}^{(m+1)}\) leveraging the MM method [46]. To proceed, note that
\[(\mathbf{s}_{n}-\mathbf{s}_{n}^{(m,j)})^{\dagger}(\mathbf{Y}_{n}-\lambda_{\min}(\mathbf{Y}_{n })\mathbf{I})(\mathbf{s}_{n}-\mathbf{s}_{n}^{(m,j)})\geq 0, \tag{40}\]
where \(\mathbf{s}_{n}^{(m,j)}\) is the waveform at the \((m,j)\)th iteration, and \(\lambda_{\min}(\mathbf{Y}_{n})\) is the smallest eigenvalue of \(\mathbf{Y}_{n}\). We can derive from (40) that
\[\mathbf{s}_{n}^{\dagger}\mathbf{Y}_{n}\mathbf{s}_{n}\geq 2\text{Re}(\mathbf{s}_{n}^{\dagger}(\mathbf{Y} _{n}-\lambda_{\min}(\mathbf{Y}_{n})\mathbf{I})\mathbf{s}_{n}^{(m,j)})+const_{1}, \tag{41}\]
where \(const_{1}=-(\mathbf{s}_{n}^{(m,j)})^{\dagger}\mathbf{Y}_{n}\mathbf{s}_{n}^{(m,j)}+2 \lambda_{\min}(\mathbf{Y}_{n})e_{t}/N_{t}\). Let
\[\mathbf{u}^{(m,j)}=(\mathbf{Y}_{n}-\lambda_{\min}(\mathbf{Y}_{n})\mathbf{I})\mathbf{s}_{n}^{(m,j) }+\mathbf{v}, \tag{42}\]
then the minorized problem based on (41) at the \((m,j+1)\)th iteration can be formulated as
\[\max_{\mathbf{s}_{n}} \text{Re}(\mathbf{s}_{n}^{\dagger}\mathbf{u}^{(m,j)})\] (43) s.t. \[\mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},\] \[\text{PAPR}(\mathbf{s}_{n})\leq\rho.\]
In [47], an algorithm is provided to solve the above problem. Particularly, if \(\rho=1\), this problem has a closed-form solution
\[s_{n}^{(m,j+1)}(l)=\sqrt{p_{s}}\text{exp}(j\text{arg}(u^{(m,j)}(l))), \tag{44}\]
where \(s_{n}^{(m,j+1)}(l)\) and \(u^{(m,j)}(l)\) denote the \(l\)th element of \(\mathbf{s}_{n}^{(m,j+1)}\) and \(\mathbf{u}^{(m,j)}\), respectively.
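The inner MM recursion (42)-(44) for the constant-envelope case admits the following compact Python sketch; the matrix \(\mathbf{Y}_{n}\) and the vector \(\mathbf{v}\) are replaced by random placeholders of illustrative size.

```python
import numpy as np

def mm_phase_only_update(Y, v, s, p_s, iters=100):
    """Inner MM loop of Eqs. (42)-(44) for the constant-envelope case (rho = 1).

    At each step: u = (Y - lambda_min(Y) I) s + v, then s(l) = sqrt(p_s) exp(j arg(u(l))).
    """
    lam_min = np.linalg.eigvalsh((Y + Y.conj().T) / 2).min()
    for _ in range(iters):
        u = (Y - lam_min * np.eye(Y.shape[0])) @ s + v
        s_new = np.sqrt(p_s) * np.exp(1j * np.angle(u))
        if np.linalg.norm(s_new - s) <= 1e-8 * np.linalg.norm(s):
            return s_new
        s = s_new
    return s

# Toy example (illustrative sizes; Y stands in for the matrix of Eq. (37)).
rng = np.random.default_rng(4)
L, e_t, Nt = 16, 1.0, 4
p_s = e_t / (L * Nt)
M = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
Y = -(M @ M.conj().T)                  # Y_n in Eq. (37) is negative semidefinite
v = rng.standard_normal(L) + 1j * rng.standard_normal(L)
s0 = np.sqrt(p_s) * np.exp(1j * 2 * np.pi * rng.random(L))
s_star = mm_phase_only_update(Y, v, s0, p_s)
print(np.max(np.abs(np.abs(s_star) - np.sqrt(p_s))))   # ~0: constant envelope preserved
```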
**2) Update of \(\mathbf{z}^{(m+1)}\) and \(t^{(m+1)}\)**
Let \(\mathbf{q}=\bar{\mathbf{T}}_{n,n}^{1/2}\mathbf{s}_{n}-\mathbf{d}\), then the update of \(\mathbf{z}^{(m+1)}\) and \(t^{(m+1)}\) can be given by
\[\mathcal{P}_{\mathbf{z},t}^{(m+1)}\left\{\begin{aligned} \min_{\mathbf{z},t}& \frac{\vartheta}{2}||\mathbf{z}-\mathbf{q}||^{2}-t\\ \text{s.t.}&||\mathbf{z}||^{2}\geq t.\end{aligned}\right. \tag{45}\]
It is evident that if \(t=\|\mathbf{z}\|^{2}\), the objective function achieves the smallest value. As a result, we can obtain the solution to \(\mathcal{P}_{\mathbf{z},t}^{(m+1)}\) through solving the following unconstrained optimization:
\[\min_{\mathbf{z}}\ \frac{\vartheta}{2}\|\mathbf{z}-\mathbf{q}\|^{2}-\|\mathbf{z}\|^{2}.\tag{46}\]
Assume that \(\vartheta>2\). Then the optimal solution for \(\mathbf{z}\) is given by
\[\mathbf{z}=\frac{\vartheta\mathbf{q}}{\vartheta-2}. \tag{47}\]
**3) Update of \(\mathbf{g}_{k}^{(m+1)}\)**
Let \(\mathbf{x}_{k}=\mathbf{B}_{k}^{1/2}\mathbf{s}_{n}-\mathbf{c}_{k}\). Then the update of \(\mathbf{g}_{k}\) can be given by
\[\mathcal{P}_{\mathbf{g}_{k}}^{(m+1)}\left\{\begin{aligned} &\min_{\mathbf{g}_{k}}||\mathbf{g}_{k}-\mathbf{x}_{k}||^{2},\\ &\text{s.t.}\ ||\mathbf{g}_{k}||^{2}\leq 1.\end{aligned}\right. \tag{48}\]
Obviously, the solution to \(\mathcal{P}_{\mathbf{g}_{k}}^{(m+1)}\) is given by
\[\mathbf{g}_{k}=\left\{\begin{aligned} &\mathbf{x}_{k},&||\mathbf{x}_{k}||^{2} \leq 1,\\ &\mathbf{x}_{k}/||\mathbf{x}_{k}||,&||\mathbf{x}_{k}||^{2}>1. \end{aligned}\right. \tag{49}\]
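The closed-form updates (47) and (49) are straightforward to implement; a short Python sketch (with arbitrary test vectors) is shown below.

```python
import numpy as np

def update_z_t(q, vartheta):
    """Closed-form solution of Eqs. (45)-(47): z = vartheta*q/(vartheta-2), t = ||z||^2 (vartheta > 2)."""
    z = vartheta * q / (vartheta - 2.0)
    return z, np.real(np.vdot(z, z))

def update_g(x):
    """Projection of Eq. (49): g_k = x_k if ||x_k||^2 <= 1, else x_k / ||x_k||."""
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

# Quick check with arbitrary vectors (illustrative only).
rng = np.random.default_rng(5)
q = rng.standard_normal(8) + 1j * rng.standard_normal(8)
z, t = update_z_t(q, vartheta=4.0)
print(np.allclose(t, np.linalg.norm(z) ** 2),
      np.linalg.norm(update_g(3 * q)) <= 1.0 + 1e-12)
```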
We sum up the proposed ADMM algorithm in Algorithm 1, where the algorithm terminates if \(||\mathbf{r}^{(m+1)}||<\xi\) or the algorithm reaches a maximum number of iterations, \(\xi>0\) is a small user-defined value, and
\[\mathbf{r}^{(m+1)}=\mathbf{z}^{(m+1)}-\bar{\mathbf{T}}_{n,n}^{1/2}\mathbf{s}_{n}^{(m+1)}+\sum_{k=1}^{K_{rad}}\left(\mathbf{g}_{k}^{(m+1)}-\mathbf{B}_{k}^{1/2}\mathbf{s}_{n}^{(m+1)}\right).\tag{50}\]
```
Input: \(e_{t}\), \(N_{t}\), \(\bar{\mathbf{T}}_{n,n}\), \(\mathbf{b}_{n}\), \(\rho\), \(\mathbf{R}_{I}^{k}\), \(E_{I}^{k}\), and \(\xi\).
Output: \(\mathbf{s}_{n}^{(t,l+1)}\).
Initialize: \(m=0\), \(\mathbf{s}_{n}^{(m)}\), \(\mathbf{z}\), \(t\), \(\mathbf{g}_{k}\), \(\mathbf{c}_{k}\), \(\mathbf{d}\), and \(\vartheta\).
repeat
    // Update of \(\mathbf{s}_{n}^{(m+1)}\)
    \(j=0\), \(\mathbf{s}_{n}^{(m,j)}=\mathbf{s}_{n}^{(m)}\);
    repeat
        Compute \(\mathbf{u}^{(m,j)}\) by (42);
        Update \(\mathbf{s}_{n}^{(m,j+1)}\) by solving (43);
        \(j=j+1\);
    until convergence;
    \(\mathbf{s}_{n}^{(m+1)}=\mathbf{s}_{n}^{(m,j+1)}\);
    // Update of \(\mathbf{z}^{(m+1)}\) and \(t^{(m+1)}\)
    \(\mathbf{q}^{(m)}=\bar{\mathbf{T}}_{n,n}^{1/2}\mathbf{s}_{n}^{(m+1)}-\mathbf{d}^{(m)}\);
    \(\mathbf{z}^{(m+1)}=\vartheta\mathbf{q}^{(m)}/(\vartheta-2)\);
    \(t^{(m+1)}=\|\mathbf{z}^{(m+1)}\|_{2}^{2}\);
    // Update of \(\mathbf{g}_{k}^{(m+1)}\), \(\mathbf{c}_{k}^{(m+1)}\), and \(\mathbf{d}^{(m+1)}\)
    Update \(\mathbf{g}_{k}^{(m+1)}\) by (49);
    \(\mathbf{c}_{k}^{(m+1)}=\mathbf{c}_{k}^{(m)}+\mathbf{g}_{k}^{(m+1)}-\mathbf{B}_{k}^{1/2}\mathbf{s}_{n}^{(m+1)}\);
    \(\mathbf{d}^{(m+1)}=\mathbf{d}^{(m)}+\mathbf{z}^{(m+1)}-\bar{\mathbf{T}}_{n,n}^{1/2}\mathbf{s}_{n}^{(m+1)}\);
    \(m=m+1\);
until \(||\mathbf{r}^{(m)}||<\xi\);
\(\mathbf{s}_{n}^{(t,l+1)}=\mathbf{s}_{n}^{(m+1)}\).
```
**Algorithm 1** ADMM algorithm for \(\mathcal{P}_{s_{n}}\).
### Mm-Admm
Substituting (20) into (10), we rewrite SINR as
\[\text{SINR}(\mathbf{s})=|\alpha_{t}|^{2}\mathbf{v}_{t}^{\dagger}(\mathbf{s})\mathbf{R}_{v}^{-1}(\mathbf{s})\mathbf{v}_{t}(\mathbf{s}),\tag{51}\]
According to [48, Lemma 1], \(\text{SINR}(\mathbf{s})\) is minorized by:
\[-\mathbf{s}^{\dagger}\mathbf{R}s+2\text{Re}(\mathbf{c}^{\dagger}\mathbf{s})+const_{2}, \tag{52}\]
where \(\mathbf{c}=\mathbf{V}^{\dagger}(w_{t},\theta_{t})\mathbf{R}_{v}^{-1}(\mathbf{s}^{(k)})\mathbf{V}(w _{t},\theta_{t})\mathbf{s}^{(k)}\), \(const_{2}=-\text{tr}(\mathbf{B}_{k}\mathbf{R}_{u})\),
\[\mathbf{R}=\sum_{p=-P}^{P}\sum_{k=1}^{N_{e}}\sigma_{c,p,k}^{2}\mathbf{V}_{c,p,k}\mathbf{B}_ {k}\mathbf{V}_{c,p,k}^{\dagger},\]
and \(\mathbf{B}_{k}=\mathbf{u}_{k}\mathbf{u}_{k}^{\dagger}\), \(\mathbf{u}_{k}=\mathbf{R}_{v}^{-1}(\mathbf{s}^{(k)})\mathbf{V}(w_{t},\theta_{t})\mathbf{s}^{(k)}\), where the superscript "\(k\)" denotes the \(k\)th iteration of the MM-based algorithm. Let \(\hat{\mathbf{R}}=\mu\mathbf{I}-\mathbf{R}\), where \(\mu\) is set to ensure \(\hat{\mathbf{R}}>0\). By omitting the constant terms, the optimization of \(\mathbf{s}\) can be formulated as
\[\overline{\mathcal{P}}_{\mathbf{s}}\left\{\begin{aligned} &\max_{\mathbf{s}}\mathbf{s}^{\dagger}\hat{\mathbf{R}}s+2\text{Re}(\mathbf{c}^{ \dagger}\mathbf{s})\\ &\text{s.t.}\ \mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},\\ &\text{PAPR}(\mathbf{s}_{n})\leq\rho,\\ &\mathbf{s}_{n}^{\dagger}\mathbf{R}_{I}^{k}\mathbf{s}_{n}\leq E_{I}^{k},\\ & n=1,\cdots,N_{t},k=1,\cdots,K_{rad}.\end{aligned}\right. \tag{53}\]
Let \(\mathbf{c}=\mathbf{P}\bar{\mathbf{c}}\), and the objective function of \(\overline{\mathcal{P}}_{\mathbf{s}}\) can be rewritten as
\[\mathbf{s}^{\dagger}\hat{\mathbf{R}}\mathbf{s}+2\text{Re}(\mathbf{c}^{\dagger}\mathbf{s})=\bar{\mathbf{s}}^{\dagger}\bar{\mathbf{R}}\bar{\mathbf{s}}+2\text{Re}(\bar{\mathbf{c}}^{\dagger}\bar{\mathbf{s}}),\tag{54}\]
where \(\bar{\mathbf{R}}=\mathbf{P}^{\dagger}\hat{\mathbf{R}}\mathbf{P}\). Next, let us partition \(\bar{\mathbf{R}}\) and \(\bar{\mathbf{c}}\) into \(N_{t}\times N_{t}\) and \(N_{t}\times 1\) blocks, each of which are an \(L\times L\) matrix and an \(L\times 1\) vector, respectively. Let \(\bar{\mathbf{R}}_{n,m}\) and \(\mathbf{c}_{n}\) denote the \((n,m)\)th and the \(n\)th block of \(\bar{\mathbf{R}}\) and \(\bar{\mathbf{c}}\). Then \(\bar{\mathbf{s}}^{\dagger}\bar{\mathbf{R}}\bar{\mathbf{s}}+2\text{Re}(\bar{\mathbf{c}}^{\dagger} \bar{\mathbf{s}})\) can be rewritten as
\[\bar{\mathbf{s}}^{\dagger}\bar{\mathbf{R}}\bar{\mathbf{s}}+2\text{Re}(\bar{\mathbf{c}}^{\dagger} \bar{\mathbf{s}})=\mathbf{s}_{n}^{\dagger}\bar{\mathbf{R}}_{n,n}\mathbf{s}_{n}+2\text{Re}(\mathbf{ f}_{n}^{\dagger}\mathbf{s}_{n})+const_{3} \tag{55}\]
where \(\mathbf{f}_{n}=\frac{1}{2}\sum_{m=n}^{N_{t}}\bar{\mathbf{R}}_{n,m}\mathbf{s}_{m}+\mathbf{c}_{n}\),
\[const_{3}=\sum_{\begin{subarray}{c}m=1\\ m\neq n\end{subarray}}^{N_{t}}\sum_{\begin{subarray}{c}m^{\prime}=1\\ m^{\prime}\neq n\end{subarray}}^{N_{t}}\mathbf{s}_{m}^{\dagger}\bar{\mathbf{R}}_{m,m^{ \prime}}\mathbf{s}_{m^{\prime}}+\sum_{\begin{subarray}{c}m=1\\ m\neq n\end{subarray}}^{N_{t}}\mathbf{c}_{m}^{\dagger}\mathbf{s}_{m}.\]
By using (55), we formulate the following problem to optimize \(\mathbf{s}_{n}\):
\[\overline{\mathcal{P}}_{s_{n}}\left\{\begin{aligned}&\max_{\mathbf{s}_{n}}\ \mathbf{s}_{n}^{\dagger}\bar{\mathbf{R}}_{n,n}\mathbf{s}_{n}+2\text{Re}(\mathbf{f}_{n}^{\dagger}\mathbf{s}_{n})\\ &\text{s.t.}\ \mathbf{s}_{n}^{\dagger}\mathbf{s}_{n}=e_{t}/N_{t},\\ &\text{PAPR}(\mathbf{s}_{n})\leq\rho,\\ &\mathbf{s}_{n}^{\dagger}\mathbf{R}_{I}^{k}\mathbf{s}_{n}\leq E_{I}^{k},\ k=1,\cdots,K_{rad},\end{aligned}\right.\tag{56}\]
is large, the computational complexity of DK-ADMM is higher than that of MM-ADMM; otherwise, if \(L\) is large, the computational complexity of MM-ADMM is higher than that of DK-ADMM.
```
Input: \(\mathbf{R}_{\text{u}}\), \(\mathbf{V}(w_{t},\theta_{t})\), \(\mu\), \(\epsilon_{1}\), \(\epsilon_{2}\).
Output: \(\mathbf{s}_{\text{opt}}\) and \(\mathbf{w}_{\text{opt}}\).
Initialize: \(t=0\), \(\mathbf{s}_{n}^{(t)}\), \(n=1,\cdots,N_{t}\).
repeat
    // Update of \(\mathbf{w}^{(t+1)}\)
    Compute \(\mathbf{R}_{c}(\mathbf{s}^{(t)})\) by (8);
    \(\mathbf{R}_{v}(\mathbf{s}^{(t)})=\mathbf{R}_{c}(\mathbf{s}^{(t)})+\mathbf{R}_{\text{u}}\);
    \(\mathbf{w}^{(t+1)}=\mathbf{R}_{v}^{-1}(\mathbf{s}^{(t)})\mathbf{v}_{t}(\mathbf{s}^{(t)})\);
    // Update of \(\mathbf{s}^{(t+1)}\)
    for the DK-ADMM algorithm do
        \(l=0\), \(\mathbf{s}^{(t,l)}=\mathbf{s}^{(t)}\);
        repeat
            Compute \(\mathbf{D}\), \(\mathbf{Q}\), and \(\beta(\mathbf{w}^{(t+1)})\);
            Compute \(f^{(t,l)}\);
            Compute \(\bar{\mathbf{T}}_{n,n}^{(t,l)}\) and \(\mathbf{b}_{n}^{(t,l)}\);
            for \(n=1\) to \(N_{t}\) do
                Update \(\mathbf{s}_{n}^{(t,l+1)}\) using Algorithm 1;
            end for
            \(l=l+1\);
        until \(|f^{(t,l+1)}-f^{(t,l)}|/f^{(t,l+1)}<\epsilon_{1}\);
        \(\mathbf{s}_{n}^{(t+1)}=\mathbf{s}_{n}^{(t,l)}\), \(n=1,\cdots,N_{t}\);
    end for
    for the MM-ADMM algorithm do
        Compute \(\mathbf{B}_{k}^{(t)}\), \(\mathbf{R}^{(t)}\), and \(\mathbf{c}^{(t)}\);
        Compute \(\bar{\mathbf{R}}_{n,n}^{(t)}\) and \(\mathbf{f}_{n}^{(t)}\);
        for \(n=1\) to \(N_{t}\) do
            Update \(\mathbf{s}_{n}^{(t+1)}\) using Algorithm 1;
        end for
    end for
    \(t=t+1\);
until \(|\text{SINR}^{(t+1)}-\text{SINR}^{(t)}|/\text{SINR}^{(t+1)}<\epsilon_{2}\);
\(\mathbf{s}_{\text{opt}}=\mathbf{s}^{(t+1)}\); \(\mathbf{w}_{\text{opt}}=\mathbf{w}^{(t+1)}\).
```
**Algorithm 2** Multi-spectrally constrained waveform design for MIMO STAP.
### Extension to Multiple Space-Frequency Constraints
In some situations, the directions of the licensed radiators might be approximately known. Assume that the direction of the \(k\)th licensed radiator belongs to \(\Theta_{k}=[\theta_{1}^{k},\theta_{2}^{k}]\), where \(\theta_{1}^{k}\) and \(\theta_{2}^{k}\) are the lower and upper angles, respectively, \(k=1,\cdots,K_{rad}\). Therefore, the energy of \(s\) leaked on the \(k\)th space-frequency band can be expressed as
\[\int_{v_{1}^{k}}^{v_{2}^{k}}\int_{f_{k}^{k}}^{f_{2}^{k}}|\mathbf{s}_{ \theta}^{\dagger}\mathbf{a}(f)|^{2}dfd\theta=\int_{v_{1}^{k}}^{v_{2}^{k}}\mathbf{s}_{ \theta}^{\dagger}\mathbf{R}_{l}^{k}\mathbf{s}_{\theta}d\theta=\mathbf{s}^{\dagger}\mathbf{F}_{ l}^{k}\mathbf{s},\]
where \(v_{1}^{k}=\sin(\theta_{1}^{k})\), \(v_{2}^{k}=\sin(\theta_{2}^{k})\), \(\mathbf{s}_{\theta}=\text{vec}(\mathbf{a}^{\top}(\theta)\mathbf{S})=(\mathbf{I}_{L}\otimes\mathbf{a }^{\top}(\theta))\mathbf{s}\), \(\mathbf{F}_{l}^{k}=\mathbf{R}_{l}^{k}\otimes\mathbf{U}\), the \((p,q)\)th entry of \(\mathbf{U}\in\mathbb{C}^{N_{t}\times N_{t}}\) is given by
\[\mathbf{U}(p,q)=\left\{\begin{array}{ll}v_{2}^{k}-v_{1}^{k},&p=q,\\ \frac{e^{j2\pi v_{2}^{k}(q-p)d_{t}/\lambda}-e^{j2\pi v_{1}^{k}(q-p)d_{t}/\lambda}}{j2\pi(q-p)d_{t}/\lambda},&p\neq q,\end{array}\right.\]
and we have assumed that the transmit array is a uniform linear array (ULA) with inter-element spacing denoted \(d_{t}\). Then we can enforce a space-frequency constraint to control the energy leaked on the space-frequency band. When multiple space-frequency constraints and the PAPR constraint are imposed, the optimization of \(\mathbf{s}\) (at each iteration) can be formulated by the following:
\[\mathcal{P}_{\mathbf{s}}\left\{\begin{array}{ll}\max_{\mathbf{s}}&\frac{\mathbf{s}^{ \dagger}\mathbf{D}\mathbf{s}}{\mathbf{s}^{\dagger}\mathbf{Q}\mathbf{s}+\beta(\mathbf{w})}\\ \text{s.t. }&\mathbf{s}^{\dagger}\mathbf{s}=e_{t},\\ &\text{PAPR}(\mathbf{s})\leq\rho,\\ &\mathbf{s}^{\dagger}\mathbf{F}_{l}^{k}\mathbf{s}\leq E_{l}^{k},\\ &k=1,\cdots,K_{rad}.\end{array}\right. \tag{57}\]
Similarly, we can use Algorithm 2 to tackle the above optimization problem.
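The following Python sketch builds the space-frequency matrix \(\mathbf{F}_{I}^{k}=\mathbf{R}_{I}^{k}\otimes\mathbf{U}\) for one licensed radiator; the angular sector and the spacing-to-wavelength ratio are illustrative assumptions, while the frequency band and code length follow Section IV.

```python
import numpy as np

def band_matrix(n, g1, g2, scale=1.0):
    """Integrated steering-vector outer product over [g1, g2].

    Entry (p, q) equals (g2 - g1) on the diagonal and
    (exp(j*2*pi*g2*scale*(q-p)) - exp(j*2*pi*g1*scale*(q-p))) / (j*2*pi*scale*(q-p)) otherwise.
    With scale = 1 this is R_I^k; with scale = d_t/lambda it is the spatial matrix U.
    """
    p = np.arange(n)[:, None]
    q = np.arange(n)[None, :]
    d = (q - p) * scale
    safe = np.where(d == 0, 1.0, d)
    off = (np.exp(1j * 2 * np.pi * g2 * safe) - np.exp(1j * 2 * np.pi * g1 * safe)) \
          / (1j * 2 * np.pi * safe)
    return np.where(d == 0, (g2 - g1) * np.ones_like(off), off)

# Space-frequency matrix F_I^k = R_I^k kron U for one radiator (illustrative sector).
L, Nt, dt_over_lam = 160, 4, 2.0
R_I = band_matrix(L, 0.2218, 0.2773)                                   # frequency band (Section IV)
U = band_matrix(Nt, np.sin(np.deg2rad(-10)), np.sin(np.deg2rad(10)), scale=dt_over_lam)
F_I = np.kron(R_I, U)
print(F_I.shape)   # (L*Nt, L*Nt), matching the length of s = vec(S)
```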
## IV Numerical Examples
In this section, numerical experiments are conducted to evaluate the performance of the proposed algorithm. The considered MIMO radar system has \(N_{t}=4\) transmitters and \(N_{r}=4\) receivers, where both transmit array
and receive array are assumed to be ULAs, with inter-element spacing \(d_{t}=2\lambda\) and \(d_{r}=\lambda/2\), respectively (\(\lambda\) is the wavelength). The radar system is at an altitude of \(h_{a}=9000\) m and moving with a constant speed of \(v_{a}=75\) m/s. The total transmit energy of the waveforms is \(e_{t}=1\). The waveform has a bandwidth of \(800\) kHz and a duration of \(T=200\mu\)s, sampled with a frequency of \(f_{s}=800\) kHz (i.e., the code length is \(L=160\)). Additionally, we use a linear frequency modulated (LFM) waveform with a chirp rate of \(\gamma_{s}=3.5\times 10^{9}\) s\({}^{-2}\) as the initial waveform for all the transmit waveforms (Note that such waveforms do not satisfy the multi-spectral constraint, meaning that the initial waveforms are infeasible). The radar transmits \(M=16\) pulses in a CPI with a constant PRF of \(f_{r}=1000\) Hz. The target of interest is at an azimuth of \(0^{\circ}\), and a range of \(R_{t}=12728\) m. To establish the clutter model, we assume that \(P=3\) and \(N_{c}=361\) clutter patches are uniformly distributed in each iso-range ring. Additionally, \(\sigma_{c,p,k}^{2}=1,p=-P,\cdots,P,k=1,\cdots,N_{c}\). The noise power is \(\sigma^{2}=1\). \(K_{rad}=3\) licensed radios are coexisting with the AEW radar system. The normalized frequency bands of the licensed radiators are \(\Omega_{1}=[0.2218,0.2773]\), \(\Omega_{2}=[0.4609,0.6132]\), and \(\Omega_{3}=[0.7223,0.76328]\). The maximum allowed interfered energy of each waveform on these bands are \(E_{I}^{1}=-35\) dB, \(E_{I}^{2}=-35\) dB, and \(E_{I}^{3}=-30\) dB, respectively. Regarding the ADMM algorithm, we set the penalty parameter to \(\vartheta=4\), and the maximum number of iterations to \(1000\). For the stopping criterion of the ADMM algorithm, the Dinkelbach's transform, and the cyclic optimization, we set \(\xi=5\times 10^{-10}\), \(\epsilon_{1}=3\times 10^{-3}\), and \(\epsilon_{2}=3\times 10^{-4}\), respectively. Finally, the experiments are conducted on a standard PC with Intel(R) Core(TM) i7-9750H CPU and 16GB RAM.
First, we analyze the convergence of the proposed algorithm. Fig. 2 shows the SINR curves of the proposed algorithm versus the CPU time, under the PAPR constraints of \(\rho=1\) (i.e., the constant-envelope constraint), \(\rho=2\), \(\rho=3\), and \(\rho=L\) (i.e., the energy constraint), respectively, where the target velocity is \(v_{t}=52.5\) m/s (i.e., \(f_{t}=0.35\)). Note that for both DK-ADMM and MM-ADMM, the SINR monotonically increases with the iterations, which confirms the convergence of the proposed algorithm. The SINR of the waveforms synthesized by the DK-ADMM algorithm and the MM-ADMM algorithm at convergence is shown in Table III. We can see that a larger PAPR corresponds to a higher SINR, because of the larger feasibility region. In addition, even when the stringent constant-envelope constraint is enforced on the waveforms, the SINR of the synthesized low-PAPR waveforms is very close to that of the energy-constrained waveforms. Moreover, the SINR achieved by the DK-ADMM algorithm is slightly higher than that of the MM-ADMM algorithm. Regarding the CPU time to reach convergence, as shown in Table IV, the DK-ADMM algorithm is faster than the MM-ADMM algorithm. Interestingly, the results therein also imply that a larger PAPR results in a faster convergence.
Fig. 3 presents the ESDs of the designed waveforms. The three stopbands are shaded in gray with red dash-dot lines. The blue lines indicate the ESDs of the initial waveforms, and the yellow lines denote the ESDs of the designed waveforms. From Fig. 3, we can observe that all the transmit waveforms form deep nulls in the stopbands and satisfy the spectral constraints. In other words, the designed waveforms can precisely control the energy leaked on the stopbands, which enhances the coexistence between the radar system and other radio frequency systems. Moreover, we can observe that the ESDs of the waveforms synthesized by the DK-ADMM algorithm are smoother than those synthesized by the MM-ADMM algorithm. Considering that the DK-ADMM algorithm achieves a larger SINR in a shorter time and the associated ESDs of the synthesized waveforms are smoother, we use the DK-ADMM algorithm to synthesize the multi-spectrally constrained waveforms in the sequel.
Next we analyze the space-time cross-ambiguity (STCA) function of the devised waveforms under different constraints, where the STCA function is defined as [16]
\[P_{\mathbf{w},\mathbf{s}}(\theta,f)=|\mathbf{w}^{\dagger}\mathbf{V}(\theta,f)\mathbf{s}|^{2}, \tag{58}\]
where \(\mathbf{V}(\theta,f)=\mathbf{d}(f)\otimes\mathbf{I}_{L}\otimes\mathbf{A}(\theta)\), and \(\mathbf{d}(f)=[1,\cdots,e^{j2\pi(M-1)f}]^{\top}\). Fig. 4 shows the STCA function of the constant-envelope waveforms and the energy-constrained waveforms. We can observe the mainlobes
of all the STCA functions at zero spatial frequency (which corresponds to an azimuth of \(0^{\circ}\)) and a normalized Doppler frequency of \(0.35\). Additionally, these functions form deep nulls along the clutter ridges. Therefore, the devised waveforms and filters can successfully suppress the clutter and improve the SINR performance.
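For illustration, the following Python sketch evaluates the STCA of (58) for a given receive filter \(\mathbf{w}\) and stacked waveform \(\mathbf{s}\). The shape assumed for the spatial matrix \(\mathbf{A}(\theta)\) and all helper names are our own choices for the example, not part of the original formulation.

```python
import numpy as np

def stca(w, s, A_theta, f, M, L):
    """Space-time cross-ambiguity P_{w,s}(theta, f) = |w^H V(theta, f) s|^2 (Eqn. 58).

    A_theta : spatial response matrix A(theta), assumed here to have shape (Nr, Nt).
    w       : space-time receive filter, length M * L * Nr.
    s       : stacked transmit waveform, length L * Nt.
    """
    # Slow-time (Doppler) steering vector d(f) = [1, ..., e^{j 2 pi (M-1) f}]^T.
    d_f = np.exp(1j * 2 * np.pi * f * np.arange(M)).reshape(-1, 1)
    # V(theta, f) = d(f) kron I_L kron A(theta).
    V = np.kron(np.kron(d_f, np.eye(L)), A_theta)
    return np.abs(np.vdot(w, V @ s)) ** 2
```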
To assess the impact of initial points on the performance of the designed algorithms, various randomly generated waveforms are set as the initial points, where the random waveforms are constant-envelope waveforms with modulus of \(\sqrt{p_{s}}\) and phases following a zero-mean Gaussian distribution. The SINRs at convergence and the associated CPU time for different initial points are shown in Fig. 5, where 50 Monte Carlo trials are conducted. Table V and Table VI show the maximum, the average, and the minimum value of the SINR at convergence and the CPU time needed to reach convergence for the different PAPR-constrained waveforms (i.e., \(\rho=1,2,3,L\)). From Fig. 5 and the results in Table V and VI, we find that the SINR achieved by the designed algorithm is insensitive to the initial points, but the convergence speed is affected by them. To show that the synthesized waveforms satisfy the spectral constraint, we randomly select a set of the results and plot the ESDs of the designed waveforms in Fig. 6. The results indicate that compared with the initial waveforms, the waveforms devised via the proposed algorithm achieve better spectral compatibility. Interestingly, the spectrum of the waveforms initialized by randomly generated waveforms is not as smooth as that in Fig. 3.
In the following, we analyze the impact of spectral notch depths on the achieved SINR. Fig. 7(a) shows the achieved SINR with respect to different spectral notch depths and compares it with that of the waveforms devised via the algorithm in [30]2, which is initialized by a heuristic initialization via alternating optimization with MM (HIVAM) or with CD (HIVAC). For simplicity we assume that \(E_{I}^{1}=E_{I}^{2}=E_{I}^{3}=E_{I}\). It can be seen that as the notch depth becomes deeper, the waveforms devised via the proposed algorithm attain higher SINR than those synthesized by the algorithm in [30]. This is because the algorithm in [30] needs to scale the energy of the waveforms to satisfy the multi-spectral constraint. To see this, Fig. 7(b) draws the energy of the waveforms. It can be observed that as the notch depth becomes deeper, the energy of the waveforms devised via the algorithm in [30] drops to a low level (to satisfy the stringent multi-spectral constraint), while the energy of the waveforms devised via the proposed algorithm always reaches the highest possible level. Since the SINR performance improves with the waveform energy, the performance of our waveforms is superior to that of the waveforms devised via the algorithm in [30].
Footnote 2: It should be noted that the algorithm in [30] focuses on designing constant-envelope waveforms for a SISO radar system. Herein, we extend this algorithm to deal with the MIMO case. However, it is difficult for the algorithm in [30] to secure a feasible initial point to satisfy both the equality constraint \(\mathbf{s}^{\dagger}\mathbf{s}=e_{t}\) and the multi-spectral constraint. Therefore, when using the algorithm in [30] to design the constant-envelope waveforms, we replace this equality constraint with the inequality constraint \(\mathbf{s}^{\dagger}\mathbf{s}\leq e_{t}\).
Fig. 8 compares the SINR of the waveforms devised via the proposed algorithm with that of the algorithm in [30] versus the normalized target Doppler frequency, where the performance of the energy-constrained waveforms is also included as a benchmark. From Fig. 8, we can see that the waveforms devised via the proposed algorithm achieve better detection performance than those devised via the algorithm in [30], especially in the low Doppler frequency area.
Next, we assess the robustness of the proposed algorithm with respect to the Doppler uncertainty of the clutter patches. Fig. 9 shows the SINR of the constant-envelope waveform versus the normalized Doppler frequency under different clutter uncertainty. Note that the Doppler uncertainty degrades the target detection performance, especially in the low Doppler frequency region (about \(2\sim 3\) dB loss in this area). However, the proposed algorithm still achieves better radar detection performance than the competing algorithms.
Finally, we extend the proposed ADMM algorithm to deal with multiple space-frequency constraints.
| CPU time (s) | Maximum | Average | Minimum |
| --- | --- | --- | --- |
| Constant envelope | 484.717 | 329.859 | 253.514 |
| \(\rho=2\) | 384.215 | 288.868 | 229.881 |
| \(\rho=3\) | 372.878 | 255.552 | 199.061 |
| Energy constraint | 298.581 | 227.069 | 151.206 |

TABLE VI: CPU time needed to reach convergence
Fig. 5: The impact of initial waveforms on SINR and CPU time. Random waveforms are used as the initial point. \(e_{t}=1\), \(v_{t}=52.5\) m/s. \(E_{I}^{1}=E_{I}^{2}=-35\) dB, \(E_{I}^{3}=-30\) dB. (a) SINR. (b) CPU time.
Fig. 6: ESDs of the designed waveforms. The blue and yellow lines represent the ESDs of the initial random waveforms and the optimized waveforms. \(e_{t}=1\). \(E_{I}^{1}=E_{I}^{2}=-35\) dB, \(E_{I}^{3}=-30\) dB. (a) Constant-envelope waveforms. (b) \(\rho=2\). (c) \(\rho=3\). (d) Energy-constrained waveforms.
Fig. 8: (a) SINR versus normalized target Doppler frequency. \(f_{t}\in[-0.5,0.5]\). (b) SINR at low Doppler frequencies. \(f_{t}\in[-0.1,0.1]\). \(E_{I}^{1}=E_{I}^{2}=-35\) dB, \(E_{I}^{3}=-30\) dB.
Fig. 10 analyzes the SINR of the proposed algorithm versus the CPU time, where the spatial regions associated with the three radiators are \(\Theta_{1}=[-60^{\circ},-25^{\circ}]\), \(\Theta_{2}=[20^{\circ},60^{\circ}]\), and \(\Theta_{3}=[25^{\circ},70^{\circ}]\), respectively, and \(E_{I}^{1}=E_{I}^{2}=E_{I}^{3}=-35\) dB. The inter-element spacing is set to be \(d_{t}=\lambda/2\) and \(d_{r}=2\lambda\). The elevation of the target of interest is set to be \(\phi=10^{\circ}\). The MIMO radar transmits \(M=24\) pulses in a CPI. The results show the monotonically increasing SINR of the waveforms synthesized by the proposed algorithm. Fig. 11 shows the spectral distribution of the synthesized waveforms over the spatial-frequency domain. We can see that the synthesized waveforms can precisely control the energy leaked on the spatial-frequency domains corresponding to the radiators, further improving the spectral coexistence of the MIMO radar system and the nearby radiators.
## V Conclusions
We derived efficient algorithms to design low-PAPR waveforms for airborne MIMO radar in spectrally crowded environments. The purpose was to maximize the output SINR by jointly optimizing the transmit waveforms and receive filters. To tackle the multi-spectrally constrained waveform optimization problem, we developed two iterative algorithms, which were based on cyclic optimization, Dinkelbach's transform, MM, and ADMM. Results showed that the waveforms devised via the proposed algorithm not only improved the detection performance of airborne MIMO radar, but also attained better spectral compatibility.
Possible future work includes the design of filter banks to account for unknown target Doppler (see, e.g., [49] for a discussion on this topic), the investigation of the correlation properties of the designed waveforms, and the performance analysis of the waveforms on hardware. It is also crucial to develop computationally efficient algorithms to design the waveforms in real time. Finally, the theoretical analysis of the convergence of the proposed ADMM algorithm is left as a future topic.
|
2310.11387 | Transitive generalized toggle groups containing a cycle | In \cite{striker2018rowmotion} Striker generalized Cameron and
Fon-Der-Flaass's notion of a toggle group. In this paper we begin the study of
transitive generalized toggle groups that contain a cycle. We first show that
if such a group has degree $n$ and contains a transposition or a 3-cycle then
the group contains $A_n$. Using the result about transpositions, we then prove
that a transitive generalized toggle group that contains a short cycle must be
primitive. Employing a result of Jones \cite{jones2014primitive}, which relies
on the classification of the finite simple groups, we conclude that any
transitive generalized toggle group of degree $n$ that contains a cycle with at
least 3 fixed points must also contain $A_n$. Finally, we look at imprimitive
generalized toggle groups containing a long cycle and show that they decompose
into a direct product of primitive generalized toggle groups each containing a
long cycle. | Jonathan S. Bloom, Dan Saracino | 2023-10-17T16:35:06Z | http://arxiv.org/abs/2310.11387v1 | # Transitive generalized toggle groups containing a cycle
###### Abstract
In [9] Striker generalized Cameron and Fon-Der-Flaass's notion of a toggle group. In this paper we begin the study of transitive generalized toggle groups that contain a cycle. We first show that if such a group has degree \(n\) and contains a transposition or a \(3\)-cycle then the group contains \(A_{n}\). Using the result about transpositions, we then prove that a transitive generalized toggle group that contains a short cycle must be primitive. Employing a result of Jones [8], which relies on the classification of the finite simple groups, we conclude that any transitive generalized toggle group of degree \(n\) that contains a cycle with at least \(3\) fixed points must also contain \(A_{n}\). Finally, we look at imprimitive generalized toggle groups containing a long cycle and show that they decompose into a direct product of primitive generalized toggle groups each containing a long cycle.
## 1 Introduction
The study of toggle groups dates back to the 1995 work [1] of Cameron and Fon-Der-Flaass on order ideals of posets. To provide a simpler proof of a result from [4], Cameron and Fon-Der-Flaass associated to each element \(p\) of a finite poset \(P\) a permutation \(s_{p}\) of the set \(J(P)\) of order ideals of \(P\). For each \(I\in J(P)\), \(s_{p}(I)\) was defined to be \(I\triangle\left\{p\right\}\) if \(I\triangle\left\{p\right\}\in J(P)\), and \(s_{p}(I)=I\) otherwise. After using the \(s_{p}\)'s to achieve their proof, they went on to study the subgroup \(G(P)=\left\langle s_{p}\mid p\in P\right\rangle\) of the symmetric group \(S_{J(P)}\). They proved [1, Theorem 4] that if the Hasse diagram of \(P\) is connected then \(G(P)\) contains the alternating group on \(J(P)\), while if \(P\) is the disjoint union of posets \(P_{1}\) and \(P_{2}\) then \(G(P)\cong G(P_{1})\times G(P_{2})\).
In [10] Striker and Williams used the \(s_{p}\)'s in studying what they called "promotion" and "rowmotion" on order ideals. They called the \(s_{p}\)'s _toggles_ and \(G(P)\) the _toggle group_. Several other papers soon followed [3, 5, 6] that used analogues of the \(s_{p}\)'s in different contexts. In [9] Striker noted that the definition of the \(s_{p}\)'s only relied on the order ideals of \(P\) being subsets of \(P\), and generalized the definition of the toggle group.
**Definition 1.1**.: Let \(E\) be a finite set, which we refer to as the _ground set_. Let \(\mathcal{L}\) be a set of subsets of \(E\). For each \(e\in E\), define the _toggle_\(\tau_{e}:\mathcal{L}\rightarrow\mathcal{L}\) by letting \(\tau_{e}(X)=X\triangle\left\{e\right\}\) if
\(X\triangle\{e\}\in\mathcal{L}\) and \(\tau_{e}(X)=X\) otherwise. The _generalized toggle group_\(T(\mathcal{L})\) is the subgroup of the symmetric group \(S_{\mathcal{L}}\) generated by \(\{\tau_{e}\,|\,e\in E\}\).
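For concreteness, the following minimal Python sketch builds the toggles of Definition 1.1 for a given ground set and family of subsets. The function name and the small example (the order ideals of a 2-element chain) are our own illustrations rather than part of [9].

```python
def toggles(E, L):
    """Return the toggle tau_e for each e in E, as a dict X -> tau_e(X) on the family L."""
    L = set(map(frozenset, L))
    taus = {}
    for e in E:
        tau = {}
        for X in L:
            Y = X ^ {e}                      # symmetric difference X with {e}
            tau[X] = Y if Y in L else X      # toggle only if the result stays in L
        taus[e] = tau
    return taus

# Example: L = order ideals of the chain a < b, ground set E = {a, b}.
E = ["a", "b"]
L = [frozenset(), frozenset({"a"}), frozenset({"a", "b"})]
taus = toggles(E, L)
# tau_a swaps {} and {a}; tau_b fixes {} and swaps {a} with {a, b}.
```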
Striker initiated the study of \(T(\mathcal{L})\) for a number of combinatorially interesting choices of \(\mathcal{L}\). In particular, she obtained analogs of [1, Theorem 4] for several examples where \(\mathcal{L}\) is a set of subsets of a finite poset or graph.
In what follows, we head in a somewhat different direction. There is a long history of results about primitive permutation groups that contain a nontrivial cycle, dating back to the work of Jordan in the 1870's. We will prove some structure results for all transitive generalized toggle groups that contain a nontrivial cycle. In Section 3 we will show that if \(T(\mathcal{L})\) is transitive with degree \(n\) and contains a transposition then \(T(\mathcal{L})\) must be the full symmetric group \(S_{n}\), and if \(T(\mathcal{L})\) is transitive with degree \(n\) and contains a \(3\)-cycle then \(T(\mathcal{L})\) must contain the alternating group \(A_{n}\). These results are analogues of standard results [2, Theorem 3.3A] for primitive permutation groups, but we will obtain them for transitive generalized toggle groups without assuming primitivity. Using the result for transpositions, we will then show that if \(T(\mathcal{L})\) is transitive with degree \(n\) and contains a "short cycle" (i.e., a cycle of length at most \(n-1\)), then \(T(\mathcal{L})\) is primitive. Using [8, Corollary 1.3] (a result that relies on the classification of the finite simple groups) it will then follow that if \(T(\mathcal{L})\) contains a cycle with at least three fixed points then \(T(\mathcal{L})\) contains \(A_{n}\).
In Section 4 we will show that if \(T(\mathcal{L})\) is imprimitive and contains a nontrivial cycle (necessarily a "long cycle" of length \(n\)) then \(T(\mathcal{L})\) is isomorphic to a direct product of primitive groups \(T(\mathcal{L}_{1})\times\cdots\times T(\mathcal{L}_{k})\) where each \(T(\mathcal{L}_{i})\) has degree at least two and contains a long cycle whose length is the degree of \(T(\mathcal{L}_{i})\).
In obtaining the results of Section 4 we will use what Striker [9] calls the "toggle-disjoint Cartesian product" of sets \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\).
**Definition 1.2**.: If \(E_{1}\) and \(E_{2}\) are disjoint ground sets and \(\mathcal{L}_{i}\subseteq 2^{E_{i}}\) for \(i=1,2\) then we say \(\mathcal{L}\)_is the toggle-disjoint Cartesian product of \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\)_ and write \(\mathcal{L}=\mathcal{L}_{1}\otimes\mathcal{L}_{2}\) if \(\mathcal{L}=\{X_{1}\cup X_{2}\,|\,X_{i}\in\mathcal{L}_{i}\}\).
Striker makes this definition under an assumption weaker than \(E_{1}\cap E_{2}=\emptyset\), but we will only use it when \(E_{1}\cap E_{2}=\emptyset\). It is straightforward to check [9, Theorem 2.18] that if \(\mathcal{L}\) is the toggle-disjoint Cartesian product of \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) then \(T(\mathcal{L})\cong T(\mathcal{L}_{1})\times T(\mathcal{L}_{2})\).
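A minimal sketch of Definition 1.2, assuming the two families are given as collections of subsets over disjoint ground sets (the helper name is our own):

```python
def toggle_disjoint_product(L1, L2):
    """L1 (x) L2 = { X1 union X2 : X1 in L1, X2 in L2 } for disjoint ground sets (Definition 1.2)."""
    return {frozenset(X1) | frozenset(X2) for X1 in L1 for X2 in L2}
```

Combined with the `toggles` sketch above, one can verify on small examples that the toggles indexed by \(E_{1}\) and \(E_{2}\) act independently on the two factors, in line with [9, Theorem 2.18].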
**Conventions.** It may happen that for some elements \(e\in E\), \(\tau_{e}\) is the identity element of the symmetric group on \(\mathcal{L}\). Since we will always be assuming that \(T(\mathcal{L})\) is transitive in what follows, this can only happen if \(e\) is in all or none of the sets in \(\mathcal{L}\). We will always remove such \(e\)'s from \(E\). Doing this has no effect on \(T(\mathcal{L})\).
To simplify terminology we shall refer to generalized toggle groups simply as toggle groups.
## 2 Block systems in toggle groups
Recall that if a group \(G\) acts on a set \(X\) then we say a subset \(\mathcal{B}\subseteq X\) is a _block_ for \(G\) if, for all \(g\in G\), either \(g(\mathcal{B})=\mathcal{B}\) or \(g(\mathcal{B})\) is disjoint from \(\mathcal{B}\). If \(G\) acts transitively on \(X\) and \(\mathcal{B}\) is
a block for \(G\) then the sets \(\left\{g(\mathcal{B})\,|\,g\in G\right\}\) form a partition of \(X\). We call such a partition a _block system_ for \(G\). Clearly, the blocks in a given block system must all have the same size. If there exists a block system for which this size is strictly between 1 and \(\left|X\right|\) then the group action is said to be _imprimitive_, and otherwise the action is said to be _primitive_. A good source for background information on these ideas is [2, Chapter 1].
The purpose of this section is to explore how toggles behave on a block system. In particular, the lemmas established in this section describe a rich structure in the context of imprimitive toggle groups. We then exploit this structure in the following sections.
Recall that, as in the introduction, \(E\) is our finite ground set, \(\mathcal{L}\) is a collection of subsets of \(E\) and \(T(\mathcal{L})\) is the corresponding toggle group. We assume throughout that \(T(\mathcal{L})\) acts transitively on \(\mathcal{L}\), and we fix a system of blocks for \(T(\mathcal{L})\). We often denote individual blocks by \(\mathcal{B}\) or \(\mathcal{C}\).
**Definition 2.1**.: We say that \(g\in T(\mathcal{L})\) is _type 2_ if \(g(\mathcal{B})=\mathcal{B}\) for all blocks \(\mathcal{B}\), and that \(g\) is _type 1_ otherwise.
**Lemma 2.2**.: _Let \(\tau_{y}\) be a type 1 toggle in \(T(\mathcal{L})\)._
* _If_ \(\tau_{y}(\mathcal{B})=\mathcal{B}\) _for some block_ \(\mathcal{B}\) _then_ \(\tau_{y}\) _is the identity on this block._
* \(\tau_{y}\) _commutes with all type 2 toggles in_ \(T(\mathcal{L})\)_._
* _For any type 2 toggle_ \(\tau\) _and blocks_ \(\mathcal{B},\mathcal{C}\)_, the cycle structure of_ \(\tau|_{\mathcal{B}}\) _is the same as the cycle structure of_ \(\tau|_{\mathcal{C}}\)_._
Proof.: We prove i) by contradiction. Suppose that for some \(A\in\mathcal{B}\) we have \(A,A\triangle\left\{y\right\}\in\mathcal{B}\). By definition of \(\tau_{y}\) there exists another block \(\mathcal{C}\neq\mathcal{B}\). By transitivity there exist toggles \(\tau_{z_{1}},\ldots,\tau_{z_{k}}\) such that \(\tau_{z_{k}}\cdots\tau_{z_{1}}(\mathcal{B})=\mathcal{C}\). Let \(\mathcal{B}_{0}=\mathcal{B}\) and \(\mathcal{B}_{i}=\tau_{z_{i}}(\mathcal{B}_{i-1})\). By omitting some \(\tau_{z_{i}}\)'s we can assume that \(\mathcal{B}_{i}\neq\mathcal{B}_{i-1}\). Then \(A\triangle\left\{z_{1}\right\},A\triangle\left\{y,z_{1}\right\}\in\mathcal{B} _{1}\) and hence \(\tau_{y}(A\triangle\left\{z_{1}\right\})=A\triangle\left\{z_{1},y\right\}\in \mathcal{B}_{1}\). So \(\tau_{y}(\mathcal{B}_{1})=\mathcal{B}_{1}\). By iterating this argument we can show that \(\tau_{y}(\mathcal{C})=\mathcal{C}\). Since \(\mathcal{C}\) was an arbitrary block distinct from \(\mathcal{B}\), it follows that \(\tau_{y}\) is type 2, a contradiction.
To prove ii) let \(\tau_{x}\) be a type 2 toggle and fix a block \(\mathcal{B}\). If \(\tau_{y}(\mathcal{B})=\mathcal{B}\) then by i) it is clear that \(\tau_{x}\) and \(\tau_{y}\) commute at all elements of \(\mathcal{B}\). So we may assume that \(\tau_{y}(\mathcal{B})=\mathcal{C}\) where \(\mathcal{B}\neq\mathcal{C}\). Fix \(A\in\mathcal{B}\) so that \(A\triangle\left\{y\right\}=\tau_{y}(A)\in\mathcal{C}\). If \(\tau_{x}(A)=A\) then \(A\triangle\left\{x\right\}\not\in\mathcal{B}\) (since \(A\triangle\left\{x\right\}\not\in\mathcal{L}\)) and hence \(A\triangle\left\{x,y\right\}\notin\mathcal{C}\). So
\[\tau_{y}\tau_{x}(A)=\tau_{y}(A)=A\triangle\left\{y\right\}=\tau_{x}(A\triangle \left\{y\right\})=\tau_{x}\tau_{y}(A),\]
where the third equality follows from the fact that \(\tau_{x}\) is type 2 so that \(\tau_{x}(A\triangle\left\{y\right\})\) must be in \(\mathcal{C}\). So if \(\tau_{x}(A)=A\) then \(\tau_{x}\) and \(\tau_{y}\) commute at \(A\). If \(\tau_{x}(A)=A\triangle\left\{x\right\}\) then \(A\triangle\left\{x\right\}\in\mathcal{B}\) and therefore \(\tau_{y}\tau_{x}(A)=A\triangle\left\{x,y\right\}=\tau_{x}\tau_{y}(A)\).
To prove iii) consider a type 2 toggle \(\tau\) and blocks \(\mathcal{B},\mathcal{C}\). As in the proof of i) it suffices by transitivity to prove the claim in the case where \(\tau_{y}(\mathcal{B})=\mathcal{C}\). If we write \(\tau\) as the product of its block restrictions we have
\[\tau=\cdots\tau|_{\mathcal{B}}\cdots\tau|_{\mathcal{C}}\cdots.\]
Conjugating by \(\tau_{y}\) gives
\[\tau_{y}\tau\tau_{y}=\cdots(\tau_{y}\tau|_{\mathcal{B}}\tau_{y})\cdots(\tau_{y}\tau|_{\mathcal{C}}\tau_{y})\cdots\]
and we note that \(\tau_{y}\tau|_{\mathcal{C}}\tau_{y}\) is a permutation of \(\mathcal{B}\). Consequently
\[\tau_{y}\tau|_{\mathcal{C}}\tau_{y}=(\tau_{y}\tau\tau_{y})|_{\mathcal{B}}=\tau |_{\mathcal{B}},\]
where the last equality follows by part ii) and the fact that \(\tau_{y}\) is type 1. Since conjugation preserves cycle structure the result is now clear.
**Lemma 2.3**.: _Let \(\rho\in T(\mathcal{L})\) be a product of type 2 toggles. If \(\rho\) fixes all elements, pointwise, in some block \(\mathcal{B}\) then \(\rho=1\)._
Proof.: If \(\mathcal{B}\) is the only block we are done. Otherwise, transitivity implies that there must be some type 1 toggle \(\tau\) such that \(\tau(\mathcal{B})=\mathcal{C}\) with \(\mathcal{B}\neq\mathcal{C}\). It follows by Lemma 2.2 that \(\tau\) and \(\rho\) commute. Fix \(C\in\mathcal{C}\) and let \(B=\tau(C)\in\mathcal{B}\). Since \(\tau\) is an involution we have
\[\rho(C)=\rho\tau(B)=\tau\rho(B)=\tau(B)=C.\]
Hence \(\rho\) fixes all elements in \(\mathcal{C}\). As in the proof of the first part of the preceding lemma, iterating this argument implies that \(\rho=1\).
**Lemma 2.4**.: _Let \(\sigma\in T(\mathcal{L})\) be a product of type 1 toggles. For each block \(\mathcal{B}\) there exists some set \(X\subseteq E\) such that for each \(A\in\mathcal{B}\) we have_
\[\sigma(A)=A\triangle X.\]
_In particular, if \(\sigma(\mathcal{B})=\mathcal{B}\) then \(\sigma|_{\mathcal{B}}\) is either the identity or an involution with no fixed points._
Proof.: Let \(\sigma=\tau_{x_{k}}\cdots\tau_{x_{1}}\) where each \(\tau_{x_{i}}\) is a type 1 toggle. Set \(\mathcal{B}_{0}=\mathcal{B}\) and let \(\mathcal{B}_{i}=\tau_{x_{i}}(\mathcal{B}_{i-1})\). By the first part of Lemma 2.2 we may assume \(\mathcal{B}_{i}\neq\mathcal{B}_{i-1}\). It then follows that
\[\mathcal{B}_{i}=\left\{A\triangle\left\{x_{i}\right\}\,|\,A\in\mathcal{B}_{i- 1}\right\}.\]
Defining \(X\) to be the set of all the \(x_{i}\) that appear an odd number of times in \(x_{1},\ldots,x_{k}\) it further follows that \(\sigma(A)=A\triangle X\) for every \(A\in\mathcal{B}\). This proves our first claim. The second claim follows since \(\sigma\) is now seen to act by symmetric difference with a fixed set \(X\), on all of \(\mathcal{B}\).
**Lemma 2.5**.: _Let \(X\) and \(Y\) be disjoint subsets of \(E\) and define \(\sigma=\tau_{x_{1}}\cdots\tau_{x_{m}}\) and \(\rho=\tau_{y_{1}}\cdots\tau_{y_{\ell}}\) where \(x_{i}\in X,\ y_{j}\in Y\). For all \(A\in\mathcal{L}\) we have \(\sigma(A)=\rho(A)\) if and only if \(\sigma(A)=A=\rho(A)\)._
Proof.: The reverse direction is clear. Now assume \(\sigma(A)=\rho(A)\). For some \(S\subseteq X\) and \(T\subseteq Y\) we have
\[A\triangle S=\sigma(A)=\rho(A)=A\triangle T.\]
(Note that the subsets \(S\) and \(T\) may be dependent on \(A\).) As \(X\) and \(Y\) are disjoint this can only occur if \(S=T=\emptyset\), and the forward direction follows.
**Lemma 2.6**.: _Let \(E_{1}=\{x\in E\,|\,\tau_{x}\text{ is type 1}\}\), \(E_{2}=\{y\in E\,|\,\tau_{y}\text{ is type 2}\}\) and set \(\mathcal{L}_{i}=\{A\cap E_{i}\,|\,A\in\mathcal{L}\}\)._
_Suppose \(T(\mathcal{L})\) is such that any product \(\sigma\) of type 1 toggles has the property that if \(\sigma(\mathcal{B})=\mathcal{B}\), for some block \(\mathcal{B}\), then \(\sigma(A)=A\) for all \(A\in\mathcal{B}\). Then \(\mathcal{L}=\mathcal{L}_{1}\otimes\mathcal{L}_{2}\) where \(|\mathcal{L}_{1}|\) is the number of blocks and \(|\mathcal{L}_{2}|\) is the size of each block._
Proof.: Fix \(A\in\mathcal{L}\) and define \(\mathcal{O}(A)\) to be the orbit containing \(A\) under the group generated by all type 1 toggles. As \(T(\mathcal{L})\) is transitive it follows that for any two blocks there is a product of type 1 toggles that maps one to the other. As a result \(\mathcal{O}(A)\) contains at least one set from each block. On the other hand if \(B,B^{\prime}\in\mathcal{O}(A)\) are in the same block it follows by our assumption about type 1 toggles that \(B=B^{\prime}\). So \(\mathcal{O}(A)\) contains exactly one set from each block. Consequently for a fixed block \(\mathcal{B}\) we have the disjoint union
\[\mathcal{L}=\bigcup_{A\in\mathcal{B}}\mathcal{O}(A).\]
We call the sets \(\mathcal{O}(A)\)_layers_.
If \(A\neq B\) are elements of the same block \(\mathcal{B}\) then by transitivity, the second part of Lemma 2.2 and the fact that \(\mathcal{O}(A)\) contains exactly one set from each block, it follows that for some product \(\rho\) of type 2 toggles we have \(\rho(A)=B\). From this we see that
\[A\cap E_{1}=B\cap E_{1}\text{ and }A\cap E_{2}\neq B\cap E_{2}.\]
On the other hand, if \(A\neq C\) are elements in the same layer then
\[A\cap E_{1}\neq C\cap E_{1}\text{ and }A\cap E_{2}=C\cap E_{2}.\]
So all the sets in \(\mathcal{O}(A)\) have different intersections with \(E_{1}\) and every set \(D\in\mathcal{L}\) has the same intersection with \(E_{1}\) as some set in \(\mathcal{O}(A)\) since \(D\) is in the same block as some set in \(\mathcal{O}(A)\). Thus \(|\mathcal{L}_{1}|=|\mathcal{O}(A)|\) is the number of blocks \(\mathcal{B}_{i}\), since \(\mathcal{O}(A)\) contains exactly one set from each block. Since all sets in \(\mathcal{O}(A)\) have the same intersection with \(E_{2}\) we have
\[\mathcal{O}(A)=\mathcal{L}_{1}\otimes\{A\cap E_{2}\}.\]
So we have
\[\mathcal{L}=\bigcup_{A\in\mathcal{B}}\mathcal{O}(A)=\bigcup_{A\in\mathcal{B} }\mathcal{L}_{1}\otimes\{A\cap E_{2}\}=\mathcal{L}_{1}\otimes\mathcal{L}_{2},\]
since every set in \(\mathcal{L}\) is in the same layer as some set in \(\mathcal{B}\), hence has the same intersection with \(E_{2}\). The last equality further shows that \(|\mathcal{L}_{2}|\) is the size of each block as claimed.
**Lemma 2.7**.: _Let \(T(\mathcal{L})\) be imprimitive with a system of blocks of imprimitivity \(\mathcal{B}_{i}\). If the block size is odd, then \(\mathcal{L}\) is a toggle-disjoint Cartesian product of sets each of which has at least two elements._
Proof.: To apply Lemma 2.6, suppose \(\sigma\) is a product of type 1 toggles such that \(\sigma(\mathcal{B}_{i})=\mathcal{B}_{i}\) for some \(i\). By Lemma 2.4 it follows that \(\sigma|_{\mathcal{B}_{i}}\) is either an involution with no fixed points or the identity. Since the blocks \(\mathcal{B}_{i}\) have odd size we see that \(\sigma|_{\mathcal{B}_{i}}\) is the identity. Lemma 2.6 completes the proof.
**Lemma 2.8**.: _Let \(T(\mathcal{L})\) be imprimitive with a system of blocks of imprimitivity \(\mathcal{B}_{i}\). Assume there exists some \(\sigma\), a product of type 1 toggles, such that \(\sigma(\mathcal{B}_{i})=\mathcal{B}_{i}\) and \(\sigma|_{\mathcal{B}_{i}}\neq 1\) for some \(\mathcal{B}_{i}\). Then \(\tau|_{\mathcal{B}_{j}}\) is an even permutation for all type 2 toggles \(\tau\) and all blocks \(\mathcal{B}_{j}\)._
_Consequently, if for some \(\mathcal{B}_{i}\) there exists a type 2 toggle \(\tau\) such that \(\tau|_{\mathcal{B}_{i}}\) is odd, then \(\mathcal{L}\) is a toggle-disjoint Cartesian product of two sets each of which has at least two elements._
Proof.: Let \(\mathcal{B}_{i}\) and \(\sigma\) be as stated. As \(\sigma|_{\mathcal{B}_{i}}\neq 1\), Lemma 2.4 implies that \(\sigma|_{\mathcal{B}_{i}}\) is an involution with no fixed points. Define a partition on \(\mathcal{B}_{i}\) so that elements \(A\) and \(B\) are in the same class if and only if \(\sigma(A)=B\). Denote the classes in this partition by \(\mathcal{C}_{j}\) and observe that \(|\mathcal{C}_{j}|=2\). As \(\sigma\) is a product of type 1 toggles, it follows from Lemma 2.2 that any type 2 toggle commutes with \(\sigma\). Consequently, the \(\mathcal{C}_{j}\) form a block system for the restriction to \(\mathcal{B}_{i}\) of the group \(G\) generated by all the type 2 toggles. By Lemma 2.5 we see that any type 2 toggle \(\tau\) restricted to \(\mathcal{B}_{i}\) can either have \(\tau(\mathcal{C}_{j})=\mathcal{C}_{k}\) with \(j\neq k\) or \(\tau|_{\mathcal{C}_{j}}=1\). As each \(\mathcal{C}_{j}\) has size 2 this implies that the restriction to \(\mathcal{B}_{i}\) of any type 2 toggle is an even permutation. By the third part of Lemma 2.2, this proves our first claim.
The proof of our second claim is now immediate by Lemma 2.6.
## 3 Transitive toggle groups containing short cycles
Throughout this section we continue to assume that \(T(\mathcal{L})\) is transitive. The _degree_ of \(T(\mathcal{L})\) is the number of elements in the set \(\mathcal{L}\).
The first two theorems of this section state that if \(T(\mathcal{L})\) has degree \(n\) and contains a transposition then \(T(\mathcal{L})\) is \(S_{n}\), and if \(T(\mathcal{L})\) has degree \(n\) and contains a 3-cycle then \(T(\mathcal{L})\) contains \(A_{n}\). These two theorems echo well-known results of Jordan for primitive permutation groups (see [2, Theorem 3.3A] or [7, Theorem 8.17, Corollary 8.19]). Interestingly, in the toggle-group case we do not need to assume the groups are primitive but instead we obtain that as a consequence.
**Theorem 3.1**.: _Assume that \(T(\mathcal{L})\) has degree \(n\) and contains a transposition. Then \(T(\mathcal{L})\cong S_{n}\)._
Proof.: We define an equivalence relation \(\sim\) on \(\mathcal{L}\) by letting \(A\sim B\) if and only if either \(A=B\) or else \((A,B)\in T(\mathcal{L})\). Since conjugating \((A,B)\) by \((B,C)\) yields \((A,C)\), it is easy to check that \(\sim\) is an equivalence relation. Likewise, conjugating \((A,B)\) by any \(g\in T(\mathcal{L})\) yields \((g(A),g(B))\), so the equivalence classes of \(\sim\) constitute a system of blocks for \(T(\mathcal{L})\). For the remainder of the proof we fix this system of blocks and denote the blocks by \(\mathcal{B}_{1},\ldots,\mathcal{B}_{m}\).
We prove the theorem by showing that \(m=1\) (which implies that \(T(\mathcal{L})\) contains all transpositions). By assumption we have \((A,B)\in T(\mathcal{L})\) for some \(A,B\in\mathcal{L}\), and we can suppose without loss of generality that \(A,B\in\mathcal{B}_{1}\). By the second part of Lemma 2.2 we can express
\[(A,B)=\underbrace{\tau_{y_{1}}\cdots\tau_{y_{k}}}_{\sigma}\underbrace{\tau_{x _{1}}\cdots\tau_{x_{\ell}}}_{\rho}\]
where the \(\tau_{y_{j}}\) are type 1 and the \(\tau_{x_{i}}\) are type 2. By squaring both sides we obtain
\[1=\sigma^{2}\rho^{2}.\]
As \(\{x_{1},\ldots,x_{\ell}\}\cap\{y_{1},\ldots,y_{k}\}=\emptyset\), Lemma 2.5 implies that \(\sigma^{2}=\rho^{2}=1\).
Assume for a contradiction that \(m>1\), and let \(C\in\mathcal{B}_{2}\). As \((A,B)=\sigma\rho\) we must have \(\sigma\rho(C)=C\). As \(\sigma^{2}=1\) it follows that \(\rho(C)=\sigma(C)\). By Lemma 2.5 then \(\rho(C)=C=\sigma(C)\) for all \(C\in\mathcal{B}_{2}\). So \(\rho\), a product of type 2 toggles, fixes \(\mathcal{B}_{2}\) pointwise. Hence by Lemma 2.3, \(\rho=1\) and \(\sigma=(A,B)\). Now if each block has even size then it follows from the first part of Lemma 2.2 that each \(\tau_{y_{i}}\) is an even permutation, contradicting \(\sigma=(A,B)\). So each block has odd size. As \(\sigma(\mathcal{B}_{1})=\mathcal{B}_{1}\) it follows by the second claim in Lemma 2.4 that \(\sigma|_{\mathcal{B}_{1}}=1\), contradicting the fact that \(\sigma=(A,B)\). We conclude that \(m=1\) as needed.
**Theorem 3.2**.: _Assume \(T(\mathcal{L})\) has degree \(n\) and contains a 3-cycle. Then \(A_{n}\leq T(\mathcal{L})\)._
Proof.: We define an equivalence relation \(\sim\) on \(\mathcal{L}\) by letting \(A\sim B\) if and only if either \(A=B\) or else \((A,B,C)\in T(\mathcal{L})\) for some \(C\). Using the fact that \((A,B,C)^{2}=(B,A,C)\), it is easy to check, as in the preceding proof, that \(\sim\) is an equivalence relation, and that the equivalence classes of \(\sim\) constitute a system of blocks for \(T(\mathcal{L})\). We again denote the blocks by \(\mathcal{B}_{1},\ldots,\mathcal{B}_{m}\).
To prove the theorem it suffices to show that \(T(\mathcal{L})\) must contain all 3-cycles or, equivalently, that \(m=1\). To see that this suffices, suppose \(m=1\) and let \(A,B,C\) be distinct elements of \(\mathcal{L}\). We claim that \((A,B,C)\in T(\mathcal{L})\). Since \(m=1\) we have \(A\sim B\), so \((A,B,D)\in T(\mathcal{L})\) for some \(D\neq A,B\). If \(D=C\) we are done. Otherwise, \(D\neq C\) and since \(m=1\) we must have a 3-cycle \((D,C,E)\in T(\mathcal{L})\). If \(E\neq A,B\) we are done by conjugating \((A,B,D)\) by \((D,C,E)\). If \(E=A\) then we have \((A,B,C)=(D,C,A)(A,B,D)\in T(\mathcal{L})\). If \(E=B\) we have \((A,B,C)=(A,B,D)(D,C,B)^{2}\in T(\mathcal{L})\).
We now show that \(m=1\). By assumption we have a 3-cycle \((A,B,C)\) in \(T(\mathcal{L})\), and without loss of generality we can suppose that \(A,B\in\mathcal{B}_{1}\) and therefore \(C\in\mathcal{B}_{1}\) since \((A,B,C)\) maps \(A\) and \(B\) into the same block. By Lemma 2.2 we can write
\[(A,B,C)=\underbrace{\tau_{y_{1}}\cdots\tau_{y_{k}}}_{\sigma}\underbrace{\tau_ {x_{1}}\cdots\tau_{x_{\ell}}}_{\rho}\]
where the \(\tau_{y_{j}}\) are type 1 and the \(\tau_{x_{i}}\) are type 2.
For a contradiction assume \(m>1\) and let \(D\in\mathcal{B}_{i}\) where \(i\neq 1\). As \((A,B,C)=\sigma\rho\) we have \(\sigma\rho(D)=D\) and hence \(\rho(D)=\sigma^{-1}(D)\). By Lemma 2.5 we conclude that \(\rho(D)=D\). As \(D\) is an arbitrary element of \(\mathcal{B}_{i}\) then \(\rho\) is the identity on \(\mathcal{B}_{i}\). By Lemma 2.3 it follows that \(\rho=1\). So \(\sigma=(A,B,C)\). But then Lemma 2.4 forces \(\sigma^{2}=1\), a contradiction. We conclude that \(m=1\) as needed.
In addition to the results that motivated Theorem 3.1 and Theorem 3.2, Jordan also proved that if a primitive permutation group \(G\) of degree \(n\) contains a cycle of prime length \(p\leq n-3\) then \(A_{n}\leq G\) (see [2, Theorem 3.3E] or [7, Theorem 8.23]). Much more recently, Jones, in [8, Corollary 1.3], removed the assumption that the length of the cycle be prime in Jordan's result. To obtain an analogue of Jones' result for toggle groups (as opposed to primitive permutation groups), we will combine Jones' result with the following.
**Theorem 3.3**.: _Suppose \(T(\mathcal{L})\) has degree \(n\) and contains a nontrivial cycle \(\gamma\) of length \(\leq n-1\). Then \(T(\mathcal{L})\) is primitive._
Proof.: Suppose not, and take a system of blocks of imprimitivity for \(T(\mathcal{L})\). Note that if two points of \(\mathcal{L}\) are in the same block then so are their images under \(\gamma\), so if one point moved by \(\gamma\) is in the same block as a fixed point of \(\gamma\) then all points moved by \(\gamma\) are in that block.
We first consider the case when all points moved by \(\gamma\) are in a block \(\mathcal{B}\) that contains a fixed point of \(\gamma\). We know there are blocks other than \(\mathcal{B}\), and each of these other blocks consists entirely of fixed points of \(\gamma\). Let \(\mathcal{C}\) be any one of these other blocks. If we write \(\gamma=\sigma\rho\) with \(\sigma\) a product of type 1 toggles and \(\rho\) a product of type 2 toggles, then we can argue as we have before that \(\rho\) must fix every point of \(\mathcal{C}\), and therefore \(\rho=1\) by Lemma 2.3. Therefore \(\gamma=\sigma\), and \(\gamma\) maps \(\mathcal{B}\) to itself. By Lemma 2.4, \(\gamma^{2}=1\), so \(\gamma\) is a transposition. By Theorem 3.1, \(T(\mathcal{L})\) is \(S_{n}\), so since \(T(\mathcal{L})\) has degree \(n\), \(T(\mathcal{L})\) is primitive.
We now consider the case when no points moved by \(\gamma\) are in the same block as any fixed point of \(\gamma\). We first show that in this case each block has even size. If not, then there is a block \(\mathcal{B}\) of odd size \(|\mathcal{B}|>1\) consisting of points moved by \(\gamma\), and there is a block \(\mathcal{C}\) consisting of fixed points of \(\gamma\) (since \(\gamma\) has length \(\leq n-1\)). As in the first case, we write \(\gamma=\sigma\rho\) and use the existence of the block \(\mathcal{C}\) to show that \(\rho=1\) and \(\gamma=\sigma\). If we choose \(A\neq B\) in \(\mathcal{B}\) then there exists a positive integer \(m\) such that \(\gamma^{m}(A)=B\), and therefore \(\gamma^{m}\) maps \(\mathcal{B}\) to \(\mathcal{B}\). Since \(\gamma^{m}=\sigma^{m}\), a product of type 1 toggles, Lemma 2.4 implies that \(\gamma^{m}|_{\mathcal{B}}\) is either an involution with no fixed points or the identity. Since the blocks have odd size we see that \(\gamma^{m}|_{\mathcal{B}}\) is the identity. But this is impossible, since \(\gamma^{m}(A)=B\) where \(A\neq B\).
So we know the blocks have even size. Since the elements moved by \(\gamma\) comprise a set of blocks, \(\gamma\) has even length, hence is an odd permutation. But the fact that the blocks have even size also implies, by the first part of Lemma 2.2, that every type 1 toggle is an even permutation. Since \(\gamma=\sigma\) is a product of type 1 toggles, \(\gamma\) is an even permutation. This contradiction concludes the proof.
**Corollary 3.4**.: _If \(T(\mathcal{L})\) has degree \(n\) and contains a nontrivial cycle of length \(\leq n-3\) then \(A_{n}\leq T(\mathcal{L})\)._
Proof.: By Theorem 3.3, \(T(\mathcal{L})\) is primitive, so the corollary follows from [8, Corollary 1.3], which states, among other things, that if \(G\) is a primitive permutation group of degree \(n\) that contains a nontrivial cycle with at least three fixed points, then \(A_{n}\leq G\).
Corollary 3.4 provides an alternate proof of Theorem 3.2 if we check separately the cases \(n\leq 5\). But Corollary 3.4 relies on [8, Corollary 1.3], which depends on the classification of the finite simple groups.
## 4 Imprimitive toggle groups containing long cycles
We have shown that a toggle group that contains a short cycle must be primitive. Therefore, if an imprimitive toggle group contains a cycle that cycle can only be a long cycle. In this
section we study imprimitive toggle groups that contain a long cycle.
**Lemma 4.1**.: _Let \(E_{1},E_{2}\) be disjoint ground sets and let \(\mathcal{L}_{i}\subseteq 2^{E_{i}}\) be such that \(|\mathcal{L}_{i}|\geq 2\) for \(i=1,2\). Set \(\ell=|\mathcal{L}_{1}|\) and \(m=|\mathcal{L}_{2}|\) and \(\mathcal{L}=\mathcal{L}_{1}\otimes\mathcal{L}_{2}\). Then \(T(\mathcal{L})\) contains a long cycle if and only if \((\ell,m)=1\) and both \(T(\mathcal{L}_{1})\) and \(T(\mathcal{L}_{2})\) contain long cycles._
Proof.: Let \(\mathcal{L}_{1}=\{A_{1},\ldots,A_{\ell}\}\) and \(\mathcal{L}_{2}=\{B_{1},\ldots,B_{m}\}\). Consider the block system given by the blocks
\[\mathcal{C}_{i}=\{A_{i}\cup B_{j}\,|\,B_{j}\in\mathcal{L}_{2}\}\,,\]
so that the type 1 toggles are precisely \(\tau_{x}\) for \(x\in E_{1}\) and the type 2 toggles are precisely \(\tau_{y}\) for \(y\in E_{2}\). (Recall that, by the convention we adopted at the outset, no \(\tau_{x}\) or \(\tau_{y}\) is the identity.) Note that \(|\mathcal{C}_{i}|\geq 2\) for all \(\mathcal{C}_{i}\).
Suppose \(T(\mathcal{L})\) contains a long cycle \(\gamma\). Fix a block \(\mathcal{C}\). Since \(|\mathcal{C}|\geq 2\) and \(\gamma\) is a long cycle, there exists a smallest positive integer \(a\) such that \(\gamma^{a}\) maps a point of \(\mathcal{C}\) into \(\mathcal{C}\). Choose \(X\in\mathcal{C}\) such that \(\gamma^{a}(X)\in\mathcal{C}\). By our choice of \(a\), it follows that \(X,\gamma(X),\gamma^{2}(X),\ldots,\gamma^{(a-1)}(X)\) must all be in distinct blocks and so \(a\leq\ell\). Further observe that \(\gamma^{ka}(X)\in\mathcal{C}\) for all \(k\). If we write \(\ell m=ka+r\), with \(0\leq r<a\), then \(\gamma^{r}(\gamma^{ka}(X))=X\in\mathcal{C}\) since \(\gamma\) is an \(\ell m\)-cycle. Since \(0\leq r<a\) we must have \(r=0\) by our choice of \(a\). So \(a|\ell m\). Again since \(\gamma\) is an \(\ell m\)-cycle, we see that \(\gamma^{a}(X),\gamma^{2a}(X),\ldots,\gamma^{\ell m}(X)\) are distinct and in \(\mathcal{C}\). As \(|\mathcal{C}|=m\) we have \(\ell m/a\leq m\), so \(\ell\leq a\) and therefore \(\ell=a\) and we have
\[\gamma^{\ell}=\gamma_{1}\cdots\gamma_{\ell},\]
where \(\gamma_{i}\) is a long cycle on block \(\mathcal{C}_{i}\).
Now, using Lemma 2.2, write \(\gamma=\sigma\rho\) where \(\sigma\) is a product of type 1 toggles and \(\rho\) is a product of type 2 toggles. Since \(\sigma\) and \(\rho\) commute we have \(\gamma^{\ell}=\sigma^{\ell}\rho^{\ell}\), and since \(\gamma^{\ell}\) and \(\rho^{\ell}\) are type 2 permutations, so is \(\sigma^{\ell}\). Since \(\sigma^{\ell}\) is a product of type 1 toggles and \(\mathcal{L}\) is a toggle-disjoint Cartesian product it follows that \(\sigma^{\ell}=1\). So
\[\gamma_{1}\cdots\gamma_{\ell}=\rho^{\ell}.\]
Restricting our attention to a fixed block \(\mathcal{C}_{i}\) we see that \(\rho^{\ell}|_{\mathcal{C}_{i}}=\gamma_{i}\). Since \(\gamma_{i}\) is a long cycle on this block it follows that \(\rho|_{\mathcal{C}_{i}}\) must be a long cycle on \(\mathcal{C}_{i}\) and that \((m,\ell)=1\). It is now immediate that \(T(\mathcal{L}_{2})\) must contain a long cycle as well. By repeating this argument with the block system
\[\mathcal{C}_{i}^{\prime}=\{B_{i}\cup A_{j}\,|\,A_{j}\in\mathcal{L}_{1}\}\]
we conclude that \(T(\mathcal{L}_{1})\) must also have a long cycle as claimed.
Let us now consider the converse. Suppose \((m,\ell)=1\) and \(T(\mathcal{L}_{1})\) and \(T(\mathcal{L}_{2})\) have long cycles \(\gamma\) and \(\delta\), respectively. Note that \(\gamma\) and \(\delta\) have orders \(\ell\) and \(m\), respectively. Label the sets in \(\mathcal{L}\) by \((i,j)\) for \(1\leq i\leq\ell,\ 1\leq j\leq m\) and observe that \(T(\mathcal{L})\) contains permutations
\[\gamma^{\prime}(i,j):=(\gamma(i),j)\]
and
\[\delta^{\prime}(i,j):=(i,\delta(j))\]
with orders \(\ell\) and \(m\), respectively. The element \(\gamma^{\prime}\delta^{\prime}\) is a long cycle in \(T(\mathcal{L})\), because \(\gamma^{\prime}\delta^{\prime}(1,1),\ldots,(\gamma^{\prime}\delta^{\prime})^{\ell m}(1,1)\) give us all the elements of \(\mathcal{L}\).
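The role of coprimality in this construction can be checked numerically on small cases; the following sketch (our own illustration, not part of the proof) traces the orbit of a single point under the simultaneous shift realized by \(\gamma^{\prime}\delta^{\prime}\) and confirms that it is a long cycle exactly when the two lengths are coprime.

```python
from math import gcd

def shift_is_long_cycle(ell, m):
    """Trace the orbit of (0, 0) under (i, j) -> (i+1 mod ell, j+1 mod m),
    mimicking the action of gamma' delta', and check whether the orbit
    has the full length ell * m."""
    seen, state = set(), (0, 0)
    for _ in range(ell * m):
        seen.add(state)
        state = ((state[0] + 1) % ell, (state[1] + 1) % m)
    return len(seen) == ell * m

assert shift_is_long_cycle(3, 4) and gcd(3, 4) == 1   # coprime lengths: a 12-cycle
assert not shift_is_long_cycle(4, 6)                  # gcd = 2: orbit length lcm(4, 6) = 12 < 24
```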
**Theorem 4.2**.: _Assume \(T(\mathcal{L})\) is imprimitive and contains a long cycle. Then \(\mathcal{L}=\mathcal{L}_{1}\otimes\mathcal{L}_{2}\) for some \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) such that \(|\mathcal{L}_{1}|\geq 2,\ |\mathcal{L}_{2}|\geq 2,\ (|\mathcal{L}_{1}|,|\mathcal{L}_{2}|)=1\), and each of \(T(\mathcal{L}_{1})\) and \(T(\mathcal{L}_{2})\) contains a long cycle._
Proof.: As \(T(\mathcal{L})\) is imprimitive, let \(\mathcal{B}_{1},\ldots,\mathcal{B}_{\ell}\) be a system of blocks of imprimitivity. Set \(|\mathcal{B}_{i}|=m\). If \(m\) is odd, then the result follows from Lemma 2.7 and Lemma 4.1.
Now assume \(m\) is even. If we replace the blocks \(\mathcal{C}_{i}\) in the proof of Lemma 4.1 by the \(\mathcal{B}_{i}\)'s and define \(a\) as we did before, we obtain
\[\gamma^{\ell}=\gamma_{1}\cdots\gamma_{\ell},\]
where \(\gamma_{i}\) is a long cycle on block \(\mathcal{B}_{i}\). We write \(\gamma=\sigma\rho\) as before and have
\[\gamma_{1}\cdots\gamma_{\ell}=\gamma^{\ell}=\sigma^{\ell}\rho^{\ell}.\]
Since \(\gamma^{\ell}\) and \(\rho^{\ell}\) are type 2 permutations, so is \(\sigma^{\ell}\).
We claim that \(\sigma^{\ell}=1\). For a contradiction, assume otherwise. Lemma 2.8 then implies that every type 2 toggle is even. Since \(m\) is even we also have (using the first part of Lemma 2.2) that every type 1 toggle is even. Therefore, since \(\gamma\) is a product of toggles, it must be even; but \(\gamma\) is an \(\ell m\)-cycle, which is odd because \(m\) is even. We conclude that \(\sigma^{\ell}=1\) as claimed.
Since \(\sigma^{\ell}=1\) we see that \(\rho^{\ell}=\gamma^{\ell}\) which is the long cycle \(\gamma_{i}\) on each block \(\mathcal{B}_{i}\). So \(\rho\) must be a long cycle on each block \(\mathcal{B}_{i}\). As each block has even size \(\rho|_{\mathcal{B}_{i}}\) must be odd. Since \(\rho\) is a product of type 2 toggles we must have, for each \(\mathcal{B}_{i}\), some type 2 toggle \(\tau\) such that \(\tau|_{\mathcal{B}_{i}}\) is odd. Lemma 2.8 and Lemma 4.1 now complete the proof.
**Corollary 4.3**.: _Assume \(T(\mathcal{L})\) is imprimitive and contains a long cycle. Then_
\[T(\mathcal{L})\cong T(\mathcal{L}_{1})\times\cdots\times T(\mathcal{L}_{k}),\]
_where each \(T(\mathcal{L}_{i})\) is primitive and contains a long cycle, their degrees are larger than 1 and pairwise coprime, and \(|\mathcal{L}|=|\mathcal{L}_{1}|\cdots|\mathcal{L}_{k}|\)._
Proof.: Apply Theorem 4.2 repeatedly.
**Corollary 4.4**.: _If \(T(\mathcal{L})\) is transitive, has prime-power degree, and contains a long cycle, then \(T(\mathcal{L})\) is primitive._
Proof.: This is immediate from Corollary 4.3.
Corollary 4.4 expresses a special property of toggle groups. For example, the dihedral group of order eight, in its natural action on a set of order four, contains a 4-cycle but is not primitive by [2, Theorem 4.2A(vi)], since it has nontrivial center and is nonabelian.
In view of Theorem 3.3, Corollary 3.4 and Corollary 4.3 it would be very interesting to have an answer to the following question: Which primitive toggle groups of degree \(n\) contain a cycle of length at least \(n-2\) but do not contain \(A_{n}\)? A list of all the primitive permutation groups that contain a cycle of length at least \(n-2\) but do not contain \(A_{n}\) is given in [8, Theorem 1.2]. The first groups on the list are the subgroups of the affine group \(AGL_{1}(p)\) that contain a cyclic subgroup of order \(p\), and we can show that these are not toggle groups. But at this point we do not know the status of the other groups on the list.
|
2305.15357 | Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image
Super-Resolution | Diffusion models, as a kind of powerful generative model, have given
impressive results on image super-resolution (SR) tasks. However, due to the
randomness introduced in the reverse process of diffusion models, the
performances of diffusion-based SR models are fluctuating at every time of
sampling, especially for samplers with few resampled steps. This inherent
randomness of diffusion models results in ineffectiveness and instability,
making it challenging for users to guarantee the quality of SR results.
However, our work takes this randomness as an opportunity: fully analyzing and
leveraging it leads to the construction of an effective plug-and-play sampling
method that owns the potential to benefit a series of diffusion-based SR
methods. More in detail, we propose to steadily sample high-quality SR images
from pre-trained diffusion-based SR models by solving diffusion ordinary
differential equations (diffusion ODEs) with optimal boundary conditions (BCs)
and analyze the characteristics between the choices of BCs and their
corresponding SR results. Our analysis shows the route to obtain an
approximately optimal BC via an efficient exploration in the whole space. The
quality of SR results sampled by the proposed method with fewer steps
outperforms the quality of results sampled by current methods with randomness
from the same pre-trained diffusion-based SR model, which means that our
sampling method "boosts" current diffusion-based SR models without any
additional training. | Yiyang Ma, Huan Yang, Wenhan Yang, Jianlong Fu, Jiaying Liu | 2023-05-24T17:09:54Z | http://arxiv.org/abs/2305.15357v5 | # Solving Diffusion ODEs with Optimal Boundary Conditions for Better Image Super-Resolution
###### Abstract
Diffusion models, as a kind of powerful generative model, have given impressive results on image super-resolution (SR) tasks. However, due to the randomness introduced in the reverse process of diffusion models, the performances of diffusion-based SR models are fluctuating at every time of sampling, especially for samplers with few resampled steps. This inherent randomness of diffusion models results in ineffectiveness and instability, making it challenging for users to guarantee the quality of SR results. However, our work takes this randomness as an opportunity: fully analyzing and leveraging it leads to the construction of an effective plug-and-play sampling method that owns the potential to benefit a series of diffusion-based SR methods. More in detail, we propose to steadily sample high-quality SR images from pretrained diffusion-based SR models by solving diffusion ordinary differential equations (_diffusion ODEs_) with optimal boundary conditions (BCs) and analyze the characteristics between the choices of BCs and their corresponding SR results. Our analysis shows the route to obtain an approximately optimal BC via an efficient exploration in the whole space. The quality of SR results sampled by the proposed method with fewer steps outperforms the quality of results sampled by current methods with randomness from the same pretrained diffusion-based SR model, which means that our sampling method "boosts" current diffusion-based SR models without any additional training.
## 1 Introduction
Diffusion models [12] have drawn great research attention within the domain of computer vision because of their great capacity for image generation. Therefore, it is intuitive to leverage such powerful models to tackle the demanding task of image super-resolution (SR). The diffusion-based image SR task is modeled as generating high-quality images by diffusion models conditioned on corresponding low-resolution images [39; 21; 40; 36]. However, the reverse process (_i.e_., generating process) of diffusion models involves randomness [12; 43; 44], which results in the unstable performance of diffusion-based SR methods. In other words, users cannot guarantee the quality of SR results if they lack a principled approach and can only rely on random sampling from diffusion-based models. Previous methods did not consider or explore this issue of randomness. Although multiple repeated samplings can lead to reasonable SR images from well-trained diffusion-based SR models, the quality of a one-time sampling cannot be guaranteed, and the sampled results on average still fall short of optimal quality, with significant performance gaps. Thus, it is critical to pursue a stable sampling method that generates SR images from pre-trained diffusion models with guaranteed good performance.
Most current diffusion-based SR works [39; 21; 40; 36] focus on the model design instead of sampling method. The most commonly used sampling method for diffusion-based SR works is resampled
DDPM sampler with 100 steps (DDPM-100) instead of the original DDPM sampler with 1000 steps of the training noise schedule (DDPM-1000), due to its significantly reduced time cost, despite the trade-off in SR image quality. It was first introduced in SR3 [39], following WaveGrad [4]. Later works follow SR3 and use DDPM-100 as the default setting. These discrete-time DDPM samplers sample from a Gaussian distribution with learned parameters at each step, resulting in instability. The successive work [44] demonstrates that such discrete-time DDPM samplers can be regarded as solving diffusion stochastic differential equations (_diffusion SDE_s) and further gives ordinary differential equations which share the same marginal probability densities as _diffusion SDE_s. Such ordinary differential equations are referred to as _diffusion ODE_s. Different from _diffusion SDE_s, given a boundary condition (BC) \(\mathbf{x}_{T}\), one can solve the _diffusion ODE_s via ODE samplers (_e.g._, DDIM [43], DPM-Solver [25]) and obtain an exact solution \(\mathbf{x}_{0}\). Nevertheless, the BC \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) also comes with randomness, leading to the instability issue in sampling SR images. Hence, it is highly desirable to obtain a principled way of estimating the optimal BC \(\mathbf{x}_{T}^{*}\), steadily offering high-quality sampled SR images.
In this paper, we analyze the characteristics of the optimal BC \(\mathbf{x}_{T}^{*}\) of _diffusion ODE_s of SR models and propose an approach to approximate the optimal BC \(\tilde{\mathbf{x}}_{T}\) by exploring the whole space with the criterion of a reference set containing \(R\) HR-LR image pairs \(\mathcal{R}=\{(\mathbf{z}_{i},\mathbf{y}_{i})\}_{i=1}^{R}\), which is a small subset of the training dataset. Then, we can steadily generate high-quality SR images by solving the _diffusion ODE_s of the trained diffusion-based SR model with the derived approximately optimal BC \(\tilde{\mathbf{x}}_{T}\). We establish that the optimal boundary condition \(\mathbf{x}_{T}^{*}\) utilized to solve the diffusion ODE in diffusion-based SR models is independent of the LR image inputs. Thus, we only need to prepare the approximately optimal BC \(\tilde{\mathbf{x}}_{T}\) once to sample SR images of other unseen LR images. Experiments demonstrate that this simple independence assumption empirically offers impressive performance in a plug-and-play manner. The main idea of the proposed method is shown in Fig. 1.
In order to evaluate the effectiveness of our method, we train a vanilla diffusion-based SR model with a noise-prediction UNet which simply concatenates LR images with noisy images \(\mathbf{x}_{t}\), following the architecture proposed in SR3 [39]. Experiments show that the quality of SR images sampled by few-step _diffusion ODE_ samplers with our explored BC \(\tilde{\mathbf{x}}_{T}\) significantly outperforms the quality of results sampled by existing methods with the same architecture. Our method is not restricted to any specific architecture of diffusion-based SR models. Therefore, any diffusion-based SR model can leverage our proposed method to consistently sample high-quality SR images with only a few steps, leading to improved performance. In this way, our method can boost existing diffusion-based SR models in a plug-and-play manner.
Figure 1: Given a well-trained diffusion-based SR model, by solving _diffusion ODEs_, we can sample reasonable SR results with different BCs \(\mathbf{x}_{T}\) as the figure shows. However, the performance of each BC \(\mathbf{x}_{T}\) is unstable. We manage to find an approximately optimal BC \(\tilde{\mathbf{x}}_{T}\) which can be projected to the sample \(\tilde{\mathbf{x}}_{0}\) with nearly the highest probability density by the solution \(h_{\theta}(\tilde{\mathbf{x}}_{T},\mathbf{y})\) to the _diffusion ODE_. Based on our analysis in Sec. 3.2, \(\tilde{\mathbf{x}}_{T}\) is shared by different LR images \(\mathbf{y}_{i}\). The method of finding \(\tilde{\mathbf{x}}_{T}\) is described in Sec. 3.3. **[Zoom in for best view]**
Related Work
### Image Super-Resolution
Image super-resolution has drawn great research interest in recent years [8; 16; 45; 23; 20; 46; 50; 22]. As a pioneering deep-learning-based SR method, SRCNN [8] builds a 3-layer convolutional neural network to map LR patches to SR patches with the criterion of MSE between SR patches and HR patches, achieving better PSNR than traditional methods. SRResNet [20] introduces residual connections into SR networks, achieving impressive performance. RCAN [50] uses a channel-attention mechanism to learn local correlations, which are crucial to the SR task. SWINIR [22] leverages vision transformers [9; 24] to build the backbones of SR neural networks and outperforms CNN-based ones.
However, the PSNR between SR images and HR images does not fully reflect the visual quality of SR images, and generative models can synthesize more perceptually pleasant results. Thus, SRGAN [20] introduces GANs [10] to SR tasks. Furthermore, [29; 47; 3] embed pre-trained GANs of certain domains into SR frameworks, utilizing the generative prior of GANs. PixelSR [6] uses auto-regressive models to generate SR images pixel by pixel. SRFlow [26] models SR tasks with normalizing-flow-based models [19]. SR3 [39] first uses diffusion models [12; 44] to generate SR images conditioned on corresponding LR images. DDRM [15] designs a training-free algorithm to guide pre-trained diffusion models to generate high-quality images which are consistent with the LR images.
### Diffusion Models
In recent years, diffusion models [12; 44], as a kind of generative model, have achieved impressive results in several areas, including image generation [7; 31], text-to-image generation [30; 32; 38], multi-modal generation [34; 27] and so on. Diffusion models were first proposed in [42] and simplified as DDPM in [12], which can be trained as a set of simple denoising models. ImprovedDDPM [31] proposes to learn the variance of each reverse step, and AnalyticDPM [2] shows that such variances have analytic forms which need not be learned. [44] extends diffusion models from discrete Markov chains to continuous differential equations. [11] proposes to train diffusion models via "velocity" prediction, improving efficiency. [33] builds diffusion models on latent spaces instead of image spaces, reducing the training and inference cost.
In terms of applying diffusion models, GLIDE [30] first proposes to build a diffusion model that generates images from descriptive texts. DALL-E 2 [32] and Imagen [38] design better architectures and use more computing resources, achieving better performance. Palette [37] first applies diffusion models to image-to-image translation tasks. DreamBooth [35] finetunes pre-trained text-to-image diffusion models to achieve subject-driven generation. MM-Diffusion [34] generates aligned audios and videos at the same time. [41] creates novel videos from texts without text-to-video data. These works prove that diffusion models have strong generative abilities.
## 3 Sampling SR Images with Optimal BCs of _Diffusion ODEs_
We first review diffusion models and their continuous differential equations, then analyze the optimal BCs \(\mathbf{x}_{T}^{*}\) used by _diffusion ODEs_ to sample SR images from diffusion-based SR models, and finally describe the method of approximating the optimal BC \(\tilde{\mathbf{x}}_{T}\) in Eqn. 19 with the criterion of a reference set containing \(R\) image pairs. With the approximately optimal \(\tilde{\mathbf{x}}_{T}\), we can steadily sample high-quality SR images from diffusion-based SR models by solving _diffusion ODEs_.
### Diffusion Models, _Diffusion SDEs_ and _Diffusion ODEs_
Diffusion models [12; 44] are a kind of generative model which first maps samples from an unknown distribution (_e.g._, the natural image distribution) to samples from a well-known distribution (_e.g._, the standard Gaussian distribution) by gradually adding noise, and then attempts to revert this process via denoising step by step. The first process is called the _forward process_. Taking \(\mathbf{x}_{0}\) as a sample of the unknown distribution \(X\) and \(T\) as the number of noise-adding steps, the state \(\mathbf{x}_{t},t\in[0,T]\) of the _forward process_ satisfies
\[q(\mathbf{x}_{t}|\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\alpha(t)\mathbf{ x}_{0},\sigma^{2}(t)\mathbf{I}),q(\mathbf{x}_{T})=\mathcal{N}(\mathbf{x}_{T}; \mathbf{0},\mathbf{I}), \tag{1}\]
where \(\alpha(t),\sigma(t)\) are differentiable functions of \(t\) defined by hyper-parameters. Furthermore, [17] proves that the transition distribution \(q(\mathbf{x}_{t}|\mathbf{x}_{0})\) can be given by the following stochastic differential equation (SDE) at any \(t\in[0,T]\):
\[\mathrm{d}\mathbf{x}_{t}=f(t)\mathbf{x}_{t}\mathrm{d}t+g(t)\mathrm{d}\mathbf{w} _{t}, \tag{2}\]
where \(\mathbf{w}_{t}\) is a standard Wiener process, and \(f(t),g(t)\) are given by
\[f(t)=\frac{\mathrm{d}\log\alpha(t)}{\mathrm{d}t},g^{2}(t)=\frac{\mathrm{d} \sigma^{2}(t)}{\mathrm{d}t}-2\frac{\mathrm{d}\log\alpha(t)}{\mathrm{d}t}\sigma ^{2}(t). \tag{3}\]
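As a small illustration of the forward process in Eqn. 1, the sketch below draws \(\mathbf{x}_{t}\sim q(\mathbf{x}_{t}|\mathbf{x}_{0})\) directly; the variance-preserving schedule used here is our own illustrative assumption and is not taken from the paper.

```python
import numpy as np

def vp_schedule(t, beta_min=0.1, beta_max=20.0):
    # A hypothetical variance-preserving schedule: alpha(t)^2 + sigma(t)^2 = 1.
    log_alpha = -0.25 * t**2 * (beta_max - beta_min) - 0.5 * t * beta_min
    alpha = np.exp(log_alpha)
    sigma = np.sqrt(1.0 - alpha**2)
    return alpha, sigma

def sample_forward(x0, t, rng=np.random.default_rng()):
    # Draw x_t ~ q(x_t | x_0) = N(alpha(t) x_0, sigma(t)^2 I), as in Eqn. 1.
    alpha, sigma = vp_schedule(t)
    eps = rng.standard_normal(x0.shape)
    return alpha * x0 + sigma * eps

# Example: noise a dummy 3x64x64 "image" to t = 0.5.
x0 = np.zeros((3, 64, 64))
xt = sample_forward(x0, t=0.5)
```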
The _reverse process_ attempts to learn a parameterized distribution \(p_{\theta}(\mathbf{x}_{0})\) to fit the real data distribution \(q(\mathbf{x}_{0})\) by using a trained noise-prediction model \(\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t)\) to gradually generate \(\mathbf{x}_{0}\) from \(\mathbf{x}_{T}\)[12]. [25] proves that the reverse process can be done by solving the following parameterized SDE (_diffusion SDE_) with numerical solvers:
\[\mathrm{d}\mathbf{x}_{t}=[f(t)\mathbf{x}_{t}+\frac{g^{2}(t)}{\sigma(t)}\mathbf{ \epsilon}_{\theta}(\mathbf{x}_{t},t)]\mathrm{d}t+g(t)\mathrm{d}\bar{\mathbf{ w}}_{t},\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I}), \tag{4}\]
where \(\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t)\) is a trainable noise-prediction neural network and \(\bar{\mathbf{w}}_{t}\) is another standard Wiener process in reverse time. The original DDPM [12] sampler used by current diffusion-based SR models is a discrete-time solver of the _diffusion SDE_. When discretizing _diffusion SDEs_, the step sizes are limited because the Wiener process \(\bar{\mathbf{w}}_{t}\) contains randomness. As a consequence, resampled samplers such as DDPM-100 with larger step sizes do not perform satisfactorily.
Moreover, [44] gives an ordinary differential equation (ODE) which has the same marginal distribution of _diffusion SDE_:
\[\frac{\mathrm{d}\mathbf{x}_{t}}{\mathrm{d}t}=f(t)\mathbf{x}_{t}+\frac{g^{2}(t )}{2\sigma(t)}\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t),\mathbf{x}_{T}\sim \mathcal{N}(\mathbf{0},\mathbf{I}). \tag{5}\]
Such an ODE is called a _diffusion ODE_. Because _diffusion ODEs_ have no randomness, one can get an exact solution \(\mathbf{x}_{0}\) given a BC \(\mathbf{x}_{T}\) by solving the _diffusion ODE_ with numerical solvers like DDIM [43] or DPM-Solver [25]. Thus, we can use a parameterized projection:
\[\mathbf{x}_{0}=h_{\theta}(\mathbf{x}_{T}),\mathbf{x}_{T}\sim\mathcal{N}( \mathbf{0},\mathbf{I}), \tag{6}\]
to represent the solution of Eqn. 5. We can extend diffusion models to conditional ones \(p_{\theta}(\mathbf{x}_{0}|c)\) by providing conditions \(c\) when training the noise-prediction model \(\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},c,t)\). By randomly dropping the condition, the model can be jointly conditional and unconditional [13]. We define the projections:
\[\mathbf{x}_{0}=h_{\theta}(\mathbf{x}_{T},c),\mathbf{x}_{0}=h_{\theta}(\mathbf{ x}_{T},\phi), \tag{7}\]
to be the solutions of the conditional _diffusion ODE_ and the unconditional _diffusion ODE_ of the same diffusion model, respectively, where \(\phi\) denotes the blank (dropped) condition.
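To make the projection \(h_{\theta}\) concrete, the following sketch integrates the _diffusion ODE_ of Eqn. 5 backwards in time with a plain fixed-step Euler scheme. The callable `eps_theta(x, y, t)` and the coefficient functions `f`, `g`, `sigma` are placeholders for whatever trained network and schedule are available; in practice a dedicated solver such as DDIM or DPM-Solver would replace the Euler step.

```python
import numpy as np

def h_theta(x_T, y, eps_theta, f, g, sigma, num_steps=50, T=1.0):
    """Approximate the diffusion-ODE projection x_0 = h_theta(x_T, y) of Eqn. 7.

    eps_theta(x, y, t): assumed noise-prediction network (pass y=None for the
    unconditional branch of Eqn. 7); f(t), g(t), sigma(t) follow Eqns. 1 and 3.
    """
    ts = np.linspace(T, 0.0, num_steps + 1)
    x = x_T.copy()
    for i in range(num_steps):
        t, t_next = ts[i], ts[i + 1]
        dt = t_next - t                       # negative: integrating towards t = 0
        drift = f(t) * x + (g(t) ** 2) / (2.0 * sigma(t)) * eps_theta(x, y, t)
        x = x + drift * dt                    # Euler step for the ODE in Eqn. 5
    return x
```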
### Analyzing Optimal BCs \(\mathbf{x}_{T}^{*}\) of _Diffusion ODEs_ for Diffusion-based SR Models
For image SR tasks, steady SR results mean deterministic samples from the learned conditional distribution \(p_{\theta}(\mathbf{x}_{0}|c)\), where the conditions \(c\) are LR images \(\mathbf{y}\). In other words, we should only sample once from the distribution. The parameterized distribution \(p_{\theta}(\mathbf{x}_{0}|\mathbf{y})\) learned by a well-trained diffusion model is a fit to the data distribution \(q(\mathbf{x}_{0}|\mathbf{y})\), and the training data pairs \((\mathbf{z}_{i},\mathbf{y}_{i})\) are samples and conditions of the distribution \(q(\mathbf{x}_{0}|\mathbf{y})\), where \(\mathbf{z}_{i}\) denotes the HR image corresponding to \(\mathbf{y}_{i}\). From the perspective of maximum likelihood, each pair \((\mathbf{z}_{i},\mathbf{y}_{i})\) should be located at the point with the highest probability density of \(q(\mathbf{x}_{0}|\mathbf{y}_{i})\):
\[\mathbf{z}_{i}=\operatorname*{arg\,max}_{\mathbf{x}_{0}}q(\mathbf{x}_{0}| \mathbf{y}_{i}). \tag{8}\]
So, the optimal sample of \(p_{\theta}(\mathbf{x}_{0}|\mathbf{y})\) should satisfy:
\[\mathbf{x}_{0}^{*}=\operatorname*{arg\,max}_{\mathbf{x}_{0}}p_{\theta}( \mathbf{x}_{0}|\mathbf{y}). \tag{9}\]
When we solve _diffusion ODEs_ to sample from the diffusion model \(p_{\theta}(\mathbf{x}_{0}|\mathbf{y})\), we actually sample \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and project \(\mathbf{x}_{T}\) to the final sample \(\mathbf{x}_{0}\) via the projection in Eqn. 7. Thus, we have:
\[p_{\theta}(\mathbf{x}_{0}|\mathbf{y})=p_{\theta}(h_{\theta}(\mathbf{x}_{T}, \mathbf{y})),p_{\theta}(\mathbf{x}_{0})=p_{\theta}(h_{\theta}(\mathbf{x}_{T}, \phi)). \tag{10}\]
Further explanation of Eqn. 10 can be found in the supplementary materials. Substituting Eqn. 10 into Eqn. 9, the optimal BC and sample should satisfy:
\[\mathbf{x}_{T}^{*}=\operatorname*{arg\,max}_{\mathbf{x}_{T}\sim\mathcal{N}( \mathbf{0},\mathbf{I})}p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y})), \mathbf{x}_{0}^{*}=h_{\theta}(\mathbf{x}_{T}^{*},\mathbf{y}). \tag{11}\]
A well-trained \(p_{\theta}(\mathbf{x}_{0}|\mathbf{y})\) is a fit to \(q(\mathbf{x}_{0}|\mathbf{y})\). Based on Bayes' rule, we have:
\[p_{\theta}(\mathbf{x}_{0}|\mathbf{y})=\frac{p_{\theta}(\mathbf{x}_{0},\mathbf{ y})}{p(\mathbf{y})}=\frac{p_{\theta}(\mathbf{y}|\mathbf{x}_{0})}{p(\mathbf{y})}p_{ \theta}(\mathbf{x}_{0}). \tag{12}\]
Substituting Eqn. 10 into Eqn. 12, the parameterized conditional distribution is:
\[p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y}))=p_{\theta}(\mathbf{x}_{0}| \mathbf{y})=\frac{p_{\theta}(\mathbf{y}|\mathbf{x}_{0})}{p(\mathbf{y})}p_{ \theta}(\mathbf{x}_{0})=\frac{p_{\theta}(\mathbf{y}|h_{\theta}(\mathbf{x}_{T},\phi))}{p(\mathbf{y})}p_{\theta}(h_{\theta}(\mathbf{x}_{T},\phi)). \tag{13}\]
In Eqn. 13, \(p(\mathbf{y})\) is the prior probability distribution of LR images, which is a uniform distribution, and \(p_{\theta}(h_{\theta}(\mathbf{x}_{T},\phi))\) is not related to the LR image \(\mathbf{y}\). \(p_{\theta}(\mathbf{y}|h_{\theta}(\mathbf{x}_{T},\phi))\) is an implicit classifier, indicating the probability that an unconditionally generated image is the SR image corresponding to the LR image \(\mathbf{y}\). For a well-trained model, such probability is also approximately uniform. Thus, \(p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y}))\) is approximately independent of the specific LR image \(\mathbf{y}\):
\[\mathbf{x}_{T}^{*}=\operatorname*{arg\,max}_{\mathbf{x}_{T}\sim\mathcal{N}( \mathbf{0},\mathbf{I})}p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y}))= \operatorname*{arg\,max}_{\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I} )}p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y}_{i})),\forall\mathbf{y}_{i} \in\mathcal{C}, \tag{14}\]
where \(\mathcal{C}\) is the theoretically universal set of all LR images. We design an experiment in Sec. 4.3 to validate a consequence of the approximate independence of \(p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y}))\) with respect to \(\mathbf{y}\). Hitherto, we have shown that the optimal BC \(\mathbf{x}_{T}^{*}\) is shared across different LR images \(\mathbf{y}\). In the next subsection, we describe how to approximate \(\mathbf{x}_{T}^{*}\) with the criterion of a reference set containing \(R\) HR-LR image pairs \(\mathcal{R}=\{(\mathbf{z}_{i},\mathbf{y}_{i})\}_{i=1}^{R}\), a subset of the training dataset.
### Approximating Optimal BCs \(\tilde{\mathbf{x}}_{T}\) of _Diffusion ODEs_ for Diffusion-based SR Models
As discussed above, a well-trained model \(p_{\theta}(\mathbf{x}_{0}|\mathbf{y})\) is a fit to \(q(\mathbf{x}_{0}|\mathbf{y})\). Thus, we can substitute \(q(\mathbf{x}_{0}|\mathbf{y})\) for \(p_{\theta}(\mathbf{x}_{0}|\mathbf{y})\) in Eqn. 14, obtaining an approximation \(\tilde{\mathbf{x}}_{T}\) of \(\mathbf{x}_{T}^{*}\):
\[\tilde{\mathbf{x}}_{T}=\operatorname*{arg\,max}_{\mathbf{x}_{T}\sim\mathcal{N} (\mathbf{0},\mathbf{I})}q(h_{\theta}(\mathbf{x}_{T},\mathbf{y}_{i})). \tag{15}\]
Besides, we have the maximum-likelihood relation of Eqn. 8 for \(q(\mathbf{x}_{0}|\mathbf{y})\):
\[\mathbf{z}_{i}=\operatorname*{arg\,max}_{\mathbf{x}_{0}}q(\mathbf{x}_{0}| \mathbf{y}_{i})=\operatorname*{arg\,max}_{\mathbf{x}_{T}\sim\mathcal{N}( \mathbf{0},\mathbf{I})}q(h_{\theta}(\mathbf{x}_{T},\mathbf{y}_{i})). \tag{16}\]
Considering the characteristics of natural images, the distribution \(q(\mathbf{x}_{0}|\mathbf{y})\) is continuous. So, there exists a neighbourhood around \(\mathbf{z}_{i}\) where \(q(\mathbf{x}_{0}|\mathbf{y}_{i})\) is monotonic: the closer \(\mathbf{x}_{0}\) gets to \(\mathbf{z}_{i}\), the larger \(q(\mathbf{x}_{0}|\mathbf{y}_{i})\) is. Taking \(M(\cdot,\cdot)\) as a function measuring the similarity between two images, \(\tilde{\mathbf{x}}_{T}\) can be approximated by:
\[\tilde{\mathbf{x}}_{T}=\operatorname*{arg\,max}_{\mathbf{x}_{T}\sim\mathcal{N} (\mathbf{0},\mathbf{I})}q(h_{\theta}(\mathbf{x}_{T},\mathbf{y}_{i}))\approx \operatorname*{arg\,max}_{\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I} )}M(h_{\theta}(\mathbf{x}_{T},\mathbf{y}_{i}),\mathbf{z}_{i}). \tag{17}\]
Because the monotonicity of \(q(\mathbf{x}_{0}|\mathbf{y}_{i})\) only holds in a small neighbourhood, we use a set containing \(R\) HR-LR image pairs \(\mathcal{R}=\{(\mathbf{z}_{i},\mathbf{y}_{i})\}_{i=1}^{R}\) to calculate \(\tilde{\mathbf{x}}_{T}\):
\[\tilde{\mathbf{x}}_{T}\approx\operatorname*{arg\,max}_{\mathbf{x}_{T}\sim \mathcal{N}(\mathbf{0},\mathbf{I})}\sum_{i=1}^{R}M(h_{\theta}(\mathbf{x}_{T}, \mathbf{y}_{i}),\mathbf{z}_{i}). \tag{18}\]
Considering the perceptual characteristics of images, we take negative LPIPS [48] as the implementation of \(M(\cdot,\cdot)\). Because the projection \(h_{\theta}\) is the solution to a _diffusion ODE_, it is difficult to give an analytical result for Eqn. 18. We use a Monte Carlo approach to estimate \(\tilde{\mathbf{x}}_{T}\): we randomly sample \(K\) candidates \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), evaluate Eqn. 18 for each, and choose the best one:
\[\tilde{\mathbf{x}}_{T}\approx\operatorname*{arg\,max}_{\mathbf{x}_{T}\in \mathcal{K}}\sum_{i=1}^{R}-\mathrm{LPIPS}(h_{\theta}(\mathbf{x}_{T},\mathbf{y} _{i}),\mathbf{z}_{i}), \tag{19}\]
where \(\mathcal{K}\) is the set of \(K\) randomly sampled \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). Finally, given an unseen LR image \(\mathbf{y}\), the corresponding SR image can be calculated by:
\[\tilde{\mathbf{x}}_{0}=h_{\theta}(\tilde{\mathbf{x}}_{T},\mathbf{y}). \tag{20}\]
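A minimal sketch of this Monte Carlo search is given below. It assumes a projection callable `h_theta(x_T, y)` (e.g. a 50-step DDIM solve of the _diffusion ODE_) and an LPIPS callable `lpips(a, b)`; both names are placeholders rather than a reference implementation.

```python
import numpy as np

def find_optimal_bc(h_theta, lpips, ref_pairs, K, shape, rng=np.random.default_rng()):
    """Monte Carlo search for the approximately optimal BC of Eqn. 19.

    ref_pairs: list of (z_i, y_i) HR-LR reference pairs (the set R).
    K: number of candidate boundary conditions x_T ~ N(0, I).
    shape: shape of a single x_T (matching the SR image shape).
    """
    best_score, best_xT = -np.inf, None
    for _ in range(K):
        xT = rng.standard_normal(shape)
        # Sum of negative LPIPS over the reference set (Eqns. 18 and 19).
        score = sum(-lpips(h_theta(xT, y), z) for z, y in ref_pairs)
        if score > best_score:
            best_score, best_xT = score, xT
    return best_xT

# Afterwards, an unseen LR image y is super-resolved once via Eqn. 20:
#   x0 = h_theta(best_xT, y)
```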
## 4 Experiments
To demonstrate the effectiveness of the proposed sampling method, we train a vanilla diffusion-based SR model with an architecture similar to SR3 [39] as a baseline and evaluate several commonly-used sampling methods and our method on it.
### Implementation Details
**Datasets.** We train the SR model on the widely-used DF2K dataset [1; 23], which contains 3,450 high-resolution images. We train a 64\(\times\)64 \(\rightarrow\) 256\(\times\)256 model with bicubic downsampling. During testing, we use 3 different datasets: DIV2k-test [1], Urban100 [14] and B100 [28]. For DIV2k-test and Urban100, we randomly crop 1,000 256\(\times\)256 patches as HR images and downscale them to 64\(\times\)64 patches with a bicubic kernel as the corresponding LR patches. For B100, we randomly extract 200 patches, as the image resolutions in this dataset are not large compared with those in the other datasets.
**Training details.** Following SR3 [39], we build a UNet-based noise-prediction model which directly concatenates LR images with noisy states \(\mathbf{x}_{t}\) along the channel dimension, and we upsample the original LR images to the size of the SR images with a bicubic kernel so that they have the same size as \(\mathbf{x}_{t}\). Our UNet has an architecture similar to the one used by SR3, but contains only about 36M parameters. We first train the model for 2M iterations with a batch size of 16, and then for another 1M iterations with a batch size of 64. The learning rate is fixed to \(1e-4\). More details of the UNet and the diffusion model can be found in the supplementary materials.
**Compared methods.** This paper proposes a method of sampling from diffusion-based SR models, so the main baselines are the sampling methods currently used by other diffusion-based SR models on the same model, namely resampled DDPM-250 [7], resampled DDPM-100 [39] and DDIM-50 [43]. Furthermore, we report the performance of DDPM-1000 [12] as an upper bound for previous sampling methods, which serves as evidence of our model's capability. Besides, we report the performance of a state-of-the-art diffusion-based method, SRDiff [21], and of GAN-based methods, _i.e._ ESRGAN [46] and RankSRGAN [49]. We use the open-source code and pretrained models of these methods without any modification. To the best of our knowledge, our diffusion-based model achieves performance superior to GAN-based [10] SR models even with a smaller number of parameters, highlighting the effectiveness and efficiency of our approach. More implementation details of the compared methods can be found in the supplementary materials.
**Settings of calculating \(\tilde{\mathbf{x}}_{T}\) and _diffusion ODE_ solvers.** As discussed in Sec. 3.3, we use a reference set of HR-LR image pairs \(\mathcal{R}=\{(\mathbf{z}_{i},\mathbf{y}_{i})\}_{i=1}^{R}\) and a set \(\mathcal{K}\) of randomly sampled \(\mathbf{x}_{T}\) to calculate the approximately optimal BC \(\tilde{\mathbf{x}}_{T}\). In practice, we randomly crop \(R=300\) 256\(\times\)256
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Classification} & \multicolumn{2}{c}{DIV2k-test} & \multicolumn{2}{c}{Urban100} & \multicolumn{2}{c}{BSD100} \\ & & LPIPS & PSNR & LPIPS & PSNR & LPIPS & PSNR \\ \hline Bicubic & - & 0.4065 & 28.50 & 0.4826 & 21.75 & 0.5282 & 24.18 \\ \hline ESRGAN [46] & GAN & 0.1082 & 28.18 & 0.1226 & 23.04 & 0.1579 & 23.65 \\ RankSRGAN [49] & GAN & 0.1171 & 27.98 & 0.1403 & 23.16 & 0.1714 & 23.80 \\ \hline SRDiff [21] & Diffusion & 0.1286 & 28.96 & 0.1391 & 23.88 & 0.2046 & 24.17 \\ \hline DDPM-1000 & Diffusion & 0.1075 & 28.75 & 0.1165 & 24.33 & 0.1555 & 23.86 \\ DDPM-250 & Diffusion & 0.1142 & 28.95 & 0.1181 & 24.41 & 0.1621 & 24.00 \\ DDPM-100 & Diffusion & 0.1257 & 29.16 & 0.1232 & 24.51 & 0.1703 & 24.15 \\ DDIM-50 & Diffusion & 0.1483 & 28.55 & 0.1333 & 24.16 & 0.1823 & 23.75 \\ DDIM-50 + \(\tilde{\mathbf{x}}_{T}\) & Diffusion & 0.1053 & 28.65 & 0.1164 & 24.26 & 0.1552 & 23.99 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Qualitative results on testing datasets. “\(\tilde{\mathbf{x}}_{T}\)” denotes “approximately optimal boundary condition” calculated by the proposed method. The metrics of bottom 5 rows are all sampled with the same vanilla diffusion-based SR model trained by us. Red numbers denote the best performances and blue numbers denote the second best performances.
patches from the DF2K dataset as reference HR patches, downsample them to 64\(\times\)64 as reference LR patches, and set \(K=1000\). The effect of \(R\) and \(K\) is discussed in Sec. 4.4. We use DDIM [43], a first-order solver of the original _diffusion ODE_ [44; 25], as the _diffusion ODE_ solver and set the number of steps to 50.
### Quantitative and Qualitative Results
The performance on the testing datasets is shown in Tab. 1. Note that the metrics in the bottom 5 rows are all obtained with the same vanilla diffusion-based SR model using different sampling methods. The performance of DDPM-1000 shows the capacity of the model, while the commonly-used sampling methods, including DDPM-250, DDPM-100 and DDIM-50, trade off sample quality for faster sampling speed. The proposed sampling method (DDIM-50 + \(\tilde{\mathbf{x}}_{T}\)) outperforms all other sampling methods for the same diffusion-based SR model. Remarkably, our method surpasses the previous upper bound, DDPM-1000, which is 20 times slower. These results demonstrate that we can steadily generate high-quality SR images from pretrained diffusion-based SR models with the proposed method. Visual comparisons of SR images of different methods can be found in Fig. 2. More visual results can be found in the supplementary materials.
Figure 2: Qualitative comparisons of different results. “RSRGAN” denotes RankSRGAN [49]. All images on the right of the black line are sampled from the same vanilla diffusion-based SR model trained by us. **[Zoom in for best view]**
### Validation on the Independence of \(p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y}))\) to \(\mathbf{y}\)
As stated in Sec. 3.2, \(p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y}))\) is not related to the specific LR image \(\mathbf{y}\). In this section, we show the related evidence. As mentioned in Sec. 3.3, we assume the measurement function \(M(h_{\theta}(\mathbf{x}_{T},\mathbf{y}),\mathbf{z})\) has the same shape as \(q(h_{\theta}(\mathbf{x}_{T},\mathbf{y}))\), and we use \(q(h_{\theta}(\mathbf{x}_{T},\mathbf{y}))\) to approximate \(p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y}))\). So, given different LR images \(\mathbf{y}_{i}\), if \(p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y}_{i}))\) is independent of \(\mathbf{y}_{i}\), the functions \(M(h_{\theta}(\mathbf{x}_{T},\mathbf{y}_{i}),\mathbf{z}_{i})\) of \(\mathbf{x}_{T}\) should have the same shape. Thus, we examine the shapes of \(M(h_{\theta}(\mathbf{x}_{T},\mathbf{y}_{i}),\mathbf{z}_{i})\) for different \(\mathbf{y}_{i}\). We randomly sample 10 LR-HR image pairs and 100 \(\mathbf{x}_{T}\), then generate 100 SR images for each LR image and calculate their LPIPS, obtaining 10 LPIPS sequences. To compare the shapes of the 10 LPIPS sequences, we calculate the Pearson correlation coefficients of every two sequences and form the matrix shown in Tab. 2. The coefficients are all high, indicating a strong correlation between different LPIPS sequences. To visualize the correlation between SR results of different LR images \(\mathbf{y}_{i}\), we further exhibit several SR images sharing the same \(\mathbf{x}_{T}\) in Fig. 3. SR images of different LR images with the same \(\mathbf{x}_{T}\) have similar visual features: SR results with \(\mathbf{x}_{T1}\) appear over-sharp and contain excessive artifacts, while SR results with \(\mathbf{x}_{T2}\) appear over-smooth. All of them are reasonable but not satisfying enough, indicating the necessity of finding an approximately optimal BC \(\tilde{\mathbf{x}}_{T}\).
\begin{table}
\begin{tabular}{c|c c c c c c c c c} \hline & LR-1 & LR-2 & LR-3 & LR-4 & LR-5 & LR-6 & LR-7 & LR-8 & LR-9 & LR-10 \\ \hline LR-1 & 1.000 & 0.754 & 0.840 & 0.798 & 0.811 & 0.751 & 0.902 & 0.837 & 0.877 & 0.765 \\ LR-2 & 0.754 & 1.000 & 0.811 & 0.789 & 0.832 & 0.702 & 0.831 & 0.775 & 0.812 & 0.717 \\ LR-3 & 0.840 & 0.811 & 1.000 & 0.732 & 0.799 & 0.654 & 0.841 & 0.799 & 0.836 & 0.745 \\ LR-4 & 0.798 & 0.789 & 0.732 & 1.000 & 0.756 & 0.699 & 0.855 & 0.801 & 0.793 & 0.732 \\ LR-5 & 0.811 & 0.832 & 0.799 & 0.756 & 1.000 & 0.632 & 0.811 & 0.792 & 0.856 & 0.789 \\ LR-6 & 0.751 & 0.702 & 0.654 & 0.699 & 0.632 & 1.000 & 0.721 & 0.734 & 0.611 & 0.704 \\ LR-7 & 0.902 & 0.831 & 0.841 & 0.855 & 0.811 & 0.721 & 1.000 & 0.754 & 0.787 & 0.725 \\ LR-8 & 0.837 & 0.775 & 0.799 & 0.801 & 0.792 & 0.734 & 0.754 & 1.000 & 0.813 & 0.786 \\ LR-9 & 0.877 & 0.812 & 0.836 & 0.793 & 0.856 & 0.611 & 0.787 & 0.813 & 1.000 & 0.801 \\ LR-10 & 0.765 & 0.717 & 0.745 & 0.732 & 0.789 & 0.704 & 0.725 & 0.786 & 0.801 & 1.000 \\ \hline \end{tabular}
\end{table}
Table 2: Pearson’s coefficients between 10 LPIPS sequences of 100 SR images for each LR image.
Figure 3: SR results with shared \(\mathbf{x}_{T}\). Results with \(\mathbf{x}_{T1}\) all have excessive artifacts and results with \(\mathbf{x}_{T2}\) are all over-smooth. Results with shared \(\mathbf{x}_{T}\) share visual features. [**Zoom in for best view]**
It should be noted that this experiment only validates the consistency of the shapes of \(M(h_{\theta}(\mathbf{x}_{T},\mathbf{y}_{i}),\mathbf{z}_{i})\), which is a consequence of the independence of \(p_{\theta}(h_{\theta}(\mathbf{x}_{T},\mathbf{y}))\) from \(\mathbf{y}\), rather than the independence itself.
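The correlation check itself reduces to a few lines; a sketch with hypothetical array names is shown below, where `lpips_seqs` would hold the 10 LPIPS sequences of length 100 described above.

```python
import numpy as np

# lpips_seqs: array of shape (10, 100); row i holds the LPIPS values of the
# 100 SR images generated for LR image i from the 100 shared candidates x_T.
lpips_seqs = np.random.rand(10, 100)  # placeholder data for illustration

# Pearson correlation coefficients between every pair of sequences (cf. Tab. 2).
corr_matrix = np.corrcoef(lpips_seqs)
print(np.round(corr_matrix, 3))
```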
### Ablation Studies
As discussed in Sec. 3.3, we use a reference set \(\mathcal{R}=\{(\mathbf{z}_{i},\mathbf{y}_{i})\}_{i=1}^{R}\) and a set \(\mathcal{K}\) of randomly sampled \(\mathbf{x}_{T}\) to estimate the approximately optimal BC \(\tilde{\mathbf{x}}_{T}\). The scales of the two sets affect the quality of the estimated \(\tilde{\mathbf{x}}_{T}\): the larger \(\mathcal{R}\) and \(\mathcal{K}\) are, the better the estimate of \(\tilde{\mathbf{x}}_{T}\) is. Thus, we perform ablation studies on the scales of the two sets.
For the ablation on \(\mathcal{R}\), we keep \(K=200\). We build subsets \(\mathcal{R}_{i}\) containing \(i\) image pairs and set \(i\) to 1, 2, 4, 8, 16. For each \(i\), we build 8 different \(\mathcal{R}_{i}\) with random image pairs. With each \(\mathcal{R}_{i}\) as the criterion, we choose the corresponding \(\tilde{\mathbf{x}}_{T}\) and test it with DDIM-50 on a subset of the DIV2k test set containing 100 patches. The mean and standard deviation of the LPIPS of the SR results with the estimated \(\tilde{\mathbf{x}}_{T}\) at each \(i\) are shown in Fig. 4. The performance becomes better and steadier as \(R=i\) increases.
For the ablation on \(\mathcal{K}\), we keep \(R=20\). We randomly sample \(i\) \(\mathbf{x}_{T}\) to build sets \(\mathcal{K}_{i}\) and set \(i\) to 10, 20, 40, 80, 160. For each \(i\), we build 8 different \(\mathcal{K}_{i}\). We estimate \(\tilde{\mathbf{x}}_{T}\) from each \(\mathcal{K}_{i}\) and test it with DDIM-50 on the same subset of the DIV2k test set used in the ablation on \(\mathcal{R}\). The mean and standard deviation of the LPIPS of the SR results with the estimated \(\tilde{\mathbf{x}}_{T}\) at each \(i\) are shown in Fig. 4. The performance becomes better and steadier as \(K=i\) increases.
## 5 Conclusion and Future Work
In this work, we propose to steadily sample high-quality SR images from diffusion-based SR models by solving _diffusion ODEs_ with approximately optimal BCs \(\tilde{\mathbf{x}}_{T}\), and we describe the process of finding these BCs. Experiments show that the proposed sampling method outperforms commonly-used sampling methods for diffusion-based SR models. Our method is not limited to specific architectures of diffusion-based SR models and does not require additional training. This flexibility allows our method to enhance the sampling performance of pre-trained diffusion-based SR models in a plug-and-play manner.
In this work, we only discuss SR tasks under bicubic degradation; however, our analysis does not depend on this particular task formulation. In the future, we plan to extend the proposed method to other low-level tasks including image colorization, low-light enhancement, blind image super-resolution, _etc._ Besides, the calculated approximately optimal BC \(\tilde{\mathbf{x}}_{T}\) has the same spatial dimensions as the LR images \(\mathbf{y}\) used for its estimation, so it cannot be directly applied to LR images with other shapes. We plan to design algorithms that extend the application of \(\tilde{\mathbf{x}}_{T}\) to LR images with other shapes.
Figure 4: Ablation on the values of \(R\) and \(K\). Shaded regions denote the standard deviation; the red dotted lines denote the LPIPS of SR samples from DDIM-50 with randomly sampled \(\mathbf{x}_{T}\), indicating the lower bound of performance, and the green dotted lines denote the LPIPS of SR samples from DDIM-50 with \(\tilde{\mathbf{x}}_{T}\), indicating the upper bound of performance.
2301.08197 | Stochastic entropy production associated with quantum measurement in a
framework of Markovian quantum state diffusion | The reduced density matrix that characterises the state of an open quantum
system is a projection from the full density matrix of the quantum system and
its environment, and there are many full density matrices consistent with a
given reduced version. Without a specification of relevant details of the
environment, the evolution of a reduced density matrix is therefore typically
unpredictable, even if the dynamics are deterministic. With this in mind, we
investigate a two level open quantum system using a framework of quantum state
diffusion. We consider the pseudorandom evolution of its reduced density matrix
when subjected to an environment-driven process of continuous quantum
measurement of a system observable, using dynamics that asymptotically send the
system to an eigenstate. The unpredictability is characterised by a stochastic
entropy production, the average of which corresponds to an increase in the
subjective uncertainty of the quantum state adopted by the system and
environment, given the underspecified dynamics. This differs from a change in
von Neumann entropy, and can continue indefinitely as the system is guided
towards an eigenstate. As one would expect, the simultaneous measurement of two
non-commuting observables within the same framework does not send the system to
an eigenstate. Instead, the probability density function describing the reduced
density matrix of the system becomes stationary over a continuum of pure
states, a situation characterised by zero further stochastic entropy
production. Transitions between such stationary states, brought about by
changes in the relative strengths of the two measurement processes, give rise
to finite positive mean stochastic entropy production. The framework
investigated can offer useful perspectives on both the dynamics and
irreversible thermodynamics of measurement in quantum systems. | Claudia L. Clarke, Ian J. Ford | 2023-01-19T17:47:39Z | http://arxiv.org/abs/2301.08197v1 | Stochastic entropy production associated with quantum measurement in a framework of Markovian quantum state diffusion
###### Abstract
The reduced density matrix that characterises the state of an open quantum system is a projection from the full density matrix of the quantum system and its environment, and there are many full density matrices consistent with a given reduced version. Without a specification of relevant details of the environment, the evolution of a reduced density matrix is therefore typically unpredictable, even if the dynamics are deterministic. With this in mind, we investigate a two level open quantum system using a framework of quantum state diffusion. We consider the pseudorandom evolution of its reduced density matrix when subjected to an environment-driven process of continuous quantum measurement of a system observable, using dynamics that asymptotically send the system to an eigenstate. The unpredictability is characterised by a stochastic entropy production, the average of which corresponds to an increase in the subjective uncertainty of the quantum state adopted by the system and environment, given the underspecified dynamics. This differs from a change in von Neumann entropy, and can continue indefinitely as the system is guided towards an eigenstate. As one would expect, the simultaneous measurement of two non-commuting observables within the same framework does not send the system to an eigenstate. Instead, the probability density function describing the reduced density matrix of the system becomes stationary over a continuum of pure states, a situation characterised by zero further stochastic entropy production. Transitions between such stationary states, brought about by changes in the relative strengths of the two measurement processes, give rise to finite positive mean stochastic entropy production. The framework investigated can offer useful perspectives on both the dynamics and irreversible thermodynamics of measurement in quantum systems.
## I Introduction
In classical mechanics, entropy quantifies subjective uncertainty in the adopted configuration of a system when only partial detail is available concerning the coordinates of the component particles. Predictability of future behaviour when such a system is coupled to a similarly underspecified environment is limited and knowledge of the state worsens with time, even if the dynamics are entirely deterministic. The total entropy of the system and environment increases as a consequence. In many situations such evolution can be associated with the dissipation of potential energy into heat as the world progresses into the future, and this underpins the role played by entropy in the (19th century) second law of thermodynamics [1; 2; 3].
The 21st century concept of entropy production, however, is based on dynamical consideration of the probabilities of forward and backward sequences of events governed by an effective stochastic dynamics. In this framework of'stochastic thermodynamics', entropy change is the expectation value of a'stochastic entropy production', and this has clarified a number of long standing conceptual issues [4; 5; 6; 7; 8].
The central aim of this paper is to employ entropy as a description of uncertainty of adopted configuration at the level of a reduced density matrix in quantum mechanics. In the absence of quantum measurement, the full density matrix of a system together with its environment (the 'world') evolves deterministically according to the unitary dynamics of the von Neumann equation. This can give rise to a non-unitary evolution of the reduced density matrix describing the system, corresponding to thermalisation for example [9; 10; 11]. But the trajectory that a reduced density matrix follows will be unpredictable if the complete initial state of the world is not specified. This intrinsic unpredictability holds whether or not we impose traditional ideas of randomness arising from quantum mechanical measurement. It is natural, therefore, to consider the idea of an effective Brownian motion of the reduced density matrix, with associated entropy increase. The concept is illustrated in Fig. 1.
In developing this idea, we regard the reduced density matrix as an analogue of classical system coordinates and hence as a physical description of the quantum state, not merely as a vehicle for specifying probabilities of projective measurements or a representation of a state of knowledge. There is a'real' evolution trajectory that needs to be modelled, and if this is effectively stochastic, then there is also a subjective uncertainty in the actual state adopted by the world over time. But coordinates that describe a real state of a system ought not to change discontinuously, which would seem to raise difficulties in connection with the instantaneous jumps normally considered to arise from quantum measurement. A realist viewpoint therefore obliges us to describe quantum measurement in a fashion that avoids jumps.
We can use the description of 'weak' or continuous measurement processes in quantum mechanics to achieve this [12; 13; 14], though we employ the ideas in a slightly unconventional fashion. Instead of regarding weak measurement as a consequence of projective measurements
of remote parts of the environment coupled to the system, we imagine that complex dynamical interactions exist between the system and environment that can guide the system towards eigenstates of observables under certain conditions [15]. In other words, we imagine a situation where quantum measurement is just an aspect of the unitary dynamics of the world, its stochasticity being a consequence of a failure to specify the degrees of freedom of the environment, or more precisely those of a measuring device. This is reminiscent of ideas employed in classical statistical mechanics.
The implications of such a point of view can be captured by a continuous, Markovian, stochastic evolution of the reduced density matrix according to a framework known as quantum state diffusion [16; 17; 18; 19; 20], a broad category of dynamics that includes weak measurement. More elaborate schemes are also possible, for example involving non-Markovian dynamics. Such modelling is consistent with strong projective measurements as a limiting behaviour and can be made compatible with the Born rule. Measurement is then a process driven by specific system-environment coupling terms in the Hamiltonian and takes place without discontinuities [21; 22; 23]. This is a quantum dynamics that resembles classical dynamics, but where the dynamical variables are the elements of a reduced density matrix. It combines both aspects of quantum evolution: determinism of the von Neumann equation together with effective stochasticity representing measurement or more general environmental effects [24]. It is not without its controversies [25; 26; 27; 28].
It is nevertheless highly advantageous to employ an evolution of the reduced density matrix that avoids discontinuities, because then the concept of stochastic entropy production can be implemented in quantum mechanics in a straightforward way [29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. If the dynamical variables evolve according to Markovian stochastic differential equations (SDEs), or Ito processes [39], then it is possible to derive a related Ito process for the stochastic entropy production [7]. We can then compute a stochastic entropy production associated with individual Brownian trajectories taken by the reduced density matrix of a system. This can include situations where the system is guided towards an eigenstate of an observable, hence enabling us to compute the stochastic entropy production characterising a process of measurement.
A positive expectation value of stochastic entropy production represents increasing subjective uncertainty in the quantum state of the world. Growth in uncertainty is natural since we model the evolution using stochastic methods starting from an incompletely specified initial state. The state of the system can become _less_ uncertain, a necessary consequence of the performance of measurement, but subjective uncertainty regarding the state of the rest of the world increases by a greater amount, thereby allowing the second law of thermodynamics to be satisfied. It should be noted that stochastic entropy production does not correspond to a change in von Neumann entropy, which instead describes the uncertainty of outcomes when a system is subjected to projective measurement in a specific basis.
In Section II we develop these ideas in the context of the measurement of a single observable in a two level quantum system starting in a mixed state [40]. Mean stochastic entropy production is found to be positive and without limit as the system is guided, asymptotically in time, into one or other of the two eigenstates. We go on in Section III to consider the simultaneous measurement of two non-commuting observables and show how the stochastic entropy production is finite, a consequence of the inability of the dynamics, in this situation, to guide the system asymptotically into a definite eigenstate of either observable. We interpret the results in Section IV and summarise our conclusions in Section V, suggesting that dynamics based on quantum state diffusion, with an interpretation of the reduced density matrix as a set of physical properties of a state, together with the use of stochastic entropy production to monitor the process of eigenstate selection, can provide some conceptual clarification of the quantum measurement problem [41].
## II Measurement of \(\sigma_{z}\)
### Dynamics
The two level system will be described by a reduced density matrix (hereafter, simply a density matrix \(\rho\)) defined in a basis of eigenstates \(|\pm 1\rangle\) of the \(\sigma_{z}\) operator. Pure states denoting occupation of each of the two levels correspond to \(\rho_{\pm}^{e}=|\pm 1\rangle\langle\pm 1|\). Starting in the mixed state \(\rho=\frac{1}{2}\left(a_{1}\rho_{+}^{e}+a_{-1}\rho_{-}^{e}\right)\), where \(a_{\pm 1}\) are amplitudes, we use a quantum state diffusion approach to model the
Figure 1: The box and the grey area represent the phase spaces of the density matrix of the world \(\rho_{\text{world}}\) and of the reduced density matrix \(\rho\) of a constituent open quantum system, respectively. Deterministic trajectories \(\rho_{\text{world}}(t)\) that start at \(t=0\) from a macrostate subspace (shown as a red line) characterised by a given initial value \(\rho(0)\) of the reduced density matrix, are manifested as pseudorandom trajectories \(\rho(t)\) when projected into the reduced phase space.
stochastic evolution of the system into one or other of the levels in accordance with the Born rule.
We consider a minimal scheme [13] employing a rule for stochastic transitions given by
\[\rho\to S^{\pm}(\rho)=\rho^{\prime\pm}=\frac{M_{\pm}\rho M_{\pm}^{\dagger}}{ \mathrm{Tr}\left(M_{\pm}\rho M_{\pm}^{\dagger}\right)}, \tag{1}\]
using two Kraus operators:
\[M_{\pm}=\frac{1}{\sqrt{2}}\left(\mathbb{I}-\frac{1}{2}c^{\dagger}cdt\pm c\sqrt{ dt}\right), \tag{2}\]
where \(c=\alpha_{z}\sigma_{z}\), with real scalar parameter \(\alpha_{z}\) designated as the strength of measurement. The probabilities for the selection of one of the two possible outcomes \(\rho^{\prime\pm}\) after an infinitesimal timestep \(dt\) are
\[p_{\pm}(\rho)=\mathrm{Tr}\left(M_{\pm}\rho M_{\pm}^{\dagger}\right)=\frac{1}{2} \left(1\pm C\sqrt{dt}\right), \tag{3}\]
where \(C=\mathrm{Tr}\left(\rho\left(c+c^{\dagger}\right)\right)\). The quantum map in Eq. (1) preserves the trace of \(\rho\). Furthermore, since the Kraus operators in Eq. (2) differ incrementally from (a multiple of) the identity, the positive definiteness of \(\rho\) is maintained [24]. The operator identity \(M_{+}^{\dagger}M_{+}+M_{-}^{\dagger}M_{-}=\mathbb{I}\) is also satisfied. This scheme defines a stochastic dynamics representing the effect of a coupled measuring device on the two level system, whereby the eigenstates of \(\sigma_{z}\) are stationary, i.e. \(p_{+}(\rho_{+}^{e})=p_{-}(\rho_{-}^{e})=1\), \(p_{-}(\rho_{+}^{e})=p_{+}(\rho_{-}^{e})=0\), and \(S^{+}(\rho_{+}^{e})=\rho_{+}^{e}\), \(S^{-}(\rho_{-}^{e})=\rho_{-}^{e}\).
The two possible increments \(d\rho^{\pm}=\rho^{\prime\pm}-\rho\) available in the timestep \(dt\) under the dynamics are
\[d\rho^{\pm}=\left(c\rho c^{\dagger}-\frac{1}{2}\rho c^{\dagger}c- \frac{1}{2}c^{\dagger}c\rho\right)dt-\left(\rho c^{\dagger}+c\rho-C\rho\right) Cdt\] \[\qquad\qquad\pm\left(\rho c^{\dagger}+c\rho-C\rho\right)\sqrt{dt}, \tag{4}\]
and by evaluating the mean and variance of this increment in \(\rho\) it may be shown that the evolution can also be represented by the Ito process
\[d\rho=\left(c\rho c^{\dagger}-\frac{1}{2}\rho c^{\dagger}c-\frac{1}{2}c^{ \dagger}c\rho\right)dt+\left(\rho c^{\dagger}+c\rho-C\rho\right)dW, \tag{5}\]
where \(dW\) is a Wiener increment with mean \(\left\langle dW\right\rangle=0\) and variance \(\left\langle dW^{2}\right\rangle=dt\), with the brackets representing an average over the stochasticity. Note that terms of higher order than linear in \(dt\) will be neglected throughout.
Such a process of averaging over the stochasticity then leads to the standard Lindblad equation [42]:
\[\frac{d\bar{\rho}}{dt}=c\bar{\rho}c^{\dagger}-\frac{1}{2}\bar{\rho}c^{\dagger} c-\frac{1}{2}c^{\dagger}c\bar{\rho}, \tag{6}\]
with \(\bar{\rho}=\left\langle\rho\right\rangle\), suggesting that such a (deterministic) equation describes the average dynamical behaviour of an ensemble of density matrices. The actual trajectory followed by a system as it responds to external interactions, however, is specified by the stochastic Lindblad equation (5) [43; 44]. The environment disturbs the system in a manner represented by one of the transformations or moves given in (1), selected at random with probabilities (3) that arise from the underspecification of the environmental state and hence of \(\rho_{\mathrm{world}}\).
Furthermore, if we represent the density matrix in the form \(\rho=\frac{1}{2}\left(\mathbb{I}+r_{z}\sigma_{z}\right)\), it may be shown that the dynamics of Eq. (5) correspond to the evolution of the real stochastic variable \(r_{z}(t)\) according to [13]
\[dr_{z}=2\alpha_{z}\left(1-r_{z}^{2}\right)dW. \tag{7}\]
Example realisations of such behaviour, starting from the fully mixed state at \(r_{z}(0)=0\), are shown in Fig. 2. Notice that \(r_{z}\) evolves asymptotically towards \(\pm 1\), corresponding to density matrices \(\rho_{\pm}^{e}\), and note also that the average increment \(\left\langle dr_{z}\right\rangle\) over the ensemble satisfies \(\left\langle dr_{z}\right\rangle=d\langle r_{z}\rangle=2\alpha_{z}\left(1- \langle r_{z}^{2}\rangle\right)\left\langle dW\right\rangle=0\), implying that \(\langle r_{z}\rangle\) is time independent and that \(\left\langle\rho\right\rangle\) is as well. A similar conclusion can be reached simply by evaluating the right hand side of Eq. (6).
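Behaviour of this kind is easy to reproduce with a direct Euler-Maruyama discretisation of Eq. (7); the sketch below is an illustrative simulation (step sizes and trajectory counts are arbitrary choices), and the fraction of trajectories ending near \(r_{z}=+1\) is consistent with the preserved value of \(\langle r_{z}\rangle\) and hence with the Born rule.

```python
import numpy as np

def simulate_rz(r0, alpha_z, dt, n_steps, n_traj, rng=np.random.default_rng()):
    """Euler-Maruyama integration of dr_z = 2 alpha_z (1 - r_z^2) dW, Eq. (7)."""
    r = np.full(n_traj, r0, dtype=float)
    for _ in range(n_steps):
        dW = rng.standard_normal(n_traj) * np.sqrt(dt)
        r += 2.0 * alpha_z * (1.0 - r**2) * dW
        np.clip(r, -1.0, 1.0, out=r)   # guard against discretisation overshoot
    return r

r_final = simulate_rz(r0=0.0, alpha_z=1.0, dt=1e-4, n_steps=20000, n_traj=2000)
print("fraction near +1:", np.mean(r_final > 0.99))   # ~ (1 + r_z(0)) / 2 = 0.5
print("mean r_z:", r_final.mean())                    # ~ r_z(0) = 0
```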
The standard Lindblad equation cannot capture system 'collapse' to an eigenstate, but instead describes the average behaviour of an ensemble of collapsing systems. For a closer consideration of the dynamics and thermodynamics of collapse, we need to 'unravel' the standard Lindblad equation into its stochastic version (5), using it to generate an ensemble of trajectories that model possible physical evolutions of the open quantum system.
Using Itô's lemma, it can be shown that the purity of the state, \(P=\mathrm{Tr}\rho^{2}=\frac{1}{2}\left(1+r_{z}^{2}\right)\), evolves according to
\[dP=8\alpha_{z}^{2}\left(1-P\right)^{2}dt+4\alpha_{z}r_{z}\left(1-P\right)dW. \tag{8}\]
The dynamics take the purity asymptotically towards a fixed point at \(P=1\), or the density matrix towards one of \(\rho_{\pm}^{e}\), which is clearly a necessary consequence of the process of measurement.
Figure 2: Four stochastic trajectories \(r_{z}(t)\) derived from Eq. (7) with strength of measurement \(\alpha_{z}=1\). Starting at \(r_{z}(0)=0\), they evolve towards eigenstates of the \(\sigma_{z}\) observable at \(r_{z}=\pm 1\).
The Fokker-Planck equation describing the evolution of the probability density function (pdf) \(p(r_{z},t)\) for the system variable \(r_{z}\) is
\[\frac{\partial p}{\partial t}=\frac{\partial^{2}}{\partial r_{z}^{2}}\left(2 \alpha_{z}^{2}\left(1-r_{z}^{2}\right)^{2}p\right), \tag{9}\]
and this provides further insight into the dynamics. Figure 3 illustrates the development starting from a gaussian pdf centred on the maximally mixed state at \(r_{z}=0\). The ensemble of density matrices is separated by the dynamics into equal size groups that evolve asymptotically towards the eigenstates of \(\sigma_{z}\) at \(r_{z}=\pm 1\). The preservation of the ensemble average of \(r_{z}\) is apparent.
### Stochastic entropy production
The (total) stochastic entropy production associated with the evolution of a stochastic variable in a certain time interval is defined in terms of probabilities for the generation of a 'forward' set of moves in the phase space and the corresponding 'backward' set [4]. For the coordinate \(r_{z}\), and the time interval \(dt\), we need to consider the quantity
\[d\Delta s_{\text{tot}}(r_{z},t\to r_{z}+dr_{z},t+dt) \tag{10}\] \[=\ln\left(\text{Prob(forward)/Prob(backward)}\right)\] \[=\ln\frac{p(r_{z},t)\Delta r_{z}(r_{z})T(r_{z}\to r_{z}+dr_{z})}{p( r_{z}+dr_{z},t+dt)\Delta r_{z}(r_{z}+dr_{z})T(r_{z}+dr_{z}\to r_{z})},\]
where the \(T\) are conditional probabilities for the transitions indicated. For stochastic variables that are odd under time reversal symmetry, additional features have to be included in this definition, but since \(r_{z}\) is even we can ignore such complications [7; 45].
It may be shown that the expectation or mean of \(d\Delta s_{\text{tot}}\) is never negative, which ultimately provides an underpinning for the second law of thermodynamics [4].
We shall discuss the contributions to \(d\Delta s_{\text{tot}}\) involving the pdf \(p(r_{z},t)\) and the volume increment \(\Delta r_{z}(r_{z})\) shortly, but first let us consider the ratio of conditional probabilities. The two choices of forward move \(\rho\to\rho^{\prime\pm}\) in Eqs. (1) and (2) are selected with probabilities
\[p_{\pm}=\frac{1}{2}\left(1\pm 2\alpha_{z}r_{z}\sqrt{dt}\right). \tag{11}\]
The corresponding backward moves \(\rho^{\prime\pm}\to\rho\) are described by the quantum maps
\[\rho=\frac{\tilde{M}_{\mp}\rho^{\prime\pm}\tilde{M}_{\mp}^{\dagger}}{\text{ Tr}\left(\tilde{M}_{\mp}\rho^{\prime\pm}\tilde{M}_{\mp}^{\dagger}\right)}, \tag{12}\]
in terms of reverse Kraus operators \(\tilde{M}_{\mp}\) that can be identified from the condition that the initial density matrix is recovered. Inserting Eq. (1) into Eq. (12) we have
\[\rho=\frac{\tilde{M}_{\mp}M_{\pm}\rho M_{\pm}^{\dagger}\tilde{M}_{\mp}^{ \dagger}}{\text{Tr}\left(\tilde{M}_{\mp}M_{\pm}\rho M_{\pm}^{\dagger}\tilde{M }_{\mp}^{\dagger}\right)}, \tag{13}\]
which requires \(\tilde{M}_{\mp}M_{\pm}\) to be proportional to the identity, up to linear order in \(dt\). For \(c=c^{\dagger}\) this can be achieved using
\[\tilde{M}_{\mp}=\frac{1}{\sqrt{2}}\left(\mathbb{I}-\frac{1}{2}c^{2}dt\mp c \sqrt{dt}\right)=M_{\mp}, \tag{14}\]
and specifically for \(c=\alpha_{z}\sigma_{z}\) we have
\[\tilde{M}_{\mp}M_{\pm}=\frac{1}{2}\left(1-2\alpha_{z}^{2}dt\right)\mathbb{I}. \tag{15}\]
Hence the probabilities for the backward moves are
\[p_{\mp}^{\prime}=\text{Tr}\left(\tilde{M}_{\mp}\rho^{\prime\pm}\tilde{M}_{\mp }^{\dagger}\right)=\frac{\text{Tr}\left(M_{\mp}M_{\pm}\rho M_{\pm}^{\dagger} M_{\mp}^{\dagger}\right)}{\text{Tr}\left(M_{\pm}\rho M_{\pm}^{\dagger}\right)}, \tag{16}\]
leading to
\[p_{\mp}^{\prime}=\frac{\left(1-4\alpha_{z}^{2}dt\right)}{2\left(1\pm 2\alpha_{z }r_{z}\sqrt{dt}\right)}. \tag{17}\]
The ratio of conditional probabilities \(T(r_{z}\to r_{z}+dr_{z}^{\pm})/T(r_{z}+dr_{z}^{\pm}\to r_{z})\) is then
\[\frac{p_{\pm}}{p_{\mp}^{\prime}}=1\pm 4\alpha_{z}r_{z}\sqrt{dt}+4\alpha_{z}^{2} \left(1+r_{z}^{2}\right)dt. \tag{18}\]
The two possible increments in \(r_{z}\) are
\[dr_{z}^{\pm} =\text{Tr}\left(\rho^{\prime\pm}\sigma_{z}\right)-r_{z}\] \[=-4\alpha_{z}^{2}r_{z}\left(1-r_{z}^{2}\right)dt\pm 2\alpha_{z} \left(1-r_{z}^{2}\right)\sqrt{dt}, \tag{19}\]
Figure 3: A probability density function \(p(r_{z},t)\), evolving according to the Fokker-Planck equation (9), describing the evolution of an ensemble of density matrices under measurement of \(\sigma_{z}\). A gaussian centred initially at the origin accumulates asymptotically at \(r_{z}=\pm 1\). This complements the direct computation of trajectories \(r_{z}(t)\) illustrated in Fig. 2.
and we note that the mean and variance over the two possibilities are
\[\langle dr_{z}\rangle=p_{+}dr_{z}^{+}+p_{-}dr_{z}^{-}=0, \tag{20}\]
and
\[\sigma_{r_{z}}^{2} =p_{+}\left(dr_{z}^{+}-\langle dr_{z}\rangle\right)^{2}+p_{-} \left(dr_{z}^{-}-\langle dr_{z}\rangle\right)^{2}\] \[=4\alpha_{z}^{2}\left(1-r_{z}^{2}\right)^{2}dt. \tag{21}\]
confirming that the evolution is consistent with the SDE for \(r_{z}\) in Eq. (7). The moves and their probabilities are illustrated in Fig. 4.
We now write
\[d\Delta s_{\rm tot}^{\pm}=d\Delta s_{A}^{\pm}+d\Delta s_{B}^{\pm}, \tag{22}\]
where
\[d\Delta s_{A}^{\pm}=\ln\left(\frac{T(r_{z}\to r_{z}+dr_{z}^{\pm})}{T(r_{z}+dr_{z }^{\pm}\to r_{z})}\right)=\ln\left(\frac{p_{\pm}}{p_{\mp}^{\prime}}\right), \tag{23}\]
and
\[d\Delta s_{B}^{\pm}=\ln\left(\frac{p(r_{z},t)\Delta r_{z}(r_{z})}{p(r_{z}+dr_{ z}^{\pm},t+dt)\Delta r_{z}(r_{z}+dr_{z}^{\pm})}\right). \tag{24}\]
Inserting Eq. (18) we have
\[d\Delta s_{A}^{\pm}=\pm 4\alpha_{z}r_{z}\sqrt{dt}+4\alpha_{z}^{2}\left(1-r_{z}^ {2}\right)dt, \tag{25}\]
which provides two choices of incremental contribution to the stochastic entropy production in the forward move. We can compute the mean of \(d\Delta s_{A}^{\pm}\):
\[\langle d\Delta s_{A}\rangle =p_{+}d\Delta s_{A}^{+}+p_{-}d\Delta s_{A}^{-}\] \[=\left(p_{+}-p_{-}\right)4\alpha_{z}r_{z}\sqrt{dt}+\left(p_{+}+p _{-}\right)4\alpha_{z}^{2}\left(1-r_{z}^{2}\right)dt\] \[=4\alpha_{z}^{2}\left(1+r_{z}^{2}\right)dt, \tag{26}\]
and the variance:
\[\sigma_{A}^{2} =p_{+}\left(d\Delta s_{A}^{+}-\langle d\Delta s_{A}\rangle \right)^{2}+p_{-}\left(d\Delta s_{A}^{-}-\langle d\Delta s_{A}\rangle\right)^ {2}\] \[=16\alpha_{z}^{2}r_{z}^{2}dt, \tag{27}\]
from which we conclude that the evolution can be represented by an Ito process for a stochastic variable \(\Delta s_{A}\):
\[d\Delta s_{A}=4\alpha_{z}^{2}\left(1+r_{z}^{2}\right)dt+4\alpha_{z}r_{z}dW. \tag{28}\]
We now consider the contribution \(d\Delta s_{B}^{\pm}\) to the stochastic entropy production given in Eq. (24). The volume \(\Delta r_{z}(r_{z})\) is the region bounded by increments \(\frac{1}{2}dr_{z}^{\pm}\) starting from \(r_{z}\). It is the patch of phase space associated with coordinate \(r_{z}\), as illustrated in Fig. 4. We write \(\Delta r_{z}=\frac{1}{2}\left(dr_{z}^{+}-dr_{z}^{-}\right)=2\alpha_{z}\left(1- r_{z}^{2}\right)\sqrt{dt}\) and then
\[d\Delta s_{B}^{\pm}=-d\ln p^{\pm}+d\Delta s_{C}^{\pm}, \tag{29}\]
where \(d\ln p^{\pm}=\ln p(r_{z}+dr_{z}^{\pm},t+dt)-\ln p(r_{z},t)\) and
\[d\Delta s_{C}^{\pm} =\ln\left(\frac{\Delta r_{z}(r_{z})}{\Delta r_{z}(r_{z}+dr_{z}^{ \pm})}\right)\] \[=4\alpha_{z}^{2}\left(1-r_{z}^{2}\right)dt\pm 4\alpha_{z}r_{z}\sqrt{dt}. \tag{30}\]
The mean of \(d\Delta s_{C}^{\pm}\) is
\[\langle d\Delta s_{C}\rangle =p_{+}d\Delta s_{C}^{+}+p_{-}d\Delta s_{C}^{-}\] \[=4\alpha_{z}^{2}\left(1+r_{z}^{2}\right)dt, \tag{31}\]
and the variance is
\[\sigma_{C}^{2} =p_{+}\left(d\Delta s_{C}^{+}-\langle d\Delta s_{C}\rangle \right)^{2}+p_{-}\left(d\Delta s_{C}^{-}-\langle d\Delta s_{C}\rangle\right)^ {2}\] \[=16\alpha_{z}^{2}r_{z}^{2}dt, \tag{32}\]
so the Ito process for this component of stochastic entropy production is
\[d\Delta s_{C}=4\alpha_{z}^{2}\left(1+r_{z}^{2}\right)dt+4\alpha_{z}r_{z}dW. \tag{33}\]
Similarly, it may be shown that the term \(-d\ln p^{\pm}\) in Eq. (29) makes a contribution of \(-d\ln p\) to the Ito process for \(d\Delta s_{\rm tot}\). Combining this with Eqs. (22), (28), (29) and (33), the stochastic entropy production can be shown to evolve according to the Ito process
\[d\Delta s_{\rm tot}=-d\ln p(r_{z},t)+8\alpha_{z}^{2}\left(1+r_{z}^{2}\right)dt+ 8\alpha_{z}r_{z}dW. \tag{34}\]
Note that the term \(-d\ln p(r_{z},t)\) is usually referred to as the stochastic entropy production of the system, \(d\Delta s_{\rm sys}\). The remaining terms are then regarded as stochastic entropy production in the environment (in this case the measuring device), and denoted \(d\Delta s_{\rm env}\) or \(d\Delta s_{\rm meas}.\) Note that the evolution of the stochastic entropy production in Eq. (34), with a system contribution that depends on the pdf \(p(r_{z},t)\) over the phase space of the density matrix, is continuous. This is in contrast to implementations of stochastic entropy production in quantum mechanics that involve the probability distribution over eigenstates of the measured operator in the formalism, or that invoke projective measurements causing discontinuities that are potentially infinite in magnitude [33].
Figure 4: Available moves on a discrete set of locations on the \(r_{z}\) axis according to the stochastic dynamics of measurement of \(\sigma_{z}\), illustrating Eqs. (11), (17) and (19). The size of the circles represents the local probability density \(p(r_{z},t)\). The shaded rectangle represents the volume \(\Delta r_{z}=\frac{1}{2}\left(dr_{z}^{+}-dr_{z}^{-}\right)\) of the continuum phase space associated with a given location \(r_{z}\).
### Derivation of \(d\Delta s_{\rm tot}\) from the dynamics
The derivation of \(d\Delta s_{\rm tot}\) in the previous section is intricate, but there is an alternative approach that is much more straightforward [6; 7] and does not require the identification of reverse Kraus operators [46]. Let us consider an Ito process for a stochastic variable \(x\) in the form
\[dx=\left(A^{\rm rev}(x,t)+A^{\rm irr}(x,t)\right)dt+B(x,t)dW, \tag{35}\]
where the terms proportional to \(A^{\rm rev}\) and \(A^{\rm irr}\) represent modes of deterministic dynamics that satisfy and violate time reversal symmetry, respectively. Then the stochastic entropy production is given by
\[d\Delta s_{\rm tot} = -d\ln p(x,t)+\frac{A^{\rm irr}}{D}dx-\frac{A^{\rm rev}A^{\rm irr}} {D}dt+\frac{\partial A^{\rm irr}}{\partial x}dt \tag{36}\] \[-\frac{\partial A^{\rm rev}}{\partial x}dt-\frac{1}{D}\frac{ \partial D}{\partial x}dx+\frac{(A^{\rm rev}-A^{\rm irr})}{D}\frac{\partial D }{\partial x}dt\] \[-\frac{\partial^{2}D}{\partial x^{2}}dt+\frac{1}{D}\left(\frac{ \partial D}{\partial x}\right)^{2}dt,\]
where \(D(x,t)=\frac{1}{2}B(x,t)^{2}\). This may not seem very intuitive, but for dynamics that possess a stationary state with zero probability current, characterised by a pdf \(p_{\rm st}(x)\), Eq. (36) reduces to the simpler expression \(d\Delta s_{\rm tot}=-d\ln\left(p(x,t)/p_{\rm st}(x)\right)\).
For the dynamics of \(r_{z}\) given by Eq. (7) we have \(A^{\rm rev}=A^{\rm irr}=0\) and \(B=2\alpha_{z}\left(1-r_{z}^{2}\right)\). Hence \(D=2\alpha_{z}^{2}(1-r_{z}^{2})^{2}\), leading to \(dD/dr_{z}=-8\alpha_{z}^{2}r_{z}(1-r_{z}^{2})\), \(d^{2}D/dr_{z}^{2}=-8\alpha_{z}^{2}(1-3r_{z}^{2})\), and
\[d\Delta s_{\rm tot} = -d\ln p-\frac{1}{D}\frac{dD}{dr_{z}}dr_{z}-\frac{d^{2}D}{dr_{z}^{ 2}}dt+\frac{1}{D}\left(\frac{dD}{dr_{z}}\right)^{2}dt \tag{37}\] \[= -d\ln p+8\alpha_{z}^{2}\left(1+r_{z}^{2}\right)dt+8\alpha_{z}r_{ z}dW.\]
This is the same as Eq. (34), but the derivation is much more direct. Extension to sets of coupled Ito processes for several stochastic variables \(\{x_{i}\}\) is straightforward, and we shall encounter an example of such a generalisation in Section III.
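As a sketch of this recipe applied to Eq. (7), the code below accumulates the environmental terms of Eq. (37) along a single simulated trajectory; the system contribution \(-d\ln p\) is omitted here because it requires a numerical solution for \(p(r_{z},t)\), and the step size and duration are illustrative choices.

```python
import numpy as np

def entropy_env_increment(r, dr, dt, alpha_z):
    """Environmental terms of Eq. (37): -(D'/D) dr - D'' dt + (D'^2/D) dt,
    with D(r) = 2 alpha_z^2 (1 - r^2)^2."""
    D = 2.0 * alpha_z**2 * (1.0 - r**2) ** 2
    dD = -8.0 * alpha_z**2 * r * (1.0 - r**2)
    d2D = -8.0 * alpha_z**2 * (1.0 - 3.0 * r**2)
    return -(dD / D) * dr - d2D * dt + (dD**2 / D) * dt

rng = np.random.default_rng(0)
alpha_z, dt, n_steps = 1.0, 1e-5, 100000
r, s_env = 0.0, 0.0
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    dr = 2.0 * alpha_z * (1.0 - r**2) * dW            # Eq. (7)
    s_env += entropy_env_increment(r, dr, dt, alpha_z)
    r = np.clip(r + dr, -0.999999, 0.999999)          # keep away from the singular endpoints
print(s_env)   # mean growth rate is 8 alpha_z^2 (1 + r_z^2), cf. Eq. (34)
```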
### Results
Let us now consider the character of the stochastic entropy production described by Eq. (37). It is straightforward to evaluate \(\Delta s_{\rm tot}(t)\) numerically, employing solutions to the Fokker-Planck equation (9) and the Ito process for \(r_{z}(t)\). Example evolutions of \(\Delta s_{\rm tot}(t)\) associated with trajectories \(r_{z}(t)\) are shown in Fig. 5, for \(\alpha_{z}=1\). The mean stochastic entropy production over a sample of trajectories appears to rise linearly in time. The increase reflects the fact that the pdf \(p(r_{z},t)\) does not reach a stationary state, but instead progressively sharpens towards two \(\delta\)-function peaks at \(r_{z}=\pm 1\). The system approaches one of the eigenstates but never quite reaches it. A system that continues to evolve in response to time reversal asymmetric dynamics (which includes the noise term as well as the deterministic contribution proportional to \(A^{\rm irr}\) in Eq. (35)) is characterised by never-ending stochastic entropy production.
The calculations of \(\Delta s_{\rm tot}\) in Fig. 5 were actually obtained after performing a transformation of the stochastic variable to avoid difficulties arising from the singularities in \(p(r_{z},t)\) as \(t\to\infty\). It is possible to do this since the stochastic entropy production is invariant under a coordinate transformation. Consider, then, the variable \(y=\tanh^{-1}r_{z}\), which evolves in time according to
\[dy=4\alpha_{z}^{2}\tanh y\,dt+2\alpha_{z}dW, \tag{38}\]
using Itô's lemma. The phase space \(-1\leq r_{z}\leq 1\) maps to \(-\infty\leq y\leq\infty\). We identify \(A^{\rm rev}(y)=0\), \(A^{\rm irr}(y)=4\alpha_{z}^{2}\tanh y\) and \(D(y)=2\alpha_{z}^{2}\) and write
\[d\Delta s_{\rm tot} = -d\ln p(y,t)+\frac{A^{\rm irr}}{D}dy+\frac{dA^{\rm irr}}{dy}dt\] \[= -d\ln p(y,t)+4\alpha_{z}^{2}\left(1+\tanh^{2}y\right)dt+4\alpha_{z}\tanh y\,dW, \tag{39}\]
where the pdf for \(y\) satisfies the Fokker-Planck equation
\[\frac{\partial p}{\partial t}=-4\alpha_{z}^{2}\frac{\partial}{\partial y}\left( \tanh y\,p\right)+2\alpha_{z}^{2}\frac{\partial^{2}p}{\partial y^{2}}. \tag{40}\]
Solving Eqs. (38), (39) and (40) numerically produces the trajectories in Fig. 5.
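As an indication of how such a calculation can be organised, the sketch below (ours; the grid, time step and precise initial condition are assumptions made for illustration) integrates the Fokker-Planck equation (40) on a grid with an explicit scheme, advances a single trajectory of Eq. (38) by the Euler-Maruyama method, and accumulates \(\Delta s_{\rm tot}\) through Eq. (39).

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0
D = 2 * alpha**2                        # diffusion coefficient in Eq. (40)
y_grid = np.linspace(-20.0, 20.0, 801)  # truncated phase space for y
dy = y_grid[1] - y_grid[0]
dt = 2e-4                               # obeys the explicit stability limit dt < dy^2 / (2 D)
T = 2.0

# assumed initial condition: a narrow gaussian in y, standing in for the
# gaussian in r_z mentioned in the caption of Fig. 5
sigma0 = 0.25
p = np.exp(-y_grid**2 / (2 * sigma0**2))
p /= p.sum() * dy

y = rng.normal(0.0, sigma0)             # a single stochastic trajectory, Eq. (38)
s_tot = 0.0                             # accumulated entropy production, Eq. (39)
ln_p_old = np.interp(y, y_grid, np.log(p))
drift = 4 * alpha**2 * np.tanh(y_grid)

for _ in range(int(T / dt)):
    # explicit finite-difference update of the Fokker-Planck equation (40)
    p = p + dt * (-np.gradient(drift * p, dy) + D * np.gradient(np.gradient(p, dy), dy))
    p = np.clip(p, 1e-300, None)
    p /= p.sum() * dy

    # Euler-Maruyama step for the trajectory, Eq. (38), with Ito-consistent coefficients
    dW = np.sqrt(dt) * rng.standard_normal()
    y_old = y
    y = y + 4 * alpha**2 * np.tanh(y_old) * dt + 2 * alpha * dW

    # increment of the stochastic entropy production, Eq. (39)
    ln_p_new = np.interp(y, y_grid, np.log(p))
    s_tot += (-(ln_p_new - ln_p_old)
              + 4 * alpha**2 * (1 + np.tanh(y_old)**2) * dt
              + 4 * alpha * np.tanh(y_old) * dW)
    ln_p_old = ln_p_new

print(f"Delta s_tot at t = {T}: {s_tot:.2f}")
```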
We can perform an analysis of the evolution at late times, where \(r_{z}\) is close to \(1\) or \(-1\) such that \(|y|\) is large. The dynamics are then approximated by
\[dy=\pm 4\alpha_{z}^{2}dt+2\alpha_{z}dW, \tag{41}\]
Figure 5: Four trajectories illustrating the stochastic entropy production \(\Delta s_{\rm tot}(t)\) for the dynamics of Eq. (7) in the interval \(1\leq t\leq 2\), starting from a gaussian pdf centred on \(r_{z}=0\) at \(t=0\), and with \(\alpha_{z}=1\). The mean over a sample of \(40\) trajectories is consistent with an asymptotic average rate of production equal to \(8\alpha_{z}^{2}\), as suggested in Eq. (47).
employing the plus sign if \(y>0\) and the minus sign if \(y<0\). The Fokker-Planck equation is
\[\frac{\partial p}{\partial t}=-4\alpha_{z}^{2}\text{sgn}(y)\frac{\partial p}{ \partial y}+2\alpha_{z}^{2}\frac{\partial^{2}p}{\partial y^{2}}, \tag{42}\]
which has an approximate asymptotic solution:
\[p(y,t)\!\propto\!\frac{1}{t^{1/2}}\!\left[\exp\left[-\frac{(y-4\alpha_{z}^{2}t )^{2}}{8\alpha_{z}^{2}t}\right]+\exp\left[-\frac{(y+4\alpha_{z}^{2}t)^{2}}{8 \alpha_{z}^{2}t}\right]\right], \tag{43}\]
consisting of two gaussians in the \(y\) phase space, moving with equal and opposite drift velocities towards \(\pm\infty\) and simultaneously broadening.
From Eq. (39) we obtain stochastic entropy production for a trajectory with \(y\gg 0\) of
\[d\Delta s_{\text{tot}}\approx-d\ln p_{+}(y,t)+8\alpha_{z}^{2}dt+4\alpha_{z}\, dW, \tag{44}\]
with
\[p_{+}\propto\frac{1}{t^{1/2}}\exp\left(-\frac{(y-4\alpha_{z}^{2}t)^{2}}{8 \alpha_{z}^{2}t}\right), \tag{45}\]
and hence
\[d\Delta s_{\text{tot}}\approx d\left(\frac{(y-4\alpha_{z}^{2}t)^{2}}{8\alpha_ {z}^{2}t}\right)+\frac{1}{2}d\ln t+8\alpha_{z}^{2}dt+4\alpha_{z}\,dW, \tag{46}\]
the average of which is
\[d\langle\Delta s_{\text{tot}}\rangle \approx\frac{1}{t}dt-\frac{\langle(y-4\alpha_{z}^{2}t)^{2}\rangle}{8\alpha_{z}^{2}t^{2}}dt+8\alpha_{z}^{2}dt\] \[=\frac{1}{t}dt-\frac{4\alpha_{z}^{2}t}{8\alpha_{z}^{2}t^{2}}dt+8\alpha_{z}^{2}dt, \tag{47}\]
which reduces to \(8\alpha_{z}^{2}dt\) as \(t\to\infty\). A similar conclusion can be reached if \(y\ll 0\), so we expect mean stochastic entropy production at a constant rate \(8\alpha_{z}^{2}\) as \(t\to\infty\), confirming the behaviour seen in Fig. 5.
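The asymptotic rate is also easy to check with a short Monte Carlo estimate based on the approximate late-time dynamics; the sketch below (an illustration of ours, with an arbitrary time window and ensemble size) propagates Eq. (41) on the \(y>0\) branch and accumulates the increments of Eq. (46).

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, dt = 1.0, 1e-3
t = np.arange(1.0, 5.0 + dt, dt)        # a late-time window, away from t = 0
n_traj = 500

# initial ensemble drawn from the y > 0 gaussian branch p_+ of Eq. (45) at t[0]
y = 4 * alpha**2 * t[0] + 2 * alpha * np.sqrt(t[0]) * rng.standard_normal(n_traj)
s = np.zeros(n_traj)

for k in range(len(t) - 1):
    dW = np.sqrt(dt) * rng.standard_normal(n_traj)
    q_old = (y - 4 * alpha**2 * t[k])**2 / (8 * alpha**2 * t[k])
    y = y + 4 * alpha**2 * dt + 2 * alpha * dW              # Eq. (41), y > 0 branch
    q_new = (y - 4 * alpha**2 * t[k + 1])**2 / (8 * alpha**2 * t[k + 1])
    # increment of Eq. (46)
    s += (q_new - q_old) + 0.5 * np.log(t[k + 1] / t[k]) + 8 * alpha**2 * dt + 4 * alpha * dW

print(s.mean() / (t[-1] - t[0]))   # close to 8 alpha^2, consistent with Eq. (47)
```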
### Contrast with von Neumann entropy
At this point we should consider whether stochastic entropy production is related to a change in the von Neumann entropy \(S_{\text{vN}}=-\text{Tr}\rho\ln\rho\), a more commonly employed expression for entropy in quantum mechanics.
The mean stochastic entropy production is the change in subjective uncertainty with regard to the quantum state adopted by the world. We are unable to make exact predictions when the dynamical influence of the environment on the system is not specified in detail. The dynamics then become effectively stochastic and our knowledge of the adopted state is reduced with time.
In contrast, the von Neumann entropy is the uncertainty presented by a quantum state with regard to the outcomes of projective measurement in a basis in which the density matrix is diagonal. It is a Shannon entropy \(-\sum_{i}P_{i}\ln P_{i}\) where \(P_{i}\) is the probability of projection into eigenstate \(i\) of the observable. For a two level system the number of such outcomes is two and so the von Neumann entropy has an upper limit of \(\ln 2\). In contrast, the upper limit of the mean stochastic entropy production, representing the change in uncertainty in the adopted quantum state of the world, is infinite, since there is a continuum of possible states that could be taken. The continued mean production of stochastic entropy associated with measurement, discussed in previous sections, represents this progressively greater uncertainty.
Also note that the stochastic entropy production we have been considering has no connection with heat transfer or work. The two level system under consideration does not possess a Hamiltonian \(H\) and the selection of one or other level through measurement does not involve a change in system energy: specifically \(\text{Tr}H\rho=0\) throughout. Stochastic entropy production is not necessarily associated with the dissipation of potential energy into heat. Indeed it need not be in classical mechanics, for example in the free expansion of an ideal gas. In both classical and quantum settings the purpose of entropy is to specify the degree of configurational uncertainty. In classical mechanics the configurations are described by sets of classical coordinates: in quantum mechanics they are specified by collections of (reduced) density matrix elements.
Von Neumann entropy does play a role in computing the thermodynamic entropy of a quantum system in a situation where it is subjected to projective measurement and thereafter regarded as occupying one of the eigenstates. However, it is not straightforward to involve von Neumann entropy in discussions of the second law and the arrow of time. This is evident if we consider that the von Neumann entropy \(-\text{Tr}\bar{\rho}\ln\bar{\rho}\) of the ensemble averaged density matrix \(\bar{\rho}\) remains constant under the measurement dynamics employed here (because \(\bar{\rho}\) remains constant). And the von Neumann entropy of a typical member of the considered ensemble of density matrices falls to zero under the dynamics. This is illustrated in Fig. 6 for the two level system where \(\rho\) evolves towards one of the \(\rho_{\pm}^{c}\): the latter are pure states with \(S_{\text{vN}}=0\). The mean von Neumann entropy change \(-\Delta\text{Tr}\langle\rho\ln\rho\rangle\) associated with the measurement process is negative. Neither of these outcomes makes it easy to argue that the von Neumann entropy has a role to play in the second law: the stochastic entropy production is a better candidate.
## III Simultaneous measurement of \(\sigma_{z}\) and \(\sigma_{x}\)
### Evolution towards purity
Now we turn our attention to a slightly more complicated case of stochastic entropy production associated with the dynamics of an open quantum system. We continue to use the framework of quantum state diffusion, involving transformations according to Eq. (1), but we now represent the stochastic influence of the environment on the system using _two_ pairs of Kraus operators, given by
\[M_{1\pm} =\frac{1}{2}\left(\mathbb{I}-\frac{1}{2}c_{1}^{\dagger}c_{1}dt\pm c _{1}\sqrt{dt}\right)\] \[M_{2\pm} =\frac{1}{2}\left(\mathbb{I}-\frac{1}{2}c_{2}^{\dagger}c_{2}dt\pm c _{2}\sqrt{dt}\right), \tag{48}\]
with \(c_{1}=\alpha_{z}\sigma_{z}\) and \(c_{2}=\alpha_{x}\sigma_{x}\). The first and second pair describe the dynamics of continuous measurement of observables \(\sigma_{z}\) and \(\sigma_{x}\), respectively, and together therefore represent an attempt to perform simultaneous measurement. Since \(\sigma_{z}\) and \(\sigma_{x}\) do not commute, we expect this not to succeed, and quantum state diffusion provides an interesting illustration of what this means.
Probabilities of stochastic changes in the reduced density matrix of the system, brought about by interactions with the environment, may be deduced for these operators, and a stochastic Lindblad equation for its evolution may be derived:
\[d\rho =\sum_{i=1,2}\left(c_{i}\rho c_{i}^{\dagger}-\frac{1}{2}\rho c_{i }^{\dagger}c_{i}-\frac{1}{2}c_{i}^{\dagger}c_{i}\rho\right)dt\] \[\qquad+\left(\rho c_{i}^{\dagger}+c_{i}\rho-C_{i}\rho\right)dW_{ i}, \tag{49}\]
with \(C_{i}=\mathrm{Tr}\left(\left(c_{i}+c_{i}^{\dagger}\right)\rho\right)\). Upon inserting the representation \(\rho=\frac{1}{2}\left(\mathbb{I}+r_{z}\sigma_{z}+r_{x}\sigma_{x}\right)\), the dynamics can be expressed as
\[dr_{z} =2\alpha_{z}\left(1-r_{z}^{2}\right)dW_{z}-2\alpha_{x}^{2}r_{z}dt -2\alpha_{x}r_{z}r_{x}dW_{x}\] \[dr_{x} =2\alpha_{x}\left(1-r_{x}^{2}\right)dW_{x}-2\alpha_{z}^{2}r_{x}dt -2\alpha_{z}r_{x}r_{z}dW_{z}, \tag{50}\]
where \(dW_{x}\) and \(dW_{z}\) are independent Wiener increments. Example stochastic trajectories starting from the maximally mixed state at \(r_{x}=r_{z}=0\) are shown in Fig. 7. The purity \(P=\mathrm{Tr}\rho^{2}=\frac{1}{2}\left(1+r^{2}\right)\), where \(r^{2}=r_{x}^{2}+r_{z}^{2}\), evolves according to
\[dP =4\left(\alpha_{x}^{2}\left(1-r_{x}^{2}\right)+\alpha_{z}^{2} \left(1-r_{z}^{2}\right)\right)\left(1-P\right)dt\] \[+4\alpha_{x}r_{x}\left(1-P\right)dW_{x}+4\alpha_{z}r_{z}\left(1- P\right)dW_{z}, \tag{51}\]
such that \(P=1\) is a fixed point that is reached asymptotically in time. Examples of such system purification are shown in Fig. 8.
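A direct way to see this behaviour is to integrate Eq. (50) numerically; the following sketch (ours, with an assumed step size and a guard against discretisation overshoots) tracks the purity of a single trajectory starting from the maximally mixed state.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_x = alpha_z = 1.0
dt, T = 1e-4, 3.0
steps = int(T / dt)

rx, rz = 0.0, 0.0                                    # maximally mixed initial state
purity = np.empty(steps)
for k in range(steps):
    dWx = np.sqrt(dt) * rng.standard_normal()
    dWz = np.sqrt(dt) * rng.standard_normal()
    # coupled SDEs of Eq. (50)
    drz = 2*alpha_z*(1 - rz**2)*dWz - 2*alpha_x**2*rz*dt - 2*alpha_x*rz*rx*dWx
    drx = 2*alpha_x*(1 - rx**2)*dWx - 2*alpha_z**2*rx*dt - 2*alpha_z*rx*rz*dWz
    rz, rx = rz + drz, rx + drx
    # guard against small Euler-Maruyama overshoots outside the unit disc
    r = np.hypot(rx, rz)
    if r > 1.0:
        rx, rz = rx / r, rz / r
    purity[k] = 0.5 * (1.0 + rx**2 + rz**2)          # P = Tr rho^2, cf. Eq. (51)

print("final purity:", purity[-1], "final angle:", np.arctan2(rx, rz))
```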
The dynamics can be recast in terms of \(Y=\tanh^{-1}r^{2}\), which tends to \(\infty\) as \(r\to 1\), and an angle
Figure 8: Evolution of purity for the system trajectories in Fig. 7.
Figure 6: Evolution of the von Neumann entropy of the reduced density matrix of the two level system, for 10 stochastic trajectories governed by the dynamics of Eq. (7) with \(\alpha_{z}=1\). Mean behaviour is also shown. Asymptotic values of zero imply that the system is purified under measurement.
Figure 7: Two trajectories of the density matrix coordinates \((r_{x}(t),r_{z}(t))\) generated by the dynamics of simultaneous measurement of \(\sigma_{x}\) and \(\sigma_{z}\), Eq. (50), starting from the maximally mixed state at the origin and for equal strengths of measurement \(\alpha_{x}\) and \(\alpha_{z}\). The black circle represents a condition of purity, towards which the system evolves. Eigenstates of \(\sigma_{x}\) and \(\sigma_{z}\) lie at \(\theta=\pm\pi/2\) and \(\theta=0,\pi\) on the circle, respectively.
\(\theta=\tan^{-1}(r_{x}/r_{z})\). For \(\alpha_{x}=\alpha_{z}=\alpha\) the SDEs are
\[dY =\frac{4\alpha^{2}}{(1+\tanh Y)^{2}}\left(2+\tanh Y+3\tanh^{2}Y \right)dt\] \[\qquad+\frac{4\alpha\sqrt{\tanh Y}}{1+\tanh Y}dW_{Y}\] \[d\theta =2\alpha\,dW_{\theta}/\sqrt{\tanh Y}, \tag{52}\]
where \(dW_{Y}=r^{-1}\left(r_{z}dW_{z}+r_{x}dW_{x}\right)\) and \(dW_{\theta}=r^{-1}\left(-r_{x}dW_{z}+r_{z}dW_{x}\right)\) are independent Wiener increments. As \(t\rightarrow\infty\), Eq. (51) implies that \(r^{2}\to 1\) and hence \(\tanh Y\to 1\), in which case we can write
\[dY\approx 6\alpha^{2}dt+2\alpha\,dW_{Y}, \tag{53}\]
and for late times we have \(Y\approx 6\alpha^{2}t+2\alpha W_{Y}+\text{const}\). The SDE for \(\theta\) in this limit is \(d\theta=2\alpha dW_{\theta}\), such that the pdf becomes uniform over \(\theta\) at late times. We write \(p(Y,\theta,t)\rightarrow(2\pi)^{-1}F(Y,t)\), in terms of a travelling and broadening gaussian in \(Y\):
\[F(Y,t)=\frac{1}{\left(8\pi\alpha^{2}t\right)^{1/2}}\exp\left[-\frac{(Y-6 \alpha^{2}t)^{2}}{8\alpha^{2}t}\right]. \tag{54}\]
The stochastic entropy production can now be computed using the framework of \(Y\) and \(\theta\) coordinates. We shall do so first for late times, where \(\tanh Y\to 1\) and the dynamical equations (52) become independent. We can identify coefficients \(A_{Y}^{\text{irr}}=6\alpha^{2}\), \(A_{Y}^{\text{rev}}=0\), \(D_{Y}=2\alpha^{2}\), and \(A_{\theta}^{\text{irr}}=0\), \(A_{\theta}^{\text{rev}}=0\), \(D_{\theta}=2\alpha^{2}\) and use Eq. (36) to identify contributions to the stochastic entropy production. The system stochastic entropy production can be computed using the pdf in Eq. (54). After some manipulation we find that
\[d\Delta s_{\text{tot}}\approx 18\alpha^{2}dt+6\alpha\,dW_{Y}, \tag{55}\]
and thus the stochastic entropy production increases at a mean rate of \(18\alpha^{2}\). This is more than twice the mean rate of production in Eq. (46) for the case of measurement of \(\sigma_{z}\) alone. The continued increase is once again a consequence of the non-stationary character of the evolution: the dynamics have the effect of purifying the system, but only as \(t\rightarrow\infty\).
For the more general situation, without taking \(t\) to be large, it is possible to compute the stochastic entropy production numerically, based on the more elaborate coefficients of the SDEs in Eqs. (52), and a general solution to the associated Fokker-Planck equation. Mean stochastic entropy production over an ensemble of 10 trajectories is given in Fig. 9, separating \(\langle\Delta s_{\text{tot}}\rangle\) into contributions \(\langle\Delta s_{\text{sys}}\rangle=-\Delta\langle\ln p\rangle\) and \(\langle\Delta s_{\text{meas}}\rangle=\langle\Delta s_{\text{tot}}\rangle- \langle\Delta s_{\text{sys}}\rangle\). The significance of this separation is that
\[-\Delta\langle\ln p\rangle =-\int p(Y,\theta,t)\ln p(Y,\theta,t)dYd\theta\] \[+\int p(Y,\theta,0)\ln p(Y,\theta,0)dYd\theta, \tag{56}\]
is the change in Gibbs entropy \(\Delta S_{G}\) of the system when described using the pdf in \(Y,\theta\) coordinates. Note that the Gibbs entropy is coordinate frame dependent and is therefore a measure of the uncertainty of adopted coordinates in a specific frame. In contrast, the mean stochastic entropy production is independent of coordinate frame.
### Measurement of two non-commuting observables for a pure state
Simultaneous measurement of \(\sigma_{z}\) and \(\sigma_{x}\) leads asymptotically to a pure state located on a circle of radius \(r=\sqrt{r_{x}^{2}+r_{z}^{2}}=1\) in the \((r_{x},r_{z})\) coordinate space. It is of interest now to consider how the pdf of this pure state over the angle \(\theta\) (shown in Fig. 7) depends on the ratio of the strengths of measurement of the two observables, and to compute the stochastic entropy production arising from changes in this ratio.
We therefore return to Eq. (50), set \(r_{x}=\sin\theta\), \(r_{z}=\cos\theta\) and derive an SDE for \(\theta\) in the form
\[d\theta =\left(\alpha_{x}^{2}-\alpha_{z}^{2}\right)\sin 2\theta dt+2 \alpha_{x}\cos\theta\,dW_{x}-2\alpha_{z}\sin\theta\,dW_{z}\] \[=\left(\alpha_{x}^{2}-\alpha_{z}^{2}\right)\sin 2\theta dt+2\left( \alpha_{x}^{2}\cos^{2}\theta+\alpha_{z}^{2}\sin^{2}\theta\right)^{1/2}dW, \tag{57}\]
which depends on the two measurement strengths \(\alpha_{x}\) and \(\alpha_{z}\), and where \(dW\) is a Wiener increment. The Fokker-Planck equation for the pdf \(p(\theta,t)\) reads
\[\frac{\partial p(\theta,t)}{\partial t} =-\frac{\partial}{\partial\theta}\Big{[}\left(\alpha_{x}^{2}- \alpha_{z}^{2}\right)\sin 2\theta\,p(\theta,t) \tag{58}\] \[-2\frac{\partial}{\partial\theta}\left(\alpha_{x}^{2}\cos^{2} \theta+\alpha_{z}^{2}\sin^{2}\theta\right)p(\theta,t)\Big{]}, \tag{59}\]
Figure 9: Mean stochastic entropy production \(\langle\Delta s_{\text{tot}}\rangle\) for simultaneous measurement of observables \(\sigma_{x}\) and \(\sigma_{z}\), separated into contributions associated with the system and measuring device, \(\langle\Delta s_{\text{sys}}\rangle\) and \(\langle\Delta s_{\text{meas}}\rangle\), respectively. The strengths of measurement \(\alpha_{x}\) and \(\alpha_{z}\) are both set to unity and the numerically generated ensemble consisted of ten trajectories. The mean stochastic entropy production is consistent with the estimate in Eq. (55).
and this has stationary solutions (with zero probability current) given by
\[p_{\rm st}(\theta)=\frac{\sqrt{2}\mu^{2}\left(1+\mu^{2}-\left(1-\mu^{2}\right) \cos 2\theta\right)^{-3/2}}{E\left(1-\mu^{2}\right)+\mu E\left(1-\mu^{-2} \right)}, \tag{60}\]
where \(E(x)=\int_{0}^{\pi/2}\left(1-x\sin^{2}\phi\right)^{1/2}d\phi\) is the complete elliptic integral of the second kind and \(\mu=\alpha_{x}/\alpha_{z}\) is the ratio of the two measurement strengths. Examples of stationary pdfs for various values of \(\mu\) are shown in Fig. 10. Clearly a greater strength of measurement of observable \(\sigma_{x}\) produces higher probability density in the vicinity of the eigenstates of \(\sigma_{x}\) at \(\theta=\pm\pi/2\) than in the vicinity of the eigenstates of \(\sigma_{z}\) at \(\theta=0\) and \(\pi\), and vice versa.
Note that a form of Heisenberg uncertainty is exhibited by the stationary pdf. In quantum state diffusion, \(r_{x}=\mathrm{Tr}(\rho\sigma_{x})\) and \(r_{z}=\mathrm{Tr}(\rho\sigma_{z})\) are real physical properties of the quantum state that are correlated in their evolution. The expectation value of each in the stationary state is zero:
\[\langle r_{z}\rangle =\int_{-\pi}^{\pi}\cos\theta\,p_{\rm st}(\theta)d\theta=0\] \[\langle r_{x}\rangle =\int_{-\pi}^{\pi}\sin\theta\,p_{\rm st}(\theta)d\theta=0, \tag{61}\]
while the variances \(\langle r_{z}^{2}\rangle-\langle r_{z}\rangle^{2}=\int_{-\pi}^{\pi}\cos^{2} \theta\,p_{\rm st}(\theta)d\theta\) and \(\langle r_{x}^{2}\rangle-\langle r_{x}\rangle^{2}=\int_{-\pi}^{\pi}\sin^{2} \theta\,p_{\rm st}(\theta)d\theta\) sum to unity. A higher measurement strength for one observable drives up the variance of the associated variable (namely the adopted values lie close to either \(1\) or \(-1\)) while driving down the variance of the other variable (the value lies close to zero).
The stochastic entropy production associated with the dynamics of \(\theta\) is specified by \(A_{\theta}^{\rm rev}=0\), \(A_{\theta}^{\rm irr}=\left(\alpha_{x}^{2}-\alpha_{z}^{2}\right)\sin 2\theta\), and \(D_{\theta}=2\left(\alpha_{x}^{2}\cos^{2}\theta+\alpha_{z}^{2}\sin^{2}\theta\right)\) which leads to
\[d\Delta s_{\rm tot} =\left(6\left(\alpha_{x}^{2}-\alpha_{z}^{2}\right)\cos 2\theta+ \frac{9\left(\alpha_{x}^{2}-\alpha_{z}^{2}\right)^{2}\sin^{2}2\theta}{2\left( \alpha_{x}^{2}\cos^{2}\theta+\alpha_{z}^{2}\sin^{2}\theta\right)}\right)dt\] \[+\frac{3\left(\alpha_{x}^{2}-\alpha_{z}^{2}\right)\sin 2\theta}{ \left(\alpha_{x}^{2}\cos^{2}\theta+\alpha_{z}^{2}\sin^{2}\theta\right)^{1/2}} dW-d\ln p(\theta,t). \tag{62}\]
The dynamic and entropic consequences of changing the ratio of measurement strengths, for an initially pure state, can be established by solving Eqs. (57), (58) and (62) for a given protocol. But we instead focus attention on a case with an analytic result. The asymptotic mean production of stochastic entropy for a transition from a uniform stationary pdf over \(\theta\), at equal measurement strengths \(\alpha_{x}^{i}=\alpha_{z}^{i}\), to a final stationary state brought about by an abrupt change in measurement strengths to \(\alpha_{x}^{f}=\mu\alpha_{z}^{f}\) at \(t=0\), takes the form of a Kullback-Leibler divergence:
\[\langle\Delta s_{\rm tot}\rangle_{\infty}=\int p_{\rm st}^{i}(\theta)\ln\left( p_{\rm st}^{i}(\theta)/p_{\rm st}^{f}(\theta)\right)d\theta, \tag{63}\]
where the \(p_{\rm st}^{i,f}(\theta)\) correspond to Eq. (60) with the insertion of \(\alpha_{x}^{i,f}\) and \(\alpha_{z}^{i,f}\). This can be derived by noting that \(d\Delta s_{\rm tot}=-d\ln\left(p(\theta,t)/p_{\rm st}(\theta)\right)\) in this case. We plot \(\langle\Delta s_{\rm tot}\rangle_{\infty}\) for various ratios of final measurement strengths \(\mu\) in Fig. 11. Note that elevation of the measurement strength of one of the observables relative to the other leads to positive mean stochastic entropy production, in accordance with the second law, and the effect for enhanced measurement of \(\sigma_{x}\) relative to \(\sigma_{z}\) is the same as for similarly enhanced measurement of \(\sigma_{z}\), i.e. the same production emerges for ratios \(\mu\) and \(1/\mu\).
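Both the stationary pdf and the Kullback-Leibler expression are straightforward to evaluate numerically. A possible sketch (ours, not the authors' code) uses SciPy's complete elliptic integral of the second kind, which follows the same parameter convention as \(E(x)\) above, and checks the normalisation of Eq. (60) and the \(\mu\leftrightarrow 1/\mu\) symmetry noted in the text.

```python
import numpy as np
from scipy.special import ellipe
from scipy.integrate import quad

def p_st(theta, mu):
    """Stationary pdf of Eq. (60); mu = alpha_x / alpha_z."""
    norm = ellipe(1.0 - mu**2) + mu * ellipe(1.0 - mu**-2)
    return np.sqrt(2.0) * mu**2 * (1.0 + mu**2 - (1.0 - mu**2) * np.cos(2.0 * theta))**-1.5 / norm

def mean_entropy_production(mu_f, mu_i=1.0):
    """Asymptotic <Delta s_tot> of Eq. (63) for an abrupt change mu_i -> mu_f."""
    integrand = lambda th: p_st(th, mu_i) * np.log(p_st(th, mu_i) / p_st(th, mu_f))
    return quad(integrand, -np.pi, np.pi)[0]

# sanity checks: normalisation of Eq. (60), and the mu <-> 1/mu symmetry
print(quad(lambda th: p_st(th, 2.0), -np.pi, np.pi)[0])              # ~ 1
print(mean_entropy_production(2.0), mean_entropy_production(0.5))    # equal, up to quadrature error
```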
Figure 11: The asymptotic mean stochastic entropy production brought about by an abrupt change in the ratio \(\mu=\alpha_{x}/\alpha_{z}\), starting from equal measurement strengths \(\alpha_{x}=\alpha_{z}\). The initial and final stationary pdfs for \(\mu^{2}=0.2\), \(2\) and \(5\), from Fig. 10, together with arrows indicating the change in shape in the process, are shown in the insets.
## IV Interpretation
We return now to the physical interpretation of stochastic entropy production in open quantum systems. By analogy with situations in classical dynamics, the average of the stochastic entropy production \(\Delta s_{\rm tot}\) that accompanies the evolution expresses change in subjective uncertainty concerning the details of the quantum state of the world. We have argued that this uncertainty is generated in the same way as in classical physics. Namely that the dynamical evolution of the world is deterministic, but chaotic, and that we do not or cannot attempt to solve the equations of motion for the coordinates exactly. We instead coarse-grain aspects of the description and employ a set of stochastic equations that capture the resulting unpredictability in evolution, again just as in a classical situation. Such modelling methods can only provide statistical predictions, and hence are characterised by an increase in entropy of (our perception of) the world. This is not a physical effect, but merely a measure of the absence of subjective knowledge, again just as in classical thermodynamics. The key point is that we take the quantum state vector of the world, and hence the reduced density matrix of an open system, to be the appropriate fundamental physical description, analogous to classical phase space coordinates.
It is possible to derive such a stochastic model from an underlying Hamiltonian describing the system and environment [38], but here we have adopted a more direct approach, using a framework of quantum state diffusion to represent the environmental disturbances. The resulting Markovian stochastic rules of evolution, specified by Kraus operators, are designed to drive a system continuously and (pseudo)randomly towards one of its eigenstates. This is our conception of the process of quantum measurement, instead of instantaneous projection. The resulting evolution of the reduced density matrix resembles a path taken by a Brownian particle, and it can be described using a Fokker-Planck equation for a pdf over a suitable phase space, or an Ito process that specifies a stochastic trajectory.
But what is the point of stochastic entropy production? Its main purpose, in both classical and quantum systems, is in providing a measure of the apparent irreversibility of evolution and hence an arrow of time. Both of these depend on the scale of the coarse-graining. The definition in Eq. (10) involves a comparison between the likelihoods, computed according to the stochastic model employed, of forward and backward sequences of events. A departure of \(\Delta s_{\rm tot}\) from zero indicates that the model dynamics generate one of these sequences preferentially; that the dynamics are effectively irreversible in the sense of breaking time reversal symmetry. And since the stochastic model is intended to capture the effects of chaos in the underlying dynamics, the preferred sequences will exhibit effects such as dispersion rather than assembly.
Nevertheless, parts of the world can become better defined as time evolves according to these models. Entropy production in a quantum framework can be used to characterise the approach of an open system towards a stationary state as well as the selection of an eigenstate under measurement. The latter is not so very different from classical measurement, which can also be shown to have an entropic cost in simple models [47]. Furthermore, we can conceive of quantum processes that are reversible, in the sense that the average of \(\Delta s_{\rm tot}\) is zero. This would arise, as in classical circumstances, when the driving of the system, for example the rate of change of coupling to a measuring device, becomes quasistatic. Hence quantum measurement need not be irreversible, neither in the dynamic nor in the entropic sense.
## V Conclusions
Entropy production represents increasing subjective uncertainty of microscopic configuration brought about by employing stochastic models of the dynamics instead of the underlying deterministic equations of motion that are responsible for typical chaotic, dispersive behaviour. These ideas can apply to quantum systems, where we regard the reduced density matrix as a physical property analogous to a set of physical coordinates of a classical system. The reduced density matrix evolves pseudorandomly through interactions with an underspecified environment, which we represent in a minimal fashion using Kraus operators and a framework of Markovian quantum state diffusion. We concern ourselves with the uncertainty regarding the reduced density matrix that is actually adopted by the system. Stochastic entropy production can then be computed using analysis of the relative probabilities of forward and backward Brownian trajectories of the reduced density matrix.
The usual features of quantum mechanics are captured by the dynamics, in particular the stochastic selection of an eigenstate according to the Born rule. A further feature has been explored, for a simple two level system, where the simultaneous measurement of two observables represented by non-commuting operators can be considered. The system is prevented from selecting an eigenstate of either operator, as expected, and instead adopts a pure state with correlated stationary uncertainty with respect to the two observables.
The models of measurement used here have the effect of purifying the system, i.e. eliminating any initial entanglement between the system and its environment. Such entanglement is often considered to be the outcome of measurement, so this is perhaps an unusual viewpoint, though perfectly in line with the idea that a system is left in an eigenstate after the process of measurement. The final state of the environment (the measuring device) is correlated with the final state of the system even in the absence of entanglement, and hence is able to convey information about the system and preserve a record of the measurement.
We suggest that the reduced density matrix typically used to describe an open quantum system is an average over an ensemble of adoptable states; pure as well as those entangled with the environment. Moreover, the ensemble average is not a suitable description for eigenstate selection. This problem is usually overcome by introducing a process of projective measurement that takes place outside the regular dynamics, but such a difficulty is not present in quantum state diffusion.
The dynamics therefore conceptualise quantum mechanics as the evolution of real physical properties that behave in a complex but relatively unmysterious fashion. The quantum state is more than a provider of information about probabilities of projective measurement outcomes. The reduced density matrix, and by implication the quantum state vector of the world, are treated as physical coordinates and not merely bearers of information.
However, the main purpose of this paper has been to use such a conceptual framework to provide explicit examples of stochastic entropy production for a simple open quantum system, and to suggest that this quantity is the most appropriate extension into the quantum regime of the modern concept of entropy production. We have studied stochastic entropy production for scenarios involving the measurement of one and then two observables. Mean stochastic entropy production in this context measures changes in subjective uncertainty concerning the adopted quantum state of the world. It never decreases, thus satisfying the second law of thermodynamics. The von Neumann entropy is a measure of uncertainty in measurement outcome, but compared to mean stochastic entropy production it plays a rather different role. The connections between the two will be worth exploring further.
###### Acknowledgements.
This work was supported by the U.K. Engineering and Physical Sciences Research Council through the Centre for Doctoral Training in Delivering Quantum Technologies at UCL, grant number 1489394.
|
2306.03065 | LibAUC: A Deep Learning Library for X-Risk Optimization | This paper introduces the award-winning deep learning (DL) library called
LibAUC for implementing state-of-the-art algorithms towards optimizing a family
of risk functions named X-risks. X-risks refer to a family of compositional
functions in which the loss function of each data point is defined in a way
that contrasts the data point with a large number of others. They have broad
applications in AI for solving classical and emerging problems, including but
not limited to classification for imbalanced data (CID), learning to rank
(LTR), and contrastive learning of representations (CLR). The motivation of
developing LibAUC is to address the convergence issues of existing libraries
for solving these problems. In particular, existing libraries may not converge
or require very large mini-batch sizes in order to attain good performance for
these problems, due to the usage of the standard mini-batch technique in the
empirical risk minimization (ERM) framework. Our library is for deep X-risk
optimization (DXO) that has achieved great success in solving a variety of
tasks for CID, LTR and CLR. The contributions of this paper include: (1) It
introduces a new mini-batch based pipeline for implementing DXO algorithms,
which differs from existing DL pipeline in the design of controlled data
samplers and dynamic mini-batch losses; (2) It provides extensive benchmarking
experiments for ablation studies and comparison with existing libraries. The
LibAUC library features scalable performance for millions of items to be
contrasted, faster and better convergence than existing libraries for
optimizing X-risks, seamless PyTorch deployment and versatile APIs for various
loss optimization. Our library is available to the open source community at
https://github.com/Optimization-AI/LibAUC, to facilitate further academic
research and industrial applications. | Zhuoning Yuan, Dixian Zhu, Zi-Hao Qiu, Gang Li, Xuanhui Wang, Tianbao Yang | 2023-06-05T17:43:46Z | http://arxiv.org/abs/2306.03065v1 | # LibAUC: A Deep Learning Library for X-Risk Optimization
###### Abstract.
This paper introduces the award-winning deep learning (DL) library called _LibAUC_ for implementing state-of-the-art algorithms towards optimizing a family of risk functions named _X-risks_. X-risks refer to a family of _compositional functions_ in which the loss function of each data point is defined in a way that contrasts the data point with a large number of others. They have broad applications in AI for solving classical and emerging problems, including but not limited to classification for imbalanced data (CID), learning to rank (LTR), and contrastive learning of representations (CLR). The motivation of developing LibAUC is to address the convergence issues of existing libraries for solving these problems. In particular, existing libraries may not converge or require very large mini-batch sizes in order to attain good performance for these problems, due to the usage of the standard mini-batch technique in the empirical risk minimization (ERM) framework. Our library is for _deep X-risk optimization (DXO)_ that has achieved great success in solving a variety of tasks for CID, LTR and CLR. The contributions of this paper include: (1) It introduces a new mini-batch based pipeline for implementing DXO algorithms, which differs from existing DL pipeline in the design of _controlled data samplers and dynamic mini-batch losses_, (2) It provides extensive benchmarking experiments for ablation studies and comparison with existing libraries. The LibAUC library features scalable performance for millions of items to be contrasted, faster and better convergence than existing libraries for optimizing X-risks, seamless PyTorch deployment and versatile APIs for various loss optimization. Our library is available to the open source community at [https://github.com/Optimization-AI/LibAUC](https://github.com/Optimization-AI/LibAUC), to facilitate further academic research and industrial applications.
Deep learning, Library, X-Risk, Optimization
the 1st Place at the Stanford CheXpert Competition (Zhang et al., 2017) and MIT AICures Challenge (Zhang et al., 2017). Hence, it deserves in-depth discussions about the design principles and unique features to facilitate future research and development for DXO.
This paper aims to present the underlying design principles of the LibAUC library and provide a comprehensive study of the library regarding its unique features of design and superior performance compared to existing libraries. The unique design features of the LibAUC library include (i) _dynamic mini-batch losses_, which are designed for computing the stochastic gradients of X-risks by automatic differentiation to ensure convergence; (ii) _controlled data samplers_, which differ from standard random data samplers in that the ratio of the number of positive data to the number of negative data can be controlled and tuned to boost the performance. The superiority of the LibAUC library lies in: (i) it is scalable to millions of items to be ranked or contrasted with respect to an anchor data point; (ii) it is robust to small mini-batch sizes, because all implemented algorithms have theoretical convergence guarantees regardless of mini-batch sizes; and (iii) it converges faster and to better solutions than existing libraries for optimizing a variety of compositional losses/measures suitable for CID, LTR and CLR.
To the best of our knowledge, LibAUC is the first DL library that provides easy-to-use APIs for optimizing a wide range of X-risks. Our main contributions for this work are summarized as follows:
* We propose a novel DL pipeline to support efficient implementation of DXO algorithms, and provide implementation details of two unique features of our pipeline, namely dynamic mini-batch losses and controlled data samplers.
* We present extensive empirical studies to demonstrate the effectiveness of the unique features of the LibAUC library, and the superior performance of LibAUC compared to existing DL libraries/approaches for solving the three tasks, i.e., CID, LTR and CLR.
## 2. Deep X-Risk Optimization (DXO)
This section provides necessary background about DXO. We refer readers to (Zhang et al., 2017) for more discussions about theoretical guarantees.
### A Brief History
The min-max optimization for deep AUROC maximization was studied in several earlier works (Zhou et al., 2017; Zhang et al., 2017). Later, deep AUPRC/AP maximization was proposed by Qi et al. (Qi et al., 2017), which formulates the problem as a novel class of finite-sum coupled compositional optimization (FCCO) problem. The algorithm design and analysis for FCCO were improved in subsequent works (Zhang et al., 2017; Zhang et al., 2017; Zhang et al., 2017). Recently, the FCCO techniques were used for partial AUC maximization (Zhang et al., 2017), NDCG and top-\(K\) NDCG optimization (Zhang et al., 2017), and stochastic optimization of global contrastive losses with a small batch size (Zhou et al., 2017). More recently, Yang et al. (Yang et al., 2017) proposed the X-risk optimization framework, which aims to provide a unified venue for studying the optimization of different X-risks. The difference between this work and these previous works is that we aim to provide a technical justification for the library design towards implementing DXO algorithms for practical usage, and comprehensive studies of unique features and superiority of LibAUC over existing DL libraries.
### Notations
For **CID**, let \(\mathcal{S}=\{(\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{n},y_{n})\}\) denote a set of training data, where \(\mathbf{x}_{i}\in\mathcal{X}\subset\mathbb{R}^{d_{in}}\) denotes the input feature vector and \(y_{i}\in\{1,-1\}\) denotes the corresponding label. Let \(\mathcal{S}_{+}=\{\mathbf{x}_{i}:y_{i}=1\}\) contain \(n_{+}\) positive examples and \(\mathcal{S}_{-}=\{\mathbf{x}_{i}:y_{i}=-1\}\) contain \(n_{-}\) negative examples. Denote by \(h_{\mathbf{w}}(\mathbf{x}):\mathcal{X}\rightarrow\mathbb{R}\) a parametric predictive function (e.g., a deep neural network) with a parameter \(\mathbf{w}\in\mathbb{R}^{d}\). We use \(\mathbb{E}_{\mathbf{x}\sim\mathcal{S}}=\frac{1}{|\mathcal{S}|}\sum_{\mathbf{ x}\in\mathcal{S}}\) interchangeably below.
For **LTR**, let \(\mathcal{Q}\) denote a set of \(N\) queries. For a query \(q\in\mathcal{Q}\), let \(\mathcal{S}_{q}=\{\mathbf{x}_{i}^{q},i=1,\ldots,N_{q}\}\) denote a set of \(N_{q}\) items (e.g., documents, movies) to be ranked. For each \(\mathbf{x}_{i}^{q}\in\mathcal{S}_{q}\), let \(y_{i}^{q}\in\mathbb{R}^{+}\) denote its relevance score, which measures the relevance between query \(q\) and item \(\mathbf{x}_{i}^{q}\). Let \(\mathcal{S}_{q}^{+}\subseteq\mathcal{S}_{q}\) denote a set of \(N_{q}^{+}\) (positive) items _relevant_ to \(q\), whose relevance scores are _non-zero_. Let \(\mathcal{S}=\{(q,\mathbf{x}_{i}^{q}),q\in\mathcal{Q},\mathbf{x}_{i}^{q}\in\mathcal{S}_{q}^{+}\}\) denote all relevant query-item (Q-I) pairs. Denote by \(h_{\mathbf{w}}(\mathbf{x};q):\mathcal{X}\times\mathcal{Q}\rightarrow\mathbb{R}\) a parametric predictive function that outputs a predicted relevance score for \(\mathbf{x}\) with respect to \(q\).
For **CLR**, let \(\mathcal{S}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\) denote a set of anchor data, and let \(\mathcal{S}_{i}^{-}\) denote a set containing all negative samples with respect to \(\mathbf{x}_{i}\). For unimodal SSL, \(\mathcal{S}_{i}^{-}\) can be constructed by applying different data augmentations to all data excluding \(\mathbf{x}_{i}\). For bimodal SSL, \(\mathcal{S}_{i}^{-}\) can be constructed by including the different view of all data excluding \(\mathbf{x}_{i}\). The goal of representation learning is to learn a feature encoder network \(h_{\mathbf{w}}(\cdot)\in\mathbb{R}^{d_{q}}\) parameterized by a vector \(\mathbf{w}\in\mathbb{R}^{d}\) that outputs an encoded feature vector for an input data.
### The X-Risk Optimization Framework
We use the following definition of X-risks given by (Zhang et al., 2017).
Definition 1 ((Zhang et al., 2017)) **X-risks** refer to a family of compositional measures in which the loss function of each data point is defined in a way that contrasts the data point with a large number of others. Mathematically, X-risk optimization can be cast into the following abstract optimization problem:
\[\min_{\mathbf{w}\in\mathbb{R}^{d}}F(\mathbf{w})=\frac{1}{|\mathcal{S}|}\sum\nolimits_{\mathbf{z}_{i}\in\mathcal{S}}f_{i}(g(\mathbf{w};\mathbf{z}_{i},\mathcal{S}_{i})), \tag{1}\]
where \(g:\mathbb{R}^{d}\mapsto\mathcal{R}\) is a mapping. \(f_{i}:\mathcal{R}\mapsto\mathbb{R}\) is a simple deterministic function, \(\mathcal{S}=\{\mathbf{z}_{1},\ldots,\mathbf{z}_{m}\}\) denotes a target set of data points, and \(\mathcal{S}_{i}\) denotes a reference set of data points dependent or independent of \(\mathbf{z}_{i}\).
The most common form of \(g(\mathbf{w};\mathbf{z},\mathcal{S})\) is the following:
\[g(\mathbf{w};\mathbf{z}_{i},\mathcal{S}_{i})=\frac{1}{|\mathcal{S}_{i}|}\sum \nolimits_{\mathbf{z}_{j}\in\mathcal{S}_{i}}\ell(\mathbf{w};\mathbf{z}_{i}, \mathbf{z}_{j}), \tag{2}\]
where \(\ell(\mathbf{w};\mathbf{z}_{i},\mathbf{z}_{j})=\ell(h_{\mathbf{w}}(\mathbf{z}_ {i}),h_{\mathbf{w}}(\mathbf{z}_{j}))\) is a pairwise loss.
As a result, many DXO problems will be formulated as **FCCO**(Zhang et al., 2017):
\[\min_{\mathbf{w}}\frac{1}{|\mathcal{S}|}\sum\nolimits_{\mathbf{z}_{i}\in\mathcal{S}}f_{i}\left(\frac{1}{|\mathcal{S}_{i}|}\sum\nolimits_{\mathbf{z}_{j}\in\mathcal{S}_{i}}\ell(h_{\mathbf{w}}(\mathbf{z}_{i}),h_{\mathbf{w}}(\mathbf{z}_{j}))\right). \tag{3}\]
The FCCO problem is subtly different from the traditional stochastic compositional optimization (Zhou et al., 2017) due to the coupling of a pair of data in the inner function. Almost all X-risks considered in this paper, including AUROC, AUPRC/AP, pAUC, NDCG, top-\(K\) NDCG, listwise CE loss, GCL, can be formulated as FCCO or its variants.
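A practical consequence of this coupling is that naively back-propagating through a plain mini-batch estimate of the objective yields biased gradients, because the noisy inner estimate sits inside the nonlinear outer function. A common remedy in the FCCO literature is to keep a per-anchor moving-average estimate of the inner function and plug it into the outer derivative. The PyTorch sketch below (our illustration with hypothetical names, intended as intuition for the "dynamic mini-batch loss" idea discussed in Section 3, not as the library's API) captures that template.

```python
import torch

def sox_style_surrogate(g_hat, u, ids, f_prime, gamma=0.9):
    """Generic building block for Eq.-(3)-type objectives (illustrative sketch).

    g_hat   : [B] mini-batch estimates of the inner averages g_i for the sampled anchors
    u       : buffer holding a moving-average estimate of g_i for every anchor i
    ids     : indices of the sampled anchors into u
    f_prime : callable returning f_i'(.) evaluated elementwise

    Returns a scalar "dynamic mini-batch loss" whose gradient equals
    f_i'(u_i) * grad g_hat_i, i.e. the chain rule with the inner value replaced
    by its tracked estimate rather than by the raw mini-batch estimate.
    """
    with torch.no_grad():
        u[ids] = gamma * u[ids] + (1.0 - gamma) * g_hat
    return (f_prime(u[ids]) * g_hat).mean()
```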
Besides the common formulation above, in the development of the LibAUC library two other optimization problems are also used, including the min-max optimization and multi-block bilevel optimization. The min-max formulation is used to formulate a family of surrogate losses of AUROC, and the multi-block bilevel optimization is useful for formulating ranking performance measures defined only on top-\(K\) items in the ranked list, including top-\(K\) NDCG, precision at a certain recall level, etc. In summary, we present a mapping of different X-risks to different optimization problems in Figure 1, which is a simplified one from (Kumar et al., 2018).
### X-risks in LibAUC
Below, we discuss how different X-risks are formulated for developing their optimization algorithms in the LibAUC library.
**Area Under the ROC Curve (AUROC).** Two formulations have been considered for AUROC maximization in the literature. A standard formulation is the pairwise loss minimization (Kumar et al., 2018):
\[\min_{\mathbf{w}\in\mathbb{R}^{d}}\mathbb{E}_{\mathbf{x}_{i}\in\mathcal{S}_{+}}\mathbb{E}_{\mathbf{x}_{j}\in\mathcal{S}_{-}}\ell(h_{\mathbf{w}}(\mathbf{x}_{j})-h_{\mathbf{w}}(\mathbf{x}_{i})),\]
where \(\ell(\cdot)\) is a surrogate loss. Another formulation is following the min-max optimization (Kumar et al., 2018; Kumar et al., 2018):
\[\min_{\mathbf{w},a,b}\max_{\alpha\in\Omega} \mathbb{E}_{\mathbf{x}_{i}\sim\mathcal{S}_{+}}[(h_{\mathbf{w}}(\mathbf{x}_{i})-a)^{2}]+\mathbb{E}_{\mathbf{x}_{j}\sim\mathcal{S}_{-}}[(h_{\mathbf{w}}(\mathbf{x}_{j})-b)^{2}]\] \[+\alpha(\mathbb{E}_{\mathbf{x}_{j}\sim\mathcal{S}_{-}}[h_{\mathbf{w}}(\mathbf{x}_{j})]-\mathbb{E}_{\mathbf{x}_{i}\sim\mathcal{S}_{+}}[h_{\mathbf{w}}(\mathbf{x}_{i})]+c)-\frac{\alpha^{2}}{2},\]
where \(c>0\) is a margin parameter and \(\Omega\subset\mathbb{R}\). **In LibAUC,** we have implemented an efficient algorithm (PESG) for optimizing the above min-max AUC margin (AUCM) loss with \(\Omega=\mathbb{R}_{+}\)(Kumar et al., 2018). The comparison between optimizing the pairwise loss formulation and the min-max formulation can be found in (Kumar et al., 2018).
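As a rough illustration of what this objective looks like in PyTorch (a sketch of ours, not the LibAUC API), the min-max AUC-margin loss can be written as a module holding \(a\), \(b\) and the dual variable \(\alpha\):

```python
import torch

class AUCMarginLoss(torch.nn.Module):
    """Sketch of the min-max AUC-margin objective above (illustrative; not the LibAUC API).

    a and b track the class-conditional score centres, alpha is the dual variable.
    In a PESG-style scheme, the model and (a, b) take descent steps while alpha takes
    projected ascent steps onto alpha >= 0 after each backward pass.
    """
    def __init__(self, margin=1.0):
        super().__init__()
        self.a = torch.nn.Parameter(torch.zeros(1))
        self.b = torch.nn.Parameter(torch.zeros(1))
        self.alpha = torch.nn.Parameter(torch.zeros(1))   # updated by ascent, not descent
        self.margin = margin

    def forward(self, scores, labels):
        pos, neg = scores[labels == 1], scores[labels == -1]
        return (((pos - self.a) ** 2).mean() + ((neg - self.b) ** 2).mean()
                + self.alpha * (neg.mean() - pos.mean() + self.margin)
                - 0.5 * self.alpha ** 2)
```

Note that each mini-batch must contain both classes for this loss to be well defined, which is exactly what the controlled data samplers described in Section 3 are designed to guarantee.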
**Partial Area Under ROC Curve (pAUC)** is defined as area under the ROC Curve with a restriction on the range of false positive rate (FPR) and/or true positive rate (TPR). For simplicity, we only consider pAUC with FPR restricted to be less than \(\beta\in(0,1]\). Let \(\mathcal{S}^{1}\left[k_{1},k_{2}\right]\subset\mathcal{S}\) be the subset of examples whose ranks in terms of their prediction scores in the descending order are in the range of \([k_{1},k_{2}]\), where \(k_{1}\leq k_{2}\). Then, optimizing pAUC with FPR\(\leq\beta\) can be cast into:
\[\min_{\mathbf{w}}\frac{1}{n_{+}}\frac{1}{k}\sum_{\mathbf{x}_{i}\in\mathcal{S}_{+}}\sum_{\mathbf{x}_{j}\in\mathcal{S}_{-}^{1}[1,k]}\ell(h_{\mathbf{w}}(\mathbf{x}_{j})-h_{\mathbf{w}}(\mathbf{x}_{i})),\]
where \(k=\lfloor n_{-}\beta\rfloor\). To tackle the challenge of handling \(\mathcal{S}_{-}^{1}[1,k]\) for data selection, we consider the following FCCO formulation (Kumar et al., 2018):
\[\min_{\mathbf{w}}\frac{1}{n_{+}}\sum_{\mathbf{x}_{i}\in\mathcal{S}_{+}}\lambda\log\mathbb{E}_{\mathbf{x}_{j}\in\mathcal{S}_{-}}\exp(\frac{\ell(h_{\mathbf{w}}(\mathbf{x}_{j})-h_{\mathbf{w}}(\mathbf{x}_{i}))}{\lambda}), \tag{3}\]
where \(\lambda>0\) is a temperature parameter that plays a similar role to \(k\). Let \(g(\mathbf{w};\mathbf{x}_{i},\mathcal{S}_{-})=\mathbb{E}_{\mathbf{x}_{j}\in\mathcal{S}_{-}}\exp(\ell(h_{\mathbf{w}}(\mathbf{x}_{j})-h_{\mathbf{w}}(\mathbf{x}_{i}))/\lambda)\) and \(f_{i}(g)=\lambda\log(g)\). Then (3) is a special case of FCCO. **In LibAUC,** we have implemented SOPAs for optimizing the above objective of one-way pAUC with FPR\(\leq\beta\) and SOTAs for optimizing a similarly formed surrogate loss of two-way pAUC with FPR\(\leq\beta\) and TPR\(\geq\alpha\) as proposed in (Kumar et al., 2018).
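For intuition, a plain mini-batch version of this smoothed objective can be written in a few lines (a hedged sketch of ours, using an assumed squared-hinge surrogate; it omits the moving-average machinery that gives the SOPA-style algorithms their convergence guarantees):

```python
import math
import torch

def soft_pauc_minibatch_loss(pos_scores, neg_scores, lam=1.0, margin=1.0):
    """Plain mini-batch version of the smoothed pAUC objective in Eq. (3) (illustrative).

    Uses a squared hinge ell(t) = max(0, t + margin)^2 as the pairwise surrogate. Estimating
    the inner expectation from sampled negatives inside the log makes this plain estimator
    biased; tracking that expectation per positive example removes the bias.
    """
    diff = neg_scores.unsqueeze(0) - pos_scores.unsqueeze(1)          # [P, N]
    ell = torch.clamp(diff + margin, min=0.0) ** 2
    log_mean_exp = torch.logsumexp(ell / lam, dim=1) - math.log(ell.shape[1])
    return (lam * log_mean_exp).mean()

# toy usage with random scores standing in for model outputs
p = torch.randn(8, requires_grad=True)
n = torch.randn(32, requires_grad=True)
soft_pauc_minibatch_loss(p, n).backward()
```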
**Area Under Precision-Recall Curve (AUPRC)** is an aggregated measure of precision of the model at all recall levels. A non-parametric estimator of AUPRC is Average Precision (AP) (Bordes et al., 2017):
\[\mathrm{AP}=\frac{1}{n_{+}}\sum_{\mathbf{x}_{i}\in\mathcal{S}_{+}}\frac{\sum_{\mathbf{x}_{j}\in\mathcal{S}_{+}}\mathbb{I}(h_{\mathbf{w}}(\mathbf{x}_{j})\geq h_{\mathbf{w}}(\mathbf{x}_{i}))}{\sum_{\mathbf{x}_{j}\in\mathcal{S}}\mathbb{I}(h_{\mathbf{w}}(\mathbf{x}_{j})\geq h_{\mathbf{w}}(\mathbf{x}_{i}))}.\]
By using a differentiable surrogate loss \(\ell(h_{\mathbf{w}}(\mathbf{x}_{j})-h_{\mathbf{w}}(\mathbf{x}_{i}))\) in place of \(\mathbb{I}(h_{\mathbf{w}}(\mathbf{x}_{j})\geq h_{\mathbf{w}}(\mathbf{x}_{i}))\), we consider the following FCCO formulation for AP maximization:
\[\min_{\mathbf{w}}\frac{1}{n_{+}}\sum_{\mathbf{x}_{i}\in\mathcal{S}_{+}}f(g_{1}(\mathbf{w};\mathbf{x}_{i},\mathcal{S}_{+}),g_{2}(\mathbf{w};\mathbf{x}_{i},\mathcal{S})),\]
where \(g_{1}(\mathbf{w};\mathbf{x}_{i},\mathcal{S}_{+})=\sum_{\mathbf{x}_{j}\in\mathcal{S}_{+}}\ell(h_{\mathbf{w}}(\mathbf{x}_{j})-h_{\mathbf{w}}(\mathbf{x}_{i}))\), \(g_{2}(\mathbf{w};\mathbf{x}_{i},\mathcal{S})=\sum_{\mathbf{x}_{j}\in\mathcal{S}}\ell(h_{\mathbf{w}}(\mathbf{x}_{j})-h_{\mathbf{w}}(\mathbf{x}_{i}))\), and \(f(g_{1},g_{2})=-\frac{g_{1}}{g_{2}}\). **In LibAUC**, we implemented the SOAP algorithm with a momentum SGD or Adam-style update (Kumar et al., 2018), which is a special case of SOX analyzed in (Kumar et al., 2018).
**Normalized Discounted Cumulative Gain (NDCG)** is a ranking performance metric for LTR tasks. The averaged NDCG over all queries can be expressed by
\[\frac{1}{N}\sum_{q\in\mathcal{Q}}\frac{1}{Z_{q}}\sum_{\mathbf{x}_{i}^{q}\in\mathcal{S}_{q}^{+}}\frac{2^{y_{i}^{q}}-1}{\log_{2}(r(\mathbf{w};\mathbf{x}_{i}^{q},\mathcal{S}_{q})+1)},\]
where \(r(\mathbf{w};\mathbf{x},\mathcal{S}_{q})=\sum_{\mathbf{x}^{\prime}\in\mathcal{S}_{q}}\mathbb{I}(h_{\mathbf{w}}(\mathbf{x}^{\prime},q)-h_{\mathbf{w}}(\mathbf{x},q)\geq 0)\) denotes the rank of \(\mathbf{x}\) in the set \(\mathcal{S}_{q}\) with respect to \(q\), and \(Z_{q}\) is the DCG score of a perfect ranking of items in \(\mathcal{S}_{q}\), which can be pre-computed. For optimization, the rank function \(r(\mathbf{w};\mathbf{x}_{i}^{q},\mathcal{S}_{q})\) is replaced by a differentiable surrogate loss, e.g., \(g(\mathbf{w};\mathbf{x}_{i},\mathcal{S}_{q})=\sum_{\mathbf{x}^{\prime}\in\mathcal{S}_{q}}\ell(h_{\mathbf{w}}(\mathbf{x}^{\prime},q)-h_{\mathbf{w}}(\mathbf{x},q))\). Hence, NDCG optimization is formulated as FCCO. **In LibAUC**, we implemented the SONG algorithm with a momentum or Adam-style update for NDCG optimization (Kumar et al., 2018), which is a special case of SOX analyzed in (Kumar et al., 2018).
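A bare-bones version of the resulting per-query surrogate can be sketched as follows (our illustration; it omits the moving-average tracking of \(g\) across iterations that the SONG-style algorithms rely on):

```python
import torch

def ndcg_surrogate(scores, rels, ideal_dcg, margin=1.0):
    """NDCG surrogate for a single query (illustrative): the rank r in the formula above
    is replaced by the smoothed pairwise count g described in the text.

    scores    : [n] predicted relevance scores for the items of one query
    rels      : [n] ground-truth relevance levels y_i^q (float tensor)
    ideal_dcg : the precomputed Z_q of a perfect ranking
    """
    diff = scores.unsqueeze(0) - scores.unsqueeze(1)              # diff[i, j] = s_j - s_i
    g = (torch.clamp(diff + margin, min=0.0) ** 2).sum(dim=1)     # smoothed rank of item i
    gains = 2.0 ** rels - 1.0
    return -(gains / torch.log2(g + 1.0)).sum() / ideal_dcg
```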
**Top-\(K\) NDCG** only computes the corresponding score for those that are ranked in the top-\(K\) positions. We follow (Kumar et al., 2018) to formulate top-\(K\) NDCG optimization as a multi-block bilevel optimization:
\[\min_{\mathbf{w}}-\frac{1}{N}\sum_{q=1}^{N}\frac{1}{Z_{q}^{K}}\sum_{\mathbf{x}_{i}^{q}\in\mathcal{S}_{q}^{+}}\frac{\sigma(h_{q}(\mathbf{x}_{i}^{q},\mathbf{w})-\lambda_{q}(\mathbf{w}))(2^{y_{i}^{q}}-1)}{\log_{2}(g(\mathbf{w};\mathbf{x}_{i}^{q},\mathcal{S}_{q})+1)},\]
\[\lambda_{q}(\mathbf{w})=\arg\min_{\lambda}L(\lambda,\mathbf{w};K,\mathcal{S}_{q}), \forall q\in\mathcal{Q},\]
where \(\sigma(\cdot)\) is a sigmoid function, \(Z_{q}^{K}\) is the top-\(K\) DCG score of a perfect ranking of items, and \(\lambda_{q}(\mathbf{w})\) is an approximation of the \((K+1)\)-th largest score of data in the set \(\mathcal{S}_{q}\). The detailed formulation of the lower-level problem \(L\) can be found in (Kumar et al., 2018). **In LibAUC**, we implemented the K-SONG algorithm for optimizing the above top-\(K\) NDCG objective (Kumar et al., 2018).
**Listwise Cross-Entropy (CE) loss** is another listwise loss for LTR, which can also be cast as FCCO with \(g(\mathbf{w};\mathbf{x}_{i}^{q},\mathcal{S}_{q})=\mathbb{E}_{\mathbf{x}\in\mathcal{S}_{q}}\exp(h_{\mathbf{w}}(\mathbf{x};q)-h_{\mathbf{w}}(\mathbf{x}_{i}^{q};q))\) and \(f_{q,i}(g)=P(y_{i}^{q})\log(g)\). **In LibAUC**, we implemented an optimization algorithm, similar to SONG, for optimizing listwise CE loss.
**Global Contrastive Losses (GCL)** are the global variants of contrastive losses used for unimodal and bimodal SSL. For unimodal SSL, GCL can be formulated as:
\[\min_{\mathbf{w}}\mathbb{E}_{\mathbf{x}_{i},\mathbf{x}_{i}^{*}}\tau\log \mathbb{E}_{\mathbf{x}_{j}\sim\mathcal{S}_{i}^{-}}\exp\left(\frac{h_{\mathbf{ w}}(\mathbf{x}_{i})^{\top}h_{\mathbf{w}}(\mathbf{x}_{j})-h_{\mathbf{w}}(\mathbf{x}_{i} )^{\top}h_{\mathbf{w}}(\mathbf{x}_{i}^{*})}{\tau}\right),\]
where \(\tau>0\) is a temperature parameter and \(\mathbf{x}_{i}^{*}\) denotes a positive sample for \(\mathbf{x}_{i}\). Different from (Cordes and Lazar, 2017; Chen et al., 2018), GCL uses all possible negative samples \(\mathcal{S}_{i}^{-}\) for each anchor data point instead of mini-batch samples \(\mathcal{B}\) (Chen et al., 2018), which helps address the large-batch training challenge in (Cordes and Lazar, 2017). **In LibAUC**, we implemented an optimization algorithm called SogCLR (Chen et al., 2018) for optimizing both unimodal/bimodal GCL.
As of June 4, 2023, the LibAUC library has been downloaded 36,000 times. We also implemented two additional algorithms namely MIDAM for solving multi-instance deep AUROC maximization (Zhou et al., 2019) and iSoCLR (Zhou et al., 2019) for optimizing GCL with individualized temperature parameters, which are not studied in this paper.
## 3. Library Design of LibAUC
The pipeline of training a DL model in the LibAUC library is shown in Figure 2, which consists of five modules, namely Dataset, Data Sampler, Model, Mini-batch Loss, and Optimizer. The Dataset module allows us to get a training sample, which includes its input and output. The Data Sampler module provides tools to sample a mini-batch of examples for training at each iteration. The Model module allows us to define different deep models. The Mini-batch Loss module defines a loss function on the selected mini-batch data for backpropagation. The Optimizer module implements methods for updating the model parameter given the computed gradient from backpropagation. While the Dataset, Model, and Optimizer modules are similar to those in existing libraries, the key differences lie in the Mini-batch Loss and Data Sampler modules. The Mini-batch Loss module in LibAUC is referred to as Dynamic Mini-batch Loss, which uses dynamically updated variables to adjust the mini-batch loss. The dynamic variables are defined in the dynamic mini-batch loss, which can be evaluated by forward propagation. In contrast, we refer to the Mini-batch Loss module in existing libraries as Static Mini-batch Loss, which only uses the sampled data to define a mini-batch loss in the same way as the objective but on mini-batch data. The Data Sampler module in LibAUC is referred to as Controlled Data Sampler, which differs from standard random data samplers in that the ratio of the number of positive data to the number of negative data can be controlled and tuned to boost the performance. Next, we provide more details of these two and other modules.
### Dynamic Mini-batch Loss
We first present the stochastic gradient estimator of the objective function, which directly motivates our design of the Dynamic Mini-batch Loss module.
For simplicity of exposition, we will mainly use the FCCO problem of pAUC optimization (3) to demonstrate the core ideas of the library design. The designs of other algorithms follow in a similar manner. The key challenge is to estimate the gradient using a mini-batch of samples. To motivate the stochastic gradient estimator, we first consider the full gradient given by
\[\nabla F(\mathbf{w})=\mathbb{E}_{\mathbf{x}_{i}\in\mathcal{S}_{+}}\nabla f(g(\mathbf{w};\mathbf{x}_{i},\mathcal{S}_{-}))\left(\mathbb{E}_{\mathbf{x}_{j}\in\mathcal{S}_{-}}\nabla\exp(\ell(\mathbf{w};\mathbf{x}_{i},\mathbf{x}_{j})/\lambda)\right).\]
To estimate the full gradient, the outer average over all data in \(\mathcal{S}_{+}\) can be estimated by sampling a mini-batch of data \(\mathcal{B}_{1}\subset\mathcal{S}_{+}\). Similarly, the average over \(\mathbf{x}_{j}\in\mathcal{S}_{-}\) in parentheses can also be estimated by sampling a mini-batch of data \(\mathcal{B}_{2}\subset\mathcal{S}_{-}\). A technical issue arises when estimating \(g(\mathbf{w};\mathbf{x}_{i},\mathcal{S}_{-})\) inside \(f\). A naive mini-batch approach is to simply estimate \(g(\mathbf{w};\mathbf{x}_{i},\mathcal{S}_{-})\) by using a mini-batch of data in \(\mathcal{B}_{2}\subset\mathcal{S}_{-}\), i.e., \(g(\mathbf{w};\mathbf{x}_{i},\mathcal{B}_{2})=\frac{1}{|\mathcal{B}_{2}|}\sum_{\mathbf{x}_{j}\in\mathcal{B}_{2}}\exp(\ell(\mathbf{w};\mathbf{x}_{i},\mathbf{x}_{j})/\lambda)\). However, the problem is that the resulting estimator \(\nabla f(g(\mathbf{w};\mathbf{x}_{i},\mathcal{B}_{2}))\) is biased because \(f\) is a non-linear function, and the estimation error will depend on the batch size \(|\mathcal{B}_{2}|\). As a result, the algorithm will not converge unless the batch size \(|\mathcal{B}_{2}|\) is very large. To address this issue, a moving average estimator is used to estimate \(g(\mathbf{w}_{t};\mathbf{x}_{i},\mathcal{S}_{-})\) at the \(t\)-th iteration (Shou et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018), which is updated for sampled data \(\mathbf{x}_{i}\in\mathcal{B}_{1}^{t}\) according to:
\[\mathbf{u}_{i}^{t+1}=(1-\gamma)\mathbf{u}_{i}^{t}+\gamma g(\mathbf{w}_{t};\mathbf{x}_{i},\mathcal{B}_{2}^{t})=(1-\gamma)\mathbf{u}_{i}^{t}+\gamma\frac{1}{|\mathcal{B}_{2}^{t}|}\sum_{\mathbf{x}_{j}\in\mathcal{B}_{2}^{t}}\exp(\ell(h_{\mathbf{w}_{t}}(\mathbf{x}_{j})-h_{\mathbf{w}_{t}}(\mathbf{x}_{i}))/\lambda),\]
where \(\gamma\in(0,1)\) is a hyper-parameter. It has been proved that the averaged estimation error of \(\mathbf{u}_{i}^{t+1}\) for \(g(\mathbf{w}_{t};\mathbf{x}_{i},\mathcal{S}_{-})\) is diminishing in the long run. With the moving average estimators, the gradient of the objective function is estimated by\({}^{1}\):
Footnote 1: For theoretical analysis \(\mathbf{u}_{i}^{t+1}\) is replaced by \(\mathbf{u}_{i}^{t}\) in (Chen et al., 2018; Chen et al., 2018)
\[G_{t}=\mathbb{E}_{\mathbf{x}_{i}\in\mathcal{B}_{1}^{t}}\nabla f(\mathbf{u}_{i}^{t+1})\nabla g(\mathbf{w}_{t};\mathbf{x}_{i},\mathcal{B}_{2}^{t})=\mathbb{E}_{\mathbf{x}_{i}\in\mathcal{B}_{1}^{t},\mathbf{x}_{j}\in\mathcal{B}_{2}^{t}}\nabla f(\mathbf{u}_{i}^{t+1})\nabla_{\mathbf{w}}\exp(\ell(h_{\mathbf{w}_{t}}(\mathbf{x}_{j})-h_{\mathbf{w}_{t}}(\mathbf{x}_{i}))/\lambda).\]
Figure 2. The pipeline of LibAUC modules. Highlighted blocks denote the unique modules of the LibAUC library.
The key steps of SOPAs for optimizing the pAUC loss are given in Algorithm 1 (Zhou et al., 2019). To facilitate the implementation of computing the gradient estimator \(G_{t}\), we design a dynamic mini-batch loss. The motivation of this design is to enable us to simply use the automatic differentiation of PyTorch or TensorFlow for calculating the gradient estimator \(G_{t}\). In particular, in PyTorch we aim to define a loss such that we can directly call loss.backward() to compute \(G_{t}\). To this end, we define a dynamic variable \(p_{i}=\nabla f(\mathbf{u}_{i}^{t+1})\) for \(\mathbf{x}_{i}\in\mathcal{B}_{1}^{t}\) and then define a dynamic mini-batch loss as loss \(=\frac{1}{|\mathcal{B}_{1}^{t}|}\sum_{\mathbf{x}_{i}\in\mathcal{B}_{1}^{t}}\frac{1}{|\mathcal{B}_{2}^{t}|}\sum_{\mathbf{x}_{j}\in\mathcal{B}_{2}^{t}}p_{i}\exp(\ell(h_{\mathbf{w}_{t}}(\mathbf{x}_{j})-h_{\mathbf{w}_{t}}(\mathbf{x}_{i}))/\lambda)\). However, since \(p_{i}\) depends on \(\mathbf{u}_{i}^{t+1}\), which is computed based on \(\mathbf{w}_{t}\), directly calling loss.backward() for this loss may cause extra differentiation of \(p_{i}\) in terms of \(\mathbf{w}_{t}\). To avoid this, we apply the detach operator \(\mathsf{p.detach}()\) to separate each \(p_{i}\) from the computational graph by returning a new tensor that does not require a gradient. The high-level pseudocode of defining and using the dynamic mini-batch loss for pAUC is given in Algorithm 2, where we use a variable change to define the loss, i.e., \(p_{i}=\nabla f(\mathbf{u}_{i}^{t+1})\exp(\ell(h_{\mathbf{w}_{t}}(\mathbf{x}_{j})-h_{\mathbf{w}_{t}}(\mathbf{x}_{i}))/\lambda)/\lambda\).
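To make the above construction concrete, a minimal PyTorch sketch of a dynamic pAUC mini-batch loss with detached \(p_{i}\) is given below; the class name, the squared-hinge surrogate, and the buffer layout are illustrative assumptions rather than the exact LibAUC code.

```
import torch

class DynamicPAUCLoss(torch.nn.Module):
    """Sketch of a dynamic mini-batch loss for one-way pAUC (illustrative, not the exact LibAUC API)."""
    def __init__(self, num_pos, lam=1.0, gamma=0.9, margin=1.0):
        super().__init__()
        self.lam, self.gamma, self.margin = lam, gamma, margin
        # One moving-average estimator u_i per positive example in the dataset.
        self.register_buffer("u", torch.ones(num_pos))

    def forward(self, pos_scores, neg_scores, pos_index):
        # Pairwise squared-hinge surrogate ell(h(x_j) - h(x_i)) for all (i, j) pairs.
        diff = neg_scores.view(1, -1) - pos_scores.view(-1, 1)
        exp_surr = torch.exp(torch.clamp(self.margin + diff, min=0.0) ** 2 / self.lam)
        # Moving-average update of u_i with the mini-batch estimate of g(w; x_i, B_2).
        g_hat = exp_surr.mean(dim=1).detach()
        self.u[pos_index] = (1 - self.gamma) * self.u[pos_index] + self.gamma * g_hat
        # p_i = grad f(u_i) = lam / u_i, detached from the computational graph.
        p = (self.lam / self.u[pos_index]).detach()
        return (p.view(-1, 1) * exp_surr).mean()

# Usage on one mini-batch (model scores and dataset indices of the positives):
criterion = DynamicPAUCLoss(num_pos=1000)
loss = criterion(torch.randn(8, requires_grad=True), torch.randn(32), torch.arange(8))
loss.backward()
```

Calling loss.backward() on the returned value yields the gradient estimator \(G_{t}\), since the detached weights \(p_{i}\) are treated as constants by autograd.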
Below, we give another example of code snippet to implement the dynamic mini-batch contrastive loss for optimizing GCL.
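As an example of what such a snippet could look like (a simplified sketch under our own naming; the actual LibAUC implementation may differ), a dynamic mini-batch contrastive loss for unimodal GCL with per-anchor moving-average estimators can be written as:

```
import torch
import torch.nn.functional as F

class DynamicGCLoss(torch.nn.Module):
    """Sketch of a dynamic global contrastive loss (illustrative, not the exact LibAUC code)."""
    def __init__(self, num_samples, tau=0.1, gamma=0.9):
        super().__init__()
        self.tau, self.gamma = tau, gamma
        # Moving-average estimators of the per-anchor normalization term g.
        self.register_buffer("u", torch.ones(num_samples))

    def forward(self, feats1, feats2, index):
        # feats1, feats2: embeddings of two augmented views, shape (B, d).
        z1, z2 = F.normalize(feats1, dim=1), F.normalize(feats2, dim=1)
        pos = (z1 * z2).sum(dim=1)                   # h(x_i)^T h(x_i^+)
        sims = z1 @ torch.cat([z1, z2], dim=0).T     # similarities to in-batch samples
        logits = (sims - pos.view(-1, 1)) / self.tau
        b = z1.size(0)
        mask = torch.ones_like(logits, dtype=torch.bool)
        idx = torch.arange(b)
        mask[idx, idx] = False          # exclude the anchor itself
        mask[idx, idx + b] = False      # exclude the positive pair
        exp_neg = torch.exp(logits) * mask
        g_hat = exp_neg.sum(dim=1) / mask.sum(dim=1)
        # Moving-average update and detached weights p_i = 1 / u_i.
        self.u[index] = (1 - self.gamma) * self.u[index] + self.gamma * g_hat.detach()
        p = (1.0 / self.u[index]).detach()
        return self.tau * (p * g_hat).mean()
```

Here the in-batch samples stand in for the negative set \(\mathcal{S}_{i}^{-}\); the moving-average buffer u is what distinguishes the dynamic loss from a standard mini-batch contrastive loss.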
```
1: for \(t=0,\ldots,T\) do
2:     Draw two subsets \(\mathcal{B}_{1}^{t}\subset\mathcal{S}_{+}\) and \(\mathcal{B}_{2}^{t}\subset\mathcal{S}_{-}\)
3:     for \(\mathbf{x}_{i}\in\mathcal{B}_{1}^{t}\) do
4:         \(\mathbf{u}_{i}^{t+1}=(1-\gamma)\mathbf{u}_{i}^{t}+\gamma g(\mathbf{w}_{t};\mathbf{x}_{i},\mathcal{B}_{2}^{t})\)
5:         \(p_{i}^{t}=\nabla f(\mathbf{u}_{i}^{t+1})=\lambda/\mathbf{u}_{i}^{t+1}\)
6:     end for
7:     Compute the gradient estimator \(G_{t}=\frac{1}{|\mathcal{B}_{1}^{t}|}\sum_{\mathbf{x}_{i}\in\mathcal{B}_{1}^{t}}\frac{1}{|\mathcal{B}_{2}^{t}|}\sum_{\mathbf{x}_{j}\in\mathcal{B}_{2}^{t}}p_{i}^{t}\nabla\exp(\ell(h_{\mathbf{w}_{t}}(\mathbf{x}_{j})-h_{\mathbf{w}_{t}}(\mathbf{x}_{i}))/\lambda)\)
8:     Update the model parameter by an optimizer
9: end for
```
**Algorithm 1** SOPAs for solving pAUCLoss.
### Controlled Data Sampler
Unlike traditional ERM, DXO requires sampling to estimate the outer average and the inner average. In the example of pAUC optimization by SOPAs, we need to sample two mini-batches \(\mathcal{B}_{1}^{t}\subset\mathcal{S}_{+}\) and \(\mathcal{B}_{2}^{t}\subset\mathcal{S}_{-}\) at each iteration \(t\). We notice that this is common for optimizing areas under curves and ranking measures. For some losses/measures (e.g., AUPRC/AP, NDCG, top-\(K\) NDCG, Listwise CE), both sampled positive and negative samples will be used for estimating the inner functions. According to our theoretical analysis (Sandel, 2017), balancing the mini-batch size for outer average and that for the inner average could be beneficial for accelerating convergence. Hence, we design a new Data Sampler module to ensure that both positive and negative samples will be sampled and the proportion of positive samples in the mini-batch can be controlled by a hyper-parameter.
For CID problems, we introduce DualSampler, which takes as input hyper-parameters such as batch_size and sampling_rate, to generate the customized mini-batch samples, where sampling_rate controls the number of positive samples in the mini-batch according to the formula # positives = batch_size*sampling_rate. For LTR problems, we introduce TriSampler, which has hyper-parameters sampled_tasks to control the number of sampled queries for backpropagation, batch_size_per_task to adjust the mini-batch size for each query, and sampling_rate_per_task to control the ratio of positives in each mini-batch per query. The TriSampler can also be used for multi-label classification problems with many labels such that sampling labels becomes necessary, which makes the library extendable for our future work. To improve the sampling speed, we have implemented an index-based approach that eliminates the need for computationally intensive operations such as concatenation and append. Figure 4 shows an example of DualSampler for constructing mini-batch data with even positive and negative samples on an imbalanced dataset with 4 positives and 9 negatives. We maintain two lists of indices for the positive data and negative data, respectively. At the beginning, we shuffle the two lists and then take the first 4 positives and 4 negatives to form a mini-batch. Once the positive list is used up, we only reshuffle the positive list and take 4 shuffled positives to pair with the next 4 negatives in the negative list as a mini-batch. Once the negative list is used up (an "epoch" is done), we re-shuffle both lists and repeat the same process as above. For TriSampler, the main difference is that we first randomly select some queries/labels before sampling the positive and negative data for each query/label. The following code snippet shows how to define DualSampler and TriSampler.
Figure 4. Illustration of DualSampler for an imbalanced dataset with 4 positives and 9 negatives.
Figure 3. Left: SOPAs for optimizing pAUC; Right: its pseudo code using automatic differentiation of a dynamic mini-batch loss. The corresponding parts of the algorithm and pseudocode are highlighted in the same color.
```
from libauc.sampler import DualSampler, TriSampler
dualsampler = DualSampler(trainSet, batch_size=32, sampling_rate=0.1)
trisampler = TriSampler(trainSet, batch_size_per_task=32, sampled_tasks=5, sampling_rate_per_task=0.1)
```
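To illustrate the index-based mechanism described above, here is a small self-contained sketch of a controlled dual sampler; the class name and details are our own simplification, not the LibAUC source.

```
import random

class SimpleDualSampler:
    """Sketch of index-based sampling with a controllable positive ratio (not the LibAUC source)."""
    def __init__(self, labels, batch_size=8, sampling_rate=0.5):
        self.pos = [i for i, y in enumerate(labels) if y == 1]
        self.neg = [i for i, y in enumerate(labels) if y == 0]
        self.num_pos = max(1, int(batch_size * sampling_rate))
        self.num_neg = batch_size - self.num_pos
        random.shuffle(self.pos)
        random.shuffle(self.neg)
        self.p_ptr, self.n_ptr = 0, 0

    def next_batch(self):
        # Re-shuffle a list whenever it is used up, as described in the text above.
        if self.p_ptr + self.num_pos > len(self.pos):
            random.shuffle(self.pos)
            self.p_ptr = 0
        if self.n_ptr + self.num_neg > len(self.neg):
            random.shuffle(self.neg)
            self.n_ptr = 0
        batch = (self.pos[self.p_ptr:self.p_ptr + self.num_pos]
                 + self.neg[self.n_ptr:self.n_ptr + self.num_neg])
        self.p_ptr += self.num_pos
        self.n_ptr += self.num_neg
        return batch

# Example: the 4-positive/9-negative dataset of Figure 4 with half-positive batches of size 8.
labels = [1] * 4 + [0] * 9
sampler = SimpleDualSampler(labels, batch_size=8, sampling_rate=0.5)
print(sampler.next_batch())
```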
### Optimizer
With a calculated gradient estimator, the updating rule for the model parameter of different algorithms for DXO follows similarly to (momentum) SGD or Adam (King and Ba, 2014; Kingma and Ba, 2014; Kingma and Ba, 2014; Kingma and Ba, 2014; Kingma and Ba, 2014; Kingma and Ba, 2014). Hence, the optimizer.step() is essentially the same as that in existing libraries. In addition to our built-in optimizers, users can also utilize other popular optimizers from the PyTorch/TensorFlow library, such as Adagrad, AdamW, RMSprop, and RAdam (King and Ba, 2014; Kingma and Ba, 2014; Kingma and Ba, 2014; Kingma and Ba, 2014). Hence, we provide an optimizer wrapper that allows users to extend and choose appropriate optimizers. For the naming of the optimizer wrapper, we use the name of the optimization algorithm corresponding to each specific X-risk for better code readability. An example of the optimizer wrapper for pAUC optimization is given below, where mode='adam' allows the user to use the Adam-style update. Another mode is 'SGD', which takes a momentum parameter as an argument to use the momentum SGD update.
```
# An example of optimizer wrapper
from libauc.optimizers import SOPAs
optimizer = SOPAs(model.parameters(), lr=0.1, mode='adam', weight_decay=1e-4)
```
### Other Modules
In addition, we provide useful functionalities in other modules, including libauc.datasets, libauc.models, and libauc.metrics, to help users improve their productivity. The libauc.datasets module provides pre-processing functions for several widely-used datasets, including CIFAR (Krizhevsky et al., 2015), CheXpert (Krizhevsky et al., 2015), and MovieLens (Krizhevsky et al., 2015), allowing users to easily adapt these datasets for use with LibAUC in benchmarking experiments. It is important to note that the definition of the Dataset class is slightly different from that in existing libraries. An example is given below, where `__getitem__` returns a triplet that consists of the input data, its label, and its corresponding index in the dataset; the index is returned to accommodate the DXO algorithms for updating the \(\mathbf{u}_{i}^{t+1}\) estimators. The libauc.models module offers a range of pre-defined models for various tasks, including ResNet (He et al., 2016) and DenseNet (Krizhevsky et al., 2015) for classification and NeuMF (Krizhevsky et al., 2015) for recommendation. The libauc.metrics module offers evaluation wrappers based on scikit-learn for various metrics, such as AUC, AP, pAUC, and NDCG@K. Moreover, it provides an all-in-one wrapper (shown below) to evaluate multiple metrics simultaneously to improve efficiency.
```
import torch

class ImageDataset(torch.utils.data.Dataset):
    """An example of Dataset class"""
    def __init__(self, inputs, targets):
        self.inputs = inputs
        self.targets = targets
    def __len__(self):
        return len(self.inputs)
    def __getitem__(self, index):
        data = self.inputs[index]
        target = self.targets[index]
        return data, target, index
```
## 4. Experiments
In this section, we provide extensive experiments on three tasks: CID, LTR, and CLR. Although individual algorithms have been studied in their original papers for individual tasks, our empirical studies serve as a complement to prior studies in that (i) ablation studies of the two unique features for all three tasks provide coherent insights into the library for optimizing different X-risks; (ii) a comparison with an existing optimization-oriented library TFCO (King and Ba, 2014; Kingma and Ba, 2014) for optimizing AUPRC is conducted; (iii) a larger-scale dataset is used for LTR, and a re-implementation of our algorithms for LTR is done in TensorFlow for a fair comparison with the TF-Ranking library (King and Ba, 2014); (iv) evaluation of different DXO algorithms based on different areas under the curves is performed, exhibiting useful insights for practical use; (v) larger image-text datasets are used for evaluating SogCLR for bimodal SSL. Another difference from prior works (King and Ba, 2014; Kingma and Ba, 2014; Kingma and Ba, 2014) is that all experiments for CID and LTR are conducted in an end-to-end training fashion without using a pretraining strategy. However, we did observe that pretraining generally helps improve performance (cf. the Appendix).
```
# An evaluator wrapper
from libauc.metrics import evaluator
scores = evaluator(pred, true, metrics=['auc', 'ap', 'pauc'])
```

| Loss Function (libauc.losses) | Data Sampler (libauc.sampler) | Optimizer wrapper (libauc.optimizers) |
| --- | --- | --- |
| AUCMLoss | DualSampler | PESG |
| APLoss | DualSampler | SOAP |
| pAUCLoss (one-way) | DualSampler | SOPAs |
| pAUCLoss (two-way) | DualSampler | SOTAs |
| NDCGLoss | TriSampler | SONG |
| NDCGLoss (top-\(K\)) | TriSampler | K-SONG |
| ListwiseCELoss | TriSampler | SONG |
| GCLoss('unimodal') | RandomSampler | SogCLR |
| GCLoss('bimodal') | RandomSampler | SogCLR |

Table 1. The list of losses, corresponding samplers and optimizer wrappers in libauc. For a complete list, please refer to the documentation of LibAUC.
### Classification for Imbalanced Data
We choose three datasets from different domains, namely CIFAR10 - a natural image dataset (Krizhevsky et al., 2015), CheXpert - a medical image dataset (Krizhevsky et al., 2015), and OGB-HIV - a molecular graph dataset (Krizhevsky et al., 2015). For CIFAR10, we follow the original paper (Zhu et al., 2017) to construct an imbalanced training set with a positive sample ratio (referred to as imratio) of 1%. For evaluation, we sample 5% of the data from the training set as a validation set, re-train the model using the full training set after selecting the parameters, and finally report the performance on a testing set with balanced positive and negative classes. For CheXpert, we follow the original work (Zhu et al., 2017) by conducting experiments on 5 selected diseases, i.e., Cardiomegaly (imratio=12.2%), Edema (imratio=32.2%), Consolidation (imratio=6.8%), Atelectasis (imratio=31.2%), Pleural Effusion (imratio=40.3%), with an average imratio of 24.54%. We use the downsized \(224\times 224\) frontal images only for training. Due to the unavailability of the testing set, we report the averaged results of the 5 tasks on the official validation set. For OGB-HIV, the dataset has an imratio of 1.76% and we use the official train/valid/test split for experiments and report the final performance on the testing set. For each setting, we repeat experiments three times using different random seeds and report the final results as mean±std.
For modeling, we use ResNet20, DenseNet121, and DeepGCN (He et al., 2016; He et al., 2017; He et al., 2018) for the three datasets, respectively. We consider optimizing three losses, namely AUCMLoss, APLoss, and pAUCLoss, by using PESG, SOAP, and SOPAS, respectively. For the latter two, we use the pairwise squared hinge loss with a margin parameter in their definition. Thus, all losses have a margin parameter, which is tuned in [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]. For APLoss and pAUCLoss, we tune the moving average estimator parameter \(\gamma\) in the same range. For pAUCLoss, we also tune the temperature parameter in [0.1, 1.0, 10.0]. For DualSampler, we tune sampling_rate in [0.1, 0.3, 0.5]. For baselines, we compare two popular loss functions used in the literature, i.e., the CE loss and the Focal loss. For the Focal loss, we tune \(\hat{\alpha}\) in [1,2,5] and \(\hat{\gamma}\) in [0.25, 0.5, 0.75]. For optimization, we use the momentum SGD optimizer for all methods with a default momentum parameter of 0.9 and a tuned initial learning rate in [0.1, 0.05, 0.01]. We decay the learning rate by 10 times at 50% and 75% of the total training iterations. For CIFAR10, we run all methods using a batch size of 128 for 100 epochs. For CheXpert, we train models using a batch size of 32 for 2 epochs. For OGB-HIV, we train models using a batch size of 512 for 100 epochs. To evaluate the performance, we adopt three different metrics, i.e., AUROC, AP, and pAUC (FPR=0.3). We select the best configuration based on the performance metric to be optimized, e.g., using AUROC for model selection for AUCMLoss. The results are summarized in Table 2.
We have several interesting observations. Firstly, directly optimizing the performance metrics leads to better performance compared to baseline methods based on the ERM framework. For example, PESG, SOAP, and SOPAS outperform the CE and Focal losses by a large margin on all datasets. This is consistent with prior works. Secondly, optimizing a specific metric does not necessarily yield the best performance on other metrics. For example, on the OGB-HIV dataset, PESG has the highest AUROC but the lowest AP score, while SOAP has the highest AP score but the lowest AUROC and pAUC, and SOPAS has the highest pAUC score. This confirms the importance of choosing appropriate methods in LibAUC for the corresponding metrics. Thirdly, on CheXpert, it seems that optimizing pAUC is more beneficial than optimizing the full AUROC: SOPAS achieves better performance than PESG and SOAP in all three metrics.
**Comparison with the TFCO library.** We compare LibAUC (SOAP) with TFCO (Zhu et al., 2017; He et al., 2017) for optimizing AP. We run both methods using a batch size of 128 for 100 epochs with the Adam optimizer, a learning rate of 1e-3, and a weight decay of 1e-4 on the constructed CIFAR10 with imratio=(1%, 2%). We plot the learning curves on the training and testing sets in Figure 5. The results indicate that LibAUC consistently performs better than TFCO.
### Learning to Rank
We evaluate LibAUC on an LTR task for movie recommendation. The goal is to rank movies for users according to their potential interest in watching them, based on their historical ratings of movies. We compare the LibAUC library for optimizing ListwiseCELoss, NDCGLoss, and the top-\(K\) NDCG loss denoted by NDCGLoss(K) against the TF-Ranking library (Zhu et al., 2017) for optimizing ApproxNDCG, GumbelNDCG, and ListMLE, on two large-scale movie datasets, MovieLens20M and MovieLens25M, from the MovieLens website (Krizhevsky et al., 2015). MovieLens20M contains 20 million movie ratings from 138,493 users and MovieLens25M contains 25 million movie ratings from 162,541 users. Each user has at least 20 rated movies. Different from (Krizhevsky et al., 2015), we re-implement SONG and K-SONG (its practical version) in TensorFlow for optimizing the three losses for a fair comparison of running time with TF-Ranking, since it is implemented in TensorFlow. To construct the training/validation/testing sets, we first sort the ratings based on timestamp for each user from oldest to newest. Then, we put the 5 most recent ratings in the testing set, and the next 5 most recent items in the validation set. For training, at each iteration we randomly sample 256 users, and for each user sample 5 positive items from the remaining rated movies and 300 negatives from all unrated movies. For computing the validation and testing performance, we sample 1000 negative items from the movie list similar to (Krizhevsky et al., 2015).
| Methods | CIFAR10 AUROC | CIFAR10 AP | CIFAR10 pAUC (FPR=0.3) | CheXpert AUROC | CheXpert AP | CheXpert pAUC (FPR=0.3) | OGB-HIV AUROC | OGB-HIV AP | OGB-HIV pAUC (FPR=0.3) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CE | 0.687±0.008 | 0.681±0.005 | 0.619±0.003 | 0.853±0.006 | 0.687±0.012 | 0.769±0.011 | 0.765±0.002 | 0.250±0.013 | 0.721±0.004 |
| Focal | 0.678±0.006 | 0.671±0.009 | 0.610±0.007 | 0.879±0.004 | 0.737±0.010 | 0.800±0.006 | 0.758±0.004 | 0.241±0.009 | 0.722±0.003 |
| PESG | 0.712±0.009 | 0.706±0.011 | 0.639±0.009 | 0.890±0.002 | 0.759±0.009 | 0.820±0.003 | **0.805±0.009** | 0.199±0.009 | 0.745±0.007 |
| SOAP | 0.711±0.027 | **0.717±0.016** | **0.648±0.013** | 0.875±0.048 | 0.757±0.074 | 0.813±0.059 | 0.709±0.008 | **0.293±0.004** | 0.699±0.001 |
| SOPAS | **0.717±0.005** | 0.713±0.002 | 0.645±0.003 | **0.894±0.003** | **0.767±0.008** | **0.823±0.006** | 0.786±0.007 | 0.249±0.019 | **0.747±0.004** |

Table 2. Results on three classification tasks (CIFAR10 with imratio=1%, CheXpert with imratio=24.54%, OGB-HIV with imratio=1.76%). Best results are marked in bold and second-best results are underlined.
Figure 5. Comparison of TFCO and LibAUC.
For modeling, we use NeuMF (Wang et al., 2017) as the backbone network for all methods. We use the Adam optimizer (Kingma and Ba, 2014) for all methods with an initial learning rate of 0.001 and a weight decay of 1e-7 for 120 epochs, following similar settings to (Krizhevsky et al., 2012). During training, we decrease the learning rate by 10 times at 50% and 75% of the total iterations. For evaluation, we compute and compare NDCG@5 and NDCG@20 for all methods. For NDCGLoss, NDCGLoss(K), and ListwiseCELoss, we tune the moving average estimator parameter \(\gamma\) in the range [0.1, 0.3, 0.5, 0.7, 0.9, 1.0]. For NDCGLoss(K), we tune \(K\) in [50, 100, 300]. We repeat the experiments three times using different random seeds and report the final results as mean\(\pm\)std. To measure the training efficiency, we conduct the experiments on an NVIDIA V100 GPU and compute the average training time over 10 epochs.
As shown in Figure 6 (left), LibAUC achieves better performance on both datasets. It is worth mentioning that the results of all methods we report are generally worse than those reported in (Krizhevsky et al., 2012), likely due to different negative items being used for evaluation. In addition, optimizing NDCGLoss(\(K\)) is not as competitive as optimizing NDCGLoss, because we did not use the pretraining strategy used in (Krizhevsky et al., 2012). In the Appendix, we show that using pretraining is helpful for boosting the performance of optimizing NDCGLoss(\(K\)). The runtime comparison, where we report the average runtime in seconds per epoch, is shown in Figure 6 (right). The results show that our implementation of LibAUC in TensorFlow is even faster than the three methods in TF-Ranking. It is interesting to note that LibAUC for optimizing the ListwiseCE loss is 1.6\(\times\) faster than TF-Ranking for optimizing GumbelLoss, yet has better performance.
### Contrastive Learning of Representations
In this section, we demonstrate the effectiveness of LibAUC (SogCLR) for optimizing GCLoss on both unimodal and bimodal SSL tasks. For unimodal SSL, we use two scales of the ImageNet dataset: a small subset of ImageNet with 100 randomly selected classes (about 128k images), denoted as ImageNet-100, and the full version of ImageNet (about 1.2 million images), denoted as ImageNet-1000 (Krizhevsky et al., 2012). For bimodal SSL, we use MS-COCO and CC3M (Krizhevsky et al., 2012; Li et al., 2013) for experiments. MS-COCO is a large-scale image recognition dataset containing over 118,000 images and 80 object categories, and each image is associated with 5 captions describing the objects and their interactions in the image. CC3M is a large-scale image captioning dataset that contains almost 3 million image-caption pairs. For evaluation, we compare the feature quality of the pretrained encoder on the ImageNet-1000 validation set, which consists of 50,000 images that belong to 1000 classes. For unimodal SSL, we conduct linear evaluation by fine-tuning a new classifier in a supervised fashion after pretraining. For bimodal SSL, we conduct zero-shot evaluation by computing similarity scores between the embeddings of the prompt text and images. Due to the high training cost, we only run each experiment once. It is worth noting that the two bimodal datasets were not used in (Krizhevsky et al., 2012).
For unimodal SSL, we follow the same settings in SimCLR (Chen et al., 2016). We use ResNet-50 with a two-layer non-linear head with a hidden size of 128. We use LARS optimizer (Krizhevsky et al., 2012) with an initial learning rate of \(0.075\times\sqrt{batch\_size}\) and weight decay of 1e-6. We use a cosine decay strategy to decrease learning rate. We use a batch size of 256 to train ImageNet-1000 for 800 epochs and ImageNet-100 for 400 epochs with a 10-epoch warm-up. For linear evaluation, we train the classifier for additional 90 epochs using the momentum SGD optimizer with no weight decay. For bimodal SSL, we use a transformer (Krizhevsky et al., 2012; Li et al., 2013) as the text encoder (cf appendix for structure parameters) and ResNet-50 as the image encoder (Krizhevsky et al., 2012). Similarly, we use LARS optimizer with the same learning rate strategy and weight decay. We use a batch size of 256 for 30 epochs, with a 3-epoch warm-up. For zero-shot evaluation, we compute the accuracy based on the cosine similarities between image embeddings and text embeddings using 80 different prompt templates similar to (Krizhevsky et al., 2012). Note that we randomly sample one out of five text captions to construct text-image pair for pretraining on MS-COCO. We compare SogCLR with SimCLR for unimodal SSL and with CLIP for bimodal SSL tasks. For SogCLR, we tune \(\gamma\) in [0.1, 0.3, 0.5, 0.7, 0.8, 0.9, 1.0] and tune temperature \(\tau\) in [0.07, 0.1]. All experiments are run on 4-GPU (NVIDIA A40) machines. The results are summarized in Table 3.
The results demonstrate that SogCLR outperforms SimCLR and CLIP for optimizing mini-batch contrastive losses in both tasks. In particular, SogCLR improves over SimCLR by 2.2% and 2.9% on the ImageNet datasets, and over CLIP by 0.5% and 1.6% on the two bimodal datasets. It is notable that the pretraining for ImageNet lasts up to 800 epochs, while the pretraining on the two bimodal datasets is only performed for 30 epochs due to limited computational resources. According to the theorems in (Krizhevsky et al., 2012), the optimization error of SogCLR will diminish as the number of training epochs increases. We expect that SogCLR will exhibit larger improvements over CLIP with longer training.
### Ablation Studies
In this section, we present more ablation studies to demonstrate the effectiveness of our design and superiority of our library.
Figure 6. Left: Results on MovieLens datasets. Right: Comparison of training time for LibAUC and TF-Ranking.
| Loss | MovieLens20M NDCG@5 | MovieLens20M NDCG@20 | MovieLens25M NDCG@5 | MovieLens25M NDCG@20 | |
| --- | --- | --- | --- | --- | --- |
| ListwiseC (TF-Ranking) | 0.2841±0.0007 | 0.9063±0.0004 | 0.3771±0.0003 | 0.4072±0.0003 | SONG |
| AportsmDCG (TF-Ranking) | 0.3131±0.0001 | 0.4362±0.0001 | 0.3960±0.0003 | 0.5273±0.0001 | K-SONG |
| GumbelNOC (TF-Ranking) | 0.3179±0.0003 | 0.4444±0.001 | 0.4022±0.0002 | 0.5285±0.0013 | Appro |
#### 4.4.1. Effectiveness of Dynamic Mini-batch Losses
To verify the effectiveness of the dynamic mini-batch losses, we compare them with conventional static mini-batch losses. To this end, we focus on SOAP, SOPAs, SONG, and SogCLR, and compare their performance with different values of \(\gamma\) in our framework. When setting \(\gamma=1\), our algorithms reduce to their conventional mini-batch versions. We directly use the best hyper-parameters tuned in Sections 4.1 and 4.2 except for \(\gamma\), which is tuned from 0.1 to 1.0. The performance is evaluated using AP (SOAP), pAUC (SOPAs), NDCG@5 (SONG), and Top-1 Accuracy (SogCLR), respectively. The final results of this comparison are summarized in Table 4. Overall, we find that all methods achieve the best performance when \(\gamma\) is less than 1.
#### 4.4.2. Effectiveness of Data Sampler
We vary the positive sampling rate (denoted as sr) in the DualSampler for CID by optimizing AUCMLoss, and in the TriSampler for LTR by optimizing NDCGLoss. For CID, we use three datasets: CIFAR10 (1%), CheXpert, and OGB-HIV, and tune sr={original, 10%, 30%, 50%}, where sr=original means that we simply use the random data sampler without any control. Other hyper-parameters are fixed to those found in Section 4.1. The results are evaluated in AUROC and summarized in Table 5. For LTR, we use the MovieLens20M dataset. We fix the number of sampled queries (i.e., users) to 256 in each mini-batch and vary the number of positive and negative items, which are tuned in {1, 5, 10} and {1, 5, 10, 100, 300, 500, 1000}, respectively. We fix \(\gamma=0.1\) and train the model for 120 epochs with the same learning rate, weight decay, and learning rate decay strategy as in Section 4.2. The results are evaluated in NDCG@5 and are shown in Table 6. Both results demonstrate that tuning the positive sampling rate is beneficial for performance improvement.
The results reveal that DualSampler largely boosts the performance for AUCMLoss on CIFAR10 and OGB-HIV when the sampling rate (sr) is set to 10%. It is interesting to note that balancing the data (sr=50%) did not necessarily improve performance in three of the cases. However, generally speaking, using a sampling ratio higher than the original imbalance ratio is useful. For LTR with TriSampler, we observe a dramatic performance increase when increasing the number of positive samples from 1 to 10, and the number of negative samples from 1 to 300. However, when further increasing the number of negatives from 300 to 1000, the improvement saturates.
#### 4.4.3. The Impact of Batch Size
We study the impact of the batch size on our methods (SOAP, SOPAs, SONG, SogCLR) using dynamic mini-batch losses and on those using static mini-batch losses (i.e., \(\gamma=1\)). We follow the same experimental settings as in the previous section and only vary the batch size. For each batch size, we tune \(\gamma\) correspondingly, as the theory indicates that its best value depends on the batch size. For SogCLR, we train ResNet50 on ImageNet1000 for 800 epochs using batch sizes in \(\{8192,2048,512,128\}\). For SOAP and SOPAs, we train ResNet20 on OGB-HIV for 100 epochs using batch sizes in \(\{512,256,128,64\}\). For SONG, we train NeuMF for 120 epochs on MovieLens20M using batch sizes in \(\{256,128,64,32\}\). The results are shown in Figure 7, which demonstrates that our design is more robust to the mini-batch size.
#### 4.4.4. Convergence Speed
Finally, we compare the convergence curves of selected algorithms on the OGB-HIV, MovieLens20M, and ImageNet100 datasets. We use the tuned parameters from the previous sections to plot the convergence curves on the testing sets. The results are illustrated in Figure 8. In terms of classification, we observe that PESG and SOPAs converge much faster than optimizing the CE and Focal losses. For the MovieLens20M dataset, we find that SONG has the fastest convergence speed compared to all other methods, and K-SONG (without pretraining) is faster than the other baselines but slower than SONG. In the case of SSL, we observe that SogCLR and SimCLR achieve similar performance in the beginning stage; however, SogCLR gradually outperforms SimCLR as training proceeds.
## 5. Conclusion & Future Works
In this paper, we have introduced _LibAUC_, a deep learning library for X-risk optimization. We presented the design principles of LibAUC and conducted extensive experiments to verify the design principles. Our experiments demonstrate that the LibAUC library is superior to existing libraries/approaches for solving a variety of tasks including classification for imbalanced data, learning to rank, and contrastive learning of representations. Finally, we note that our current implementation of the LibAUC library is by no means exhaustive. In the future, we plan to implement more algorithms for more X-risks, including performance at the top, such as recall at top-\(K\) positions, precision at a certain recall level, etc.
| Pos/Neg | 1 | 5 | 10 | 100 | 300 | 500 | 1000 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.1315 | 0.1617 | 0.1725 | 0.1972 | 0.2039 | 0.2067 | 0.2078 |
| 5 | 0.1609 | 0.2289 | 0.2608 | 0.3354 | 0.3480 | 0.3509 | **0.3522** |
| 10 | 0.1568 | 0.2083 | 0.2374 | 0.3260 | 0.3417 | 0.3472 | 0.3506 |

Table 6. Tuning the sampling rate is beneficial for NDCGLoss on MovieLens20M.
| Method | Dataset | \(\gamma=0.1\) | \(\gamma=0.3\) | \(\gamma=0.5\) | \(\gamma=0.7\) | \(\gamma=0.9\) | \(\gamma=1.0\) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SOAP | OGB-HIV | 0.2745 | 0.2906 | 0.2881 | **0.2930** | 0.2904 | 0.2864 |
| SOPAs | OGB-HIV | 0.6404 | 0.7414 | 0.7413 | **0.7467** | 0.7337 | 0.7383 |
| SONG | MovieLens | **0.3476** | 0.3431 | 0.3384 | 0.3339 | 0.3308 | 0.3290 |
| SogCLR | ImageNet100 | 0.8018 | 0.7956 | **0.8032** | 0.7974 | 0.7944 | 0.7956 |
| SogCLR | CC3M | **0.2138** | 0.2029 | 0.1931 | 0.1873 | 0.1825 | 0.1778 |

Table 4. The \(\gamma<1\) is better.
Figure 8. Convergence curves of LibAUC algorithms.
Figure 7. Impact of batch size.
| Dataset | imratio | sr=original | sr=10% | sr=30% | sr=50% |
| --- | --- | --- | --- | --- | --- |
| CIFAR10 | 1% | 0.7071 | **0.7124** | 0.7087 | 0.7110 |
| Cardiomegaly | 12.2% | 0.8469 | 0.8515 | **0.8566** | 0.8378 |
| Edema | 32.2% | 0.9341 | 0.9366 | **0.9420** | 0.9337 |
| Consolidation | 6.8% | 0.8888 | **0.9069** | 0.8832 | 0.8636 |
| Atelectasis | 31.9% | 0.8231 | 0.8269 | 0.830 | **0.8353** |
| Pleural Effusion | 40.3% | 0.9265 | 0.9258 | 0.9249 | **0.9311** |
| OGB-HIV | 1.8% | 0.7642 | **0.8054** | 0.7786 | 0.7752 |

Table 5. Tuning the sampling rate is beneficial for AUCMLoss. |
2304.12711 | Random model of flow decorrelation | The collective flow generated in relativistic heavy-ion collisions fluctuates
from event to event. The fluctuations lead to a decorrelation of flow vectors
measured in separate bins in phase space. These effects have been measured in
experiments and observed in numerical simulations in hydrodynamic models. We
present a simple random model of flow decorrelation in pseudorapidity.
Analytical expressions for the flow factorization breaking coefficients for
flow vectors, flow vector magnitudes, and flow angles are derived. The model
explains the relations between different factorization breaking coefficients
found in experimental data and model simulations. In particular, it is found
that the flow angle decorrelation constitutes about one half of the total flow
vector decorrelation. | Piotr Bozek, Hadi Mehrabpour | 2023-04-25T10:46:03Z | http://arxiv.org/abs/2304.12711v2 | # Random model of flow decorrelation
###### Abstract
The collective flow generated in relativistic heavy-ion collisions fluctuates from event to event. The fluctuations lead to a decorrelation of flow vectors measured in separate bins in phase-space. These effects have been measured in experiments and observed in numerical simulations in hydrodynamic models. We present a simple random model of flow decorrelation. Analytical expressions for the flow factorization breaking coefficients for flow vectors, flow vector magnitudes, and flow angles are derived. The model explains the relations between different factorization breaking coefficients found in experimental data and model simulations.
+
Footnote †: preprint: MITP-23-014
## I Introduction
One of the methods of investigation of the hot and dense matter created in the interaction region of relativistic heavy-ion collisions is the analysis of the collective flow from the spectra of emitted particles [1; 2; 3; 4; 5]. Strong pressure gradients in the fireball cause a rapid expansion. The azimuthal asymmetry of the collective flow can be quantified using the harmonic flow coefficients, describing the magnitude and the azimuthal direction of the flow. The dynamical model commonly used to describe the generation of the collective flow in the rapid expansion of the source is the relativistic viscous hydrodynamics [6].
The collective flow generated reflects the properties of the initial state of the collision. The initial conditions fluctuate from event to event and the final flow observables fluctuate as well [7; 8; 9; 6; 10]. One of the aspects of such fluctuations is the decorrelation of the flow harmonics measured in different phase-space bins. Model calculations predict a deviation from unity of the correlation coefficient between two flow vectors in separate pseudorapidity or transverse momentum bins [11; 12]. This correlation coefficient of flow vectors is called the flow factorization breaking coefficient. It has been measured in experiments [13; 14; 15; 16; 17] and qualitatively reproduced in models [18; 19; 20; 21; 22; 23; 24; 25; 26].
Correlations of higher moments of flow vectors in different phase-space bins have been studied as well, both in experiment [14; 17] and in models [27; 28; 29; 21; 23]. The measurement of the decorrelation using four-particle correlators allows one to estimate separately the flow vector magnitude and flow angle decorrelation in different bins. Experimental data and model calculations show that the total flow vector decorrelation is composed in approximately equal parts of the flow magnitude and flow angle decorrelation. The second observation is that the decorrelation of higher powers of flow vectors is stronger than the decorrelation of simple flow vectors: the factorization breaking coefficient for the second or third power of the flow vector is approximately the second or third power of the flow vector factorization breaking coefficient. Finally, it has been observed in model calculations that the decorrelation is the strongest for events (or classes of events) where the overall flow is the smallest.
These effects have been observed in numerical simulations, but no simple understanding has been given. In this paper, we present a simple model of flow decorrelation in different phase space bins. We assume that the flow vector in a small pseudorapidity bin can be written as a sum of the overall flow (averaged over the whole event) and of a random vector component. Assuming the independence of the directions of the random component of the flow and of the average flow, the model can explain qualitatively the effects observed in model simulations and in the experimental data. We present a number of analytical results for the factorization breaking coefficients for flow vectors, for powers of flow vectors, for flow magnitudes, and for flow angles. We show that qualitatively similar relations between different factorization breaking coefficients are found in simulations using a realistic hydrodynamic model.
We study 3- and 4-bin measures of the flow decorrelation, used in experimental analyses [13; 14]. The random model of flow decorrelation can describe qualitatively the relations between different 3- and 4-bin measures of the flow vector, flow magnitude, and flow angle decorrelation observed in model calculations and in the experimental data. Also in this case a relation between the decorrelation of the second or third power of flow vectors and the second or third power of the simple flow vector decorrelation is found.
## II Flow correlation
The flow in an event can be defined using the harmonic coefficients of the azimuthal distribution of emitted particles. We use the notation \(V_{n}=v_{n}e^{in\Psi_{n}}\), where \(v_{n}\) and \(\Psi_{n}\) are the magnitude and event-plane angle for the \(n\)-th order harmonic flow. The flow vector \(V_{n}\) cannot be
reconstructed in each event. Rotationally invariant combinations of moments of flow vectors can be estimated from the moments of the corresponding \(q_{n}\) vectors in a phase space region \(A\),
\[q_{n}(A)=\frac{1}{N}\sum_{k\in A}e^{in\phi_{k}}, \tag{1}\]
where the sum runs over all \(N\) hadrons in the selected phase space region \(A\), and \(\phi_{k}\) are their azimuthal angles. The event average of the \(q_{n}\) vector moments is an estimator of the corresponding moments of the flow vectors, e.g.
\[\langle(V_{n})^{m}(V_{n}^{*})^{m}\rangle=\langle q_{n}^{m}(q_{n}^{*})^{m}\rangle, \tag{2}\]
where the angular brackets denote an average over the events.
The factorization breaking of the collective flow means that the flow moment calculated in different regions in phase space (\(A\) and \(B\)),
\[\langle V_{n}(A)V_{n}^{*}(B)\rangle=\frac{1}{N_{A}N_{B}}\sum_{k\in A,j\in B}e^{ in(\phi_{k}-\phi_{j})}, \tag{3}\]
does not factorize into flow moments calculated [11; 12] in the same bin
\[\langle V_{n}(A)V_{n}^{*}(A)\rangle=\frac{1}{N_{A}(N_{A}-1)}\sum_{k\neq j\in A }e^{in(\phi_{k}-\phi_{j})}\, \tag{4}\]
which gives
\[\langle V_{n}(A)V_{n}^{*}(B)\rangle\neq\sqrt{\langle V_{n}(A)V_{n}^{*}(A) \rangle\langle V_{n}(B)V_{n}^{*}(B)\rangle}. \tag{5}\]
The factorization breaking coefficient is the correlation coefficient of flow vectors in two phase-space regions
\[\rho_{V_{n}}(A,B)=\frac{\langle V_{n}(A)V_{n}^{*}(B)\rangle}{\sqrt{\langle V_{ n}(A)V_{n}^{*}(A)\rangle\langle V_{n}(B)V_{n}^{*}(B)\rangle}}. \tag{6}\]
If the flow dominates the multiparticle correlation we have \(\rho_{V_{n}}(A,B)\leq 1\). The flow correlation coefficient (factorization breaking coefficient) can be used as a measure of flow decorrelation for two bins in pseudorapidity [11] or in transverse momentum [12]. The factorization breaking coefficient \(r_{n}(p_{1},p_{2})\) in transverse momentum can be measured in experiment [30; 31; 13] and calculated in models [12; 18; 19; 20; 21; 22].
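As an illustration of how the factorization breaking coefficient in Eq. (6) can be estimated from data, the following Python sketch computes it from per-event \(q_{n}\) vectors in two bins; for simplicity the self-correlation terms of Eq. (4) are not subtracted, which is an approximation valid for large multiplicities.

```python
import numpy as np

def q_vector(phis, n):
    """q-vector of order n from the azimuthal angles of particles in one bin (Eq. 1)."""
    return np.mean(np.exp(1j * n * phis))

def rho_factorization(qA, qB):
    """Correlation coefficient of flow vectors in bins A and B (Eq. 6).

    qA, qB: complex arrays of per-event q-vectors in the two bins.
    """
    num = np.mean(qA * np.conj(qB)).real
    den = np.sqrt(np.mean(np.abs(qA) ** 2) * np.mean(np.abs(qB) ** 2))
    return num / den
```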
The decorrelation of Eq. 6 in pseudorapidity contains a significant contribution from non-flow effects [11]. A modified factorization breaking coefficient has been proposed [32] defined as a ratio of two flow vector covariances taken for two different pairs of bins (Fig. 1),
\[\mathcal{R}_{n;V}^{(1)}=\frac{\langle V_{n}(-\eta)V_{n}^{*}(\eta_{ref}) \rangle}{\langle V_{n}(\eta)V_{n}^{*}(\eta_{ref})\rangle}, \tag{7}\]
where \(\eta_{ref}\) is the reference pseudorapidity common to the numerator and the denominator. The bins are placed such that both \(|\eta_{ref}-\eta|\) and \(|\eta_{ref}+\eta|\) are large enough to suppress non-flow correlations. The experimentally measured [32; 33; 34] decorrelation in pseudorapidity using Eq. 7 can be qualitatively reproduced in hydrodynamic and cascade models [35; 24; 36]. Besides the simple factorization breaking coefficient of Eq. 6, factorization breaking coefficients for higher moments of flow vectors can be
Figure 1: Using the pseudorapidity bins, the factorization breaking coefficients for the collective flow are defined (three or all four bins depending on the case).
defined [14; 17; 21; 29; 37]. Based on these factorization-breaking coefficients of higher order flow moments, the flow magnitude and flow angle decorrelation can be estimated separately [21; 23]. In the following, we discuss a simple model of flow decorrelation that explains qualitatively the relations between different factorization breaking coefficients. To illustrate the effects in model calculations, we use a 3+1-dimensional viscous hydrodynamic model [6] with particle emission through statistical hadronization [38] at freeze-out. Details of the 3-dimensional fluctuating initial conditions and the hydrodynamic model can be found in Ref. [27]. In the present work, we use Pb+Pb collisions at \(\sqrt{s_{NN}}=5.02\)TeV for two centralities, \(0-5\%\) and \(30-40\%\), as a numerical example of the analytical identities discussed in the random model of flow decorrelation.
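A corresponding sketch for the ratio observable of Eq. (7); the array layout (events \(\times\) pseudorapidity bins of \(q_{n}\) vectors) is an assumption made only for the illustration.

```python
import numpy as np

def r_ratio(q_bins, eta_centers, eta, eta_ref):
    """Estimate R^(1)_{n;V}(eta) of Eq. (7) from per-event q-vectors.

    q_bins: complex array of shape (n_events, n_bins), q_n in each pseudorapidity bin.
    eta_centers: pseudorapidity bin centers corresponding to the columns of q_bins.
    """
    i_minus = np.argmin(np.abs(eta_centers + eta))      # bin at -eta
    i_plus = np.argmin(np.abs(eta_centers - eta))       # bin at +eta
    i_ref = np.argmin(np.abs(eta_centers - eta_ref))    # reference bin
    num = np.mean(q_bins[:, i_minus] * np.conj(q_bins[:, i_ref])).real
    den = np.mean(q_bins[:, i_plus] * np.conj(q_bins[:, i_ref])).real
    return num / den
```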
## III Random model of flow decorrelation
The flow in two separate phase-space regions in the same event is slightly decorrelated. To understand the basic features of this effect, we study a simple random flow decorrelation model. The model is simple enough, so that in the limit of small flow decorrelations, many observables, such as the factorization breaking coefficients and flow correlations can be estimated analytically.
In a given event, the flow measured in two specific bins differs. Also, the flow \(V_{i}=V+K_{i}\) defined in a small bin \(i\) in phase-space differs from the flow \(V\) averaged over the whole acceptance region depicted in Fig. 1. The flow in a small bin is composed of the average flow in the whole acceptance region and a random component \(K_{i}\). Both \(V\) and \(K_{i}\) have nontrivial event-by-event distributions. In the following, we assume that all the small bins considered are of the same size and that in each bin \(i\) the flow distribution is the same. It is a good approximation for the flow defined in pseudorapidity bins. Alternatively, one can transfer the definition of the model to the scaled flow \(V_{i}/\langle V_{i}\rangle\), e.g. for the modeling of flow factorization breaking in transverse momentum.
The combined probability distribution of two components of the flow \(V\) and \(K_{i}\), \(i=1,\ldots,N\) can be very complicated in principle. We make one simplifying assumption:
\[\langle V^{m}(K_{i}^{*})^{m}\rangle=0\ \,\quad m=1,2,\ldots. \tag{8}\]
Note that the above properties are fulfilled under the assumption of random relative orientation of flow vectors \(V\) and \(K_{i}\). On the other hand, the magnitudes could still be correlated,
\[\langle v^{m}|K_{i}|^{l}\rangle\neq 0. \tag{9}\]
The local random component of the flow \(K_{i}\) fulfills the constraint
\[\langle(V+K_{i})(V+K_{i})^{\star}\rangle=Var(V)+Var(K_{i})=Var(V)+C^{2}\, \tag{10}\]
where \(C^{2}\) is a constant defined as the difference between flow vector variance in a small bin and flow vector variance in the whole acceptance region. Nontrivial flow correlation in two different bins is encoded in the correlation of the random components \(K_{i}\) and \(K_{j}\) of the flow in the two bins. Phenomenologically, one expects that as the separation between two bins increases the random components of flow vectors \(K_{i}\) and \(K_{j}\) become less aligned. This means that the covariance
\[\langle K_{i}K_{j}^{\star}\rangle \tag{11}\]
decreases with increasing bin separation. Another correlation between the random components comes from the global constraint:
\[\sum_{i}K_{i}=0. \tag{12}\]
For the study of the covariances between flows in two bins, we define:
\[A=\frac{K_{i}+K_{j}}{2}\quad\text{and}\quad\Delta=\frac{K_{i}-K_{j}}{2}. \tag{13}\]
Note that due to the properties of \(K_{i}\) and \(K_{j}\), both \(A\) and \(\Delta\) depend on the separation between the bins. The distribution of \(\Delta\) gets wider as the bin separation increases (reflecting the increasing decorrelation of \(K_{i}\) and \(K_{j}\)). On the other hand, the distribution of \(A\) gets narrower
\[\langle AA^{\star}\rangle=C^{2}-\langle\Delta\Delta^{\star}\rangle, \tag{14}\]
due to the constraint in Eq. 10. We have
\[\langle A\Delta\rangle=0. \tag{15}\]
For bins in pseudorapidity, we specify \(V(\eta)=V+K_{i}\) for the flow vector in a bin of fixed size around the pseudorapidity \(\eta\). As indicated above the distribution of \(A\) and \(\Delta\) depends only on the pseudorapidity separation \(|\eta_{1}-\eta_{2}|\) between the two bins. Besides the mathematical constraint in Eq. 12 or global momentum conservation constraints [39], the correlations between the two random components \(K_{i}\) and \(K_{j}\) can have a nontrivial physical origin. In the hydrodynamic model, fluctuations in the initial distribution in space-time rapidity generate fluctuations in the final flow in pseudorapidity [35]. Local fluctuations of the initial flow increase the decorrelation of the final flow [24]. Finally, dynamical hydrodynamic fluctuations [40] lead to fluctuations of the final flow of a finite range in rapidity [41].
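A minimal numerical illustration of these assumptions is sketched below; the Gaussian flow fluctuations and the stationary first-order autoregressive form of the local components are choices made only for this sketch, such that \(\langle K_{i}K_{j}^{\star}\rangle\) decreases with the bin separation while the orientation of \(K_{i}\) remains independent of \(V\).

```python
import numpy as np

rng = np.random.default_rng(1)

def complex_normal(scale, size=None):
    return rng.normal(0.0, scale, size) + 1j * rng.normal(0.0, scale, size)

def sample_event(n_bins=10, sigma_v=0.05, c=0.02, phi=0.9):
    """One event of the random decorrelation model (illustrative parameters).

    V is a fluctuation-dominated global flow vector; the local components K_i form a
    stationary sequence with <K_i K_j*> proportional to phi^{|i-j|}, and their
    orientation is independent of V, in the spirit of Eq. (8).
    """
    V = complex_normal(sigma_v)
    K = np.empty(n_bins, dtype=complex)
    K[0] = complex_normal(c)
    for i in range(1, n_bins):
        K[i] = phi * K[i - 1] + np.sqrt(1.0 - phi ** 2) * complex_normal(c)
    return V + K          # flow vector V_i = V + K_i in each pseudorapidity bin

flows = np.array([sample_event() for _ in range(50000)])
qA, qB = flows[:, 0], flows[:, -1]     # outermost bins
rho = np.mean(qA * np.conj(qB)).real / np.sqrt(
    np.mean(np.abs(qA) ** 2) * np.mean(np.abs(qB) ** 2))
print(rho)   # below one: the factorization breaking grows with the bin separation
```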
## IV Factorization breaking coefficient of the collective flow
The factorization breaking coefficient of the collective flow, which is equivalent to the correlation coefficient for
the complex variables \((V_{n}(\eta))^{m}\) and \((V_{n}(-\eta))^{m}\), can be generalized to any power of the flow:
\[\rho^{(m)}_{V_{n}}(\eta,-\eta)=\frac{\langle(V(\eta))^{m}(V^{\star}(-\eta))^{m} \rangle}{\sqrt{\langle(V(\eta))^{m}(V^{\star}(\eta))^{m}\rangle\langle(V(-\eta ))^{m}(V^{\star}(-\eta))^{m}\rangle}}\,. \tag{16}\]
The factorization breaking coefficient of the \(m\)-th power of the flow vector in Eq. 16 can be estimated in principle from an average of the moments of the experimentally measured \(q\) vectors,
\[\rho^{(m)}_{V_{n}}(\eta,-\eta)=\frac{\langle(q(\eta))^{m}(q^{\star}(-\eta))^{m}\rangle}{\sqrt{\langle(q(\eta))^{m}(q^{\star}(\eta))^{m}\rangle\langle(q(-\eta))^{m}(q^{\star}(-\eta))^{m}\rangle}}\,. \tag{17}\]
However, for correlations in pseudorapidity the measure is strongly influenced by non-flow effects [11]. In the model results shown here, the non-flow correlations are reduced by oversampling the final state hadrons for each hydrodynamic evolution event. On the other hand, such observables can be used as an experimental estimate for higher moment factorization breaking coefficients in transverse momentum [21; 17; 29], if rapidity gaps are used to reduce non-flow correlations, and similar relations can be studied for the factorization breaking coefficients in transverse momentum.
In the random model of flow decorrelation, the flow in each bin is decomposed into global and local random
Figure 2: A comparison of the factorization breaking coefficient for the second power of flow magnitude (black lines, dots) with the factorization breaking coefficient for the second power of flow vectors (blue lines, diamonds) is shown in the top panels (central and semicentral collisions, panels a) and b) respectively). The red lines with squares represents the flow angle factorization breaking coefficient, Eq. 28. The factorization breaking coefficients for the fourth power of the flow are shown in panels c) and d). All results are obtained from the viscous hydrodynamic model for Pb-Pb collisions at \(\sqrt{s_{NN}}=5.02\)TeV.
components. The factorization breaking coefficient takes the form:
\[\rho^{(m)}_{V_{n}}(\eta,-\eta) =\frac{\langle(V_{n}+A_{n}+\Delta_{n})^{m}(V_{n}^{\star}+A_{n}^{\star}-\Delta_{n}^{\star})^{m}\rangle}{\sqrt{\langle(V_{n}+A_{n}+\Delta_{n})^{m}(V_{n}^{\star}+A_{n}^{\star}+\Delta_{n}^{\star})^{m}\rangle\langle(V_{n}+A_{n}-\Delta_{n})^{m}(V_{n}^{\star}+A_{n}^{\star}-\Delta_{n}^{\star})^{m}\rangle}}\] \[=\frac{\langle(V_{n}^{\prime}+\Delta_{n})^{m}(V_{n}^{\prime\star}-\Delta_{n}^{\star})^{m}\rangle}{\langle(V_{n}^{\prime}+\Delta_{n})^{m}(V_{n}^{\prime\star}+\Delta_{n}^{\star})^{m}\rangle}\, \tag{18}\]
where \(V_{n}^{{}^{\prime}}=V_{n}+A_{n}\). If the random component of the flow is relatively small \(\delta_{n}=|\Delta_{n}|\ll|V_{n}|\) (then also \(\delta_{n}=|\Delta_{n}|\ll|V_{n}^{{}^{\prime}}|\)), the factorization breaking coefficient can be expanded to second order in \(\Delta\),
\[\rho^{(m)}_{V_{n}}(\eta,-\eta)\simeq 1-2m^{2}\frac{\langle v_{n}^{2m-2}\delta^{2 }\rangle}{\langle v_{n}^{2m}\rangle}. \tag{19}\]
The above formula shows the generic properties of the factorization-breaking coefficient. On general grounds, one expects that the lowest order dependence of \(\Delta\) on the bin separation \(\Delta\eta\) is linear [11; 42]. This leads to a quadratic dependence of the factorization-breaking coefficient on the bin separation,
\[\rho^{(m)}_{V_{n}}(\eta_{1},\eta_{2})\simeq 1-2m^{2}\kappa(\eta_{1}-\eta_{2})^{ 2}. \tag{20}\]
The flow moment decorrelation, i.e. the deviation of the factorization breaking coefficient from unity, increases with the rank \(m\) of the flow moment. The formula can be further simplified if the factorization of moments \(v\) and
Figure 3: Same as in Fig. 2 but for the triangular flow.
\(\delta\) is assumed:
\[\rho_{V_{n}}^{(m)}(\eta,-\eta) \simeq 1-2m^{2}\frac{\langle v_{n}^{2m-2}\rangle\langle\delta^{2} \rangle}{\langle v_{n}^{2m}\rangle}\] \[\simeq 1-2m\frac{\langle\delta^{2}\rangle}{\langle v_{n}^{2} \rangle}\, \tag{21}\]
where the last equality is obtained for fluctuation-dominated flow. If the collective flow is dominated by fluctuations in the initial state, as for the triangular flow or the elliptic flow in central collisions, we have \(\langle v_{n}^{2m}\rangle=m!\langle v_{n}^{2}\rangle^{m}\).
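The random-model expansion can be checked with a minimal Monte Carlo sketch; the Gaussian fluctuation widths below are illustrative assumptions chosen so that \(\delta\ll v\), and the exact ratio of Eq. 18 is compared with the expansions of Eqs. 19 and 21.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ev = 200_000
sigma_v, sigma_d = 0.05, 0.005   # assumed widths: fluctuation-dominated V', small Delta

# V' = V + A is common to both bins; Delta enters with opposite signs (Eq. 18)
Vp = rng.normal(0.0, sigma_v, n_ev) + 1j * rng.normal(0.0, sigma_v, n_ev)
Dl = rng.normal(0.0, sigma_d, n_ev) + 1j * rng.normal(0.0, sigma_d, n_ev)

v2, d2 = np.abs(Vp) ** 2, np.abs(Dl) ** 2
for m in (1, 2, 3):
    num = np.mean(((Vp + Dl) ** m) * np.conj(Vp - Dl) ** m).real
    den = np.mean(np.abs(Vp + Dl) ** (2 * m))      # the two bins are statistically equivalent
    rho_exact = num / den                           # Eq. 18
    rho_19 = 1 - 2 * m ** 2 * np.mean(v2 ** (m - 1) * d2) / np.mean(v2 ** m)   # Eq. 19
    rho_21 = 1 - 2 * m * np.mean(d2) / np.mean(v2)  # Eq. 21, fluctuation-dominated limit
    print(m, round(rho_exact, 4), round(rho_19, 4), round(rho_21, 4))
```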
The deviation of the factorization breaking coefficient from one comes from two effects, the flow angle decorrelation and flow magnitude decorrelation as noted in [37; 43]. Both effects can be studied separately in model calculations [21]. The factorization breaking coefficient for the flow magnitudes is:
\[\rho_{v_{n}}^{(m)}(\eta,-\eta)=\frac{\langle v_{n}(\eta)^{m}v_{n}(-\eta)^{m} \rangle}{\sqrt{\langle v_{n}(\eta)^{2m}\rangle\langle v_{n}(-\eta)^{2m} \rangle}}. \tag{22}\]
The factorization breaking coefficient for the flow magnitudes could be estimated experimentally (at least in principle) for _even_ \(m=2k\):
\[\rho_{v_{n}}^{(m)}(\eta,-\eta)=\frac{\langle(q_{n}(\eta))^{k}(q_{n}^{*}(\eta))^{k}(q_{n}(-\eta))^{k}(q_{n}^{*}(-\eta))^{k}\rangle}{\sqrt{\langle(q_{n}(\eta))^{2k}(q_{n}^{*}(\eta))^{2k}\rangle\langle(q_{n}(-\eta))^{2k}(q_{n}^{*}(-\eta))^{2k}\rangle}}. \tag{23}\]
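A corresponding numpy sketch of the estimator in Eq. 23 is shown below; it assumes per-event complex \(q_n\) vectors in the two bins and ignores self-correlation corrections.

```python
import numpy as np

def rho_magnitude(q_fwd, q_bwd, m=2):
    """Flow-magnitude factorization breaking coefficient (Eq. 22) estimated
    from q vectors via Eq. 23; the estimator requires an even power m = 2k."""
    assert m % 2 == 0, "Eq. 23 applies to even m only"
    num = np.mean(np.abs(q_fwd) ** m * np.abs(q_bwd) ** m)   # (q q*)^k = |q|^m
    den = np.sqrt(np.mean(np.abs(q_fwd) ** (2 * m)) *
                  np.mean(np.abs(q_bwd) ** (2 * m)))
    return num / den
```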
The flow magnitude factorization breaking can be calculated using our model with a random local flow component,
\[\rho_{v_{n}}^{(m)}(\eta,-\eta) =\frac{\langle|V_{n}+A_{n}+\Delta_{n}|^{m}|V_{n}+A_{n}-\Delta_{n} |^{m}\rangle}{\sqrt{\langle|V_{n}+A_{n}+\Delta_{n}|^{2m}\rangle\langle|V_{n}+ A_{n}-\Delta_{n}|^{2m}\rangle}}\] \[=\frac{\langle|V_{n}^{{}^{\prime}}+\Delta_{n}|^{m}|V_{n}^{{}^{ \prime}}-\Delta_{n}|^{m}\rangle}{\langle|V_{n}^{{}^{\prime}}+\Delta_{n}|^{2m }\rangle}. \tag{24}\]
Expansion to the second order in \(\delta/v\) gives:
\[\rho_{v_{n}}^{(m)}(\eta,-\eta)\simeq 1-m^{2}\frac{\langle v_{n}^{2m-2}\delta^{ 2}\rangle}{\langle v_{n}^{2m}\rangle}. \tag{25}\]
In the random model of flow decorrelation, the decorrelation of the flow vector magnitudes is approximately one-half of the decorrelation of the flow vectors. The same property can be observed in the viscous hydrodynamic model results (Figs. 2 and 3).
To estimate the flow angle decorrelation between two bins centered at \(\eta\) and \(-\eta\) a simple average of the cosine of flow angle difference could be used :
\[\langle\cos\left(mn(\Psi_{n}(\eta)-\Psi_{n}(-\eta))\right)\rangle=\left\langle \frac{\left(V_{n}(\eta)V_{n}^{*}(-\eta)\right)^{m}}{\left(|V_{n}(\eta)||V_{n}(- \eta)|\right)^{m}}\right\rangle. \tag{26}\]
The flow angle decorrelation defined above cannot be directly measured in the experiment, unlike the flow vector factorization breaking coefficient in Eq. 16 (for any \(m\)) or the flow magnitude factorization breaking coefficient of Eq. 22 (for even \(m\)). Note that in the definition of flow factorization breaking the event average is taken separately in the numerator and the denominator. In contrast, for the angle decorrelation defined in Eq. 26, the event average is taken for the whole ratio. The expansion of the flow angle decorrelation for small \(\delta/v\) gives:
\[\langle\cos\left(mn(\Psi_{n}(\eta)-\Psi_{n}(-\eta))\right)\rangle\simeq 1-m^{2 }\left\langle\frac{\delta_{n}^{2}}{v_{n}^{2}}\right\rangle. \tag{27}\]
The simple flow angle decorrelation of order \(m\) in Eq. 26 should not be confused with the angular component of the flow factorization breaking \(\rho_{V_{n}}^{(m)}(\eta,-\eta)\) of Eq. 16. Under the event average, the flow magnitude to power \(m\) is present in the flow factorization breaking coefficient of order \(m\). It has been noticed that the flow angle decorrelation is strongly anticorrelated with flow magnitude in the event [27]. Therefore, the relevant flow angle factor
Figure 4: Flow factorization breaking coefficients for different moments of flow vectors calculated in the hydrodynamic model. Panels a) and b) are for central (\(0-5\%\)) and semi-central (\(30-40\%\)) collisions, respectively. The factorization breaking coefficients \(\rho^{(1)}\), \(\rho^{(2)}\), \(\rho^{(3)}\), and \(\rho^{(4)}\) are denoted with circles, squares, diamonds, and triangles respectively. The powers of the first order coefficient \([\rho^{(1)}]^{m}\) are represented with dashed lines of the same color as the corresponding coefficients \(\rho^{(m)}\).
ization breaking coefficient in order \(m\) is defined as :
\[\rho^{(m)}_{\Psi_{n}}(\eta,-\eta)=\frac{\langle v^{2m}\cos\left(mn(\Psi_{n}(\eta)- \Psi_{n}(-\eta))\right)\rangle}{\langle v^{2m}\rangle}. \tag{28}\]
In experiment, the flow angle factorization breaking coefficient can be estimated as the ratio of the flow vector and flow vector magnitude factorization breaking coefficients :
\[\rho^{(m)}_{\Psi_{n}}(\eta,-\eta)=\frac{\rho^{(m)}_{V_{n}}(\eta,-\eta)}{\rho^{ (m)}_{v_{n}}(\eta,-\eta)}\, \tag{29}\]
with almost the same results as from the definition in Eq. 28. For small flow decorrelation, we have :
\[\rho^{(m)}_{\Psi_{n}}(\eta,-\eta)\simeq 1-m^{2}\frac{\langle v^{2m-2}\delta^{2} \rangle}{\langle v^{2m}\rangle}. \tag{30}\]
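In a model calculation, where the flow vectors are known event by event, the flow angle coefficient can be obtained directly from Eq. 28 or, as in the sketch below, from the ratio of Eq. 29; the two definitions agree to the accuracy quoted above.

```python
import numpy as np

def rho_angle(V_fwd, V_bwd, m=1):
    """Flow-angle factorization breaking coefficient estimated as the ratio
    rho_V / rho_v of Eq. 29, from per-event complex flow vectors."""
    v_fwd, v_bwd = np.abs(V_fwd), np.abs(V_bwd)
    norm = np.sqrt(np.mean(v_fwd ** (2 * m)) * np.mean(v_bwd ** (2 * m)))
    rho_V = np.mean((V_fwd ** m) * np.conj(V_bwd) ** m).real / norm   # Eq. 16
    rho_v = np.mean(v_fwd ** m * v_bwd ** m) / norm                   # Eq. 22
    return rho_V / rho_v
```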
In Figs. 2 and 3, we compare the results for measures of flow decorrelation in two pseudorapidity bins : the flow factorization breaking coefficient, the flow magnitude factorization breaking coefficient, and the flow angle factorization breaking coefficient in the hydrodynamic model. The numerical results follow the relation obtained in our random flow decorrelation model. In particular, the flow factorization breaking can be decomposed into the flow magnitude factorization breaking and the flow angle decorrelation,
\[\rho^{(m)}_{V_{n}}(\eta,-\eta)\simeq\rho^{(m)}_{v_{n}}(\eta,-\eta)\rho^{(m)}_{ \Psi_{n}}(\eta,-\eta), \tag{31}\]
where, with considerable accuracy, we find :
\[[1-\rho^{(m)}_{\Psi_{n}}(\eta,-\eta)]\simeq[1-\rho^{(m)}_{v_{n}}(\eta,-\eta)] \simeq[1-\rho^{(m)}_{V_{n}}(\eta,-\eta)]/2. \tag{32}\]
The angle decorrelation is roughly the same as the flow magnitude factorization breaking coefficient (compare red and black lines in Figs. 2 and 3). If the above relation holds, Eq. 32 could be used as an experimental estimate of the flow angle decorrelation also for odd \(m\).
Finally, we test the relation of Eq. 21 between flow factorization coefficients of different order \(m\). When the event by event flow follows the Bessel-Gaussian distribution [44; 45] (central collisions for the elliptic flow and all centralities for the triangular flow) and for small deviations of the factorization breaking coefficients from one, one finds:
\[\rho^{(m)}_{V_{n}}(\eta,-\eta)\approx\left[\rho^{(1)}_{V_{n}}(\eta,-\eta) \right]^{m}\,. \tag{33}\]
Numerical results from the viscous hydrodynamic model confirm these relations, as illustrated in Figs. 4 and 5. The factorization breaking coefficients of order \(m=2,3\) can be approximated as powers of the first order factorization breaking coefficient for the elliptic flow at 0-5% centrality and for the triangular flow at both centralities studied. The elliptic flow in semi-central events (30-40%) is not dominated by fluctuations, and the relation of Eq. 21 between the factorization breaking coefficients of different orders does not hold: \(\rho^{(4)}\) deviates strongly from one and the approximate relation in Eq. 33 is broken.
## V Flow angle decorrelation and the overall flow magnitude
The flow angle decorrelation is stronger if the overall flow is small. The largest flow decorrelation occurs for the triangular flow and for the elliptic flow in central events [11]. Moreover, for a given centrality class, the angle decorrelation is largest in events with a smaller overall flow magnitude [21; 27]. Figs. 6 and 7 show scatter plots of the overall flow magnitude in the event and of the cosine of the flow angle difference for a sample of central and semi-central events. The increase of the flow angle decorrelation with decreasing overall flow magnitude \(v\) can be understood from Eq. 27. Taking
\[\langle\cos\left(n(\Psi_{n}(\eta)-\Psi_{n}(-\eta))\right)\rangle\simeq 1- \left\langle\frac{\delta_{n}^{2}}{v_{n}^{2}}\right\rangle, \tag{34}\]
at a fixed value of the flow magnitude \(v_{n}\), one has:
\[\langle\cos\left(n(\Psi_{n}(\eta)-\Psi_{n}(-\eta))\right)\rangle|_{v_{n}} \simeq 1-\frac{\langle\delta_{n}^{2}\rangle}{v_{n}^{2}}. \tag{35}\]
Figure 5: Same as in Fig. 4 but for the triangular flow.
The above anti-correlation of the flow angle decorrelation \(\Psi_{n}(\eta)-\Psi_{n}(-\eta)\) with \(v_{n}\) is denoted with red points in Figs. 6 and 7. The random model of flow decorrelation explains the anti-correlation between the overall flow magnitude \(v_{n}\) and the angle correlation \(\langle\cos\left(n(\Psi_{n}(\eta)-\Psi_{n}(-\eta))\right)\rangle\) observed in numerical simulations of the hydrodynamic model.
## VI 3-bin and 4-bin measures of flow decorrelation
The 2-bin factorization breaking coefficients studied in Sect. IV are very sensitive to non-flow effects. Alternatively, a 3-bin measure of flow factorization breaking in the longitudinal direction can be used [13] (Eq. 7). This measure can be generalized to higher moments of the flow [34; 37]
\[\mathcal{R}^{(m)}_{n;V}(\eta)=\frac{\langle V_{n}^{m}(-\eta)V_{n}^{*m}(\eta_{ ref})\rangle}{\langle V_{n}^{m}(\eta)V_{n}^{*m}(\eta_{ref})\rangle}. \tag{36}\]
An analogous formula can be written for the factorization breaking coefficient of the flow magnitude as follows :
\[\mathcal{R}^{(m)}_{n;v}(\eta)=\frac{\langle v_{n}^{m}(-\eta)v_{n}^{m}(\eta_{ref})\rangle}{\langle v_{n}^{m}(\eta)v_{n}^{m}(\eta_{ref})\rangle}. \tag{37}\]
A third measure can be defined to estimate the simple flow angle decorrelation :
\[\mathcal{R}^{s(m)}_{n;\Psi}(\eta)=\frac{\langle\cos\left(mn(\Psi_{n}(-\eta)-\Psi_{n}(\eta_{ref}))\right)\rangle}{\langle\cos\left(mn(\Psi_{n}(\eta)-\Psi_{n}(\eta_{ref}))\right)\rangle}. \tag{38}\]
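The three 3-bin measures can be evaluated from per-event quantities in a model calculation; a minimal sketch, assuming numpy arrays of complex flow vectors (or of magnitudes and angles) at \(\eta\), \(-\eta\) and the reference bin, reads:

```python
import numpy as np

def R3_vector(V_eta, V_meta, V_ref, m=1):
    """3-bin flow-vector factorization breaking coefficient, Eq. 36."""
    num = np.mean((V_meta ** m) * np.conj(V_ref) ** m).real
    den = np.mean((V_eta ** m) * np.conj(V_ref) ** m).real
    return num / den

def R3_magnitude(v_eta, v_meta, v_ref, m=1):
    """3-bin flow-magnitude factorization breaking coefficient, Eq. 37."""
    return np.mean(v_meta ** m * v_ref ** m) / np.mean(v_eta ** m * v_ref ** m)

def R3_angle_simple(psi_eta, psi_meta, psi_ref, n=2, m=1):
    """Simple 3-bin flow-angle decorrelation, Eq. 38."""
    num = np.mean(np.cos(m * n * (psi_meta - psi_ref)))
    den = np.mean(np.cos(m * n * (psi_eta - psi_ref)))
    return num / den
```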
Using the local random component (K) for the flow in a given bin \(V(\eta)=V+K(\eta)\) and defining
\[A_{1}=\frac{K(\eta)+K(\eta_{ref})}{2}\quad\text{and}\quad\Delta_{1}=\frac{K( \eta)-K(\eta_{ref})}{2}\, \tag{39}\]
and
\[A_{2}=\frac{K(-\eta)+K(\eta_{ref})}{2}\quad\text{and}\quad\Delta_{2}=\frac{K( -\eta)-K(\eta_{ref})}{2}\, \tag{40}\]
Figure 6: Scatter plot of the scaled elliptic flow in an event \(v/\langle v\rangle\) versus \(\cos(n(\Psi_{n}(\eta)-\Psi_{n}(-\eta)))\) for central (panel a)) and semi-central (panel b)) collisions. All the points are from the viscous hydrodynamic model. The red points represent the expected flow angle decorrelation as a function of the fixed value of the flow in the event (Eq. 35).
Figure 7: Same as in Fig. 6 but for the triangular flow.
one can write:
\[\mathcal{R}^{(m)}_{n;V}(\eta)=\frac{\langle(V+A_{2}+\Delta_{2})^{m}(V+A_{2}- \Delta_{2})^{*m}\rangle}{\langle(V+A_{1}+\Delta_{1})^{m}(V+A_{1}-\Delta_{1})^{* m}\rangle}\, \tag{41}\]
\[\mathcal{R}^{(m)}_{n;v}(\eta)=\frac{\langle|V+A_{2}+\Delta_{2}|^{m}|V+A_{2}- \Delta_{2}|^{m}\rangle}{\langle|V+A_{1}+\Delta_{1}|^{m}|V+A_{1}-\Delta_{1}|^{m }\rangle}. \tag{42}\]
The expansion of the factorization breaking coefficients to second order in \(\delta\), taking into account that \(\langle|A|^{2}\rangle=C^{2}-\delta^{2}\), takes the form:
\[\mathcal{R}^{(m)}_{n;V}(\eta)=1-2m^{2}\frac{\langle v^{2m-2}\delta^{2}_{2} \rangle-\langle v^{2m-2}\delta^{2}_{1}\rangle}{\langle v^{2m}\rangle}, \tag{43}\]
for the flow vector decorrelation and
\[\mathcal{R}^{(m)}_{n;v}(\eta)=1-m^{2}\frac{\langle v^{2m-2}\delta^{2}_{2} \rangle-\langle v^{2m-2}\delta^{2}_{1}\rangle}{\langle v^{2m}\rangle}, \tag{44}\]
for the flow magnitude decorrelation. We find that the deviation of the flow factorization breaking coefficient \(R^{(m)}_{n;V}\) from one is twice as large as the deviation of the flow magnitude coefficient \(R^{(m)}_{n;v}\) from one:
\[[1-R^{(m)}_{n;v}(\eta)]\simeq[1-R^{(m)}_{n;V}(\eta)]/2. \tag{45}\]
The above relation is approximately fulfilled in numerical simulations and the experimental data [27]. Moreover, we can find the general formula for the ratio of the simple flow angle decorrelation (38) as follows:
\[\mathcal{R}^{s(m)}_{n;\Psi}=1-m^{2}\langle\frac{\delta^{2}_{2}}{v^{2}}\rangle +m^{2}\langle\frac{\delta^{2}_{1}}{v^{2}}\rangle. \tag{46}\]
This is not the flow decorrelation that is estimated in experiment. The simple flow angle decorrelation is as strong as or stronger than the full flow vector decorrelation (Figs. 8 and 9). As for the 2-bin correlator, the correct flow angle decorrelation weighted with the power
Figure 8: The factorization breaking coefficient of the elliptic flow in three pseudorapidity bins (central and semi-central collisions, panels a) and b) respectively). The flow vector factorization breaking coefficients, Eq. 43, are denoted with filled circles, the flow magnitude factorization breaking coefficients, Eq. 44, are denoted with squares, and the simple flow angle factorization breaking coefficients, Eq. 38, are denoted with diamonds. The blue dashed lines indicate an estimate of the flow vector factorization breaking coefficient, Eq. 36, as a product of the flow magnitude factorization breaking coefficient, Eq. 37, and the weighted flow angle factorization breaking coefficient in Eq. 47.
Figure 9: Same as in Fig. 8 but for the triangular flow.
of the flow magnitude can be defined as
\[\mathcal{R}^{(m)}_{n;\Psi}(\eta)=\frac{\langle v_{n}^{2m}\cos\left(mn(\Psi_{n}(- \eta)-\Psi_{n}(\eta_{ref}))\right)\rangle}{\langle v_{n}^{2m}\cos\left(mn(\Psi_{ n}(\eta)-\Psi_{n}(\eta_{ref}))\right)\rangle}. \tag{47}\]
Expanding to the order \(\delta_{j}^{2}\) we find
\[\mathcal{R}^{(m)}_{n;\Psi}\approx 1-m^{2}\frac{\langle v^{2m-2}\delta_{2}^{2}\rangle}{\langle v^{2m}\rangle}+m^{2}\frac{\langle v^{2m-2}\delta_{1}^{2}\rangle}{\langle v^{2m}\rangle}. \tag{48}\]
Comparing the above results with the results for the flow vector decorrelation Eq. 43 and flow magnitude decorrelation Eq. 44, one observes
\[\mathcal{R}^{(m)}_{V}\approx\mathcal{R}^{(m)}_{v}\mathcal{R}^{(m)}_{\Psi}. \tag{49}\]
We expect that the above factorization of the flow vector coefficient works well for both elliptic and triangular flows and all centralities. This factorization is observed in numerical calculations [27] (Figs. 8 and 9). This means that the ratio \(\mathcal{R}^{(m)}_{V}/\mathcal{R}^{(m)}_{v}\) can be used as an estimate of the weighted angle decorrelation \(\mathcal{R}^{(m)}_{n;\Psi}\) (Eq. 47). On the other hand, the simple angle decorrelation of Eq. 38 is significantly larger.
In the experiment, due to non-flow correlations, only the 3-bin flow vector correlator \(\mathcal{R}^{(m)}_{n;V}(\eta)\) can be measured. There is another practical way to estimate flow angle decorrelation at different pseudorapidities using a four-bin correlator [34],
\[\mathcal{R}_{n;VV}=\frac{\langle V_{n}(-\eta_{ref})V_{n}(-\eta)V_{n}^{*}(\eta) V_{n}^{*}(\eta_{ref})\rangle}{\langle V_{n}(-\eta_{ref})V_{n}^{*}(-\eta)V_{n}( \eta)V_{n}^{*}(\eta_{ref})\rangle}. \tag{50}\]
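A sketch of this correlator from per-event flow vectors in the four bins (again assuming complex numpy arrays, one entry per event) is:

```python
import numpy as np

def R4_vector(V_mref, V_meta, V_eta, V_ref):
    """4-bin correlator of Eq. 50, from flow vectors at -eta_ref, -eta,
    eta and eta_ref; the magnitudes largely cancel between numerator and
    denominator, isolating the flow-angle decorrelation."""
    num = np.mean(V_mref * V_meta * np.conj(V_eta) * np.conj(V_ref)).real
    den = np.mean(V_mref * np.conj(V_meta) * V_eta * np.conj(V_ref)).real
    return num / den
```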
In the random model of flow decorrelation, the above 4-bin correlator takes the form (in the lowest order in \(\delta\)) :
\[\mathcal{R}_{n;VV}\approx 1-4\frac{\langle v^{2}\delta_{2}^{2}\rangle}{ \langle v^{4}\rangle}+4\frac{\langle v^{2}\delta_{1}^{2}\rangle}{\langle v^{ 4}\rangle}. \tag{51}\]
The 4-bin correlator is an estimate of the weighted flow angle factorization breaking coefficient (Eq. 47). The flow vector factorization breaking coefficient \(\mathcal{R}_{V}\) approximately factorizes into the flow magnitude and flow angle factorization breaking coefficient
\[\mathcal{R}^{(2)}_{V}\approx\mathcal{R}_{n;VV}\mathcal{R}^{(2)}_{v}. \tag{52}\]
Numerical results from the viscous hydrodynamic model for the correlators \(\mathcal{R}_{n;VV}\), \(\mathcal{R}^{(2)}_{V}\), and the above factorization are shown in Fig. 10. The agreement with the experimental data is qualitatively correct. There is a good consistency between the results of the hydrodynamic simulations and the factorization given in Eq. 52 (red dashed line) in second- and third-order harmonics for both central (not shown) and semi-central collisions.
Finally, we test in the hydrodynamic model the scaling of the factorization breaking coefficients for different moments of the harmonic flow vectors:
\[\mathcal{R}^{(m)}_{V}\approx\left[\mathcal{R}^{(1)}_{V}\right]^{m}. \tag{53}\]
Fig. 11 presents a comparison of the factorization breaking coefficients for the second and third moments of the flow vector, \(\mathcal{R}^{(m)}_{V}\) for \(m=2,3\) (red and blue lines respectively), with the corresponding powers of the first order factorization breaking coefficient \(\left[\mathcal{R}^{(1)}_{V}\right]^{m}\) for \(m=2,3\) (red and blue dashed lines). The above relation is expected for fluctuation dominated flow (triangular flow) from Eq. 43 derived in the random model of flow decorrelation.
## VII Conclusions
We analyze the decorrelation of the flow vectors in separate rapidity bins. A simple random model of flow decorrelation is able to reproduce qualitatively scaling relations between factorization breaking coefficients. Numerical simulations in the hydrodynamic model show that the flow vector magnitude and flow vector angle
Figure 10: Comparison of 3-bin (red lines) and 4-bin (black lines) flow vector factorization breaking coefficients for the elliptic flow (panel a)) and for the triangular flow (panel b)), for \(30-40\%\) centrality. Red dashed lines show the decomposition of the flow vector factorization breaking coefficient \(\mathcal{R}^{(2)}_{V}\) into the flow magnitude and flow angle decorrelation using Eq. 52.
decorrelations are approximately equal and sum up in the total flow vector decorrelation. The same relation can be obtained in a random model of flow decorrelation, where the flow in a small rapidity bin is written as a sum of the average flow in the event and of a random vector component. Assuming that the random component direction is independent of the average flow and its magnitude is much smaller than the average flow, analytical expressions for the factorization breaking coefficients for flow vectors, flow magnitudes and flow angles are given, with similar relations between them as in the full hydrodynamic simulation.
The factorization breaking coefficients for higher powers of the flow vectors show a large deviation from one, reflecting the stronger decorrelation for the second or third power of the flow vectors than for the flow vectors only. In the random model of flow decorrelation, this property comes from the general analytical expressions for the correlations of different moments of flow vectors. In the case when the flow is fluctuation dominated (triangular flow or elliptic flow in central collisions) the factorization breaking coefficients of different powers \(m\) of the flow are related
\[\rho^{(m)}(-\eta,\eta)\simeq\left[\rho^{(1)}(-\eta,\eta)\right]^{m}. \tag{54}\]
The above relation is found analytically in the random decorrelation model, as well as in numerical simulations in the hydrodynamic model.
The flow angle decorrelation is larger if the overall flow is small. This can be observed on an event-by-event basis in the hydrodynamic simulations and is encoded in the analytical expressions obtained in the random model of flow decorrelation. As a consequence, the flow decorrelation is larger for the triangular flow and for the elliptic flow in central collisions than for the elliptic flow in semi-central collisions.
Analytical expressions for the factorization breaking coefficients are given for the 3- and 4-bin measures of flow decorrelation in pseudorapidity. Such measures are used in experimental analyses in order to reduce non-flow effects. The random model of flow decorrelation reproduces qualitatively the relations observed in experimental data and in hydrodynamic simulations:
* the flow angle decorrelation is approximately one half of the flow vector decorrelation,
* the decorrelation of the second or third power of the flow is given as the second or third power of the flow vector decorrelation, when the overall flow is fluctuation dominated.
The proposed random model of flow decorrelation is surprisingly simple, yet it reproduces qualitatively a number of phenomenological relations observed in experimental data or in realistic hydrodynamic simulations. The model can serve as a way to understand the fluctuations inherent in models of heavy-ion collisions. Moreover, any deviations from these simple scalings in the experimental data could suggest the existence of specific correlations between the average flow and the random flow decorrelations, which could be interesting in the study of realistic initial conditions for the hydrodynamic simulations of heavy-ion collisions.
## Acknowledgment
P.B. acknowledges support from the National Science Centre grant 2018/29/B/ST2/00244. H.M. thanks CERN-TH group for the support. H.M. is funded by the Cluster of Excellence _Precision Physics, Fundamental Interactions, and Structure of Matter_ (PRISMA\({}^{+}\) EXC 2118/1) funded by the German Research Foundation (DFG) within the German Excellence Strategy (Project ID 39083149).
Figure 11: The factorization breaking coefficients for the first (black line and dots), second (red line and squares), and third (blue line and diamonds) power of the flow vector \(\mathcal{R}_{V}^{(m)}\) compared with the second and third power \(\left[\mathcal{R}_{V}^{(1)}\right]^{m}\) (dashed lines). Results for the elliptic flow and the triangular flow are shown in panels a) and b) respectively, for the centrality \(30-40\%\). |
2302.01385 | Hyper-parameter Tuning for Fair Classification without Sensitive
Attribute Access | Fair machine learning methods seek to train models that balance model
performance across demographic subgroups defined over sensitive attributes like
race and gender. Although sensitive attributes are typically assumed to be
known during training, they may not be available in practice due to privacy and
other logistical concerns. Recent work has sought to train fair models without
sensitive attributes on training data. However, these methods need extensive
hyper-parameter tuning to achieve good results, and hence assume that sensitive
attributes are known on validation data. However, this assumption too might not
be practical. Here, we propose Antigone, a framework to train fair classifiers
without access to sensitive attributes on either training or validation data.
Instead, we generate pseudo sensitive attributes on the validation data by
training a biased classifier and using the classifier's incorrectly (correctly)
labeled examples as proxies for minority (majority) groups. Since fairness
metrics like demographic parity, equal opportunity and subgroup accuracy can be
estimated to within a proportionality constant even with noisy sensitive
attribute information, we show theoretically and empirically that these proxy
labels can be used to maximize fairness under average accuracy constraints. Key
to our results is a principled approach to select the hyper-parameters of the
biased classifier in a completely unsupervised fashion (meaning without access
to ground truth sensitive attributes) that minimizes the gap between fairness
estimated using noisy versus ground-truth sensitive labels. | Akshaj Kumar Veldanda, Ivan Brugere, Sanghamitra Dutta, Alan Mishler, Siddharth Garg | 2023-02-02T19:45:50Z | http://arxiv.org/abs/2302.01385v2 | # Hyper-parameter Tuning for Fair Classification without Sensitive Attribute Access
###### Abstract
Fair machine learning methods seek to train models that balance model performance across demographic subgroups defined over sensitive attributes like race and gender. Although sensitive attributes are typically assumed to be known during training, they may not be available in practice due to privacy and other logistical concerns. Recent work has sought to train fair models without sensitive attributes on training data. However, these methods need extensive hyper-parameter tuning to achieve good results, and hence assume that sensitive attributes are known on validation data. However, this assumption too might not be practical. Here, we propose Antigone, a framework to train fair classifiers without access to sensitive attributes on either training or validation data. Instead, we generate pseudo sensitive attributes on the validation data by training a biased classifier and using the classifier's incorrectly (correctly) labeled examples as proxies for minority (majority) groups. Since fairness metrics like demographic parity, equal opportunity and subgroup accuracy can be estimated to within a proportionality constant even with noisy sensitive attribute information, we show theoretically and empirically that these proxy labels can be used to maximize fairness under average accuracy constraints. Key to our results is a principled approach to select the hyper-parameters of the biased classifier in a completely unsupervised fashion (meaning without access to ground truth sensitive attributes) that minimizes the gap between fairness estimated using noisy versus ground-truth sensitive labels.
## 1 Introduction
Deep neural networks have achieved state-of-the-art accuracy on a wide range of real-world tasks. But, prior work (Hovy & Sogaard, 2015; Oren et al., 2019; Hashimoto et al., 2018a) has found that state-of-the-art networks exhibit unintended biases towards specific population groups, especially harming minority groups. Seminal work by (Buolamwini & Gebru, 2018) demonstrated, for instance, that commercial face recognition systems had lower accuracy on darker skinned women than other groups. A body of work has sought to design fair machine learning algorithms that account for a model's performance on a per-group basis (Prost et al., 2019; Liu et al., 2021; Sohoni et al., 2020).
Much of the prior work assumes that demographic attributes like gender and race on which we seek to train fair models, referred to as _sensitive attributes_, are available on training and validation data (Sagawa* et al., 2020; Prost et al., 2019). However, there is a growing body of literature (Veale & Binns, 2017; Holstein et al., 2019) highlighting real-world settings in which sensitive attributes may not be available. For example, data subjects may abstain from providing sensitive information for privacy reasons or to eschew potential discrimination in the future (Markos et al., 2017). Alternately, the attributes on which the model discriminates might not be known or available during training and only identified post-deployment (Citron & Pasquale, 2014; Pasquale, 2015). For instance, recent work shows that fair natural language processing models trained on western datasets discriminate based on last names when re-contextualized to geo-cultural settings like India (Bhatt et al., 2022). Similarly, Nikon's face detection models were reported to repeatedly identify Asian faces as blinking, a bias that was only identified retrospectively (Leslie, 2020). Unfortunately, by this point at least some harm is already incurred.
Hence, recent work seeks to train fair classifiers without access to sensitive attributes on the training set (Liu et al., 2021; Creager et al., 2021; Nam et al., 2020; Hashimoto et al., 2018a). Although the details vary, these methods all work in two stages. In the first stage, sub-groups in the training data potentially being discriminated against are identified. In the second, a fair training procedure is used to train a fair model with respect to these sub-groups. For example, JTT's (Liu et al., 2021) stage 1 uses mis-classified examples of a standard empirical risk minimization (ERM) model as a proxy for minority sub-groups. In stage 2, JTT retrains the model by up-weighting these mis-classified examples.
However, Liu et al. (2021) have shown that these methods are highly sensitive to the choice of hyper-parameters; the up-weighting factor in JTT, for example, can have a significant impact on the resulting model's fairness. JTT's results without proper hyper-parameter tuning can be even less fair than standard ERM (Liu et al., 2021). Therefore, JTT and other methods (except for GEORGE (Sohoni et al., 2020)) assume access to sensitive attributes on the validation data for hyper-parameter tuning.
However, sensitive information on the validation dataset may not be available for the same reasons that make it hard to acquire on training data.
In this paper, we propose Antigone, a principled approach that enables hyper-parameter tuning for fairness without access to sensitive attributes on validation data. Antigone can be used in conjunction with prior methods like JTT and GEORGE that train fair models without sensitive attributes on training data (Liu et al., 2021), and for several fairness metrics including demographic parity, equal opportunity and worst sub-group accuracy.
Antigone builds on the same intuition as in prior work: mis-classified examples of a model trained with standard ERM loss serve as an effective proxy for minority groups. Accordingly, Antigone uses an ERM classifier to obtain pseudo sensitive attribute labels on the validation dataset using correctly and incorrectly classified validation data as proxies for majority and minority groups, respectively. That is, the ERM classifier can be viewed as a noisy sensitive attribute labeler on the validation dataset. But this raises a key question: _how do we select the hyper-parameters of the noisy sensitive attribute labeler?_
Intuitively, to obtain accurate sensitive attribute labels, we seek to maximize the fraction of true minority (majority) samples in the ERM classifier's incorrect (correct) set. In other words, the sensitive attribute labels of the ERM classifier will be accurate if (perhaps counter-intuitively) the classifier is itself biased. Figure 1 illustrates this intuition using the CelebA dataset in which blond men are discriminated against; hence, the incorrect set for the blond class has a greater fraction of men than its correct set. Since the classifier's bias cannot be measured directly, Antigone uses the distance between the data distributions of the correct and incorrect sets as a proxy. Specifically, Antigone uses the Euclidean distance between the means (EDM) of the two sets as a distance measure. The mean images of each set for CelebA are shown in Figure 1 and visually capture the ERM model's biases.
We provide theoretical justification for Antigone in an idealized setting in which a fraction of sensitive attribute labels from the minority group are contaminated with labels from the majority group (and vice-versa). This is referred to as the mutually contaminated (MC) noise model in the literature (Scott et al., 2013), and prior work shows that common fairness metrics can be estimated up to a proportionality constant under this model (Lamy et al., 2019). We show that maximizing Antigone's EDM metric equivalently maximizes this proportionality constant, thus providing the most reliable estimates of fairness.
We evaluate Antigone in conjunction with JTT (Liu et al., 2021) and GEORGE (Sohoni et al., 2020) on the CelebA, Waterbirds and UCI Adult datasets using demographic parity, equal opportunity, and worst subgroup accuracy as fairness metrics. Empirically, we find that (1) Antigone produces more accurate sensitive attribute labels on validation data compared to GEORGE; (2) used with JTT, Antigone comes close to matching the fairness of JTT with ground-truth sensitive attribute labels on validation data;
Figure 1: Antigone on CelebA dataset with hair color as target label and gender as (unknown) sensitive attribute. Blond men are discriminated against. Correspondingly, the mean image of the Blond class’ incorrect (row 4) has more male features than that of its correct set (row 1), reflecting this bias. Similarly, a bias against non-blond women is also reflected.
and (3) improves fairness of GEORGE when its sensitive attribute labels are replaced with Antigone's. Ablation studies demonstrate the effectiveness of Antigone's EDM metric versus alternatives.
## 2 Proposed Methodology
We now describe Antigone, starting with the problem formulation (Section 2.1) followed by a description of the Antigone algorithm (Section 2.2).
### Problem Setup
Consider a data distribution over set \(\mathcal{D}=\mathcal{X}\times\mathcal{A}\times\mathcal{Y}\), the product of input data (\(\mathcal{X}\)), sensitive attributes (\(\mathcal{A}\)) and target labels (\(\mathcal{Y}\)) triplets. We are given a training set \(D^{tr}=\{x_{i}^{tr},a_{i}^{tr},y_{i}^{tr}\}_{i=1}^{N^{tr}}\) with \(N^{tr}\) training samples, and a validation set \(D^{val}=\{x_{i}^{val},a_{i}^{val},y_{i}^{val}\}_{i=1}^{N^{val}}\) with \(N^{val}\) validation samples. We will assume binary sensitive attributes (\(\mathcal{A}\in\{0,1\}\)) and target labels (\(\mathcal{Y}\in\{0,1\}\)). We note that for now Antigone is limited to binary sensitive attributes, but can be extended to multiple target labels.
We seek to train a machine learning model, say a deep neural network (DNN), which can be represented as a parameterized function \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y}\in\{0,1\}\), where \(\theta\in\Theta\) are the trainable parameters, e.g., DNN weights and biases.
Standard fairness unaware empirical risk minimization (ERM) optimizes over trainable parameters \(\theta\) to minimize average loss \(\mathcal{L}_{ERM}\):
\[\mathcal{L}_{ERM}=\frac{1}{N^{tr}}\sum_{i=1}^{N^{tr}}l(x_{i}^{tr},y_{i}^{tr}), \tag{1}\]
on \(D^{tr}\), where \(l(x_{i},y_{i})\) is the binary cross-entropy loss.
Optimized model parameters \(\theta^{*}\) are obtained by invoking a training algorithm, for instance stochastic gradient descent (SGD), on the training dataset and model, i.e., \(\theta^{*,\gamma}=\mathcal{M}^{ERM}(D^{tr},f_{\theta},\gamma)\), where \(\gamma\in\Gamma\) are hyper-parameters of the training algorithm including learning rate, training epochs etc. Hyper-parameters are tuned by evaluating models \(f_{\theta^{*,\gamma}}\) for all \(\gamma\in\Gamma\) on \(D^{val}\) and picking the best model. More sophisticated algorithms like Bayesian optimization can also be used.
Now, we review three commonly used fairness metrics that we account for in this paper.
Demographic parity (DP): DP requires the model's outcomes to be independent of the sensitive attribute. In practice, we seek to minimize the demographic parity gap:
\[\Delta_{\theta}^{DP}=\mathbb{P}[f_{\theta}(X)=1|A=1]-\mathbb{P}[f_{\theta}(X )=1|A=0] \tag{2}\]
Equal opportunity (EO): EO aims to equalize only the model's true positive rates across sensitive attributes. In practice, we seek to minimize
\[\begin{split}\Delta_{\theta}^{EO}=\mathbb{P}[f_{\theta}(X)=1|A=1,Y=1]-\\ \mathbb{P}[f_{\theta}(X)=1|A=0,Y=1]\end{split} \tag{3}\]
Worst-group accuracy (WGA): WGA seeks to maximize the minimum accuracy over all sub-groups (over sensitive attributes and target labels). That is, we seek to maximize:
\[WGA_{\theta}=\min_{a\in\{0,1\},y\in\{0,1\}}\mathbb{P}[f_{\theta}(X)=y|A=a,Y=y] \tag{4}\]
In all three settings, we seek to train models that optimize fairness under a constraint on average _target label accuracy_, _i.e.,_ accuracy in predicting the target label. For example, for equal opportunity, we seek \(\theta^{*}=\arg\min_{\theta\in\Theta}\Delta_{\theta}^{EO}\) such that \(\mathbb{P}[f_{\theta}(x)=Y]\in[Acc_{lower}^{thr},Acc_{upper}^{thr})\), where \(Acc_{lower}^{thr}\) and \(Acc_{upper}^{thr}\) are user-specified lower and upper bounds on target label accuracies, respectively.
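For concreteness, a small numpy sketch of the three empirical metrics is given below; it assumes binary 0/1 arrays of predictions, target labels and (possibly pseudo) sensitive attributes, and that every subgroup is non-empty. It is only an illustration of the definitions, not part of the Antigone algorithm.

```python
import numpy as np

def fairness_metrics(y_pred, y_true, a):
    """Empirical DP gap (Eq. 2), EO gap (Eq. 3) and worst-group accuracy
    (Eq. 4) from binary predictions, target labels and sensitive attributes."""
    dp_gap = y_pred[a == 1].mean() - y_pred[a == 0].mean()
    eo_gap = (y_pred[(a == 1) & (y_true == 1)].mean()
              - y_pred[(a == 0) & (y_true == 1)].mean())
    wga = min(np.mean(y_pred[(a == g) & (y_true == y)] == y)
              for g in (0, 1) for y in (0, 1))
    return dp_gap, eo_gap, wga
```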
### Antigone Algorithm
We now describe the Antigone algorithm which consists of three main steps. In step 1, we train multiple intentionally biased ERM models that each provide pseudo sensitive attribute labels on validation data. We view each model as a noisy sensitive attribute labeller on the validation set. In step 2, we use the proposed EDM metric to pick a noisy labeller from step 1 with the least noise. Finally, in step 3, we use the labelled validation set from step 2 to tune the hyper-parameters of methods like JTT that train fair classifiers without sensitive attributes on training data.
Step 1: Generating sensitive attribute labels on validation set. In step 1, we use the training dataset and standard ERM training to obtain a set of classifiers, \(\theta^{*,\gamma}=\mathcal{M}^{ERM}(D^{tr},f_{\theta},\gamma)\), each corresponding to a different value of training hyper-parameters \(\gamma\in\Gamma\). As we discuss in Section 2.1, these include learning rate, weight decay and number of training epochs. Each classifier, which predicts the target label for a given input, generates a validation set with noisy pseudo sensitive attribute labels as follows:
\[D^{val,\gamma}_{noisy}=\{x_{i}^{val},a_{i}^{val,\gamma},y_{i}^{val}\}_{i=1}^{N ^{val}}\quad\forall\gamma\in\Gamma \tag{5}\]
where:
\[a_{i}^{val,\gamma}=\begin{cases}1,&\text{if }f_{\theta^{*,\gamma}}(x_{i}^{val})=y_{i}^{ val}\\ 0,&\text{otherwise}.\end{cases} \tag{6}\]
where \(a_{i}^{val,\gamma}\) now refers to noisy pseudo sensitive attribute labels. We now seek to pick the set whose pseudo sensitive
attribute labels match most closely with true (but unknown) sensitive attributes. That is, we seek to pick the hyper-parameters corresponding to the "best" noisy labeller.
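A minimal sketch of this labeling step is shown below; `erm_model` is a hypothetical fitted classifier with a `.predict()` method returning hard 0/1 target-label predictions, and the inputs are numpy arrays.

```python
import numpy as np

def pseudo_sensitive_labels(erm_model, x_val, y_val):
    """Step 1 (Eq. 6): validation points that the biased ERM model classifies
    correctly get pseudo attribute a=1 (majority proxy), the rest get a=0."""
    y_hat = erm_model.predict(x_val)    # assumed to return hard 0/1 labels
    return (np.asarray(y_hat) == np.asarray(y_val)).astype(int)
```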
Step 2: Picking the best noisy labeler. From Step 1, let the correct set be \(X_{A=1,noisy}^{val,\gamma}=\{x_{i}^{val}:a_{i}^{val,\gamma}=1\}\) and the incorrect set be \(X_{A=0,noisy}^{val,\gamma}=\{x_{i}^{val}:a_{i}^{val,\gamma}=0\}\). To maximize the _pseudo label accuracy_, _i.e._, the accuracy of our pseudo sensitive attribute labels, we would like the correct set to contain mostly majority group examples, and the incorrect set to contain mostly minority group examples. That is, (perhaps counter-intuitively) we want our noisy labeler to be biased. In the absence of true sensitive attribute labels, since we cannot measure bias directly, we instead use the distance between the data distributions in the correct and incorrect sets as a proxy. In Antigone, we pick the simplest distance metric between two distributions, i.e., the Euclidean distance between their means (EDM), and theoretically justify this choice in Section 2.3. Formally,
\[EDM^{\gamma}=\|\mu(X_{A=1,noisy}^{val,\gamma})-\mu(X_{A=0,noisy}^{val,\gamma}) \|_{2} \tag{7}\]
where \(\mu(.)\) represents the empirical mean of a dataset. We pick \(\gamma^{*}=\arg\max_{\gamma\in\Gamma}EDM^{\gamma}\). Note that in practice we pick two different noisy labellers corresponding to target labels \(Y=\{0,1\}\).
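A sketch of the EDM computation and of the resulting model selection is given below; it reuses the hypothetical `pseudo_sensitive_labels` helper sketched above, flattens inputs to vectors, and, for brevity, does not apply the selection separately for each target label as done in practice.

```python
import numpy as np

def edm(x_val, a_noisy):
    """Euclidean distance between the means of the pseudo-majority (a=1)
    and pseudo-minority (a=0) sets, Eq. 7."""
    x = x_val.reshape(len(x_val), -1)
    return np.linalg.norm(x[a_noisy == 1].mean(axis=0) -
                          x[a_noisy == 0].mean(axis=0))

def select_labeler(candidate_models, x_val, y_val):
    """Step 2: keep the candidate ERM model (one per hyper-parameter setting
    gamma) whose correct/incorrect split maximizes the EDM."""
    scores = [edm(x_val, pseudo_sensitive_labels(m, x_val, y_val))
              for m in candidate_models]
    return candidate_models[int(np.argmax(scores))]
```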
Step 3: Training a fair model. Step 2 yields \(D_{noisy}^{val,\gamma^{*}}\), a validation dataset with (estimated) sensitive attribute labels. We can provide \(D_{noisy}^{val,\gamma^{*}}\) as an input to any method that trains fair models without access to sensitive attributes on training data, but requires a validation set with sensitive attribute labels to tune its own hyper-parameters. In our experimental results, we use \(D_{noisy}^{val,\gamma^{*}}\) to tune the hyper-parameters of JTT (Liu et al., 2021) and GEORGE (Sohoni et al., 2020).
### Analyzing Antigone under Ideal MC Noise
Prior work (Lamy et al., 2019) has modeled noisy sensitive attributes using the mutually contaminated (MC) noise model (Scott et al., 2013). Here, it is assumed that we have access to input instances with pseudo sensitive attribute labels, \(X_{A=0,noisy}\in\mathcal{X}\) and \(X_{A=1,noisy}\in\mathcal{X}\), corresponding to minority (pseudo sensitive attribute labels \(=0\)) and majority (pseudo sensitive attribute labels \(=1\)) groups, respectively, that are mixtures of their input instances with ground-truth sensitive attribute labels, \(X_{A=0}\in\mathcal{X}\) and \(X_{A=1}\in\mathcal{X}\). Specifically,
\[\begin{split} X_{A=1,noisy}&=(1-\alpha)X_{A=1}+ \alpha X_{A=0}\\ X_{A=0,noisy}&=\beta X_{A=1}+(1-\beta)X_{A=0} \end{split} \tag{8}\]
where \(\alpha\) and \(\beta\) are noise parameters. We construct \(D_{A=0,noisy}\) by appending input instances in \(X_{A=0,noisy}\) with their corresponding pseudo sensitive attribute labels (_i.e._, \(a_{i}=0\)) and target labels, respectively. We do the same for \(D_{A=1,noisy}\). Note that strictly speaking Equation 8 should refer to the probability distributions of the respective datasets, but we will abuse this notation to refer to the datasets themselves. As such Equation 8 says that fraction \(\alpha\) of the noisy majority group, \(X_{A=1,noisy}\), is contaminated with data from the minority group, and fraction \(\beta\) of the noisy minority group, \(X_{A=0,noisy}\), is contaminated with data from the majority group. An extension of this model assumes that the noise parameters are target label dependent, i.e., (\(\alpha_{0}\),\(\beta_{0}\)) for \(Y=0\) and (\(\alpha_{1}\),\(\beta_{1}\)) for \(Y=1\).
Note that the ideal MC model assumes that noisy datasets are constructed by sampling independently from the ground-truth distributions. While this is not strictly true in our case since the noise in our sensitive attribute labels might be instance dependent, the ideal MC model can still shed light on the design of Antigone.
**Proposition 2.1**.: _(Lamy et al., 2019) Under the ideal MC noise model in Equation 8, demographic parity and equal opportunity gaps measured on the noisy datasets are proportional to the true DP and EO gaps. Mathematically:_
\[\begin{split}&\Delta^{DP}(D_{A=0,noisy}\cup D_{A=1,noisy})=\\ &(1-\alpha-\beta)\Delta^{DP}(D_{A=0}\cup D_{A=1}),\end{split} \tag{9}\]
_and_
\[\begin{split}&\Delta^{EO}(D_{A=0,noisy}\cup D_{A=1,noisy})=\\ &(1-\alpha_{1}-\beta_{1})\Delta^{EO}(D_{A=0}\cup D_{A=1}).\end{split} \tag{10}\]
From Equation 9 and Equation 10, we can conclude that under the ideal MC noise model, the DP and EO gaps can be equivalently minimized using noisy sensitive attribute labels, assuming independent contamination and infinite validation data samples. In practice, these assumptions do not hold, however, and therefore we seek to maximize the proportionality constant \(1-\alpha-\beta\) (or \(1-\alpha_{1}-\beta_{1}\)) to minimize the gap between the true and estimated fairness values.
**Lemma 2.2**.: _Assume \(X_{A=0,noisy}\) and \(X_{A=1,noisy}\) correspond to the input data of noisy datasets in the ideal MC model. Then, maximizing the EDM between \(X_{A=0,noisy}\) and \(X_{A=1,noisy}\), i.e., \(\|\mu(X_{A=0,noisy})-\mu(X_{A=1,noisy})\|_{2}\) maximizes \(1-\alpha-\beta\)._
Proof.: From Equation 8, \(\mu(X_{A=1,noisy})-\mu(X_{A=0,noisy})=(1-\alpha-\beta)\left(\mu(X_{A=1})-\mu(X_{A=0})\right)\), so that \(\|\mu(X_{A=0,noisy})-\mu(X_{A=1,noisy})\|_{2}=|1-\alpha-\beta|\,\|\mu(X_{A=0})-\mu(X_{A=1})\|_{2}\). Here \(\|\mu(X_{A=0})-\mu(X_{A=1})\|_{2}\) is the EDM between the ground truth majority and minority data and is therefore a constant. Hence, maximizing the EDM between \(X_{A=0,noisy}\) and \(X_{A=1,noisy}\) maximizes \(|1-\alpha-\beta|\), which equals \(1-\alpha-\beta\) whenever \(\alpha+\beta<1\), i.e., whenever the pseudo labels are better than random.
In practice, we separately maximize EDM for target labels \(Y=\{0,1\}\) and hence maximize both \(1-\alpha_{0}-\beta_{0}\) and \(1-\alpha_{1}-\beta_{1}\). We note that our theoretical justification motivates the use of EDM for DP and EO fairness. While not exact, minimizing \(\alpha+\beta\) using EDM as a proxy is still helpful for WGA because it reduces contamination and, empirically, provides more reliable fairness estimates.
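The statement of Lemma 2.2 can also be checked numerically with a toy instance of Eq. 8; the Gaussian group distributions and noise levels below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100_000, 16
mu0, mu1 = np.zeros(d), np.ones(d)           # assumed ground-truth group means
X0 = rng.normal(mu0, 1.0, (n, d))            # ground-truth minority (A=0)
X1 = rng.normal(mu1, 1.0, (n, d))            # ground-truth majority (A=1)

alpha, beta = 0.2, 0.3
X1_noisy = np.where(rng.random(n)[:, None] < alpha, X0, X1)   # Eq. 8
X0_noisy = np.where(rng.random(n)[:, None] < beta, X1, X0)

edm_noisy = np.linalg.norm(X1_noisy.mean(axis=0) - X0_noisy.mean(axis=0))
edm_clean = np.linalg.norm(X1.mean(axis=0) - X0.mean(axis=0))
print(edm_noisy / edm_clean, 1 - alpha - beta)   # both are close to 0.5
```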
## 3 Experimental Setup
We describe below the implementation details of Antigone and baseline approaches JTT and GEORGE, followed by a description of datasets used.
### Implementation Details
We evaluate Antigone with the state-of-the-art fairness methods that work without sensitive attributes on training data: JTT (Liu et al., 2021) and GEORGE (Sohoni et al., 2020). GEORGE additionally does not require sensitive attributes on validation data. Below, we describe these baselines and how we implement Antigone in conjunction with them.
JTT: JTT operates in two stages. In the first stage, a biased model is trained using \(T\) epochs of standard ERM training to identify the incorrectly classified training examples. In the second stage, the misclassified examples are upsampled \(\lambda\) times, and the model is trained again to completion with standard ERM. The hyperparameters of the stage 1 and stage 2 classifiers, including the early stopping epoch \(T\), learning rate and weight decay for stage 1 and the upsampling factor \(\lambda\) for stage 2, are jointly tuned using a validation dataset with ground-truth sensitive attribute labels. We refer to this as the **Ground-truth sensitive attributes + JTT** baseline.
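For reference, the two JTT stages can be summarized by the sketch below; `train_erm(x, y, epochs, **hp)` is a hypothetical helper that fits an ERM model and returns an object with a `.predict()` method, so this is an outline of the procedure rather than the authors' implementation.

```python
import numpy as np

def jtt(train_erm, x_tr, y_tr, T, lam, final_epochs, **hp):
    """Stage 1: train a (biased) ERM model for T epochs and collect its
    training errors.  Stage 2: retrain with the error set upsampled lam times."""
    stage1 = train_erm(x_tr, y_tr, epochs=T, **hp)
    wrong = np.flatnonzero(stage1.predict(x_tr) != y_tr)
    idx = np.concatenate([np.arange(len(y_tr))] + [wrong] * (lam - 1))
    return train_erm(x_tr[idx], y_tr[idx], epochs=final_epochs, **hp)
```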
Antigone+JTT: Here, we replace the ground-truth sensitive attributes in the validation dataset with noisy sensitive attributes obtained from Antigone and use the resulting dataset to tune JTT's hyper-parameters. Antigone's ERM model has the same network architecture as the corresponding JTT model, and we explore over the same hyper-parameters as JTT's stage 1 model. The EDM metric is used to select the best hyper-parameter setting for Antigone.
GEORGE: GEORGE is a competing approach to Antigone in that it does not assume access to sensitive attributes on either training or validation data. GEORGE operates in two stages: in stage 1, an ERM model is trained until completion on the ground-truth target labels. The activations in the penultimate layer of the ERM model are clustered into \(k\) clusters to generate pseudo sensitive attributes on both the training and validation datasets. In stage 2, these pseudo attributes are used to train/validate Group DRO.
Antigone+GEORGE: For a fair comparison with GEORGE, we replace its stage 1 with Antigone, and use the resulting pseudo sensitive attribute labels on the validation set to tune the hyper-parameters of GEORGE's stage 2. For Antigone, we tune over the same hyper-parameter settings used in Antigone+JTT using the EDM metric.
### Datasets and Parameter Settings
We empirically evaluate Antigone on the CelebA and Waterbirds datasets, which allow for a direct comparison with related work (Liu et al., 2021; Sohoni et al., 2020). We also evaluate Antigone on the UCI Adult dataset, a tabular dataset commonly used in the fairness literature (see Appendix A.1).
#### 3.2.1 CelebA Dataset
**Dataset details:** CelebA (Liu et al., 2015) is an image dataset, consisting of 202,599 celebrity face images annotated with 40 attributes including gender, hair colour, age, smiling, etc. The task is to predict hair color, which is either blond \(Y=1\) or non-blond \(Y=0\) and the sensitive attribute is gender \(A=\{\text{Men},\text{Women}\}\). The dataset is split into training, validation and test sets with 162770, 19867 and 19962 images, respectively. Only 15% of individuals in the dataset are blond, and only 6% of blond individuals are men. Consequently, the baseline ERM model under-performs on the blond men.
**Hyper-parameter settings:** In all our experiments using CelebA dataset, we fine-tune a pre-trained ResNet50 architecture for a total of 50 epochs using SGD optimizer and a batch size of 128. We tune JTT over the same hyper-parameters as in their paper: three pairs of learning rates and weight decays, \((1e-04,1e-04),(1e-04,1e-02),(1e-05,1e-01)\) for both stages, and over ten early stopping points up to \(T=50\) and \(\lambda\in\{20,50,100\}\) for stage 2. For Antigone, we explore over the same learning rate and weight decay values, as well as early stopping at any of the 50 training epochs. We report results for DP, EO and WGA fairness metrics. In each case, we seek to optimize fairness while constraining average target label accuracy to ranges \(\{[90,91),[91,92),[92,93),[93,94),[94,95)\}\).
For GEORGE, we use the same architecture and early stopping stage 2 hyper-parameter (up to \(T=50\)) reported in their paper. For Antigone+GEORGE, we replace GEORGE's stage 1 with the Antigone model identified by searching over the same hyper-parameter space as in Antigone+JTT.
#### 3.2.2 Waterbirds Dataset
**Dataset details:** Waterbirds is a synthetically generated dataset, containing 11,788 images of water and
land birds overlaid on top of either water or land backgrounds (Sagawa* et al., 2020). The task is to predict the bird type, which is either a waterbird \(Y=1\) or a landbird \(Y=0\) and the sensitive attribute is the background \(A=\{\text{Water background},\text{Land background}\}\). The dataset is split into training, validation and test sets with 4795, 1199 and 5794 images, respectively. While the validation and test sets are balanced within each target class, the training set contains a majority of waterbirds (landbirds) in water (land) backgrounds and a minority of waterbirds (landbirds) on land (water) backgrounds. Thus, the baseline ERM model under-performs on the minority group.
**Hyper-parameter settings:** In all our experiments using the Waterbirds dataset, we fine-tune a pre-trained ResNet50 architecture for a total of 300 epochs using the SGD optimizer and a batch size of 64. We tune JTT over the same hyper-parameters as in their paper: three pairs of learning rates and weight decays, \((1e-03,1e-04),(1e-04,1e-01),(1e-05,1.0)\) for both stages, and over 14 early stopping points up to \(T=300\) and \(\lambda\in\{20,50,100\}\) for stage 2. For Antigone, we explore over the same learning rate and weight decay values, as well as early stopping points at any of the 300 training epochs. In each case, we seek to optimize fairness while constraining average accuracy to ranges \(\{[94,94.5),[94.5,95),[95,95.5),[95.5,96),[96,96.5)\}\).
For GEORGE, we use the same architecture and early stopping stage 2 hyper-parameter (up to \(T=300\)) reported in their paper. For Antigone+GEORGE, as in the CelebA dataset, we replace GEORGE's stage 1 with Antigone with hyper-parameters as described above.
## 4 Experimental Results
**Quality of Antigone's sensitive attribute labels:** Antigone seeks to generate accurate sensitive attribute labels on validation data, referred to as _pseudo label accuracy_, based on the EDM criterion (Lemma 2.2). In Figure 2, we empirically validate Lemma 2.2 by plotting EDM and noise parameters \(\alpha_{1}\) (contamination in majority group labels), \(\beta_{1}\) (contamination in minority group labels) and \(1-\alpha_{1}-\beta_{1}\) (proportionality constant between true and estimated fairness) on Waterbirds dataset (similar plot for CelebA dataset is in Appendix Figure 3(b)). From the figure, we observe that in both cases the EDM metric indeed captures the trend in \(1-\alpha_{1}-\beta_{1}\), enabling early stopping at an epoch that minimizes contamination. The best early stopping points based on EDM and oracular knowledge of \(1-\alpha_{1}-\beta_{1}\) are shown in a blue dot and star, respectively, and are very close.
Next, we evaluate the F1 scores of Antigone's noisy sensitive attributes for all four subgroups in the CelebA and Waterbirds datasets. In Table 1 we compare Antigone's sensitive attribute labels' F1 Score to GEORGE with the baseline number of clusters and GEORGE with \(k=2\) clusters. Across CelebA and Waterbirds datasets and all four subgroups, we find that Antigone outperforms GEORGE except for one sub-group in CelebA dataset. Finally, Table 1 also reports results on a version of Antigone that uses standard ERM training instead of EDM (Antigone (w/o EDM)). We find that EDM provides higher pseudo-label accuracy compared to this baseline. Appendix Table 5 shows precision and recall of Antigone's pseudo sensitive attribute labels and reaches the same conclusion.
To understand the sensitivity of Antigone to minority group representation, we vary the fraction of minority group individuals in the CelebA dataset over 5%, 20%, 35%, and 50%. The results are shown in Appendix Table 6. As the dataset gets more balanced, the _pseudo label accuracy_ reduces (as expected) because the trained models themselves become fairer. Nonetheless, the corresponding \(1-\alpha-\beta\) values (also shown in the table) are unchanged up to 20% imbalance, and minority group individuals are over-represented in the incorrect sets up to 35% imbalance.
**Antigone+JTT:** Next, in Table 2, we compare the test accuracy in predicting the target labels, quantified as _target label accuracy_, and fairness achieved by Antigone with JTT (Antigone+JTT) versus a baseline ERM model and with JTT using ground-truth sensitive attributes (Ground-Truth+JTT). As expected, baseline ERM yields unfair outcomes on all three fairness metrics: DP, EO and WGA. We observe that
Figure 2: Euclidean Distance between Means (EDM) and noise parameters (\(\alpha_{1},\beta_{1}\) and \(1-\alpha_{1}-\beta_{1}\)) for the positive target class of Waterbirds dataset. Blue dot indicates the model picked by Antigone, while black star indicates the model that maximizes \(1-\alpha_{1}-\beta_{1}\).
Antigone + JTT improves fairness over the baseline ERM model and closes the gap with Ground-Truth+JTT.
On both DP and EO, Antigone+JTT is very close to Ground-Truth+JTT in terms of both _target label accuracy_ and fairness, and substantially improves on standard ERM models. On WGA, Antigone+JTT improves WGA from 38.7% for standard ERM to 68.1% at the expense of \(3\%\)_target label accuracy_ drop. Ground-Truth+JTT improves WGA further up to 78.6% but with a 4.4% _target label accuracy_ drop. Data for Waterbirds (Appendix Table 7) and UCI Adults (Appendix Table 8) have the same trends.
**Comparison with GEORGE:** Like Antigone, GEORGE also generates pseudo-sensitive attributes on validation data, but as noted in Table 1, Antigone's pseudo attribute labels are more accurate and have higher F1 scores than GEORGE's. We now analyze how these improvements translate to greater fairness by comparing the WGA achieved by GEORGE alone versus Antigone+GEORGE in Table 3. On Waterbirds, Antigone+GEORGE has both 7.4% higher WGA and marginally higher _target label accuracy_ than GEORGE. On CelebA, Antigone+GEORGE has 2.7% higher WGA but with a small 0.4% drop in _target label accuracy_. The results in Table 3 are averaged over five runs, as is common in prior work. In interpreting the standard deviations, we note that the performances of Antigone+GEORGE and GEORGE are correlated over the five runs (likely because of a shared stage 2): for CelebA, Antigone+GEORGE's WGA was equal to or better than GEORGE's in each one of our five runs. Similarly, for Waterbirds, Antigone+GEORGE's WGA is at least 8.4% higher than GEORGE's in each run, except in one run where GEORGE has 1% higher WGA.
Ablation Studies: We perform two ablation experiments to understand the benefits of Antigone's proposed EDM metric. We already noted in Table 1 that Antigone with the proposed EDM metric produces higher quality sensitive attribute labels compared to a version of Antigone that picks hyper-parameters using standard ERM. We evaluated these two approaches using JTT's training algorithm and find that Antigone with EDM results in a 5.7% increase in WGA and a small 0.06% increase in average _target label accuracy_.
Second, in Table 4, we also compare Antigone+JTT against a synthetically labeled validation dataset that exactly follows the ideal MC noise model in Section 2.3. We find that on DP Gap and EO Gap fairness metrics, Antigone's results are comparable (in fact sometimes slightly better) with those derived from the ideal MC model. On WGA, the most challenging fairness metric to optimize for, we find that the ideal MC model has a best-case WGA of 73.9% compared to Antigone's 69.4%. This reflects the loss in fairness due to the gap between the assumptions of the idealized model versus Antigone's implementation; however, the reduction in fairness is marginal when compared to the ERM baseline which has only a 38% WGA.
## 5 Related Work
Several works have observed that standard ERM training algorithms can achieve state-of-the-art accuracy on many tasks, but unintentionally make biased predictions for different sensitive attributes failing to meet the fairness objectives (Hovy and Sogaard, 2015; Oren et al., 2019; Hashimoto et al., 2018; Buolamwini and Gebru, 2018).
Methods that seek to achieve fairness are of three types: pre-processing, in-processing and post-processing algorithms. Pre-processing methods (Quadrianto et al., 2019; Ryu et al., 2018) focus on curating the dataset, for example by removing sensitive information or balancing the datasets. In-processing methods (Hashimoto et al., 2018; Agarwal et al., 2018; Zafar et al., 2019; Lahoti et al., 2020; Prost et al., 2019; Liu et al., 2021; Sohoni et al., 2020) alter the training mechanism by adding fairness constraints to the loss function or by training an adversarial framework to make predictions independent of sensitive attributes (Zhang et al., 2018). Post-processing methods (Hardt et al., 2016; Wang et al., 2020; Savani et al., 2020) alter the outputs, e.g., by using different thresholds for different sensitive attributes. In this work, we focus on in-processing algorithms.
Prior in-processing algorithms, including the ones referenced above, assume access to sensitive attributes on the training and validation datasets. Recent work sought to train fair models without training data annotations (Liu et al., 2021; Nam et al., 2020; Hashimoto et al., 2018; Creager et al., 2021) but, except for GEORGE (Sohoni et al., 2020),
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & Antigone (w/o EDM) & GEORGE & GEORGE & Antigone \\ & & & (\(k=2\)) & (w/ EDM) \\ \hline \multicolumn{5}{c}{CelebA (F1 Scores)} \\ \hline BM & 0.28\(\pm\)0.04 & 0.13\(\pm\)0.02 & 0.12 \(\pm\)0.01 & **0.35\(\pm\)0.04** \\ BW & 0.95\(\pm\)0.01 & 0.43\(\pm\)0.04 & 0.51\(\pm\)0.02 & **0.96\(\pm\)0.00** \\ NBW & 0.22\(\pm\)0.02 & 0.42\(\pm\)0.01 & **0.6\(\pm\)0.01** & 0.22\(\pm\)0.01 \\ NBM & 0.67\(\pm\)0.01 & 0.4\(\pm\)0.02 & 0.31\(\pm\)0.01 & **0.68\(\pm\)0.01** \\ \hline Ps. Acc. & 0.59\(\pm\)0.01 & 0.33\(\pm\)0.01 & 0.48\(\pm\)0.00 & **0.60\(\pm\)0.00** \\ \hline \multicolumn{5}{c}{Waterbirds (F1 Scores)} \\ \hline WL & 0.41\(\pm\)0.02 & 0.43\(\pm\)0.02 & 0.52\(\pm\)0.01 & **0.76\(\pm\)0.03** \\ WW & 0.72\(\pm\)0.00 & 0.36\(\pm\)0.02 & 0.43\(\pm\)0.02 & **0.83\(\pm\)0.01** \\ LW & 0.58\(\pm\)0.02 & 0.44\(\pm\)0.03 & 0.55\(\pm\)0.03 & **0.78\(\pm\)0.04** \\ LL & 0.76\(\pm\)0.01 & 0.34\(\pm\)0.02 & 0.55\(\pm\)0.03 & **0.84\(\pm\)0.02** \\ \hline Ps. Acc. & 0.68\(\pm\)0.01 & 0.30\(\pm\)0.02 & 0.53\(\pm\)0.01 & **0.81\(\pm\)0.02** \\ \hline \hline \end{tabular}
\end{table}
Table 1: F1 scores and _pseudo label accuracies_ (Ps. Acc.). We mark the best performance in bold. BM (blond men), BW (blond women), NBW (non-blond women) and NBM (non-blond men) for CelebA; WL (waterbirds landbkgd), WW (waterbirds waterbkgd), LW (landbirds waterbkgd) and LL (landbirds landbkgd) for Waterbirds.
require sensitive attributes on the validation dataset to tune the hyper-parameters. Like GEORGE, we seek to train fair classification models without ground-truth sensitive information on either the training or validation datasets.
Antigone differs from GEORGE in three ways: (1) Unlike GEORGE, we account for both the model prediction and the ground-truth target label to generate pseudo-sensitive attributes. (2) The hyper-parameters of the clustering step in GEORGE are fixed from the literature and not specifically tuned for each dataset; in this paper, we propose a more principled approach that tunes the model's hyper-parameters in an unsupervised fashion to obtain noisy sensitive features. (3) Finally, GEORGE only focuses on worst-group accuracy, whereas Antigone can be adapted to different notions of fairness.
A related body of work develops _post-processing_ methods to improve fairness without access to sensitive attributes but assuming a small set of labelled data for auditing (Kim et al., 2019). One could use Antigone to create this auditing dataset, albeit with noise. Evaluating Antigone with these post-processing methods is an avenue for future work.
## 6 Conclusion
In this paper, we propose Antigone, a method to enable hyper-parameter tuning for fair ML models without access to sensitive attributes on training or validation sets. Antigone generates high-quality pseudo-sensitive attribute labels on validation data by training a family of biased classifiers using standard ERM and using correctly (incorrectly) classified examples as proxies for majority (minority) group membership. We propose a novel EDM metric based approach to pick the most biased model from this family and provide theoretical justification for this choice using the ideal MC noise model. The resulting validation dataset with pseudo-sensitive attribute labels can then be used to tune the hyper-parameters of a fair training algorithm like JTT or GEORGE. We show that Antigone produces the highest quality of sensitive attributes compared to the state-of-the-art.
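To make the mechanism concrete, the sketch below illustrates the two steps described above on a held-out validation set: deriving pseudo-sensitive attributes from a biased ERM model's correct/incorrect predictions, and scoring candidate models by the distance between the two pseudo groups. This is a minimal illustration rather than the released implementation; the function names, the feature-space form of the EDM score, the largest-score selection rule, and the scikit-learn-style `predict` interface are our assumptions.

```python
# Minimal sketch (assumed interface): pseudo-sensitive attributes and an EDM-style model-selection score.
import numpy as np

def pseudo_sensitive_attributes(model, X_val, y_val):
    """Correctly classified examples proxy the majority group (1); misclassified ones the minority group (0)."""
    preds = model.predict(X_val)          # assumes a scikit-learn-style classifier
    return (preds == y_val).astype(int)

def edm_score(X_val, pseudo_attr):
    """Assumed form: Euclidean distance between the mean feature vectors of the two pseudo groups."""
    mu_major = X_val[pseudo_attr == 1].mean(axis=0)
    mu_minor = X_val[pseudo_attr == 0].mean(axis=0)
    return float(np.linalg.norm(mu_major - mu_minor))

# Assumed selection rule: among a family of ERM models, keep the one with the largest EDM score,
# then hand its pseudo labels to a fair training algorithm (e.g., JTT or GEORGE) for tuning.
```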
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Val. Thresh. & Method & DP Gap & EO Gap & WGA \\ \hline
[94, 95] & Antigone + JTT & (94.9, 14.7) & (94.6, 33.7) & (94.5, 61.7) \\ & Ideal MC + JTT & (94.9, 14.7) & (94.4, 34.1) & (94.4, 58.3) \\ \hline
[93, 94] & Antigone + JTT & (93.7, 12.2) & (93.9, 30.3) & (93.3, 60.0) \\ & Ideal MC + JTT & (93.7, 12.2) & (93.5, 26.3) & (93.7, 65.0) \\ \hline
[92, 93] & Antigone + JTT & (93.1, 12.1) & (92.4, 22.9) & (92.9, 65.6) \\ & Ideal MC + JTT & (93.1, 12.1) & (93.0, 22.7) & (93.2, 69.4) \\ \hline
[91, 92] & Antigone + JTT & (91.9, 9.3) & (91.1, 13.9) & (91.1, 66.7) \\ & Ideal MC + JTT & (91.9, 9.3) & (92.2, 19.1) & (91.8, 73.9) \\ \hline
[90, 91] & Antigone + JTT & (91.1, 7.9) & (91.1, 13.9) & (91.1, 66.7) \\ & Ideal MC + JTT & (90.9, 8) & (90.4, 18.9) & (91.4, 72.2) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Antigone + JTT vs Ideal MC + JTT (Avg. _target label accuracy_, Fairness) comparison on test data for different validation accuracy thresholds on the CelebA dataset. Lower DP and EO gaps are better. Higher WGA is better.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Val. Thresh. & Method & DP Gap & EO Gap & Worst-group Acc. \\ \hline
[94, 95] & Antigone + JTT & (94.6, 15.0)\(\pm\)(0.2, 0.7) & (94.7, 30.1)\(\pm\)(0.2, 3.2) & (94.4, 59)\(\pm\)(0.2, 4.7) \\ & Ground-Truth + JTT & (94.7, 14.9)\(\pm\)(0.2, 0.6) & (94.5, 30.4)\(\pm\)(0.2, 2.3) & (94.3, 62.1)\(\pm\)(0.3, 3.2) \\ \hline
[93, 94] & Antigone + JTT & (93.7, 13.1)\(\pm\)(0.2, 0.7) & (93.6, 26.4)\(\pm\)(0.5, 5.0) & (93.4, 62.6)\(\pm\)(0.2, 7.69) \\ & Ground-Truth + JTT & (93.6, 13.1)\(\pm\)(0.1, 0.46) & (93.6, 22.7)\(\pm\)(0.3, 2.7) & (93.4, 67.9)\(\pm\)(0.1, 1.19) \\ \hline
[92, 93] & Antigone + JTT & (92.7, 11.1)\(\pm\)(0.2, 0.5) & (92.3, 20.2)\(\pm\)(0.2, 3.4) & (92.7, 68.1)\(\pm\)(0.4, 3.7) \\ & Ground-Truth + JTT & (92.7, 11.2)\(\pm\)(0.3, 0.5) & (92.7, 16.9)\(\pm\)(0.4, 2.9) & (92.7, 72.5)\(\pm\)(0.2, 1.3) \\ \hline
[91, 92] & Antigone + JTT & (91.7, 9.6)\(\pm\)(0.1, 0.5) & (91.5, 16.3)\(\pm\)(0.3, 3.4) & (91.3, 63.2)\(\pm\)(0.3, 2.66) \\ & Ground-Truth + JTT & (91.8, 9.7)\(\pm\)(0.2, 0.5) & (91.8, 10.1)\(\pm\)(0.3, 4.1) & (91.8, 77.3)\(\pm\)(0.1, 2.4) \\ \hline
[90, 91] & Antigone + JTT & (91.0, 8.3)\(\pm\)(0.2, 0.4) & (90.9, 13.1)\(\pm\)(0.1, 3.6) & (90.9, 63.1)\(\pm\)(0.5, 4.4) \\ & Ground-Truth + JTT & (91.0, 8.4)\(\pm\)(0.2, 0.4) & (90.7, 6.8)\(\pm\)(0.4, 3.7) & (91.4, 78.6)\(\pm\)(0.2, 2.0) \\ \hline & ERM & (95.8, 18.6)\(\pm\)(0.0, 0.3) & (95.8, 46.4)\(\pm\)(0.2, 2.2) & (95.8, 38.7)\(\pm\)(0.0, 2.8) \\ \hline \hline \end{tabular}
\end{table}
Table 2: (Avg. _target label accuracy_, Fairness) on test data for different validation accuracy thresholds on the CelebA dataset. Lower DP and EO gaps are better. Higher WGA is better.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{CelebA} & \multicolumn{2}{c}{Waterbirds} \\ \cline{2-5} Method & Avg Acc & WGA & Avg Acc & WGA \\ \hline ERM & **95.7\(\pm\)0.41** & 34.5\(\pm\)31 & 95.9\(\pm\)0.2 & 29.7\(\pm\)1.6 \\ \hline GEORGE & 93.6\(\pm\)0.3 & 60.4\(\pm\)23 & 95.5\(\pm\)0.7 & 50.0\(\pm\)5.8 \\ Antigone + GEORGE & 93.3\(\pm\)0.3 & 62.1\(\pm\)12 & **96.0\(\pm\)0.2** & **57.4\(\pm\)6.6** \\ \hline GEORGE (k=2) & 94.6\(\pm\)0.1 & 62.6\(\pm\)21 & 95.0\(\pm\)0.8 & 46.7\(\pm\)11.7 \\ Antigone + GEORGE (k=2) & 94.2\(\pm\)0.3 & **65.3\(\pm\)29** & 95.8\(\pm\)0.6 & 54.4\(\pm\)7.1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of GEORGE using Antigone’s noisy validation data compared with GEORGE by itself. We observe that on the CelebA and Waterbirds datasets, GEORGE \(+\) Antigone outperforms GEORGE, even if GEORGE assumes knowledge of the number of clusters \((k=2)\) in its clustering step. We show errors over five runs. |
2308.13902 | Spurious-Free Lithium Niobate Bulk Acoustic Resonator for Piezoelectric
Power Conversion | Recently, piezoelectric power conversion has shown great benefits from
replacing the bulky and lossy magnetic inductor in a traditional power
converter with a piezoelectric resonator due to its compact size and low loss.
However, the converter performance is ultimately limited by existing resonator
designs, specifically by moderate quality factor (Q), moderate
electromechanical coupling (kt2), and spurious modes near resonance. This work
reports a spurious-free lithium niobate (LiNbO3) thickness-extensional mode
bulk acoustic resonator design, demonstrating Q of 4000 and kt2 of 30% with a
fractional suppressed region of 62%. We first propose a novel grounded ring
structure for spurious-free resonator design, then validate its performance
experimentally. Upon further work, this design could be extended to
applications requiring spurious suppression, such as filters, tunable
oscillators, transformers, etc. | Kristi Nguyen, Eric Stolt, Weston Braun, Vakhtang Chulukhadze, Jeronimo Segovia-Fernandez, Sombuddha Chakraborty, Juan Rivas-Davila, Ruochen Lu | 2023-08-26T15:16:47Z | http://arxiv.org/abs/2308.13902v1 | # Spurious-Free Lithium Niobate Bulk Acoustic Resonator for Piezoelectric Power Conversion
###### Abstract
Recently, piezoelectric power conversion has shown great benefits from replacing the bulky and lossy magnetic inductor in a traditional power converter with a piezoelectric resonator due to its compact size and low loss. However, the converter performance is ultimately limited by existing resonator designs, specifically by moderate quality factor (\(Q\)), moderate electromechanical coupling (\(k_{t}^{2}\)), and spurious modes near resonance. This work reports a spurious-free lithium niobate (LiNbO\({}_{3}\)) thickness-extensional mode bulk acoustic resonator design, demonstrating \(Q\) of 4000 and \(k_{t}^{2}\) of 30% with a fractional suppressed region of 62%. We first propose a novel grounded ring structure for spurious-free resonator design, then validate its performance experimentally. Upon further work, this design could be extended to applications requiring spurious suppression, such as filters, tunable oscillators, transformers, etc.
piezoelectric power conversion; lithium niobate; piezoelectric resonator; spurious suppression; acoustic resonator
## I Introduction
Due to their shorter acoustic wavelength and lower loss, acoustic devices have replaced their radio-frequency (RF) counterparts with commercial success in applications such as front-end filters and oscillators [1, 2, 3, 4, 5, 6, 7, 8, 9]. More recently, piezoelectric power conversion has emerged as yet another application, where inductors are replaced with acoustic resonators in power converters to reduce form factor and improve performance.
Piezoelectric power converter circuits are modeled as a resonator connected to various switch configurations (\(S_{1}\), \(S_{2}\), \(S_{3}\), \(S_{4}\)) and direct current (DC) voltage sources (\(V_{in}\), \(V_{out}\)) [Fig. 1 (a)]. The resonator is modeled with an equivalent electrical circuit called the Butterworth-Van Dyke (BVD) circuit, which consists of a series motional inductor, resistor, and capacitor (\(L_{m}\), \(R_{m}\), \(C_{m}\)) connected in parallel with a static capacitance (\(C_{0}\)).
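As an illustration of the BVD model just described, the short sketch below computes the impedance of the motional branch in parallel with \(C_{0}\) and locates the series and parallel resonances from the impedance minimum and maximum. It is a generic numerical example in Python; the component values and frequency range are placeholders and are not the parameters of the device reported in this work.

```python
# Illustrative sketch of the Butterworth-Van Dyke (BVD) model: series Lm-Rm-Cm branch in parallel with C0.
# Component values are placeholders, not the fabricated resonator's parameters.
import numpy as np

def bvd_impedance(f, Lm, Rm, Cm, C0):
    w = 2 * np.pi * f
    z_motional = Rm + 1j * w * Lm + 1.0 / (1j * w * Cm)   # series motional branch
    z_static = 1.0 / (1j * w * C0)                        # static capacitance branch
    return z_motional * z_static / (z_motional + z_static)

f = np.linspace(0.8e6, 1.4e6, 4001)                        # frequency sweep in Hz (placeholder range)
Z = bvd_impedance(f, Lm=10e-3, Rm=5.0, Cm=2e-12, C0=50e-12)
f_series = f[np.argmin(np.abs(Z))]    # impedance minimum ~ series resonance (maximum output power)
f_parallel = f[np.argmax(np.abs(Z))]  # impedance maximum ~ parallel resonance
print(f"fs = {f_series/1e6:.3f} MHz, fp = {f_parallel/1e6:.3f} MHz")
```

Because the converter operates in the inductive region between the series and parallel resonances, spurious modes inside this band directly limit the usable operating range.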
The converter's operation range is restricted by the resonator's inductive behavior, i.e., between series and parallel resonances. Although the converter is excited with DC voltages, zero-voltage switching sequences are leveraged to induce a motional current within the resonator at the operating frequency [Fig 1 (b)]. By tuning the switches' timings, the operating frequency can be varied, which ultimately determines the converter's output power for a given voltage conversion ratio. These switching sequences comprise _connected_, _open_, and _zero_ stages that soft-charge the resonator's static capacitance \(C_{0}\) and minimize switching losses [11]. Maximum power output occurs near series resonance, and as the operating frequency increases, output power decreases and converter efficiency increases [10]. In essence, piezoelectric power conversion utilizes the piezoelectric resonator as the converter's sole energy storage element [Fig. 1 (b)].
Although the working principle has been proven [12, 13, 14, 15], the piezoelectric power converter's performance is limited by the integrated resonator, specifically by moderate quality factor (\(Q\)), moderate electromechanical coupling (\(k_{t}^{2}\)), and spurious modes near resonance. A lower \(Q\cdot k_{t}^{2}\) product reduces converter efficiency, while spurious modes between series and parallel resonances limit the converter's operating range [13].
However, conventional spurious suppression methods (e.g., apodization and raised/recessed frames [1, 16, 17, 18]) are insufficient, as they tend to spread out spurious modes or prove difficult to implement at MHz frequencies. Thus, we propose a novel spurious-free bulk acoustic resonator design in lithium niobate (LiNbO\({}_{3}\)) that surpasses the state-of-the-art (SoA, Table 1) in spurious suppression with a high figure-of-merit (FoM, \(Q\cdot k_{t}^{2}\)). Future research could extend this design to
Fig. 1: (a) Circuit schematic of piezoelectric resonator, modeled by BVD circuit, integrated into power converter. (b) Idealized voltage and current waveforms through resonator for 40 V input, 30 V output [32].
other applications, including filters, oscillators, transformers, etc.
## II Design & Simulation
The bulk acoustic, thickness-extensional (TE) resonator was designed using LiNbO\({}_{3}\) for its intrinsically high electromechanical coupling and low-loss material properties [19, 20, 21, 22, 23, 24]. However, selecting the orientation and the direction of the applied electric field is challenging as LiNbO\({}_{3}\) is highly anisotropic [25]. The goal is to choose a crystal orientation that increases the coupling of the TE mode to induce uniform vibration, thus increasing \(Q\) and minimizing parasitic couplings. Fig. 2 plots the coupling coefficient as a function of the rotation about the Y-axis, namely \(k_{33}^{2}=e_{33}^{2}/(c_{33}\,\varepsilon_{33})\) for the TE mode and the corresponding shear coupling for the thickness shear (TS) mode. In order to optimize TE coupling while minimizing other modes, 36Y-cut LiNbO\({}_{3}\) was selected as a commercially viable option. Thanks to its unique dispersion behavior, 36Y-cut LiNbO\({}_{3}\) is an optimal choice for confining energy of the TE mode [26, 27, 28].
A novel "grounded ring" structure is proposed for spurious suppression. The novel resonator features center electrodes on the top and bottom that are electrically excited in opposing configurations. These electrodes are further surrounded by a non-metallized separation gap, which are then surrounded by a metallized "ring" that is electrically grounded on top and bottom [Fig. 3 (a)]. From the top-view, the center electrode, non-metallized separation gap, and ring are circularly-shaped for spurious suppression [Fig. 3 (b)]. Dimensions are enumerated in Fig. 3 (c).
The dimensions of the grounded ring and separation gap were optimized via parametric sweep. It was found that a smaller separation gap generally improved performance but posed potential challenges with power handling, while a larger ring width improved performance yet saturated after a certain threshold was reached.
In principle, the separation gap generated by the grounded ring not only maintains the TE mode, but also eliminates lateral spurious tones. Unlike [29], where a recessed frame removes lateral modes by altering the dispersion characteristics, our design uses a grounded ring for spurious-free operation by electrically loading the piezoelectric material such that it reinforces the TE coupling [30]. In addition, the circular shape leverages the isotropic piezoelectric coefficient e\({}_{33}\) while suppressing the anisotropic e\({}_{31}\) in 36Y-cut LiNbO\({}_{3}\) [31].
This novel design was simulated using three-dimensional (3D) finite element analysis (FEA) in COMSOL. For comparison, two reference designs were also simulated. First, a rectangular reference TE design is shown, consisting of rectangular electrodes centered on the top and bottom of LiNbO\({}_{3}\). The simulated impedance and resistance reveal large spurious modes in the inductive region of the resonator, making
Fig. 4: Simulated impedances (Z) and resistances (R) of the rectangular reference design (a), the circular reference design (c), and the novel grounded ring design (e). Displacement at resonance of all three designs, marked by triangles in the impedance plots, are illustrated in (b, d, f).
Fig. 3: Illustration of (a) side-view and (b) top-view of the proposed resonator design with grounded ring, with parameters tabulated in (c). All electrodes have an aluminum (Al) thickness of 300 nm.
Fig. 2: Coupling coefficient, \(k_{t}^{2}\), as the rotation from the Y-axis varies, for the TS and TE modes. 36Y-cut LiNbO\({}_{3}\) was selected for this work, marked by the star.
it virtually unusable for piezoelectric power conversion [Fig. 4 (a)]. The displacement mode shape reveals multiple modes at resonance, indicating non-uniform vibration [Fig. 4 (b)].
Second, a circular reference TE design is shown, where the electrodes are designed to be circular. While the active area has decreased compared to the rectangular reference design, resulting in a slight increase in resistance \(R_{\mathrm{m}}\), there is some spurious suppression near the parallel resonance [Fig. 4 (c)]. While the circular electrode shape mitigates some spurious modes by leveraging the anisotropy in LiNbO\({}_{3}\)[Fig. 4 (d)], the resonator still needs to further suppress lateral wave propagation, especially near resonance.
Lastly, our proposed design, which improves upon the circular TE design by adding the grounded ring, is simulated. The impedance and resistance are completely spurious-free [Fig. 4 (e)], thus increasing the spurious-suppressed region for increased converter operation range. The ring design vibrates with much more uniformity and greater amplitude, with little-to-no lateral mode shapes detected [Fig. 4 (f)].
## III Fabrication
After the design is thoroughly validated in FEA, the proposed resonator is fabricated with standard cleanroom procedures. Lithography is performed on 4-inch 0.3 mm thick 36Y-cut LiNbO\({}_{3}\), provided by Precision Micro-Optics, to form the electrode and ring patterns [32]. Afterwards, 300 nm of aluminum (Al) is deposited on both sides with an e-beam evaporator. The wafer thickness was chosen based on the frequency specifications set by the desired power converter operation. A clearly defined non-metallized gap separates the electrode from the ring [Fig. 5 (a)]. The wafer is then diced and the individual resonator is epoxied at the corners and wire bonded to the testbed [Fig. 5 (b)]. The resonator itself is 18 x 18 mm\({}^{2}\) in size, while the entire mounted device has an area of 28 x 28 mm\({}^{2}\). Copper traces are routed to an SMA connector for characterization.
## IV Results
The measured impedance, resistance, and Bode \(Q\)[33] of the rectangular reference and novel designs are compared in Fig. 6. The rectangular reference design [Fig. 6 (a)] features the same design as that in Fig. 4 (a). The measured results show huge spurious modes that are greatly suppressed in the ring design [Fig. 6 (b)]. The remaining spurious modes in the proposed design are likely caused by wafer thickness variations. Both fabricated devices demonstrate Bode \(Q\) around 4000 [Fig. 6 (c-d)]. Since Bode Q depends on group delay, it is extremely sensitive to spurious modes. The reference design [Fig. 6 (c)] yields an inconsistent value of \(Q\) as the frequency varies. In contrast, our proposed design [Fig. 6 (d)] measures a smoother and more constant \(Q\) over a broader frequency range, suggesting a completely spurious-free performance.
Finally, the proposed design features \(k_{t}^{2}\) of 30% with a spurious-suppressed region of 0.72 MHz and a fractional suppressed region of 62%. The spurious-suppressed region is defined as the frequency range where the resistance is no larger than 20 x R\({}_{\mathrm{m}}\) (the minimum resistance), and the fractional suppressed region is the ratio of the spurious-suppressed region to the difference between the series and parallel resonance frequencies. These metrics characterize spurious suppression over a range of frequencies; wider spurious suppression expands the converter's range of output powers.
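For clarity, the sketch below shows one way to compute these two metrics from a measured frequency sweep; it is our own post-processing illustration (function and variable names are not from the paper), and it assumes the suppressed region is a single contiguous band around resonance.

```python
# Minimal sketch (assumed post-processing): spurious-suppressed region and fractional suppressed region
# from measured resistance R(f) and impedance magnitude |Z(f)| arrays over a frequency sweep freq.
import numpy as np

def suppression_metrics(freq, resistance, z_mag, threshold=20.0):
    r_min = resistance.min()
    mask = resistance <= threshold * r_min           # frequencies where R <= 20 x Rm
    suppressed_bw = freq[mask].max() - freq[mask].min() if mask.any() else 0.0
    f_series = freq[np.argmin(z_mag)]                # series resonance ~ |Z| minimum
    f_parallel = freq[np.argmax(z_mag)]              # parallel resonance ~ |Z| maximum
    fractional = suppressed_bw / (f_parallel - f_series)
    return suppressed_bw, fractional
```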
The proposed bulk acoustic resonator surpasses the SoA spurious suppression methods (Table 1) with the highest fractional suppressed region of 62%, while maintaining a high FoM of 1200. Thus, our design shows great potential for not only piezoelectric power conversion, but also any application requiring high FoM and no spurious modes.
## V Conclusion
This work reports a spurious-free bulk LiNbO\({}_{3}\) acoustic resonator design for piezoelectric power conversion with a high \(Q\) of 4000, \(k_{t}^{2}\) of 30%, and a large fractional suppressed region of 62%. First, the optimal LiNbO\({}_{3}\) orientation cut was selected based on its ability to maximize the TE mode. Then, a novel resonator topology was designed and extensively validated against existing conventional designs. Lastly, this design was fabricated and characterized, showing excellent results. Future research could extend the proposed grounded ring-based spurious-free resonator design to other applications, such as filters, oscillators, and transformers.
Fig. 5: Zoomed-in view (a) of the fabricated device epoxied and wire bonded to the substrate (b).
Fig. 6: Measured impedance/resistance and Bode Q for the reference device (a,c) and the proposed ring device (b,d), with the spurious-suppressed region highlighted in yellow. |
2307.12241 | Explainable Depression Detection via Head Motion Patterns | While depression has been studied via multimodal non-verbal behavioural cues,
head motion behaviour has not received much attention as a biomarker. This
study demonstrates the utility of fundamental head-motion units, termed
\emph{kinemes}, for depression detection by adopting two distinct approaches,
and employing distinctive features: (a) discovering kinemes from head motion
data corresponding to both depressed patients and healthy controls, and (b)
learning kineme patterns only from healthy controls, and computing statistics
derived from reconstruction errors for both the patient and control classes.
Employing machine learning methods, we evaluate depression classification
performance on the \emph{BlackDog} and \emph{AVEC2013} datasets. Our findings
indicate that: (1) head motion patterns are effective biomarkers for detecting
depressive symptoms, and (2) explanatory kineme patterns consistent with prior
findings can be observed for the two classes. Overall, we achieve peak F1
scores of 0.79 and 0.82, respectively, over BlackDog and AVEC2013 for binary
classification over episodic \emph{thin-slices}, and a peak F1 of 0.72 over
videos for AVEC2013. | Monika Gahalawat, Raul Fernandez Rojas, Tanaya Guha, Ramanathan Subramanian, Roland Goecke | 2023-07-23T06:39:51Z | http://arxiv.org/abs/2307.12241v1 | # Explainable Depression Detection via Head Motion Patterns
###### Abstract.
While depression has been studied via multimodal non-verbal behavioural cues, head motion behaviour has not received much attention as a biomarker. This study demonstrates the utility of fundamental head-motion units, termed _kinemes_, for depression detection by adopting two distinct approaches, and employing distinctive features: (a) discovering kinemes from head motion data corresponding to both depressed patients and healthy controls, and (b) learning kineme patterns only from healthy controls, and computing statistics derived from reconstruction errors for both the patient and control classes. Employing machine learning methods, we evaluate depression classification performance on the _BlackDog_ and _AVEC2013_ datasets. Our findings indicate that: (1) head motion patterns are effective biomarkers for detecting depressive symptoms, and (2) explanatory kineme patterns consistent with prior findings can be observed for the two classes. Overall, we achieve peak F1 scores of 0.79 and 0.82, respectively, over BlackDog and AVEC2013 for binary classification over episodic _thin-slices_, and a peak F1 of 0.72 over videos for AVEC2013.
Kinemes, Head-motion, Depression detection, Explainability
## 1. Introduction
Clinical depression, a prevalent mental health condition, is considered one of the leading contributors to the global health-related burden (Shergel, 2013; Goh et al., 2014), affecting millions of people worldwide (Shergel, 2013; Goh et al., 2014). As a mood disorder, it is characterised by a prolonged (\(>\) two weeks) feeling of sadness, worthlessness and hopelessness, a reduced interest and a loss of pleasure in normal daily life activities, sleep disturbances, tiredness and lack of energy. Depression can lead to suicide in extreme cases (Shergel, 2013) and is often linked to comorbidities such as anxiety disorders, substance abuse disorders, hypertensive diseases, metabolic diseases, and diabetes (Shergel, 2013; Goh et al., 2014). Although effective treatment options are available, diagnosing depression through self-report and clinical observations presents significant challenges due to the inherent subjectivity and biases involved.
Over the last decade, researchers from affective computing and psychology have focused on investigating objective measures that can aid clinicians in the initial diagnosis and monitoring of treatment progress of clinical depression (Shergel, 2013; Goh et al., 2014). A key catalyst to this progress is the availability of relevant datasets, such as AVEC2013 and subsequent challenges (Goh et al., 2014). In recent years, research on depression detection employing affective computing approaches has increasingly focused on leveraging non-verbal behavioural cues such as facial expressions (Shergel, 2013; Goh et al., 2014), body gestures (Goh et al., 2014), eye gaze (Goh et al., 2014), head movements (Goh et al., 2014) and verbal features (Shergel, 2013; Goh et al., 2014) extracted from multimedia data to develop distinctive features to classify individuals as depressed or healthy controls, or to estimate the severity of depression on a continuous scale.
In this study, we examine the utility of inherently interpretable head motion units, referred to as _kinemes_(Goh et al., 2014), for assessing depression. Initially, we utilise data from both healthy controls and depressed patients to discover a basis set of kinemes via the _(pitch, yaw, and roll)_ head pose angular data obtained from short overlapping time-segments (termed two-class kineme discovery or 2CKD). Further, we employ these kinemes to generate features based on the frequency of occurrence of distinctive, class-characteristic kinemes. Subsequently, we discover kineme patterns solely from head pose data corresponding to healthy controls (Healthy control kineme discovery or HCKD), and use them to represent both healthy and depressed class segments. A set of statistical features are then computed from the reconstruction errors between the raw and learned head-motion segments corresponding to both the depressed and control classes (see Figure 1). Using machine learning methodologies, we evaluate the performance of the features derived from the two approaches. Our results show that head motion patterns are effective behavioural cues for detecting depression. Additionally, explanatory class-specific kinemes patterns can be observed, in alignment with prior research.
This paper makes the following research contributions:
* A study of head movements as a biomarker for clinical depression, which so far has been understudied.
* Proposing the _kineme_ representation of motion patterns as an effective and explanatory means for depression analysis.
* A detailed investigation of various classifiers for 2-class and 4-class categorisation on the AVEC2013 and BlackDog datasets. We obtain peak F1-scores of 0.79 and 0.82, respectively, on _thin-slice_ chunks for binary classification on the BlackDog and AVEC2013 datasets, which compare favorably to prior approaches. Also, a video-level F1-score of 0.72 is achieved for 4-class categorisation on AVEC2013.
The remainder of this paper is organised as follows. Section 2 provides an overview of related work. Section 3 describes the kineme formulation, followed by Section 4 that details the explainable kineme features used as a representation of motion patterns. The methodology is presented in Section 5, while Section 6 provides details of the datasets, experimental settings, and classifiers used in this study. The experimental results are shown and discussed in Section 7. Finally, the conclusions are drawn in Section 8.
## 2. Related Work
In this section, we briefly review the literature focusing on (a) depression detection as a classification problem, and (b) depression detection using head motion patterns.
### Depression Analysis as a Classification Task
Traditionally, depression detection has been approached as a supervised binary classification task, with many studies relying on discriminative classifiers to distinguish between _healthy controls_ and _patients_ (Beng et al., 2015; Chen et al., 2017; Chen et al., 2018). A typical recognition accuracy of up to 80% demonstrates the promise of behavioural cues such as eye-blink and closed-eye duration rates, statistical features computed over the yaw, pitch and roll head-pose angles, _etc._, to differentiate the two classes. However, challenges involved in depression detection, such as limited clinically validated, curated data and skewed data distributions, have been acknowledged in the literature (Chen et al., 2017; Chen et al., 2018).
Recent efforts have sought to learn patterns indicative of only the target class and reformulate depression detection as a one-class classification problem to mitigate the issues with imbalanced datasets (Chen et al., 2017; Chen et al., 2018). Studies have attempted to learn features associated with control participants and treat inputs that deviate from these patterns as _anomalous_ (Garay et al., 2017; Garay et al., 2017). Gerych _et al._ (Gerych et al., 2017) formulate the task as anomaly detection by leveraging autoencoders to learn features of the non-depressed class and treating depressed user data as outliers. Similarly, Mourao-Miranda _et al._ (Mourao-Miranda et al., 2018) employ a one-class SVM to classify patients as outliers compared to healthy participants based on the fMRI responses to sad facial expressions. Conversely, a few studies explore one-class classification by learning features characterising the depressed class, and treating non-depressed subjects as outliers (Chen et al., 2017; Chen et al., 2018).
### Depression Detection via Head Motion Cues
Many studies have focused on non-verbal behavioural cues, such as body gestures (Srivastava et al., 2014; Srivastava et al., 2014), facial expressions (Chen et al., 2017; Chen et al., 2018; Chen et al., 2018), their combination (Srivastava et al., 2014) and speech features (Garay et al., 2017; Srivastava et al., 2014; Srivastava et al., 2014) as biomarkers for depression diagnosis and rehabilitation utilising computational tools (Srivastava et al., 2014). Head motion patterns have nevertheless received little attention. Psychological research on depression assessment has identified head motion as a significant non-verbal cue for depression, with more pronounced behavioural changes in the hand and head regions than in other body parts for depressed patients (Srivastava et al., 2014). Waxer _et al._ (Waxer et al., 2014) found that depressed subjects are more likely to keep their heads in a downward position and exhibit significantly reduced head nodding compared to healthy subjects (Garay et al., 2017). Another study focusing on social interactions identified the reduced involvement of depressed patients in conversations, where their behaviour was characterised by less encouragement (head nodding and backchanneling while listening) and fewer head movements (Srivastava et al., 2014).
From a computational standpoint, only a few studies have employed head pose and movement patterns for automatic depression detection. Alghowinem _et al._ (Alghowinem et al., 2017) analysed head movements by modelling statistical features extracted from the 2D Active Appearance Model (AAM) projection of a 3D face and demonstrated the efficacy of head pose as a behavioural cue. Another study (Srivastava et al., 2014) generated a histogram of head movements normalised over time to highlight the diminished movements of depressed patients due to psychomotor retardation, characterised by a more frequent occurrence of static head positions than in healthy controls. Several studies (Garay et al., 2017; Srivastava et al., 2014; Srivastava et al., 2014; Srivastava et al., 2014) explored the utilisation of head motion as a complementary cue to other modalities to enhance detection performance. For instance, several studies (Chen et al., 2017; Chen et al., 2017) combined head pose with
Figure 1. Overview: We learn kinemes for the control class, and the reconstruction errors between the raw and reconstructed head-motion segments, obtained via kinememe clustering, are computed for both the control and depressed classes. Statistical descriptors over the yaw, pitch and roll dimensions (a total of \(8\times 3\) features) are utilized for depression detection via machine learning techniques.
speech behaviour and eye gaze to develop statistical features for depression analysis. Generalisation across different cross-cultural datasets was attempted in (Blei et al., 2017) by using head pose and eye gaze based temporal features. Kacem _et al._ (Kacem et al., 2017) encoded head motion dynamics with facial expressions to classify depression based on severity, while Dibeklioglu _et al._ (Dibeklioglu et al., 2017) included vocal prosody in combination with head and facial movements for depression detection.
### Novelty of the Proposed Approach
From the literature review, it can be seen that while a number of studies have employed head movements as a complementary cue in multimodal approaches, only a few studies have deeply explored head motion as a rich source of information. Further, the explainability of behavioural features, especially head motion features, for depression detection has not yet been explored in the literature. This study (a) is the first to propose the use of kinemes as depression biomarkers, (b) explores cues derived from head motion behaviour as potential biomarkers for depression; specifically, we show that kinemes learned from the depressed and control classes, or only the control class, enable accurate depression detection, and (c) shows that the learned kinemes also _explain_ depressed behaviours consistent with prior observations.
## 3. Kineme Formulation
This section describes our approach to discovering a set of elementary head motion units termed _kinemes_ from 3D head pose angles. These head pose angles are expressed as a time-series of short overlapping segments, which enables shift invariance. The segments are then projected onto a lower-dimensional space and clustered using a Gaussian Mixture Model (Zhu and Wang, 2017).
We extracted 3D head pose angles using the OpenFace tool (Deng et al., 2017) in terms of 3D Euler rotation angles, _pitch \((\theta_{p})\), yaw \((\theta_{y})\)_ and _roll \((\theta_{r})\)_. The head movement over a duration \(T\) is denoted as a time-series: \(\mathbf{\theta}=\{\theta_{p}^{1:T},\theta_{y}^{1:T},\theta_{r}^{1:T}\}\). We ensure that the rotation angles remain non-negative by defining the range in \([0^{\circ},360^{\circ}]\).
For each video, the multivariate time-series \(\mathbf{\theta}\) is divided into short overlapping segments of length \(\ell\) with overlap \(\ell/2\), where the \(i^{th}\) segment is represented as a vector \(\mathbf{h}^{(i)}=[\theta_{p}^{i:i+\ell}\,\theta_{y}^{i:i+\ell}\,\theta_{r}^{i:i+\ell}]\). Considering the total number of segments in any given video as \(s\), the characterisation matrix \(\mathbf{H}_{\mathbf{\theta}}\) for this video is defined as \(\mathbf{H}_{\mathbf{\theta}}=[\mathbf{h}^{(1)},\mathbf{h}^{(2)},\cdots,\mathbf{h}^{(s)}]\). Thus, for a training set of \(N\) samples, the head motion matrix is created as \(\mathbf{H}=[\mathbf{H}_{\mathbf{\theta}_{1}}|\mathbf{H}_{\mathbf{\theta}_{2}}|\cdots|\mathbf{H}_{\mathbf{\theta}_{N}}]\), with each column of \(\mathbf{H}\) representing a single head-motion segment. We decompose \(\mathbf{H}\in\mathbb{R}_{+}^{m\times n}\) into a basis matrix \(\mathbf{B}\in\mathbb{R}_{+}^{m\times q}\) and a coefficient matrix \(\mathbf{C}\in\mathbb{R}_{+}^{q\times n}\) using Non-negative Matrix Factorization (NMF), where \(m=3\ell\) and \(n=Ns\):
\[\min_{\mathbf{B}\geq 0,\mathbf{C}\geq 0}\|\mathbf{H}-\mathbf{B}\mathbf{C}\|_{F}^ {2} \tag{1}\]
where \(q\leq\min(m,n)\) and \(\|\cdot\|_{F}\) denotes the Frobenius norm. Rather than clustering the raw head motion segments, we employ a more interpretable and stable approach by clustering the coefficient vectors in the transformed space. To this end, we learn a Gaussian Mixture Model (GMM) using the columns of the coefficient matrix \(\mathbf{C}\) to produce \(\mathbf{C}^{*}\in\mathbb{R}_{+}^{q\times K}\), where \(K\ll Ns\). These vectors in the learned subspace are transformed back to the original head motion subspace defined by the Euler angles using \(\mathbf{H}^{*}=\mathbf{B}\mathbf{C}^{*}\). The columns of matrix \(\mathbf{H}^{*}\) represent the set of \(K\) kinemes as \(\{\mathcal{K}_{i}\}_{i=1}^{K}\).
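A minimal sketch of this discovery step is given below, assuming scikit-learn's NMF and GaussianMixture as the factorisation and clustering back-ends; the paper does not prescribe a specific implementation, and the hyper-parameter choices shown are placeholders.

```python
# Minimal sketch (assumed implementation) of kineme discovery from the segment matrix H (m x n),
# where m = 3*ell and each column is one head-pose segment h^{(i)}.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.mixture import GaussianMixture

def discover_kinemes(H, q=10, K=16, seed=0):
    nmf = NMF(n_components=q, init="nndsvda", max_iter=500, random_state=seed)
    C = nmf.fit_transform(H.T).T       # coefficient matrix C (q x n)
    B = nmf.components_.T              # basis matrix B (m x q), so H ~ B @ C
    gmm = GaussianMixture(n_components=K, covariance_type="diag", random_state=seed)
    gmm.fit(C.T)                       # cluster the coefficient vectors (columns of C)
    kinemes = B @ gmm.means_.T         # K kineme prototypes mapped back to pitch-yaw-roll space
    return B, gmm, kinemes
```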
Now, we can represent any head motion time-series \(\theta\) as a sequence of kinemes discovered from the input video set by associating each segment of length \(\ell\) from \(\theta\) with one of the kinemes. For each \(i^{th}\) segment in the time-series, we compute the characterisation vector \(\mathbf{h}^{(i)}\) and project it onto the transformed subspace defined by \(\mathbf{B}\) to yield \(\mathbf{c}^{(i)}\) such that:
\[\hat{\mathbf{c}}=\underset{\mathbf{c}^{(i)}\geq 0}{\arg\min}\|\mathbf{h}^{(i)}-\mathbf{B}\mathbf{c}^{(i)}\|_{F}^{2} \tag{2}\]
We then maximise the posterior probability \(P(\mathcal{K}_{i}\,|\,\hat{\mathbf{c}})\) over all kinemes to map the \(i^{th}\) segment to its corresponding kineme \(K^{(i)}\). In the same way, we compute the corresponding kineme label for each segment of length \(\ell\) to obtain a sequence of kinemes \(\{K^{(1)}\cdots K^{(s)}\}\), where \(K^{(j)}\in\mathcal{K}\), for all segments of the time-series \(\theta\).
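The per-segment projection of Eq. (2) and the posterior maximisation can be sketched as follows, using SciPy's non-negative least squares for the projection; this is our own illustration and reuses the `B` and `gmm` objects from the discovery sketch above.

```python
# Minimal sketch (assumed implementation): map a raw head-pose segment h (length 3*ell) to a kineme label.
import numpy as np
from scipy.optimize import nnls

def assign_kineme(h, B, gmm):
    c_hat, _ = nnls(B, h)                            # arg min_{c >= 0} ||h - B c||^2  (Eq. 2)
    posteriors = gmm.predict_proba(c_hat[None, :])   # P(kineme | c_hat) over the K components
    return int(np.argmax(posteriors))                # kineme label K^{(i)}

# A video is then encoded as the sequence [assign_kineme(h, B, gmm) for h in segments].
```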
## 4. Explainable Kineme Features
We now examine kineme patterns obtained from the depression datasets, namely _BlackDog_ (Blei et al., 2017) and _AVEC2013_ (Kacem et al., 2017) (described in Sec. 6.1). Using the _OpenFace_ (Deng et al., 2017) toolkit, we extracted _yaw_, _pitch_ and _roll_ angles per frame, and segmented each video into 2s and 5s-long chunks with 50% overlap for the AVEC2013 and BlackDog datasets, respectively. Considering \(K=16\) (Zhu and Wang, 2017), we extracted kinemes from both patient and healthy control segments, following the procedure outlined in Sec. 3. We further examined the kinemes learned for each dataset to identify the set of distinctive kinemes for the two classes. To obtain the most discriminative kinemes, we computed the relative frequency of occurrence of each kineme for the control and patient data, and selected the top five kinemes per class based on their relative frequency difference (see Sec. 5.1).
Selected kinemes corresponding to the maximal difference in their relative frequency of occurrence for the control and patient classes are visualised in Figures 2 (_BlackDog_) and 3 (_AVEC2013_). Examining the control-specific kinemes in Figs. 2 and 3, we observe a greater degree of movement for healthy subjects as compared to the predominantly static head pose conveyed by the patient-specific kinemes. Head nodding, characterised by pitch oscillations, and considerable roll angle variations can be noted for at least one control-class kineme; conversely, patient-specific kinemes exhibit relatively small changes over all head pose angular dimensions. These findings are reflective of reduced head movements in the depressed cohort compared to healthy individuals, which is consistent with observations made in past studies (Deng et al., 2017; K
were empirically chosen and provided the best results from among segment lengths spanning \(2s\) to \(7s\) for both datasets. For both approaches, a total of \(K=16\) kinemes are learned from the two datasets as per the procedure outlined in Section 3.
### Kineme Discovery from Two-class Data
To examine whether the kinemes discovered from head pose angles of both classes are effective cues for depression detection, we learn kinemes from segments corresponding to both patient and control videos. Upon discovering the kineme values, the _relative frequency_\(\eta_{K_{i}}\) of each kineme \(K_{i}\) is computed over the two classes as:
\[\eta_{K_{i}}=\frac{f(K_{i})}{\sum_{j=1}^{16}f(K_{j})} \tag{3}\]
where \(f(K_{i})\) represents the frequency of occurrence of the kineme \(K_{i}\) for a particular class. We then compute the relative frequency difference for each kineme between the two classes to identify the ten most differentiating kinemes (four kinemes per class are depicted in Figs. 2, 3). Next, we generate a feature set by extracting the frequencies of the selected kinemes over the thin-slice chunks considered for analysis. Thus, we obtain a 10-dimensional feature vector representing kineme frequencies for each chunk.
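A small sketch of the 2CKD feature construction is given below; it assumes kineme labels are available as integer sequences per chunk, and the helper names and top-5-per-class selection rule mirror the description above but are our own illustration.

```python
# Minimal sketch (assumed implementation) of the 2CKD features.
import numpy as np

def relative_frequencies(kineme_seq, K=16):
    counts = np.bincount(np.asarray(kineme_seq), minlength=K).astype(float)
    return counts / counts.sum()                        # Eq. (3)

def select_discriminative(control_seq, patient_seq, K=16, top=5):
    diff = relative_frequencies(control_seq, K) - relative_frequencies(patient_seq, K)
    control_typical = np.argsort(diff)[-top:]           # kinemes over-represented in controls
    patient_typical = np.argsort(diff)[:top]            # kinemes over-represented in patients
    return np.concatenate([control_typical, patient_typical])

def chunk_features(chunk_kinemes, selected, K=16):
    counts = np.bincount(np.asarray(chunk_kinemes), minlength=K)
    return counts[selected]                              # 10-dimensional frequency feature per chunk
```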
### Kineme Discovery from Control Data
Here, we learn kinemes representing head motion solely from the control cohort. Subsequently, head pose segments from both the patient and control classes are represented via the discovered kinemes, and reconstruction errors computed. Let the raw head pose vector \(\mathbf{h}^{(i)}\) for the \(i^{th}\) segment in the original subspace be denoted as:
\[\mathbf{h}^{(i)}=[\theta_{p}^{i:i+\ell}\,\theta_{y}^{i:i+\ell}\,\theta_{r}^{i:i+\ell}] \tag{4}\]
Let the kineme value associated with this segment be \(K^{(i)}\). Based on the kinemes discovered from the control cohort alone, we calculate the reconstructed kineme for the \(i^{th}\) segment as \(\tilde{\mathbf{h}}^{(i)}\). The reconstructed vector for each kineme is determined by converting the GMM cluster centre for each kineme from the learned space to the original _pitch-yaw-roll_ space. The reconstructed head pose vector for the segment is:
\[\tilde{\mathbf{h}}^{(i)}=[\tilde{\theta}_{p}^{i:i+\ell}\,\tilde{\theta}_{y}^{i:i+\ell}\,\tilde{\theta}_{r}^{i:i+\ell}] \tag{5}\]
To quantify the reconstruction error for both depressed patients and healthy controls, we compute the signed difference between the two vectors for each segment, which captures the deviation of the raw head pose vector from the GMM cluster centres. We calculate the difference vector \(\mathbf{d}^{(i)}\) for the \(i^{th}\) segment as:
\[\mathbf{d}^{(i)}=\mathbf{h}^{(i)}-\tilde{\mathbf{h}}^{(i)}=[d_{p}^{i:i+\ell}\,d_{y}^{i:i+\ell}\,d_{r}^{i:i+\ell}] \tag{6}\]
These signed difference values are summed over each angular dimension of pitch (\(p\)), yaw (\(y\)), and roll (\(r\)) for the segment:
\[s_{e}^{(i)}=\sum_{n=1}^{\ell}d_{e}^{\,i+n} \tag{7}\]
Figure 3. Plots of kinemes that occur more frequently for the minimally depressed (left) and patient (right) cohorts in _AVEC2013_.
Figure 2. Plots of kinemes that occur more frequently for the control (left) and patient (right) cohorts in the _BlackDog_ dataset.
where each \(s_{e}^{(i)}\) is calculated for each angular dimension \(e\in\{p,y,r\}\) over all segments of both classes. Depending on the thin-slice chunk duration considered for classification, we compute different descriptive statistics to generate the feature set. Considering the number of elementary kineme segments in the considered time-window to be \(n_{e}\), we obtain the following feature vector for each angle \(e\in\{p,y,r\}\):
\[\mathbf{as_{e}}=[|s_{e}^{(1)}|,|s_{e}^{(2)}|,\cdots,|s_{e}^{(n_{e})}|] \tag{8}\]
where \(|\cdot|\) represents the absolute value. We then calculate eight statistical features from the above vectors, namely, _minimum_, _maximum_, _range_, _mean_, _median_, _standard deviation_, _skewness_, and _kurtosis_ (total of \(8\times 3\) features over the yaw, pitch, roll dimensions).
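The sketch below illustrates this HCKD feature computation for one thin-slice chunk; the array layout, helper names, and the use of SciPy's skewness/kurtosis are our assumptions rather than the authors' code.

```python
# Minimal sketch (assumed implementation) of the HCKD reconstruction-error statistics (Eqs. 6-8).
import numpy as np
from scipy.stats import skew, kurtosis

def hckd_chunk_features(raw_segments, recon_segments, ell):
    """raw_segments, recon_segments: (n_segments, 3*ell) arrays laid out as [pitch | yaw | roll]."""
    d = raw_segments - recon_segments                        # signed differences (Eq. 6)
    # sum the signed differences within each segment, separately per angular dimension (Eq. 7)
    s = np.stack([d[:, k * ell:(k + 1) * ell].sum(axis=1) for k in range(3)], axis=1)
    a = np.abs(s)                                            # |s_e^{(i)}| for pitch, yaw, roll (Eq. 8)
    feats = []
    for col in a.T:                                          # eight statistics per angle -> 24 features
        feats += [col.min(), col.max(), col.max() - col.min(), col.mean(),
                  np.median(col), col.std(), skew(col), kurtosis(col)]
    return np.array(feats)
```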
## 6. Experiments
We perform binary classification on the BlackDog and AVEC2013 datasets, plus 4-class classification on AVEC2013. This section details our datasets, experimental settings and learning algorithms.
### Datasets
We examine two datasets in this study: clinically validated data collected at the Black Dog Institute - a clinical research facility focusing on the diagnosis and treatment of mood disorders such as anxiety and depression (referred to as _BlackDog_ dataset) - and the _Audio/Visual Emotion Challenge_ (AVEC2013) depression dataset.
**BlackDog Dataset (Dosovitskiy et al., 2017):** This dataset comprises responses from healthy controls and depression patients selected as per the criteria outlined in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). Healthy controls with no history of mental illness and patients diagnosed with severe depression were carefully selected (Dosovitskiy et al., 2017). For our analysis, we focus on the structured interview responses in (Dosovitskiy et al., 2017), where participants answered open-ended questions, asked by a clinician, about life events designed to elicit spontaneous self-directed responses. In this study, we analyse video data from 60 subjects (30 depressed patients and 30 healthy controls), with interview durations ranging from \(183-1200s\).
**AVEC2013 Dataset (Dosovitskiy et al., 2017):** Introduced for a challenge in 2013, this dataset is a subset of the audio-visual depressive language corpus (AViD-corpus) comprising 340 video recordings of participants performing different PowerPoint-guided tasks detailed in (Dosovitskiy et al., 2017). The videos are divided into three nearly equal partitions (training, development, and test), with videos ranging from \(20-50min\). Each video frame depicts only one subject, although some participants feature in multiple video clips. The participants completed a multiple-choice inventory based on the Beck Depression Inventory (BDI) (Boll et al., 2016), with scores ranging from 0 to 63 denoting the severity of depression. For binary classification, we dichotomise the recordings into non-depressed and depressed cohorts as per the BDI scores. Subjects with a BDI score \(\leq 13\) are categorised as _non-depressed_, while the others are considered _depressed_.
**AVEC2013 Multi-Class Classification:** For fine-grained depression detection over the AVEC2013 dataset, we categorise the dataset based on the BDI score into four classes as detailed below:
### Experimental Settings
**Implementation Details:** For binary classification, we evaluate performance on the smaller _BlackDog_ dataset via 5 repetitions of 10-fold cross-validation (10FCV). For AVEC2013, the pre-partitioned train, validation and test sets are employed. We utilise the validation sets for fine-tuning classifier hyper-parameters.
**Chunk vs Video-level Classification:** The videos from both datasets are segmented into smaller chunks of \(15s-135s\) length to examine the influence of _thin-slice_ chunk duration on classifier performance. For chunk-level analysis, the video label is assigned to all of its chunks, and metrics are computed over all chunks. Additionally, video-level classification results are obtained by computing the majority label over all video chunks in the test set.
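The video-level aggregation is a simple majority vote over chunk predictions, sketched below (our own illustration of the rule just described).

```python
# Minimal sketch (assumed aggregation rule): video-level label as the mode over its chunk predictions.
import numpy as np

def video_label(chunk_predictions):
    values, counts = np.unique(np.asarray(chunk_predictions), return_counts=True)
    return values[np.argmax(counts)]
```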
**Performance Measures:** For the BlackDog dataset, results are shown as \(\mu\pm\sigma\) values over 50 runs (5\(\times\) 10FCV repetitions). For AVEC2013, performance on the test set is reported. For both, we evaluate performance via the accuracy (Acc), weighted F1 (F1), precision (Pr), and recall (Re) metrics. The weighted F1-score denotes the mean F1-score over the two classes, weighted by class size.
### Classification Methods
Given that our proposed features do not model spatial or temporal correlations, we employ different machine learning models for detecting depression, as described below (a hyper-parameter search sketch follows the list):
* **Logistic Regression (LR)**, a probabilistic classifier that employs a sigmoid function to map input observations to binary labels. We utilise extensive grid-search to fine-tune parameters such as the penalty \(\in\{l1,l2,None\}\) and regulariser \(\lambda\in\{1e^{-6},\cdots,1e^{3}\}\).
* **Random Forest (RF)**, where multiple decision trees are generated from training data whose predictions are aggregated for labelling. Fine-tuned parameters include the number of estimators \(N\in[2,\cdots,8]\), maximum depth \(\in[3,\cdots,7]\), and maximum features in split \(\in[3,\cdots,7]\).
* **Support Vector Classifier (SVC)**, a discriminative classifier that works by transforming training data to a high-dimensional space where the two classes can be linearly separated via a hyperplane. For SVC, we examine different kernels \(\in\{rbf,poly,sigmoid\}\) and fine-tune regularisation parameter \(C\in\{0.1,1,10,100\}\) and kernel coefficient \(\gamma\in\{0.0001,\cdots,1,scale,auto\}\).
* **Extreme Gradient Boosting (XGB)**, a model built upon a gradient boosting framework, and focused on improving a series of weak learners by employing the gradient descent algorithm in a sequential manner. The fine-tuned hyperparameters include the number of estimators \(\{50,100,150\}\), maximum depth \(\in[3,\cdots,7]\) of the tree and learning rate \(\in[0.0005,\cdots,0.1]\).
* **Multi Layer Perceptron (MLP)**, where we employed a feed-forward neural network with two hidden dense layers comprising 12 and 6 neurons, resp., with a rectified linear unit (ReLU) activation. For training, we employ categorical cross-entropy as the loss function and fine-tune the following hyperparameters: learning rate \(\in\{1e^{-4},1e^{-3},1e^{-2}\}\), and batch size \(\in\{16,24,32,64\}\). We utilise the Adam optimiser for updating the network weights during training.
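The sketch below shows how such a grid search could be set up for the support vector classifier using the grids listed above; the scoring choice, fold count, and variable names are our assumptions, not the exact protocol of the paper.

```python
# Minimal sketch (assumed setup): cross-validated hyper-parameter search for the SVC over the grids above.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "kernel": ["rbf", "poly", "sigmoid"],
    "C": [0.1, 1, 10, 100],
    "gamma": [0.0001, 0.001, 0.01, 0.1, 1, "scale", "auto"],
}
search = GridSearchCV(SVC(), param_grid, scoring="f1_weighted", cv=5)
# search.fit(X_chunks, y_chunks)   # X_chunks: 10-d 2CKD or 24-d HCKD features per thin-slice chunk
```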
## 7. Results and Discussion
Table 1 shows the classification results obtained for the _BlackDog_ dataset with the 2CKD and HCKD approaches (Section 5). Table 2 presents the corresponding results for the _AVEC2013_ dataset. These tables present classification measures obtained at the _chunk-level_ (best results achieved over \(15-135s\)-long chunks for the two datasets are presented), and the _video-level_ (label derived upon computing the mode over the chunk-level labels). Based on these results, we make the following observations:
* It can be noted from Tables 1 and 2 that relatively lower accuracies and F1 scores are achieved for both datasets using the 2CKD approach, implying that while class-characteristic kinemes are explanatory, as seen from Figs. 2 and 3, they are nevertheless not discriminative enough to effectively distinguish between the two classes.
* In comparison, we note far superior performance with the HCKD method over all classifiers. As a case in point, we obtain peak chunk-level F1-scores of 0.79 and 0.62, resp., for HCKD and 2CKD on BlackDog, while the corresponding F1-scores are 0.82 and 0.61, resp., on AVEC. This observation reveals considerable and distinguishable differences in the reconstruction errors for the patient and control classes, and conveys that patient data are characterised as _anomalies_ when kinemes are only learned from the control cohort.
* Examining the HCKD precision and recall measures for both datasets, we note higher precision than recall at the chunk-level for the BlackDog dataset. Nevertheless, higher recall is achieved at the video-level with multiple classifiers. Likewise, higher chunk-level precision is noted for AVEC, even if ceiling video-level precision and recall are achieved.
* Comparing HCKD chunk and video-level F1-scores for both datasets, similar or higher video-level F1 values can be seen in Table 1. F1-score differences are starker in Table 2, where video-level scores are considerably higher than chunk-level scores. These results suggest that aggregating observations over multiple thin-slice chunks is beneficial and enables more precise predictions as shown in (Krishnan et al., 2017).
* Examining measures achieved with the different classifiers, the support vector classifier achieves the best chunk-level F1-score on both datasets, with the LR classifier performing very comparably. All classifiers achieve very similar performance when video-level labels are compared.
### Comparison with the state-of-the-art
Our best results are compared against prior classification-based depression detection studies in Table 3. For the BlackDog dataset, Alghowinem _et al._ (Bhowinem et al., 2015) analysed statistical functional features extracted from a 2D Active Appearance Model, whereas Joshi _et al._ (Joshi et al., 2016) computed a histogram of head movements by estimating the displacement of fiducial facial points. Compared to the average recall of 0.71 reported in (Bhowinem et al., 2015) and the accuracy of 0.72 noted in (Joshi et al., 2016), our kineme-based approach achieves better chunk and video-level accuracies (0.75 and 0.80, resp.), and superior chunk-level recall (0.81). As most previous studies on the AVEC2013 dataset focus on continuous prediction, we compare our model's performance with the AVEC2014 (Joshi et al., 2016) results examining visual features.
AVEC2014 used the same subjects as AVEC2013, but with additional, specific task data (_Northwind, Freeform_) extracted from the AViD videos. For video analysis, Senousaoui _et al._(Senousaoui et al., 2017) extracted LGBP-TOP features from frame blocks to obtain an accuracy of 0.82 using an SVM classifier. On the other hand, Al-gawwam _et al._(Al-gawwam et al., 2015) extracted eye-blink features from video data using a facial landmark tracker to achieve an accuracy of 0.92 for the _Northwind_ task and 0.88 for the _Freeform_ task. Comparatively, our work achieves an accuracy of 0.82 at the chunk-level and 1.00 at the video-level. The next section will detail the performance of a more fine-grained 4-class categorisation on the AVEC2013 dataset.
### AVEC2013 Multi-class Classification
Table 4 depicts video-level 4-class classification results achieved on the AVEC2013 dataset via the HCKD approach. The 4-class categorisation was performed to further validate the correctness of the HCKD approach, which produces ceiling video-level F1, Precision and Recall measures on AVEC2013 in binary classification. Results are reported on the test set, upon fine-tuning the classifier models on the development set. Reasonably good F1-scores are achieved even with 4-class classification, with a peak F1 of 0.72 obtained with the LR, RF and support vector classifiers. Cumulatively, our empirical results confirm that kinemes encoding atomic head movements are able to effectively differentiate between (a) the patient and control classes, and (b) different depression severity bands.
### Ablative Analysis over Thin Slices
Tables 1 and 2 evaluate detection performance over (_thin-slice_) chunks or short behavioural episodes, and over the entire video, on the BlackDog and AVEC2013 datasets. We further compared labelling performance at the chunk and video-levels using chunks spanning \(15-135s\). The corresponding results are presented in Figure 4. For both plots presented in the figure, the dotted curves denote video-level F1-scores, while solid curves denote chunk-level scores obtained for different classifiers.
For the BlackDog dataset (Fig. 4 (left)), longer time-slices (of length \(75-105s\)) achieve better performance than shorter (\(15-60s\) long) ones at both the chunk and video-levels across all classifiers; these findings are consistent with the finding that more reliable predictions can be achieved with longer observations in general (Krishnan et al., 2017). However, a performance drop is noted for very long chunk-lengths of \(120-135s\) duration. Examining the results on the AVEC2013 dataset, consistent with Table 3 results, a clear gap is noted between the chunk and video-level results, with the latter demonstrating superior performance. Very similar F1-scores are observed across classifiers for various chunk lengths. No clear trends are discernible from video-level F1-scores obtained with different chunk-lengths, except that the performance in general decreases for all classifiers with very long chunks.
\begin{table}
\begin{tabular}{|l l||c c c c||c c c c|} \hline \multirow{2}{*}{**Condition**} & \multirow{2}{*}{**Classifier**} & \multicolumn{5}{c||}{**Chunk-level**} & \multicolumn{5}{c|}{**Video-level**} \\ & & **Acc** & **F1** & **Pr** & **Re** & **Acc** & **F1** & **Pr** & **Re** \\ \hline \hline \multirow{4}{*}{**2CKD**} & **LR** & 0.60\(\pm\)0.15 & 0.61\(\pm\)0.14 & 0.67\(\pm\)0.22 & 0.65\(\pm\)0.22 & 0.60\(\pm\)0.20 & 0.59\(\pm\)0.21 & 0.55\(\pm\)0.30 & 0.65\(\pm\)0.33 \\ & **RF** & 0.58\(\pm\)0.13 & 0.60\(\pm\)0.12 & 0.67\(\pm\)0.21 & 0.61\(\pm\)0.21 & 0.61\(\pm\)0.19 & 0.62\(\pm\)0.19 & 0.59\(\pm\)0.32 & 0.59\(\pm\)0.32 \\ & **SVC** & 0.60\(\pm\)0.15 & 0.62\(\pm\)0.15 & 0.68\(\pm\)0.25 & 0.62\(\pm\)0.25 & 0.62\(\pm\)0.19 & 0.63\(\pm\)0.19 & 0.61\(\pm\)0.32 & 0.59\(\pm\)0.33 \\ & **XGB** & 0.55\(\pm\)0.17 & 0.54\(\pm\)0.16 & 0.63\(\pm\)0.21 & 0.71\(\pm\)0.22 & 0.53\(\pm\)0.17 & 0.50\(\pm\)0.20 & 0.54\(\pm\)0.23 & 0.79\(\pm\)0.21 \\ & **MLP** & 0.53\(\pm\)0.15 & 0.52\(\pm\)0.17 & 0.60\(\pm\)0.22 & 0.71\(\pm\)0.21 & 0.51\(\pm\)0.20 & 0.47\(\pm\)0.21 & 0.53\(\pm\)0.27 & 0.74\(\pm\)0.32 \\ \hline \multirow{4}{*}{**HCKD**} & **LR** & 0.77\(\pm\)0.13 & 0.78\(\pm\)0.12 & 0.85\(\pm\)0.19 & 0.74\(\pm\)0.21 & 0.79\(\pm\)0.16 & 0.78\(\pm\)0.17 & 0.81\(\pm\)0.30 & 0.66\(\pm\)0.31 \\ & **RF** & 0.71\(\pm\)0.13 & 0.73\(\pm\)0.12 & 0.75\(\pm\)0.25 & 0.71\(\pm\)0.20 & 0.76\(\pm\)0.15 & 0.76\(\pm\)0.16 & 0.75\(\pm\)0.26 & 0.82\(\pm\)0.25 \\ & **SVC** & 0.78\(\pm\)0.14 & **0.79\(\pm\)0.13** & 0.87\(\pm\)0.18 & 0.74\(\pm\)0.20 & 0.80\(\pm\)0.18 & **0.80\(\pm\)0.19** & 0.83\(\pm\)0.30 & 0.70\(\pm\)0.31 \\ & **XGB** & 0.72\(\pm\)0.13 & 0.72\(\pm\)0.12 & 0.75\(\pm\)0.18 & 0.81\(\pm\)0.15 & 0.78\(\pm\)0.17 & 0.78\(\pm\)0.17 & 0.74\(\pm\)0.27 & 0.82\(\pm\)0.27 \\ & **MLP** & 0.75\(\pm\)0.13 & 0.76\(\pm\)0.12 & 0.78\(\pm\)0.21 & 0.81\(\pm\)0.21 & 0.76\(\pm\)0.16 & 0.76\(\pm\)0.16 & 0.74\(\pm\)0.29 & 0.77\(\pm\)0.28 \\ \hline \end{tabular}
\end{table}
Table 1. Chunk and Video-level classification results on the BlackDog dataset with the 2CKD and HCKD approaches. Accuracy (Acc), F1, Precision (Pr) and Recall (Re) are tabulated as (\(\mu\pm\sigma\)) values.
\begin{table}
\begin{tabular}{|l l||c c c c||c c c c|} \hline \multirow{2}{*}{**Condition**} & \multirow{2}{*}{**Classifier**} & \multicolumn{5}{c||}{**Chunk-level**} & \multicolumn{5}{c|}{**Video-level**} \\ & & **Acc** & **F1** & **Pr** & **Re** & **Acc** & **F1** & **Pr** & **Re** \\ \hline \hline \multirow{4}{*}{**2CKD**} & **LR** & 0.58 & 0.58 & 0.54 & 0.65 & 0.61 & 0.61 & 0.57 & 0.71 \\ & **RF** & 0.61 & 0.61 & 0.57 & 0.59 & 0.72 & 0.72 & 0.67 & 0.82 \\ & **SVC** & 0.61 & 0.61 & 0.57 & 0.63 & 0.64 & 0.64 & 0.61 & 0.65 \\ & **XGB** & 0.59 & 0.58 & 0.57 & 0.44 & 0.67 & 0.67 & 0.65 & 0.65 \\ & **MLP** & 0.56 & 0.56 & 0.52 & 0.60 & 0.58 & 0.58 & 0.65 & 0.65 \\ \hline \multirow{4}{*}{**HCKD**} & **LR** & 0.80 & 0.80 & 0.77 & 0.81 & 0.94 & 0.94 & 0.94 & 0.94 \\ & **RF** & 0.78 & 0.78 & 0.78 & 0.75 & 1.00 & **1.00** & 1.00 & 1.00 \\ & **SVC** & 0.82 & **0.82** & 0.83 & 0.77 & 1.00 & **1.00** & 1.00 & 1.00 \\ & **XGB** & 0.80 & 0.80 & 0.79 & 0.77 & 1.00 & **1.00** & 1.00 & 1.00 \\ & **MLP** & 0.81 & 0.80 & 0.80 & 0.77 & 1.00 & **1.00** & 1.00 & 1.00 \\ \hline \end{tabular}
\end{table}
Table 2. Chunk and Video-level classification results on the AVEC2013 dataset with the 2CKD and HCKD approaches. Accuracy (Acc), F1, Precision (Pr) and Recall (Re) are tabulated as (\(\mu\pm\sigma\)) values.
\begin{table}
\begin{tabular}{|l l||c c c c c|} \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Features**} & \multicolumn{4}{c|}{**Evaluation metrics**} \\ & & & **Acc** & **F1** & **Pr** & **Re** \\ \hline \hline \multirow{4}{*}{**BlackDog**} & Alghowinem _et al._ [5] & Head movement & - & - & - & 0.71 \\ & Joshi _et al._ [24] & Head movement & 0.72 & - & - & - \\ & Ours (Chunk-level) & Kinemes & **0.75** & 0.76 & 0.78 & **0.81** \\ & Ours (Video-level) & Kinemes & **0.80** & 0.80 & 0.83 & 0.70 \\ \hline \multirow{5}{*}{**AVEC2013**} & Senoussaoui _et al._ [39] (AVEC2014) & Video features & 0.82 & - & - & - \\ & Al-gawwam _et al._ [2] (AVEC2014 - Northwind) & Eye Blink & 0.85 & - & - & - \\ & Al-gawwam _et al._ [2] (AVEC2014 - Freeform) & Eye Blink & 0.92 & - & - & - \\ & Ours (AVEC2013 at chunk-level) & Kinemes & 0.82 & 0.82 & 0.83 & 0.87 \\ \cline{1-1} & Ours (AVEC2013 at Video-level) & Kinemes & **1.00** & 1.00 & 1.00 & 1.00 \\ \hline \end{tabular}
\end{table}
Table 3. Comparison with prior works for the two datasets.
We additionally examined classification performance with features derived from individual head-pose dimensions (pitch, yaw and roll) and for the dimensional pairs to evaluate which angular dimension(s) are more informative.
Figure 5 presents F1-scores obtained with the different classifiers for uni-dimensional and pairwise-dimensional features. On the BlackDog dataset, a combination of the pitch and yaw-based descriptors produces the best performance across all models, while roll-specific descriptors perform worst. For the AVEC2013 dataset, pitch-based descriptors achieve excellent performance across models. The F1-scores achieved with these features are very comparable to the pitch + yaw and pitch + roll combinations. Here again, roll-specific features achieve the worst performance. Cumulatively, these results convey that pitch is the most informative head pose dimension, with roll being the least informative. With respect to combinations, the pitch + yaw combination in general produces the best results. These results again confirm that responsiveness in social interactions, as captured by pitch (capturing actions such as head nodding) and yaw (capturing head shaking), provides a critical cue for detecting depression, consistent with prior studies (Bordan et al., 2017; Ghaislawat et al., 2018).
## 8. Conclusion
In this paper, we demonstrate the efficacy of elementary head motion units, termed _kinemes_, for depression detection by utilising two approaches: (a) discovering kinemes from data of both patient and control cohorts, and (b) learning kineme patterns solely from the control cohort to compute statistical functional features derived from reconstruction errors for the two classes. Apart from effective depression detection, we also identify explainable kineme patterns for the two classes, consistent with prior research.
Our study demonstrates the utility of head motion features for detecting depression, but our experiments are restricted to classification tasks involving a discretisation of the depression scores. In the future, we will investigate (a) the utility of kinemes for continuous prediction (regression) of depression severity, (b) the cross-dataset generalisability of models trained via kinemes, and (c) the development of multimodal methodologies combining kinemes with other behavioural markers, and evaluating their efficacy.
|
2306.12168 | Decisions & Disruptions 2: Decide Harder | Cyber incident response is critical to business continuity -- we describe a
new exercise that challenges professionals to play the role of Chief
Information Security Officer (CISO) for a major financial organisation. Teams
must decide how organisational team and budget resources should be deployed
across Enterprise Architecture (EA) upgrades and cyber incidents. Every choice
made has an impact -- some prevent whilst others may trigger new or continue
current attacks. We explain how the underlying platform supports these
interactions through a reactionary event mechanism that introduces events based
on the current attack surface of the organisation. We explore how our platform
manages to introduce randomness on top of triggered events to ensure that the
exercise is not deterministic and better matches incidents in the real world.
We conclude by describing next steps for the exercise and how we plan to use it
in the future to better understand risk decision making. | Benjamin Shreeve, Joseph Gardiner, Joseph Hallett, David Humphries, Awais Rashid | 2023-06-21T10:43:13Z | http://arxiv.org/abs/2306.12168v1 | # Decisions & Disruptions 2: Decide Harder
###### Abstract
Cyber incident response is critical to business continuity--we describe a new exercise that challenges professionals to play the role of Chief Information Security Officer (CISO) for a major financial organisation. Teams must decide how organisational team and budget resources should be deployed across Enterprise Architecture (EA) upgrades and cyber incidents. Every choice made has an impact--some prevent whilst others may trigger new or continue current attacks. We explain how the underlying platform supports these interactions through a reactionary event mechanism that introduces events based on the current attack surface of the organisation. We explore how our platform manages to introduce randomness on top of triggered events to ensure that the exercise is not deterministic and better matches incidents in the _real world_. We conclude by describing next steps for the exercise and how we plan to use it in the future to better understand risk decision making.
## 1 Introduction
Major cyber security incidents regularly disrupt businesses, and in extreme circumstances have even bankrupted them. We have created a major new incident response exercise to help businesses. We have worked with specialists from law enforcement and major financial organisations to create an exercise that challenges teams to handle a major, responsive cyber security incident. The aim of the exercise is to expose senior managers and incident response teams to the time, resource and political pressures they will encounter when handling a major crisis, while at the same time gathering granular decision-making data to inform cyber response handling research.
This work builds upon a freely available previous game released under a CC-BY-NC license: Decisions & Disruptions (D-D) 1. D-D is a highly successful tabletop exercise utilised by police forces across the UK and businesses across the world. It was designed to explore how people make risk decisions around cyber-physical infrastructures. Whilst D-D has provided valuable practical and research insights [12, 13, 14, 5], we argue that it is limited by a deterministic mechanism, fixed and tied to one sector. We, therefore, propose a new game: Decisions & Disruptions 2: Decide Harder (D-D 2) which provides an extensible engine for risk decision making exercises that incorporates randomness and more complicated threat relationships that can be targeted towards any industry, rather than just for critical national infrastructure. This paper introduces our proposed new game D-D 2 and the new features it incorporates.
Footnote 1: decisions-disruptions.org
## 2 Related Work
Cyber security exercises are a popular way of raising awareness of the subject matter. A number of exercises have been developed specifically as part of University courses (e.g, [3, 10]). However, such exercises tend to have a relatively narrow scope related to the content of specific University courses and they are rarely validated with or used outside of academia.
Other existing exercises have been created to help raise awareness of cyber security issues in industry (e.g., [4, 5, 6, 2, 11, 8]). All of these exercises are tabletop exercises and with the exception of Frey et al. [5] all use card-based mechanisms. Whilst the card-based mechanisms are valuable for raising awareness they often limit how well exercises can reflect real-world scenarios. For example, such mechanisms rely on a mixed deck of cards (or several decks) which are then drawn from at random to provide game events. As such
it is hard for such mechanisms to capture the way that events in the real world may be related, with one event causing another to occur. This is not to say they are not of value, but they often emphasise learning of specific aspects rather than emulation of scenarios. For example, Hart et al. [8] introduce the Riskio serious game--their exercise is aimed at non-technical participants and challenges them to consider potential threat vectors and then identify possible countermeasures. This provides a no doubt valuable learning experience, but is not the same as exposing participants to an emulation of decision-making under pressure, which is the aim of our exercise. Hart et al. [7] have taken lessons learned from the development of Riskio [8] and used it, along with a careful analysis of a wide range of cyber security games (including D-D), to create the MOTENS design model for serious games. This model suggests that the most effective cyber security games include: **M**ultiple modes of learning--exposing players to a wide range of cyber security aspects; **O**wnership and Self-Learning--providing a range of options to help meet learning objectives; **T**heory--that supports the design; **E**nvironment--creating an appropriate environment where people feel they can learn; **N**egotiation--moving toward a coaching and problem-based learning style; **S**elf learning--enabling participants to build upon their base knowledge.
Frey et al. [5] provide a different approach--their exercise _Decisions & Disruptions_ provides teams with a Lego representation of a hydro-electric company with a plant (operations) site and a separate office site. Teams play through 4 rounds, investing a budget each round to implement security controls on these sites. At the end of each round they discover what cyber attacks have befallen the business as a result of their choices. The game mechanism makes it clear to players that there is a direct correlation between the investment choices they have made and the events they have suffered. This is a significant improvement in terms of realism, enabling more complex attack scenarios to be developed. However, there are limitations to the mechanism created by Frey et al. [5]: the game mechanism has hard-coded paths through it (e.g., event A will occur if security control B has not been purchased by round 3). This means that teams cannot replay the exercise, as they will always experience the same set of consequences for their actions.
We seek to build on the success of D-D by developing an exercise where the players affect not only the landscape in which they play but also the consequences of their choices. An exercise where events that have occurred and choices that have been made increase the likelihood of (or even directly cause) specific events to occur, and where events in turn can change the landscape. We work closely with CISOs and Cyber Security Incident Response Teams (CSIRTs) to develop a unique, replayable exercise, inspired by D-D that challenges teams to make cyber security decisions under the time, resource and political pressures of a series of unfolding cyber incidents.
## 3 The Exercise
The exercise challenges teams to help the CISO of a fictitious financial organisation handle a _very bad_ cyber security week--each day of the week is represented by a 20 minute time-limited round. Each day, a series of cyber-related events will occur, which can be handled in various ways. Players have to decide which tasks the CISO's team will tackle each day to stay on top of what's happening to their company.
Teams have the ability to affect the overall attack surface of the business, and thereby the type and effectiveness of possible attack vectors used against it, by updating assets through actions such as patching and staff training. How attacks (or their symptoms) are handled affects whether attacks are prevented, continue, or trigger new related attacks. The exercise is designed to be played by CSIRTs, CISOs and senior executives.
### A Reactionary Event Mechanism
In order to represent the complexities of real-world decision-making, a complex, reactionary event mechanism was developed (see figure 1). Teams are tasked with protecting the EA of the business (see figure 2). They are able to affect the company's attack surface by investing staff hours each day into a wide range of possible upgrades (and in some cases by sacrificing profit). The attack surface is evaluated at the start of each round and used to identify which events can occur (out of a library of 120 possible events). Of those that can occur, 5 are randomly selected and added to the event list for the next round. Events can be specified to only occur given a particular set of criteria--such as a particular EA upgrade having been purchased or a previous event having occurred. Each choice that a team makes has an associated cost in terms of staff hours and company profitability. Events have been carefully designed in tandem with cyber security law enforcement officers to provide a realistic representation of the threats facing financial organisations. The range of events reflects the major MITRE ATT&CK 2 areas highlighted as problematic through interviews with Chief Information Security Officers (CISOs) in major financial organisations.
Footnote 2: [https://attack.mitre.org/matrices/enterprise/](https://attack.mitre.org/matrices/enterprise/)
For many choices there are also consequences. These are either explicit feedback to teams as to the impact of their choices--including additional impact to hours/profitability/shareprice--or an in-game consequence which can include triggering other events in the future. These performance indicators were flagged up through interviews with CISOs and board members of major financial organisations as vital indications of performance during a cyber incident [1, 15]. These triggered events are then added to the event list for the next round (possibly bypassing any criteria evaluation). Events can also trigger other events to occur if they are ignored; for example if a team decides an event is
not important in one round, it can still have an impact on their next round by queuing up an event to penalise their neglect. Some events are designated _'on-draw'_ events--they affect the round as soon as they are drawn, deliberately introducing variance to the game. For example, an event may tell a team that a member of the CSIRT is ill that day; they therefore will start the round 8 hours down, with only 72 hours available to them to utilise. These _'on-draw'_ events can force changes to hours, profitability and shareprice, and can even make EA upgrade choices for teams before a round starts.
For example, as part of the game an event may occur where a staff member reports finding a USB stick on the ground. The CISO can decide whether to ignore it, to forensically analyse it (at a cost of time) or to destroy the device. Each decision will have impacts (and could in turn lead to more events occurring). To visualise these kinds of decisions we created an automated tool to create attack trees from the engine's database of events and explore what happens. This gives a quick pictorial guide to the consequences of any decision in the game and the threat landscape of any particular configuration.
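To make the flow of the reactionary event mechanism concrete, the sketch below mirrors the logic described above: eligible events are filtered by their criteria against the current attack surface, five are drawn at random, previously triggered events bypass the criteria check, and 'on-draw' events modify the round before it starts. The data structures and names are illustrative assumptions, not the actual Java implementation.

```python
import random

def draw_round_events(event_library, state, n_new=5):
    """Select the events for the next round of the exercise.

    `event_library` is a list of dicts; each event carries a `criteria`
    callable evaluated against the current attack surface and history,
    and an optional `on_draw` callable applied before the round starts.
    """
    eligible = [e for e in event_library
                if e["criteria"](state["attack_surface"], state["history"])]
    drawn = random.sample(eligible, min(n_new, len(eligible)))

    # Events queued by previous choices bypass the criteria evaluation.
    triggered = state.pop("triggered", [])
    state["triggered"] = []

    for event in drawn + triggered:
        if event.get("on_draw"):          # e.g. a sick analyst: -8 staff hours
            event["on_draw"](state)
    return drawn + triggered

# Example: a single event that fires only if a hypothetical email filter is absent
library = [{"name": "Phishing campaign",
            "criteria": lambda surface, hist: not surface.get("email_filter"),
            "on_draw": None}]
state = {"attack_surface": {"email_filter": False}, "history": [], "triggered": []}
print([e["name"] for e in draw_round_events(library, state)])
```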
### Resources
Teams have to negotiate various resource constraints. Firstly, they have to contend with the challenge that a day has a finite length--in this case a configurable 20 minute limit. If they fail to utilise their resources effectively in that time then the game automatically moves forward to the next day. The primary unit of daily resources is the number of _'hours'_ available to the team each day--that is, how many staff in the CSIRT team they are managing are available that day (the exercise assumes that there are 10 staff members, each with a maximum of 8 hours per day, 80 hours in total per day). These hours are then allocated to event/EA actions.
The business's performance is represented by two broad financial metrics that are both affected by choices made and their consequences. Firstly, the team have to manage the overall long-term _'projected profit'_ for the business--this starts at £500,000. However, it can be reduced through loss of business or fines that occur as a result of choices made. Teams can also choose to reinvest some of this projected profit into the business through certain EA upgrades or event choices which have associated financial costs. Secondly, teams have to consider the short-term performance of the business in the form of its _'share price'_--which can be affected both positively and negatively as a result of choices made, from an initial value of £100. If the players' actions result in the _'projected profit'_ or _'share price'_ reaching zero then the game ends.
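The resource bookkeeping can be summarised in a small state object; the sketch below encodes the starting values quoted above (80 staff hours per day, a projected profit of £500,000 and a share price of £100) together with the end-of-game checks, with field names chosen purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class GameState:
    """Track the CISO team's daily resources and the business's health."""
    hours: int = 80                       # 10 staff x 8 hours, reset each day
    projected_profit: float = 500_000.0   # long-term metric, in pounds
    share_price: float = 100.0            # short-term metric, in pounds
    open_events: list = field(default_factory=list)

    def game_over(self) -> bool:
        # Failure: bankruptcy, worthless shares, or event saturation
        # (10 or more concurrent open events, see the play-through description).
        return (self.projected_profit <= 0
                or self.share_price <= 0
                or len(self.open_events) >= 10)

state = GameState()
state.projected_profit -= 120_000         # e.g. a hypothetical regulatory fine
print(state.game_over())                  # -> False
```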
### Exercise Management
The exercise comprises two parts: a game master tool that tracks and evaluates the state of play, and an associated physical card deck (see figure 3). The game master utilises a digital tool, written in Java, that reads a database containing details of the possible events (and their relationships to one another) as well as the assets and upgrades that are possible for the EA. Each event/choice and asset/upgrade is also printed onto a deck of cards, providing session flexibility.
### Typical Play Through
At the start of each _day_ (round) the exercise interface updates to reflect the status of the business (see example in appendix B).
The right hand panel consists of a list of possible actions that can be undertaken to help improve the cyber security of the business's assets. The left hand panel shows a list of the most pertinent cyber events that have hit the business in the last 24 hours and need resolving (a minimum of 5 occur each day), plus any events remaining from previous rounds that have yet to be actioned in some way.
Figure 1: Summary of the different mechanisms in the game and the outcomes that can happen.
Figure 2: Financial Organisation enterprise architecture
The EA upgrades (see example in appendix B) can be purchased at any point during a round and affect the attack surface of the game in the next round.
Each event presents teams with several choices about how it might be resolved (see example in appendix B). All events by default include 'ignore this round', which defers making a choice for that event to the next round. All choices are final--once a choice has been made (other than ignore), it cannot be undone.
Teams work together to identify which events and EA upgrades they should dedicate resources towards in the round. As they make each choice the game provides them with instant feedback, which may result in their hours, profitability or share price being immediately hit, and may in turn limit the choices they had planned. When the round concludes the game utilises the reactionary event mechanism logic (see section 3.1 and figure 1) to evaluate which events will occur in the next round, either as a result of changes the team have made to the business's attack surface or because the team have directly triggered them through choices made during the round.
The exercise continues this way through multiple rounds, with teams starting each round with at least 5 events to address and a daily hours budget that may already have been affected by 'on-draw' events. The exercise concludes either when the round limit is reached (7 rounds) or when one of the failure criteria is triggered. The game has 3 failure criteria: the profitability of the business can reach zero, the share price can reach zero, or the team can have 10 or more concurrent events open. We consider a team that has failed to resolve (or has triggered) 10 (or more) events to have reached a point of event saturation that could never be resolved in the real world. The extent to which the consequences of event choices affect the profitability and shareprice of the organisation varies with the magnitude of the event. Some consequences may have a positive impact whilst others may be severe, or even sufficient to bankrupt the organisation.
## 4 Design Choices
We have created a mechanism that can identify if an attack is viable based not only upon the current state of the organisation's attack surface, but also on whether specific previous events have occurred and particular choices have been made. This means that the events that are presented are far more representative of real-world scenarios.
We have taken this further by treating certain defensive approaches as _'resistances'_. Defensive mechanisms, like _firewalls_ and _antivirus_, can never be 100% effective. Instead, our game mechanism captures the current success rate of these mechanisms (for example, the firewall is only 60% effective) and then uses these values, together with chance, to establish whether an attack that could be prevented by a firewall actually occurs. We also provide teams with the ability to improve these resistances by investing in specific EA upgrades--and punish _negligence_ for not doing so in a timely manner. In doing so we create an exercise where there is no longer a 1:1 relationship between events and defences; instead there are combinations of defences that can limit the likelihood of specific attack vectors being exploited--just like in the real world. The tool itself is also extensible. The events for a given game are taken from a database and can be rewritten for different sectors or markets. Penalties for different events, and how events interlink, are configurable, allowing a wide range of variations of the game to be created with relative ease.
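The 'resistance' mechanic can be expressed compactly: each defence carries a success rate, and whether a given attack goes through is decided by chance against the resistances it must bypass. The sketch below assumes independent checks per defence; the exact probability model used by the engine may differ.

```python
import random

def attack_succeeds(attack_defences, resistances, rng=random.random):
    """Return True if an attack bypasses every relevant defence.

    `attack_defences` lists the defences that could stop this attack;
    `resistances` maps defence name -> current success rate in [0, 1].
    Each defence is checked independently, which is an assumption here.
    """
    for defence in attack_defences:
        if rng() < resistances.get(defence, 0.0):
            return False        # the defence stopped the attack
    return True

resistances = {"firewall": 0.6, "antivirus": 0.4}   # upgradable via EA choices
print(attack_succeeds(["firewall", "antivirus"], resistances))
```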
Our design choices fulfil Hart et al.'s [7] MOTENS pedagogical design framework for serious cyber games: **M**ultiple Modes of Learning: The mechanism enables participants to experience not only a range of events, but also to explore the relationship between EA and system security and attacks. **O**wnership and Self-Learning: The underlying platform is flexible, allowing sessions to be tailored to specific learning objectives. **T**heory: The overarching game principle is informed by experiential learning theory [9]. **E**nvironment: The exercise environment is designed so teams have more forgiving opening rounds to help them familiarise themselves with the exercise mechanics and expectations. **N**egotiation: The promotion of shared decision-making amongst participants and the immediacy of feedback through the reactionary event mechanism move learning from static presentation to an immersive exploratory experience. Finally, **S**elf-Learning: the facilitation of these exercises enables participants to ask for clarifications when needed, enabling each participant to start from their own knowledge baseline.
## 5 Next steps and Future Work
The exercise is ready for testing with real-world CSIRTs and executives. This stage will be used to identify UI bugs, and refine the content of the event and asset database. Once this is complete the exercise will be made freely available under a Creative Commons License for organisations to use. We will continue to work with our industry partners using the exercise to gather data around how organisations prioritise and respond to different incidents and triggers.
Future work will focus on the creation of new asset and event databases for different sectors, making it possible to explore how different sectors and different technology stacks handle incident scenarios. Decisions and Disruptions has always helped organisations better understand how they make risk decisions: but now they have to _decide harder_.
## Acknowledgments
We would like to thank Cyber Griffin, the City of London Police and the City of London Corporation for their funding and support. |
2302.09901 | First-order photon condensation in magnetic cavities: A two-leg ladder
model | We consider a model of free fermions in a ladder geometry coupled to a
nonuniform cavity mode via Peierls substitution. Since the cavity mode
generates a magnetic field, no-go theorems on spontaneous photon condensation
do not apply, and we indeed observe a phase transition to a photon condensed
phase characterized by finite circulating currents, alternatively referred to
as the equilibrium superradiant phase. We consider both square and triangular
ladder geometries, and characterize the transition by studying the energy
structure of the system, light-matter entanglement, the properties of the
photon mode, and chiral currents. The transition is of first order and
corresponds to a sudden change in the fermionic band structure as well as the
number of its Fermi points. Thanks to the quasi-one dimensional geometry we
scrutinize the accuracy of (mean field) cavity-matter decoupling against large
scale density-matrix renormalization group simulations. We find that
light-matter entanglement is essential for capturing corrections to matter
properties at finite sizes and for the description of the correct photon state.
The latter remains Gaussian in the thermodynamic limit both in the normal
and photon condensed phases. | Zeno Bacciconi, Gian Marcello Andolina, Titas Chanda, Giuliano Chiriacò, Marco Schiró, Marcello Dalmonte | 2023-02-20T10:55:14Z | http://arxiv.org/abs/2302.09901v4 | **First-order superradiant phase transition in magnetic cavities: A two-leg ladder model**
## Abstract
**We consider a model of free fermions in a ladder geometry coupled to a non-uniform cavity mode via Peierls substitution. Since the cavity mode generates a magnetic field, no-go theorems on spontaneous photon condensation do not apply, and we indeed observe a phase transition to a superradiant phase. We consider both square and triangular ladder geometries, and characterize the transition by studying the energy structure of the system, light-matter entanglement, the properties of the photon mode, and chiral currents. The superradiant transition is of first order and corresponds to a sudden change in the fermionic band structure as well as the number of its Fermi points. Thanks to the quasi-one dimensional geometry we scrutinize the accuracy of (mean field) cavity-matter decoupling against large scale density-matrix renormalization group simulations. We find that light-matter entanglement is essential for capturing corrections to matter properties at finite sizes and for the description of the correct photon state. The latter remains Gaussian in the thermodynamic limit both in the normal and superradiant phases.**
###### Contents
* 1 Introduction
* 2 Hamiltonian
* 3 Square ladder
* 3.1 Methods
* 3.1.1 Photon mean-field
* 3.1.2 Density-matrix renormalization group
* 3.2 Results: Photon mean-field vs. DMRG
* 3.3 Results: Gaussian fluctuations
* 4
## 1 Introduction
One of the aims of the paradigm of cavity control is to modify the properties of quantum materials using cavity embedding [1, 2, 3]. In strong coupling regimes, vacuum effects [4] can modify the properties of the material even without external illumination, e.g., by affecting magneto-transport of a two dimensional (2D) material [5] or suppressing topological protection of the integer quantum Hall effect [6]. Recently, coupling to a cavity mode has been demonstrated to affect the critical temperature and the phase transition in charge-density wave systems [7]. Theoretical proposals have focused on the possibility of controlling electronic instabilities and ordered phases by quantum fluctuations of the cavity field, including superconductivity [8, 9] and ferro-electricity [10], or even inducing phase transitions in both light and matter degrees of freedom by the onset of the so-called _superradiant_ phase, where the ground state has a macroscopic number of coherent photons. The superradiant phase transition, originally introduced in the context of the Dicke model [11, 12, 13] describing an ensemble of two-level atoms collectively coupled to a common cavity mode, has been recently discussed for electronic systems coupled to a single-mode cavity [14, 15, 16, 17, 18, 19, 20].
A proper description of the superradiant phase transition requires a gauge invariant framework for the light-matter interaction, an issue which poses key theoretical challenges for truncated models which only retain a subset of degrees of freedom. In the ultrastrong coupling regime [21], where the light-matter coupling is comparable to the transition energies of the atoms, this truncation could lead to violations of gauge-invariance [22, 23, 24], thus questioning the validity of such a description. Indeed, the theoretical predictions of superradiance have been hindered by the use of truncated models lacking gauge invariance, leading to inaccurate results.
To tackle this issue, Refs. [25, 26] considered an underlying microscopic model without relying on any truncation and proved that the superradiant phase transition is prohibited as long as a single-mode spatially _uniform_ vector potential is considered. In order to reproduce this result within a truncated model, it is crucial to use a gauge-invariant descriptions of the light-matter interaction such as the Peierls phase and its extensions [27, 28, 29].
More recent works [30, 31, 32, 33, 34] have relaxed the strong assumption of the spatially uniform
vector potential and shown that the superradiant phase transition is analogous to the Condon magnetostatic instability [35]. Since in a strictly one dimensional (1D) geometry the orbital motion of electrons cannot be affected by a magnetic field, one needs to consider at least two dimensions or the spin degree of freedom [33].
Here we investigate the occurrence of the superradiant phase transition in a minimal setting beyond 1D - i.e., a two-leg ladder [36, 37, 38, 39, 40] - where the orbital motion of spinless fermions is coupled through Peierls substitution to a non-uniform cavity mode which generates a fluctuating uniform magnetic field. In contrast to 1D chains, two-leg ladders allow us to analyze transverse response to non-uniform vector potentials, while still being amenable to a thorough numerical investigation beyond typical mean-field approximations by means of the density-matrix renormalization group (DMRG) techniques [41, 42, 43, 44] (recently being also employed in cavity quantum electrodynamics (QED) systems [45, 46, 47, 48, 49, 50]).
Our results show that ladder geometries can indeed host an equilibrium superradiant phase (to which we will also refer as photon condensation [25], not to be confused with the non-equilibrium phase transition observed in dye filled microcavities [51]), via a first-order transition from a normal metallic phase. The first-order nature of this transition arises from the strongly non-linear orbital paramagnetic response of the ladder system and therefore provides a different scenario for superradiance with respect to those discussed so far in the literature [31, 32, 33]. While a photon mean-field (PMF) decoupling of the photon and matter degrees of freedom qualitatively captures the phase transition, we find that for finite sizes the correct treatment of quantum fluctuations is essential to estimate physically relevant quantities, such as current and photon properties. This demonstrates how, in these settings, the light-matter entanglement and photon squeezing cannot, in general, be neglected. In the thermodynamic limit, we show that photon condensation allows a single cavity mode in the collective strong coupling regime to modify the properties of an extensive system. Remarkably, even in this thermodynamic limit, where the photon state is Gaussian, it is necessary to consider both the light-matter entanglement and photon squeezing to determine the photon properties.
The structure of the paper is the following. In Sec. 2 we describe the Hamiltonian of the light-matter coupled system and introduce the main physical gauge-invariant quantities. In Sec. 3 we first introduce the PMF approximation and the DMRG numerics. Then we discuss the results, comparing the two approaches and recovering a qualitative agreement between them by adding quantum fluctuations on top of the PMF solution. In Sec. 4 we move to the triangular ladder geometry, highlighting the similarities with the square ladder case. In Sec. 5, we draw the conclusions and discuss possible future directions.
## 2 Hamiltonian
We consider a hybrid light-matter system where the light component is represented by a single cavity mode and the matter component is described by a tight-binding model of charged (\(q=-1\)) spinless free fermions on a ladder geometry. The ladder sits on the \(x-y\) plane, extends in the \(x\) direction with a lattice spacing \(d\) and the spacing between the two legs is also \(d\). Depending on the alignment of the sites on the two legs of the ladder and on the nature of inter-leg hoppings, we consider either a square or triangular geometry, see Fig. 1.
The Hamiltonian describing the fermion dynamics reads:
\[\hat{H}_{0}=\left(\sum\limits_{\sigma=\pm}\hat{H}_{\sigma}\right)+\hat{H}_{\perp}\,, \tag{1}\] \[\hat{H}_{\sigma}=-t_{0}\sum\limits_{j=1}^{L-1}\hat{c}_{\sigma,j}^{\dagger}\hat{c}_{\sigma,j+1}+\text{h.c.}\,, \tag{2}\] \[\hat{H}_{\perp}=\left(-t_{1}\sum\limits_{j=1}^{L}\hat{c}_{+,j}^{\dagger}\hat{c}_{-,j}-t_{2}\sum\limits_{j=1}^{L-1}\hat{c}_{+,j}^{\dagger}\hat{c}_{-,j+1}+\text{h.c.}\right)\,, \tag{3}\]
where \(\sigma=+(-)\) indicates the top (bottom) leg of the ladder, \(j=1,\dots,L\) is the site/rung index on each leg, and \(\hat{c}_{\sigma,j}^{\dagger}\) (\(\hat{c}_{\sigma,j}\)) creates (destroys) a fermion on the site \(j\) and on the leg \(\sigma\). We consider open boundary conditions and one fermion per rung so that \(N=L\), unless specified otherwise. Moreover, we set equal hopping amplitudes \(t_{0}=t_{1}\), while \(t_{2}=0\) or \(t_{2}=t_{0}\) for the square and the triangular geometry, respectively.
The cavity setup we consider is that of a single mode where the cavity Hamiltonian is represented by a single quadratic bosonic mode with frequency \(\omega_{c}\):
\[\hat{H}_{\text{c}}=\omega_{\text{c}}\hat{a}^{\dagger}\hat{a}. \tag{4}\]
Correspondingly, the cavity vector potential is \(\hat{\mathbf{A}}(\mathbf{r})=\mathbf{A}_{0}(\mathbf{r})(\hat{a}+\hat{a}^{\dagger})\) where \(\mathbf{A}_{0}(\mathbf{r})\) retains the spatial structure of the cavity mode and \(\hat{a}\) is the annihilation operator for a photon in
Figure 1: Sketch of the ladder plus cavity system under consideration. (a) Without any coupling to the cavity, the two legs of the ladder have the same intra-leg hopping \(t_{0}\) (solid lines), an inter-leg hopping between corresponding site \(t_{1}\) (dashed lines), and (for the triangular ladder) a diagonal inter-leg hopping \(t_{2}\) (dotted lines). The cavity mode has frequency \(\omega_{\text{c}}\) and a space profile given by the vector potential \(\mathbf{A}_{0}(\mathbf{r})=-B_{0}y\hat{\mathbf{x}}\). (b) Upon coupling to the cavity, the intra-leg hopping terms are modified by the photon in a different way for the top (\(t_{0}^{+}\)) and bottom (\(t_{0}^{-}\)) leg.
this cavity mode. We consider a spatially varying mode function that in the vicinity of the ladder can be written as \(\mathbf{A}_{0}(\mathbf{r})=-B_{0}y\hat{\mathbf{x}}\). The cavity is, therefore, magnetic since the cavity mode has a non-zero curl which in classical electrodynamics gives rise to a magnetic field. In our quantum light model, this means that cavity photons generate a fluctuating magnetic flux through the ladder plaquettes. We remark here that the single-mode approximation is not always valid and in general depends on the specifics of the system [52, 53]. In order to have a gauge-invariant coupling between matter and light, we implement the light-matter coupling by means of the Peierls substitution:
\[\hat{c}^{\dagger}_{i,\sigma}\hat{c}_{j,\sigma^{\prime}}\rightarrow\exp\Big{[} iq\int_{R_{i,\sigma}}^{R_{j,\sigma^{\prime}}}d\mathbf{r}\cdot\hat{\mathbf{A}}( \mathbf{r})\Big{]}\hat{c}^{\dagger}_{i,\sigma}\hat{c}_{j,\sigma^{\prime}}\, \tag{5}\]
where \(R_{i,\sigma}\) denotes the position of the electronic site \(i,\sigma\). In our case, the Peierls phase is non-zero only for intra-leg hoppings which are along the \(x\) direction. The Peierls phase as discussed by Luttinger [54] is only an approximation of the coupling to electromagnetic fields when the value of the magnetic flux over an area, comparable to the typical size of the fermionic orbitals, is comparable to \(\pi\)[27]. However the corrections strongly depend on the nature of the localized orbitals, and since neglecting these corrections does not spoil the gauge-invariant properties of the coupling, we keep only the Peierls phase.
The full light-matter coupled Hamiltonian then reads:
\[\hat{H}=\omega_{\mathrm{c}}\hat{a}^{\dagger}\hat{a}- \Big{(}t_{1}\sum_{j=1}^{L}\hat{c}^{\dagger}_{+,j}\hat{c}_{-,j}+t_{ 2}\sum_{j=1}^{L-1}\hat{c}^{\dagger}_{+,j}\hat{c}_{-,j+1}\] \[+t_{0}\sum_{j=1}^{L-1}\sum_{\sigma=\pm}e^{i\sigma g(\hat{a}+\hat{ a}^{\dagger})/\sqrt{L}}\hat{c}^{\dagger}_{\sigma,j}\hat{c}_{\sigma,j+1}+ \mathrm{h.c.}\Big{)}\, \tag{6}\]
where we have introduced the dimensionless coupling constant \(g=|q|d^{2}B_{0}\sqrt{L}/2\) which is the parameter that drives the transition. Note that \(g\) does not grow explicitly with \(L\) given the scaling of the field intensity \(B_{0}\propto 1/\sqrt{L}\) provided that the density \(N/V=L/V\) is fixed. In optical cavities, the frequency of the mode \(\omega_{\mathrm{c}}\) and the field intensity \(B_{0}\) are, in general, not independent parameters (\(\omega_{\mathrm{c}}\propto B_{0}^{2}\)). Still we can, in principle, tune the light-matter interaction strength \(g\) independently of \(\omega_{\mathrm{c}}\), for example, by varying the fermionic charge \(q\). In the following we will in any case stick with \(q=-1\) and use \(g\) as an independent parameter1. The Hamiltonian (6) is invariant under the combined application of (1) the parity transformation of the photon \(P_{\mathrm{ph}}:\hat{a}\rightarrow-\hat{a}\) and (2) the leg inversion \(P_{\sigma}:\sigma\rightarrow-\sigma\), so that (c.f. [40])2:
Footnote 1: One could think of changing the lattice spacing \(d\), but this would in turn change the hopping integrals. This is one of the main issues of the Peierls phase: it inevitably links the light-matter interaction and hopping integrals as discussed in [29].
Footnote 2: However, it is to be noted that independent applications of \(P_{\mathrm{ph}}\) or \(P_{\sigma}\) do not leave the Hamiltonian invariant.
\[\mathcal{P}\equiv P_{\mathrm{ph}}P_{\sigma},\quad\mathcal{P}\hat{H}\mathcal{P} ^{-1}=\hat{H}. \tag{7}\]
We now define two important quantities that are physically related in this light-matter system. The first one is the magnetic flux per plaquette \(\hat{\Phi}\) pointing in the \(z\) direction:
\[\hat{\Phi}=\int_{\square}dxdy\ \nabla\times\hat{\mathbf{A}}(\mathbf{r})=\frac{2g}{ \sqrt{L}}(\hat{a}+\hat{a}^{\dagger})\, \tag{8}\]
where the \(\square\) indicates the integral on a plaquette. The light-matter coupling in the Hamiltonian (6) only depends on the magnetic flux \(\hat{\Phi}\) which is a well-defined physical (and thus gauge-invariant) quantity. The second quantity is the chiral charge current:
\[\hat{J}_{\chi}=\sum_{j=1}^{L-1}\hat{J}_{\square,j}. \tag{9}\]
The chiral current is defined as the sum of the plaquette currents \(\hat{J}_{\square,j}=-\hat{J}_{+,j}-\hat{J}_{\perp,j+1}+\hat{J}_{-,j}+\hat{J}_{\perp,j}\) flowing in an anticlockwise direction, where \(\hat{J}_{\perp,j}\) is the inter-leg current flowing from the top leg \(\sigma=+\) to the bottom leg \(\sigma=-\) at site \(j\) and \(\hat{J}_{\pm,j}\) is the intra-leg longitudinal current flowing from site \(j\) to site \(j+1\). The gauge-invariant currents can be derived starting from the charge density \(\hat{n}_{\sigma,j}\equiv q\hat{c}^{\dagger}_{\sigma,j}\hat{c}_{\sigma,j}\) with \(q=-1\), which fulfills a discrete continuity equation \(\partial_{t}\hat{n}_{\sigma,j}=-\hat{J}_{\sigma,j}+\hat{J}_{\sigma,j-1}-\sigma\hat{J}_{\perp,j}\). By comparing this expression with the Heisenberg equation for the density \(\partial_{t}\hat{n}_{\sigma,j}=i[\hat{H},\hat{n}_{\sigma,j}]\) and carrying out the explicit calculation, we find for the currents:
\[\hat{J}_{\sigma,j}=-it_{0}\left(e^{i\sigma g(\hat{a}+\hat{a}^{\dagger})/ \sqrt{L}}\hat{c}^{\dagger}_{\sigma,j}\hat{c}_{\sigma,j+1}-\text{h.c.}\right) \qquad\hat{J}_{\perp,j}=-it_{1}(\hat{c}^{\dagger}_{+,j}\hat{c}_{-,j}-\text{h.c.})\.\]
Performing the sum, the contributions from the inter-leg current cancel out except for the boundary contributions and we are left with
\[\hat{J}_{\chi}=it_{0}\sum_{j,\sigma}\left(\sigma e^{i\sigma g(\hat{a}+\hat{a}^{\dagger})/\sqrt{L}}\hat{c}^{\dagger}_{\sigma,j}\hat{c}_{\sigma,j+1}-\text{h.c.}\right)-it_{1}(\hat{c}^{\dagger}_{+,1}\hat{c}_{-,1}-\hat{c}^{\dagger}_{+,L}\hat{c}_{-,L}-\text{h.c.}). \tag{10}\]
The chiral current and the magnetic flux operators defined above correspond to physical, gauge invariant, observables, and as such their expectation values do not depend on the choice of the gauge [55, 56, 57]. Different gauge choices are indeed implemented through unitary transformations which act on both operators and states, and leave invariant physical observables. In the following, we will use the expectation values of the chiral current and the magnetic flux as the order parameters for the superradiant phase transition, making it a gauge-invariant phenomenon.
The presented Hamiltonian, although a minimal toy model, serves as a powerful tool in understanding the physics of magnetic superradiant phase transitions and the collective strong coupling regime of itinerant electrons coupled to a single quantized cavity mode. Despite its simplicity, realizing such a model in solid state materials embedded in optical cavities could be extremely challenging. Moreover, although ladder geometries can be realized in cold-atom set-ups, atoms are neutral and our Hamiltonian cannot be straightforwardly realized, except with dynamical synthetic gauge fields3. While the experimental realization of the presented model may be a challenge, our primary objective is to have a clearer interpretation of the results and a better understanding of the underlying physics. Therefore, we leave the question of experimental realizations open for future studies and focus on the theoretical aspects of the model in the present work.
Footnote 3: Note that “dynamical” semi-classical gauge fields for cold atomic set-ups have been studied (see e.g. Refs. [58, 59]), but there the dynamics is linked to their driven-dissipative nature and hence differs from the model considered in this work.
## 3 Square ladder
We start by looking at the half-filled (\(N/L=1\)) square ladder geometry (\(t_{2}=0\)). We solve for the ground state of the model with two approaches: _(i)_ using an approximate photon mean-field decoupling where the fermionic problem is non-interacting and the light-matter entanglement is neglected; _(ii)_ performing numerical simulations with DMRG where the light-matter entanglement is taken into account and the problem is fully many-body.
### Methods
#### 3.1.1 Photon mean-field
In the photon mean-field approximation (PMF) the quantum correlations between photon and matter are neglected by assuming a product ground state \(\left|\Psi^{\rm PMF}\right\rangle=\left|\psi_{\rm m}\right\rangle\left|\psi_{ \rm ph}\right\rangle\)[31, 32, 53]. This gives rise to two mean-field Hamiltonians for photon \(\hat{H}_{\rm ph}^{\rm PMF}\equiv\left\langle\psi_{\rm m}\right|\hat{H}\left| \psi_{\rm m}\right\rangle\) and matter \(\hat{H}_{\rm m}^{\rm PMF}\equiv\left\langle\psi_{\rm ph}\right|\hat{H}\left| \psi_{\rm ph}\right\rangle\) that must be solved self-consistently. Up to irrelevant constants they read:
\[\hat{H}_{\rm m}^{\rm PMF}=-t_{0}R\sum\limits_{j,\sigma}e^{i\sigma \phi/2}\hat{c}_{\sigma,j}^{\dagger}\hat{c}_{\sigma,j+1}-t_{1}\sum\limits_{j} \hat{c}_{+,j}^{\dagger}\hat{c}_{-,j}+{\rm h.c.}\, \tag{11}\] \[\hat{H}_{\rm ph}^{\rm PMF}=\omega_{\rm c}\hat{a}^{\dagger}\hat{a}+ J_{1}\cos\left[\frac{g(\hat{a}+\hat{a}^{\dagger})}{\sqrt{L}}\right]+J_{2}\sin \left[\frac{g(\hat{a}+\hat{a}^{\dagger})}{\sqrt{L}}\right]\,, \tag{12}\]
where we introduced the mean-field parameters \(R\), \(\phi\), \(J_{1}\) and \(J_{2}\). The first two depend on the photon state and are defined as
\[R\equiv\left|\left\langle\psi_{\rm ph}\right|e^{i\frac{g(\hat{a}+\hat{a}^{ \dagger})}{\sqrt{L}}}\left|\psi_{\rm ph}\right\rangle\right|\,,\hskip 28.452756pt \phi\equiv 2\arg\left[\left\langle\psi_{\rm ph}\right|e^{i\frac{g(\hat{a}+\hat{a }^{\dagger})}{\sqrt{L}}}\left|\psi_{\rm ph}\right\rangle\right]\,, \tag{13}\]
such that \(R\)\(e^{i\phi/2}=\left\langle\psi_{\rm ph}\right|e^{i\frac{g(\hat{a}+\hat{a}^{ \dagger})}{\sqrt{L}}}\left|\psi_{\rm ph}\right\rangle\). The matter mean-field parameters \(J_{1}\) and \(J_{2}\) are defined as
\[J_{1}\equiv-t_{0}\sum\limits_{j=1}^{L-1}\sum\limits_{\sigma} \left\langle\psi_{\rm m}\right|\left(\hat{c}_{\sigma,j}^{\dagger}\hat{c}_{ \sigma,j+1}+{\rm h.c.}\right)\left|\psi_{\rm m}\right\rangle\, \tag{14}\] \[J_{2}\equiv-it_{0}\sum\limits_{j=1}^{L-1}\sum\limits_{\sigma} \left\langle\psi_{\rm m}\right|\left(\sigma\hat{c}_{\sigma,j}^{\dagger}\hat{c}_ {\sigma,j+1}-{\rm h.c.}\right)\left|\psi_{\rm m}\right\rangle. \tag{15}\]
The photon parameters \(\phi\) and \(R\) have the interpretation of a magnetic flux per plaquette and of the cavity renormalization of the hopping process, respectively. Whenever the photon quantum state is Gaussian, the expectation value of an exponential can be expressed in terms of the expectation values of the two quadratures \(\hat{X}\equiv(\hat{a}+\hat{a}^{\dagger})/\sqrt{2}\), \(\hat{P}\equiv i(\hat{a}-\hat{a}^{\dagger})/\sqrt{2}\), and their fluctuations. In particular, for Gaussian states \(|\psi_{\rm ph}^{G}\rangle\), our mean-field parameters \(R\) and \(\phi\) become:
\[R_{G}=\exp\Bigl{[}-\frac{2g^{2}}{L}\Bigl{(}\langle\psi_{\rm ph}^ {G}|\hat{X}^{2}|\psi_{\rm ph}^{G}\rangle-\langle\psi_{\rm ph}^{G}|\hat{X}|\psi _{\rm ph}^{G}\rangle^{2}\Bigr{)}\Bigr{]}\, \tag{16}\] \[\phi_{G}=\frac{g\sqrt{2}}{\sqrt{L}}\langle\psi_{\rm ph}^{G}|\hat{ X}|\psi_{\rm ph}^{G}\rangle=\left\langle\psi_{\rm ph}^{G}\right|\hat{\Phi}\left| \psi_{\rm ph}^{G}\right\rangle. \tag{17}\]
Note that if the photonic state is not Gaussian, in principle, we can have \(\phi\neq\left\langle\psi_{\rm ph}\right|\hat{\Phi}\left|\psi_{\rm ph}\right\rangle\). The physical interpretation of the matter parameters \(J_{1}\) and \(J_{2}\) in terms of the chiral current \(\hat{J}_{\chi}\) depends on the values of \(\phi\) and \(R\) as
\[\left\langle\Psi^{\rm PMF}\right|\hat{J}_{\chi}\left|\Psi^{\rm PMF}\right\rangle =R\Big{[}J_{2}\cos(\phi/2)-J_{1}\sin(\phi/2)\Big{]}+\left\langle\Psi^{\rm PMF }\right|\hat{J}_{\perp,N}-\hat{J}_{\perp,1}\left|\Psi^{\rm PMF}\right\rangle. \tag{18}\]
The solution of the PMF Hamiltonians is obtained by a standard self-consistent procedure:
1. Start from a guess \(R\) and \(\phi\);
2. Solve the single-particle problem given by the matter mean-field Hamiltonian in Eq. (11) and compute \(J_{1}\) and \(J_{2}\) as Eqs. (14) and (15);
3. Solve the photonic mean-field Hamiltonian in Eq. (12) via exact diagonalization and compute a new \(R^{\prime}\) and \(\phi^{\prime}\) from Eq. (13);
4. Repeat from 2, using \(R^{\prime}\) and \(\phi^{\prime}\) as the new mean-field parameters, until the desired convergence is reached.
Note that in the presence of first-order transitions one needs to be careful and try different initial guesses, as the self-consistency can get stuck in local minima of the energy.
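For concreteness, a minimal numerical sketch of this self-consistency loop (steps 1-4 above) for the square ladder at half filling is given below; the system size, Fock-space cutoff, coupling values and initial guess are illustrative choices and, as just noted, several initial guesses should be tried near the first-order transition.

```python
import numpy as np
from numpy.linalg import eigh

# Illustrative parameters: square ladder (t2 = 0) at half filling.
L, t0, t1, wc, g = 20, 1.0, 1.0, 1.0, 1.8
Nph = 40                                   # photon Fock-space cutoff

def solve_matter(R, phi):
    """Diagonalize the matter PMF Hamiltonian (Eq. 11) with open boundaries
    and return the mean-field parameters J1, J2 of Eqs. (14)-(15)."""
    dim = 2 * L                            # site index = 2*j + leg (leg 0 -> sigma = +1)
    H = np.zeros((dim, dim), complex)
    for j in range(L - 1):
        for leg, sigma in enumerate((+1, -1)):
            a, b = 2 * j + leg, 2 * (j + 1) + leg
            H[a, b] = -t0 * R * np.exp(1j * sigma * phi / 2)
            H[b, a] = np.conj(H[a, b])
    for j in range(L):                     # rung hoppings
        H[2 * j, 2 * j + 1] = H[2 * j + 1, 2 * j] = -t1
    _, V = eigh(H)
    occ = V[:, :L]                         # fill the L lowest single-particle states
    D = occ @ occ.conj().T                 # <c_a^dag c_b> = D[b, a]
    J1 = J2 = 0.0
    for j in range(L - 1):
        for leg, sigma in enumerate((+1, -1)):
            hop = D[2 * (j + 1) + leg, 2 * j + leg]   # <c^dag_{sigma,j} c_{sigma,j+1}>
            J1 += -t0 * 2 * hop.real
            J2 += 2 * t0 * sigma * hop.imag
    return J1, J2

def solve_photon(J1, J2):
    """Diagonalize the photon PMF Hamiltonian (Eq. 12) in a truncated Fock
    space and return the updated R and phi of Eq. (13)."""
    n = np.arange(Nph)
    a = np.diag(np.sqrt(n[1:]), 1)         # annihilation operator
    lam, U = eigh(a + a.T)                 # eigenbasis of (a + a^dag)
    def func_of_X(fn):
        return U @ np.diag(fn(g * lam / np.sqrt(L))) @ U.T
    Hph = wc * np.diag(n) + J1 * func_of_X(np.cos) + J2 * func_of_X(np.sin)
    _, W = eigh(Hph)
    psi = W[:, 0]
    c = psi @ (U @ np.diag(np.exp(1j * g * lam / np.sqrt(L))) @ U.T) @ psi
    return abs(c), 2 * np.angle(c)

R, phi = 1.0, 2.5                          # initial guess (try several near g_c)
for _ in range(200):
    J1, J2 = solve_matter(R, phi)
    R_new, phi_new = solve_photon(J1, J2)
    converged = abs(R_new - R) + abs(phi_new - phi) < 1e-8
    R, phi = R_new, phi_new
    if converged:
        break
print(f"converged: R = {R:.4f}, flux per plaquette phi = {phi:.4f}")
```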
#### 3.1.2 Density-matrix renormalization group
This ladder geometry, being a quasi-1D system, is well-suited to be approached via density-matrix renormalization group (DMRG) techniques [41, 42, 43, 44]. The matrix-product state (MPS) representation that we use for this purpose is similar to those employed in previous works on light-matter systems [45, 46, 47, 48], where the single photon site is placed at one end of the MPS chain, while the rest of the MPS is made up of fermionic sites, as depicted in Fig. 2. Additionally, to preserve the global \(\mathbb{U}(1)\) symmetry associated with the conservation of total fermionic charge \(\sum_{\sigma,j}n_{\sigma,j}\), we employ \(\mathbb{U}(1)\) symmetric tensors [60, 61] for the fermionic sites, while a standard dense tensor is used for the photon site. Moreover, the use of the matrix-product operator (MPO) representation for the Hamiltonian, as illustrated in Fig. 2, is efficient as the long-range light-matter interaction term can be expressed exactly in the MPO form [62, 63]. The ground state obtained through DMRG is a variationally computed state, with an error that can be precisely controlled through the bond dimension \(\chi\) of the MPS ansatz. By adjusting the bond dimension, one can verify the convergence and attain the desired level of accuracy. See App. D for further details on the convergence of DMRG simulations. For the numerical implementation of the DMRG algorithm we use the ITensor library [64] and the respective codes can be found at GitHub [65].
It is important to note that, in a symmetry-broken phase, while strictly speaking, exact symmetry breaking does not occur in the ground state at finite sizes, it is a well-established characteristic of DMRG to converge to one of the symmetry-broken states, as these states have significantly less entanglement compared to the macroscopic superposition of two (or more) symmetry-broken states. In the following discussion, while considering the symmetry-broken phase, we will focus solely on these symmetry-broken ground states, which can be reached either automatically through the DMRG algorithm or with the aid of a small symmetry breaking term4.
Footnote 4: For large enough system-size \(L\), DMRG may randomly converge to one of the symmetry-broken ground states, which can also be influenced by the choice of initial input state. To eliminate such arbitrariness, we add a very small symmetry breaking term in our simulations and select only a specific symmetry-broken state.
The DMRG solution allows us then to easily access the photon density matrix by tracing out the matter degree of freedom as (see Fig. 2):
\[\rho_{\rm ph}={\rm Tr}_{\rm m}\Big{[}\left|\Psi\right\rangle\left\langle\Psi \right|\Big{]}. \tag{19}\]
From this we can, for example, calculate the entanglement entropy of the cavity with respect to matter \(S(\rho_{\rm ph})=-{\rm Tr}\Big{[}\rho_{\rm ph}\ln\rho_{\rm ph}\Big{]}\) that in the PMF decoupling is exactly zero.
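As a small self-contained illustration of Eq. (19), for a pure light-matter state stored as a dense vector the partial trace over the matter degrees of freedom reduces to a single matrix product, from which \(S(\rho_{\rm ph})\) follows; this is conceptually the same contraction that the MPS representation of Fig. 2 performs efficiently (the dimensions below are toy values).

```python
import numpy as np

def photon_density_matrix(psi, d_ph, d_m):
    """Reduced photon density matrix of a pure light-matter state.

    `psi` holds the amplitudes <n, m|Psi> flattened with the photon index
    running slowest; rho_ph = Tr_m |Psi><Psi|.
    """
    M = psi.reshape(d_ph, d_m)
    return M @ M.conj().T

def entanglement_entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

# Example: a photon mode (cutoff 3) maximally entangled with a two-level "matter" system
psi = np.zeros(3 * 2, complex)
psi[0 * 2 + 0] = psi[1 * 2 + 1] = 1 / np.sqrt(2)
rho_ph = photon_density_matrix(psi, d_ph=3, d_m=2)
print(entanglement_entropy(rho_ph))     # -> ln 2 ~ 0.693
```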
### Results: Photon mean-field vs. DMRG
Phase diagram. A sample of the results obtained with DMRG and PMF is depicted in Fig. 3. Overall, we observe that the two approaches match in determining the structure of the phase diagram - both show a first-order phase transition from a normal metallic phase for \(g<g_{\rm c}\) to a superradiant phase for \(g>g_{\rm c}\), characterized by a \(\mathbb{Z}_{2}\) symmetry breaking where the symmetry \(\mathcal{P}\) (see Eq. (7)) gets spontaneously broken. The energy kink shown in Fig. 3(d) supports the first-order nature of the transition for both DMRG5 and PMF. The normal phase is connected to the state at \(g=0\) and displays metallic properties. The superradiant phase is a Condon phase where a finite current \(\langle\hat{J}_{\chi}\rangle\neq 0\) (Fig. 3(b)) is linked to a finite magnetic flux \(\langle\hat{\Phi}\rangle\neq 0\) (Fig. 3(a))6. In particular, we have \(|\langle\hat{J}_{\chi}\rangle|=\frac{L}{2}(\pi-|\langle\hat{\Phi}\rangle|)\)
Figure 2: The matrix-product state (MPS) and the matrix-product operator (MPO) representations used in our study. The photonic Hilbert space is truncated to accommodate maximum number of photons \(N_{\rm max}^{\rm ph}=63\), while the fermionic local Hilbert space dimension is 2. Note that the photon mean-field (PMF) approximation is implicitly setting \(\chi=1\) for the first link that connects the photon and fermionic degrees of freedom. In this MPS representation, the photon density matrix can be computed efficiently by tracing out the fermionic degrees of freedom.
from both DMRG and PMF with small finite-size corrections, where \(|\langle\hat{\Phi}\rangle|\leq\pi\) and \(\langle\hat{J}_{\chi}\rangle\) and \(\langle\hat{\Phi}\rangle\) have the same sign. The matter state is a diamagnetic band insulator, also called the Hofstadter flux state in the context of fermionic ladders with static magnetic fields [37]. The
Figure 3: DMRG results for different system sizes and their comparison with the photon mean-field (red dashed line). (a-b) The magnetic flux per plaquette and chiral current as the cavity and the matter order parameters respectively. They both show a discontinuity at the first-order transition point \(g_{c}\simeq 1.44\) that slightly shifts at finite sizes. The relation \(|\langle\hat{J}_{\chi}\rangle|=\frac{L}{2}(\pi-|\langle\hat{\Phi}\rangle|)\) is satisfied in the thermodynamic limit. Note that here we plot the absolute values of the order parameters since they can take on both negative and positive values in the superradiant phase depending on the degenerate symmetry-broken state, but always have the same sign as each other. (Inset) Zoom near the thermodynamic limit transition point \(g_{c}=1.44\) marked with a vertical black line. Note that finite size corrections to the transition point and value of the current are different between DMRG and PMF. (c) Photon entanglement entropy with respect to the matter. It is finite in the thermodynamic limit for both phases, although much smaller in the superradiant phase. (d) Total energy density showing a kink at the transition point as expected for a first-order transition both in DMRG and the photon mean-field. Note that finite size corrections are different.
notion of diamagnetism, however, in the present context is an unusual one. While usually one defines diamagnetism when the magnetization of a material is opposite with respect to an applied magnetic field, here the magnetization is in the same direction but proportional to the difference \(\pi-|\langle\hat{\Phi}\rangle|\). Diamagnetism must be interpreted as a response of the system trying to bring the magnetic flux not to \(0\) but to \(\pi\).
Finite size effects. The chiral current shown in Fig. 3(b) has finite-size corrections which compare well between PMF and DMRG, but the exact transition point is shifted towards lower (PMF) and higher (DMRG) values of the coupling strength. The reason is that while the superradiant phase has low photon entanglement (Fig. 3) and is well captured by the PMF, the normal phase has high photon entanglement, so the finite-size corrections differ between PMF and DMRG. In particular, the finite-size effect of the PMF comes mainly from the mean-field hopping renormalization \(R\), which tends to \(1\) in the thermodynamic limit as the squeezing of the cavity remains finite (Eq. (16)). The same happens for the total energy and the magnetic susceptibility (not shown).
Magnetostatic instability. We refer to the instability towards a ground state displaying a finite magnetic flux \(\langle\hat{\Phi}\rangle\neq 0\) as a "magnetostatic instability" [30, 31, 34, 66]. In Fig. 4 we show the mean-field picture of the magnetostatic instability characterizing the first-order transition. Unlike for a second-order transition [31], the instability is not controlled by the linear orbital magnetic susceptibility (which is a property of the Fermi surface [32]) of the normal state \(\phi=0\). Instead, the first-order nature stems from a strong non-linear response of the system at large magnetic fluxes. In particular, the non-linear behavior comes together with an indirect gap opening in the band structure at \(\phi=2\pi/3\). Once the fermionic bare energies are fixed, the transition point depends only on the energy of the cavity per unit of flux, which is controlled by the combination \(\omega_{\rm c}/g^{2}\), since the cavity PMF energy density in the thermodynamic limit is \(E_{\rm ph}=\omega_{\rm c}(\phi/4g)^{2}\).
The numerical results from DMRG simulations confirm the photon mean-field picture for the instability. Our findings show that in the normal phase, when the superradiant order parameter vanishes, collective coupling to a single cavity mode cannot change the properties of a thermodynamically large system [53, 67].
Cavity quantum state. As shown in Fig. 5, the density matrix \(\rho_{\rm ph}\) obtained via DMRG shows that the cavity state can be accurately approximated by a Gaussian state (see App. A). In order to quantify the non-Gaussianity of the state, we compute the quantum relative entropy [68]:
\[\Delta_{G}=S(\rho_{\rm ph}^{\rm G})-S(\rho_{\rm ph})\, \tag{20}\]
where \(\rho_{\rm ph}^{\rm G}\) is the Gaussian density matrix that has the same expectation values \(\langle\hat{X}^{2}\rangle\), \(\langle\hat{P}^{2}\rangle\), \(\langle\hat{X}\rangle\), \(\langle\hat{P}\rangle\) as \(\rho_{\rm ph}\). The non-Gaussian nature of the state is a finite-size correction (see Fig. 5(a)) and it arises from the non-linear nature of the Peierls phase. The non-Gaussianity turned out to be sensitive to numerical details at sizes larger than \(L=76\), for which a more careful analysis is needed (App. D). To gain more information on the nature of the corrections, we also show, in Figs. 5(b)-(d), the Wigner function [69] of the cavity at the smallest investigated system size. The corrections spoil neither the positivity of the Wigner function nor its qualitative shape. Also in the PMF (not shown) the non-Gaussianity goes to zero as \(L\to\infty\) and the squeezing remains constant, so that \(R\to 1\). The dashed
Figure 4: The mean-field picture of the transition. (a) Mean-field energy per particle as a function of a cavity generated ‘classical’ magnetic flux \(\phi\) at \(g=g_{c}=1.44\). The photon energy is \(E_{\rm ph}=\omega_{c}(\phi/4g)^{2}\) and the matter energy \(E_{\rm matt}=\left\langle\psi_{\rm m}\right|\hat{H}_{\rm m}^{PMF}\left|\psi_{ \rm m}\right\rangle/L\) is obtained from the single-particle problem of Eq. (11) fixing \(R=1\). (Inset) \(E(\phi)\) shown from \(g=1.2\) (light color) to \(g=2.8\) (dark color) across the transition. After the transition the solution at \(\phi=0\) does not immediately become unstable, signaling metastability – a typical feature of first-order phase transitions. (b) Metallic and (c) insulating band structure at two fluxes corresponding to the two minima of the total energy at \(g=g_{c}\) in periodic boundary conditions. Horizontal lines mark the chemical potential.
lines in Figs. 5(b)-(d) mark the width of the respective Gaussian states and encode the nature of the Gaussian state. A finite light-matter entanglement is represented by a larger area enclosed by the dashed lines, as it increases the variance of both quadratures (see App. A), while squeezing reduces the fluctuations in one quadrature while increasing the other one to keep the product constant. The Gaussianity of the cavity state can be an important starting point for semiclassical treatments of light-matter problems [70].
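The following sketch shows how \(\Delta_{G}\) of Eq. (20) can be evaluated from a photon density matrix expressed in a truncated Fock basis. It assumes the quadrature conventions of Eqs. (32)-(37), for which the Gaussian reference entropy follows from \(\det\mathbf{\sigma}=(1/2+N_{\rm th})^{2}\); the input matrix is a placeholder supplied by the user.

```python
import numpy as np

def non_gaussianity(rho):
    """Quantum relative entropy Delta_G = S(rho_G) - S(rho) of Eq. (20)."""
    dim = rho.shape[0]
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)              # annihilation operator
    X = (a + a.conj().T) / np.sqrt(2.0)
    P = 1j * (a.conj().T - a) / np.sqrt(2.0)
    ev = lambda O: np.trace(rho @ O).real
    x0, p0 = ev(X), ev(P)
    cross = ev(X @ P + P @ X) / 2.0 - x0 * p0                 # symmetrized covariance
    sigma = np.array([[ev(X @ X) - x0**2, cross],
                      [cross, ev(P @ P) - p0**2]])
    n_th = max(np.sqrt(np.linalg.det(sigma)) - 0.5, 0.0)      # det(sigma) = (1/2 + N_th)^2
    s_gauss = 0.0 if n_th == 0 else (n_th + 1) * np.log(n_th + 1) - n_th * np.log(n_th)
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    s_rho = -np.sum(p * np.log(p))                            # von Neumann entropy of rho
    return s_gauss - s_rho
```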
### Results: Gaussian fluctuations
In order to gain further insights into the entangled light-matter ground state one can try to calculate perturbative contributions to the ground state \(\ket{\Psi_{0}}\) in powers of \(g\). Leaving a more detailed discussion to App. C, at first order (with periodic boundary conditions) we obtain:
\[\left|\Psi_{(1)}\right>=\ket{\Psi_{0}}-\frac{g}{\sqrt{L}}\sum_{k}\frac{2f_{k}} {\omega_{k}+\omega_{\rm c}}\ket{\rm PH_{\rm k},1_{\rm c}}, \tag{21}\]
where \(\ket{\rm PH_{\rm k},1_{\rm c}}=\hat{\bar{c}}_{k,2}^{\dagger}\hat{\bar{c}}_{k,1 }\hat{a}^{\dagger}\ket{\Psi_{0}}\) is a polaritonic state with one photon in the cavity and with one direct particle-hole excitation over the Fermi sea at momentum \(k\), \(\hat{\bar{c}}_{k,a}\) with \(a=1,2\)
Figure 5: Cavity quantum state obtained with DMRG. (a) Quantum relative entropy \(\Delta_{G}\) as a measure of cavity non-Gaussianity for different system sizes. The state is Gaussian in the thermodynamic limit in both phases. (Inset) The numerical extrapolation of the non-Gaussianity to the thermodynamic limit by using a linear fit \(f(1/L)=a+b/L\). Errors of \(10^{-5}\) on the DMRG points have been considered to account for slight dependency on numerical parameters (see App. D). (b-d) Wigner functions of the photon at finite system-size \(L=40\) for the uncoupled system \(g=0\) (b), in normal phase \(g=1.4\) (c), and in the superradiant phase \(g=2\) (d). Even at the smallest presented size when the non-Gaussianity is larger, the Wigner function is positive everywhere. White contours display the width of the corresponding Gaussian state \(\rho_{\rm ph}^{G}\) having same \(\langle\hat{X}^{2}\rangle-\langle\hat{X}\rangle^{2}\) and \(\langle\hat{P}^{2}\rangle-\langle\hat{P}\rangle^{2}\) as the actual photon state \(\rho_{\rm ph}\), and the white arrow shows the displaced nature of the superradiant state. Red dots mark the origin \((x,p)=(0,0)\).
are the fermionic operators that diagonalize \(H_{\rm m}^{\rm PMF}\), \(\omega_{k}\) is the direct band gap, and \(f_{k}\) are the matrix elements for the magnetic transition. This matches the DMRG solution at small values of \(g\) (not shown), but it becomes useless at higher values of \(g\), in particular after the superradiant transition.
As an alternative method to gain analytical insight into the system dynamics, we examine the Gaussian fluctuations above the mean-field solution which, in the normal phase, at first order in \(g\) actually recovers the above perturbative result. This treatment is motivated by the observation that, for all values of \(g\) considered, the photon state is always Gaussian in the thermodynamic limit. We then expand the photon operator as:
\[\hat{a}=\alpha_{0}+\delta\hat{a}\, \tag{22}\]
where \(\alpha_{0}=\phi\sqrt{L}/4g\) is the solution of the photon mean-field decoupling restricted to coherent states and \(\delta\hat{a}\) the bosonic quantum fluctuations around it fulfilling the bosonic commutation relations \([\delta\hat{a},\delta\hat{a}^{\dagger}]=1\). Note that the coherent state around which we are expanding does not, in general, correspond to the full solution of the PMF as that is a generic pure quantum state of the cavity Hilbert space.
To simplify the treatment, we work in periodic boundary conditions (details in App. B) and introduce the momentum-space fermionic operators \(\hat{c}_{\sigma,k}=\frac{1}{\sqrt{L}}\sum_{j}e^{-ikj}\hat{c}_{\sigma,j}\) and the pseudospin representation \(\hat{\sigma}^{\alpha}_{k}=(\hat{c}^{\dagger}_{+,k}\quad\hat{c}^{\dagger}_{-,k })\sigma^{\alpha}(\hat{c}_{+,k}\quad\hat{c}_{-,k})^{T}\), with \(\sigma^{\alpha}\) the Pauli matrices (\(\alpha=1,2,3\)) and \(\sigma^{0}\) the identity. Now, in order to obtain a quadratic Hamiltonian, we expand the Peierls phase up to second order in \(\delta\hat{a}/\sqrt{L}\), obtaining, up to constants:
\[\hat{H}\simeq\omega_{c}\delta\hat{a}^{\dagger}\delta\hat{a}+\sum_{k,\alpha} \hat{\sigma}^{\alpha}_{k}h^{\alpha}_{k}+\frac{g}{\sqrt{L}}(\delta\hat{a}+ \delta\hat{a}^{\dagger})\sum_{k,\alpha}\hat{\sigma}^{\alpha}_{k}d^{\alpha}_{p,k}-\frac{g^{2}}{2L}(\delta\hat{a}+\delta\hat{a}^{\dagger})^{2}\sum_{k,\alpha} \hat{\sigma}^{\alpha}_{k}d^{\alpha}_{d,k}\, \tag{23}\]
where
\[\mathbf{h}_{k}=-(2t_{0}\cos(k)\cos(\phi/2),t_{1},0,2t_{0}\sin(k)\sin( \phi/2))\,\] \[\mathbf{d}_{p,k}=-(-2t_{0}\cos(k)\sin(\phi/2),0,0,2t_{0}\sin(k)\cos( \phi/2))\,\] \[\mathbf{d}_{d,k}=-(2t_{0}\cos(k)\cos(\phi/2),0,0,2t_{0}\sin(k)\sin( \phi/2)). \tag{24}\]
Since the light-matter coupling remains diagonal in momentum space, the occupation is conserved at each momentum, and it takes the values \(N_{k}=\langle\hat{\sigma}^{0}_{k}\rangle=0,1,2\) depending on the mean-field solution encoded in \(\phi\). The only non-trivial momentum sectors are the singly occupied ones, where a direct particle-hole excitation is allowed; for the others \(\langle\hat{\sigma}^{1,2,3}_{k}\rangle=0\). For every momentum sector with occupation \(N_{k}=1\), we can rotate the pseudospin degree of freedom to a new basis \(\hat{\sigma}_{k}\), so that every term in the Hamiltonian, except for the ones containing cavity fluctuations \(\delta\hat{a}\), becomes diagonal. Then we use a Holstein-Primakoff transformation of the particle-hole pseudospin, for which \(\hat{\sigma}^{3}_{k}=-(1-2\hat{b}^{\dagger}_{k}\hat{b}_{k})\) and \(\hat{\sigma}^{1}_{k}=(\hat{b}_{k}+\hat{b}^{\dagger}_{k})\), which is exact if one considers \(\hat{b}_{k}\) as hard-core bosons. Since the occupation of a single particle-hole excitation is not expected to exceed \(O(1/L)\), we lift the hard-core boson constraint. As a last step, we discard non-quadratic terms to obtain a quadratic Hamiltonian:
\[\hat{H}^{(2)}=\omega_{c}\delta\hat{a}^{\dagger}\delta\hat{a}+\sum_{k}\omega_{k }\hat{b}^{\dagger}_{k}\hat{b}_{k}-D\frac{g^{2}}{2}(\delta\hat{a}+\delta\hat{a} ^{\dagger})^{2}+g(\delta\hat{a}+\delta\hat{a}^{\dagger})\sum_{k}P_{k}(\hat{b}_ {k}+\hat{b}^{\dagger}_{k})\, \tag{25}\]
with
\[\omega_{k}=2\sqrt{\sum_{\alpha=1}^{3}(h_{k}^{\alpha})^{2}}\,\qquad P _{k}=\delta_{N_{k},1}\frac{2}{\sqrt{L}}\frac{d_{p,k}^{3}d_{\alpha,k}^{1}}{\omega _{k}}\,\] \[D=\frac{1}{L}\sum_{k}N_{k}d_{d,k}^{0}-\frac{2}{L}\sum_{k}\delta_{ N_{k},1}\frac{d_{d,k}^{3}d_{d,k}^{3}}{\omega_{k}}. \tag{26}\]
The quadratic Hamiltonian in Eq. (25) can be diagonalized with a Hopfield-Bogoliubov transformation [71] obtaining:
\[\hat{H}^{(2)}=E_{0}+\sum_{\mu=1}^{M+1}\epsilon_{\mu}\hat{d}_{\mu}^{\dagger} \hat{d}_{\mu}\, \tag{27}\]
where \(\hat{d}_{\mu}\) is the polariton annihilation operator and
\[M=\sum_{k}\delta_{N_{k},1} \tag{28}\]
is the number of available particle-hole transitions, which depends on the mean-field phase. The latter is \(M=L/3\) in the normal phase and \(M=L\) in the superradiant phase. The vacuum of polaritons, defined by \(\hat{d}_{\mu}\left|0_{\rm pol}\right\rangle=0\) for all \(\mu\), corresponds to a multi-mode Gaussian state of cavity photon and particle-hole excitations, and is different from the mean-field ground state \(\left|\Psi^{PMF}\right\rangle\neq\left|0_{\rm pol}\right\rangle\). In the limit of small \(g\), the ground state wavefunction \(\left|0_{\rm pol}\right\rangle\) coincides with the first-order perturbative result of Eq. (21), and then corrects it with \(O(g^{2})\) terms that give rise to the squeezing of the cavity mode. In order to check the validity of this treatment, we can directly compare the photon observables in the thermodynamic limit (Fig. 6). Note that finite-system comparisons are not too meaningful as the boundary conditions between DMRG and the treatment with Gaussian fluctuations are different. There is nevertheless a small discrepancy that can be attributed to the higher-order terms discarded in Eq. (25), which give rise to effective polariton-polariton interactions. However, this does not contradict the result presented in Fig. 5, as such terms are, in general, unable to spoil the Gaussianity of the cavity density matrix. Nonetheless, up to these minimal errors, the treatment of Gaussian fluctuations is able to faithfully capture the nature of the light-matter correlated ground state (see the comparisons in Fig. 6).
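To make the procedure concrete, the sketch below diagonalizes Eq. (25) numerically. Writing the Hamiltonian in the quadratures \(x_{i}=(a_{i}+a_{i}^{\dagger})/\sqrt{2}\) brings it to the form \(\tfrac{1}{2}p^{T}\Omega p+\tfrac{1}{2}x^{T}Mx\), whose squared normal-mode energies are the eigenvalues of \(\Omega^{1/2}M\Omega^{1/2}\); this reduction and the toy parameter values are our own assumptions, and the full Bogoliubov coefficients entering Eq. (29) are not computed here.

```python
import numpy as np

def polariton_frequencies(w_c, w_k, P_k, D, g):
    """Normal-mode energies eps_mu of Eq. (25); w_k, P_k run over the
    singly-occupied momentum sectors (N_k = 1)."""
    omega = np.concatenate(([w_c], np.asarray(w_k, dtype=float)))
    M = np.diag(omega)
    M[0, 0] += -2.0 * D * g**2                 # from -(D g^2/2)(da + da^+)^2
    M[0, 1:] = M[1:, 0] = 2.0 * g * np.asarray(P_k, dtype=float)
    sq = np.sqrt(omega)
    dyn = sq[:, None] * M * sq[None, :]        # Omega^{1/2} M Omega^{1/2}
    eps2, vecs = np.linalg.eigh(dyn)           # eps2 < 0 would signal an instability
    return np.sqrt(np.clip(eps2, 0.0, None)), vecs

# toy example: all bright particle-hole excitations at w_k = 2 (cf. Fig. 7);
# the numbers below are placeholders, not the parameters used in the figures
eps, vecs = polariton_frequencies(w_c=1.0, w_k=np.full(20, 2.0),
                                  P_k=np.full(20, 0.05), D=0.3, g=1.0)
print(eps[:3])                                 # lowest polariton branches
```

The squared overlap of an eigenvector with the photon entry gives a rough measure of the photon character of the corresponding mode, although the exact spectral weights of Eq. (29) require the full Bogoliubov transformation.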
Now given the quadratic Hamiltonian in Eq. (27) we can also get information on the excited states of the light-matter system. In particular, we show, in Fig. 7, the zero temperature spectral function of the photon, calculated in the Lehmann representation as:
\[A(\omega)=\sum_{\mu=0}^{M+1}|\bra{\mu}\hat{a}^{\dagger}\left|0_{\rm pol} \right\rangle|^{2}\delta(\omega-\epsilon_{\mu})-|\bra{\mu}\hat{a}\left|0_{\rm pol }\right\rangle|^{2}\delta(\omega+\epsilon_{\mu})\, \tag{29}\]
where \(\left|\mu\right\rangle=\hat{d}_{\mu}^{\dagger}\left|0_{\rm pol}\right\rangle\) for \(\mu>0\), and \(\mu=0\) corresponds to the vacuum \(\left|0_{\rm pol}\right\rangle\). In the normal phase \(g<g_{c}\) we clearly see two polariton lines. The lower polariton starts at the cavity frequency \(\omega_{c}=1\) and the upper polariton at the energy \(\omega=\omega_{k}=2\), which corresponds to the excitation energy of all direct particle-hole excitations, as is clear from the band structure of Fig. 4(b). The hybridized degree of freedom is then a superposition of all \(M\) available
particle-hole excitations, leaving \(M-1\) dark polariton states, akin to what happens for intersubband exciton-polaritons [72], where intersubband particle-hole excitations provide a macroscopic electric dipole moment. In the superradiant phase \(g>g_{c}\), instead, one polariton mode carries almost all the photon spectral weight and crosses the rest of the polariton modes made up of the particle-hole continuum. Indeed, whether or not a clear polariton doublet can form depends on the band structure presented in Fig. 4(b,c). The energy of the brightest polariton in the superradiant phase increases with \(g\) as the cavity fluctuations become more and more squeezed due to the term proportional to \(D\) in Eq. (25). Consistent with the first-order nature of the transition, the polariton gap does not close.
Figure 6: (a-b) The photon entropy \(S_{\text{ph}}\) and the variance of the quadrature \(X\) for the three different methods: DMRG (red), mean-field (blue), and mean-field plus Gaussian fluctuations (green). \(S_{\text{ph}}\) is zero by definition for the PMF, and hence is not shown. Note that while the system size is fixed at \(L=76\) for all three methods, the boundary condition for the treatment with Gaussian fluctuations has been set to periodic instead of open. (c-f) The scaling analysis at two representative points inside each phase: \(g=1\) (c,e) and \(g=1.6\) (d,f). Dashed lines are linear fits to a function \(f(1/L)=a+b/L\), and have been extrapolated to \(L\rightarrow\infty\). For \(g=1\), the extrapolation of \(\langle\hat{X}^{2}\rangle-\langle\hat{X}\rangle^{2}\) for the PMF is not plotted and its extrapolated value is \(a=0.279\). The difference with the bare mean-field is most evident in the normal phase, where the photon entanglement is higher. On the other hand, the treatment of Gaussian fluctuations does not exactly coincide with the DMRG results either, but the errors are drastically reduced.
Figure 7: The photon spectral function \(A(\omega)\) obtained from the treatment with Gaussian fluctuations, for the square ladder case. Each polariton energy is smeared with a Lorentzian of width \(\eta=10^{-2}\). In the normal phase, there are two bright polariton modes starting at frequencies \(\omega_{c}=1\) and \(\omega=\omega_{k}=2\), and \(M-1\) dark modes at \(\omega=\omega_{k}=2\) signaled by a white dashed line (see Eq. (28) for the definition of \(M\)). In the superradiant phase only one polariton mode is clearly visible, while the other one is shared among all the particle-hole excitations, which now have a non-uniform energy structure. The polariton gap does not close at the first-order transition. The system size here is \(L=400\).
## 4 Triangular ladder
We now discuss the similar case of a triangular ladder geometry with \(t_{0}=t_{1}=t_{2}=\omega_{\rm c}=1\). We do not present explicitly all the calculations for the PMF and for the Gaussian fluctuations, as these can be done in close analogy with the square ladder case. We only mention here that the inclusion of the \(t_{2}\) hopping changes \(\mathbf{h}_{k}\), and the parameters \(\omega_{k}\), \(D\), and \(P_{k}\) appearing in Eq. (25). As shown in Fig. 8, we find again a first-order transition to a superradiant state with \(\langle\hat{\Phi}\rangle\neq 0\). The main qualitative difference is in the state of the matter, which now goes from a metallic state with 4 Fermi points to a superradiant metallic state with 2 Fermi points, as evident from the band structure in Fig. 8. Again the normal phase has more photon entanglement than the superradiant phase, and indeed the PMF does not correctly capture the fluctuations of the cavity quadrature \(\hat{X}\) below the transition, while the Gaussian fluctuations are in good agreement. We remind the reader here that the treatment with Gaussian fluctuations is done with periodic boundary conditions, while PMF and DMRG use open boundary conditions. However, in the thermodynamic limit (not shown) the results are compatible with the interpretation given for the square ladder case.
By looking at the photon spectral function it is now clear that the high photon entanglement in the normal phase is not directly linked to a strong coupling to a single collective
Figure 8: Results for the triangular ladder geometry. (a) The photonic order parameter showing the first-order superradiant transition at \(g=1.26\) for both DMRG and PMF. (b) Variance of the cavity quadrature \(\hat{X}\) with the three different methods. We show only one system size; the PMF+fluctuations is done with periodic boundary conditions while DMRG and PMF are in open boundary conditions. (c-d) Band structures of the PMF problem for fixed \(R=1\) and two values of \(\phi\) corresponding to the two minima at the transition. Horizontal lines mark the chemical potential. (e) Photon spectral function obtained via the Gaussian fluctuations. Again the gap does not close at the transition due to its first-order nature. Each polariton energy has been smeared with a Lorentzian of width \(\eta=10^{-2}\).
excitation. While for the square ladder only one collective excitation was mixing with the cavity, here it is evident that the whole continuum of particle-hole excitations contributes, as there is no polariton doublet in the spectrum at \(g<g_{c}\). In the superradiant phase, the ground state of the cavity is strongly squeezed and hence its excitation energy is pushed to higher frequencies as we increase \(g\).
## 5 Conclusion
In this work, we have proposed a class of minimal toy models for charged fermions coupled through a Peierls phase to a non-uniform cavity mode. The cavity hosts a fluctuating magnetic flux which, above a critical light-matter coupling, develops a non-zero expectation value, leading to a first-order superradiant phase transition. To the best of our knowledge, this is the first example of an equilibrium first-order superradiant phase transition for an electronic system. We have shown how the key element for such a transition is a strong non-linear magnetic response of the ladder band structure, coming with a sudden change of the number of Fermi points as a function of the superradiant order parameter, from 4 to 0 (2) for the square (triangular) ladder case. Thanks to the quasi-1D nature of the ladder geometry, we have been able to study the ground state via DMRG, hence fully taking into account light-matter entanglement and all kinds of quantum fluctuations. Our numerical results confirm that quantum fluctuations of a single cavity mode alone in the so-called collective strong coupling regime (\(g=\text{const}\) for \(L\rightarrow\infty\)) do not alter the phase diagram of a thermodynamically large system [53, 67].
Indeed, the transition we discussed is already captured at the level of the PMF decoupling. Still, we find that light-matter entanglement is essential to properly describe the quantum state of a strongly coupled cavity mode, as discussed in Fig. 6. As already found in other systems with linear dipole-like light-matter couplings [49], the cavity state is Gaussian. Here we have shown how the non-linear nature of the Peierls phase gives a small non-Gaussian correction at finite sizes, without any qualitative change in the Wigner function, which remains positive in all the explored phases.
Supported by the Gaussian nature of the cavity ground state in the thermodynamic limit, we analytically derived the quadratic fluctuations on top of the mean-field solution. This highlights the role of polariton states whose ground state gives a qualitatively correct result for the photon entropy and gives access to the cavity spectral function. The latter reflects the first-order nature of the transition.
The presented model is a valid starting point to study superradiant transitions for large enough system sizes in a numerically exact way. Although not shown in the main text, the magnetic instability of the ladder exists for a wide range of geometries and Hamiltonian parameters, including both metal-to-metal and insulator-to-insulator first- and second-order superradiant transitions. Moreover, local fermion-fermion interactions can be included without any added cost to the DMRG simulations, as recently done in [73] for a single XXZ chain. Another element that would be interesting to add to the model is the electronic spin, which should favor the paramagnetic response of the system and could have a non-trivial interplay with the orbital magnetism that is the subject of this work.
Finally, we note that a recent study [74], which appeared on arXiv on the same day, obtained similar results in a system of Van Vleck paramagnetic molecules, showing the importance of cavities with a significant magnetic component.
## Acknowledgements
We express our sincere gratitude to G. Arwas, B. Beradze, M. Capone, I. Carusotto, C. Ciuti, D. De Bernardis, O. Di Stefano, D. Fausti, G. Mazza, A. Mercurio, C. Mora, A. Nersesyan, F.M.D Pellegrino, M. Polini, and S. Savasta for useful discussions. We are particularly grateful to the organizers of _Shedding Quantum Light on Strongly Correlated Materials (QLCM22)_ where this collaboration took its first steps.
Funding information. The work of G. C. and M. D. was partly supported by the ERC under grant number 758329 (AGEnTh), and by the MIUR Programme FARE (MEPH). T. C. acknowledges the support of PL-Grid Infrastructure for providing high-performance computing facility for a part of the numerical simulations reported here. G.M.A. and M.S. acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 101002955 - CONQUER).
## Appendix A Gaussian states and Wigner function
Given a bosonic degree of freedom \(\hat{a}\), a Gaussian state can be identified by two complex parameters \(\alpha\), \(\xi\), and one real positive parameter \(N_{\rm th}\). For a single mode, the density matrix of a generic Gaussian state can be written as a displaced and squeezed thermal state:
\[\rho^{G}=\hat{D}(\alpha)\hat{S}(\xi)\frac{N_{\rm th}^{\hat{a}^{\dagger}\hat{a}}} {(1+N_{\rm th})^{\hat{a}^{\dagger}\hat{a}+1}}\hat{S}^{\dagger}(\xi)\hat{D}^{ \dagger}(\alpha)\, \tag{30}\]
where \(\hat{D}(\alpha)\) and \(\hat{S}(\xi)\) are respectively the displacement and the squeezing operators:
\[\hat{D}(\alpha)\equiv\exp\left[\alpha\hat{a}^{\dagger}-\alpha^{*} \hat{a}\right]\,,\] \[\hat{S}(\xi)\equiv\exp\left[(\xi^{*}\hat{a}\hat{a}-\xi\hat{a}^{ \dagger}\hat{a}^{\dagger})/2\right]\,. \tag{31}\]
The covariance matrix of the quadratures \(\hat{X}\) and \(\hat{P}\) for a generic state is defined as:
\[\mathbf{\sigma}\equiv\begin{pmatrix}\langle\hat{X}^{2}\rangle-\langle\hat{X} \rangle^{2}&\frac{1}{2}\langle\hat{X}\hat{P}+\hat{P}\hat{X}\rangle-\langle\hat{X} \rangle\langle\hat{P}\rangle\\ \frac{1}{2}\langle\hat{X}\hat{P}+\hat{P}\hat{X}\rangle-\langle\hat{X}\rangle\langle\hat {P}\rangle&\langle\hat{P}^{2}\rangle-\langle\hat{P}\rangle^{2}\end{pmatrix}. \tag{32}\]
For Gaussian states, every property can be expressed in terms of expectation values of the quadratures and their covariance matrix. In terms of the parameters \(\alpha\), \(\xi=re^{i\theta}\) and \(N_{\rm th}\), we have:
\[\langle\hat{X}\rangle=\sqrt{2}\,{\rm Re}[\alpha]\,\ \ \langle\hat{P} \rangle=\sqrt{2}\,{\rm Im}[\alpha]\, \tag{33}\] \[\sigma_{11}=\left(\frac{1}{2}+N_{\rm th}\right)\Big{(}\cosh(2r)+ \sinh(2r)\cos(\theta)\Big{)}\,\] (34) \[\sigma_{22}=\left(\frac{1}{2}+N_{\rm th}\right)\Big{(}\cosh(2r)- \sinh(2r)\cos(\theta)\Big{)}\,\] (35) \[\sigma_{12}=\sigma_{21}=\left(\frac{1}{2}+N_{\rm th}\right)\sinh (2r)\sin(\theta). \tag{36}\]
The von Neumann entropy of the photon in a Gaussian state reads:
\[S(\rho^{G})=(N_{\rm th}+1)\ln(N_{\rm th}+1)-N_{\rm th}\ln N_{\rm th}. \tag{37}\]
We remark here that the origin of a finite entropy, i.e., \(N_{\rm th}>0\), is not generically guaranteed to be the entanglement with some other quantum system, unlike for the closed cavity system in the main text, since it can also have a classical contribution. For example, a harmonic oscillator with frequency \(\omega_{\rm c}\) at inverse temperature \(\beta\) is in a Gaussian state with \((\alpha,\xi,N_{\rm th})=(0,0,(e^{\beta\omega_{\rm c}}-1)^{-1})\).
Another definition for the Gaussian states is that their Wigner function:
\[W(x,p)=\frac{1}{\pi}\int dye^{2ipy}\left\langle x+y\right|\hat{\rho}\left|x-y \right\rangle\, \tag{38}\]
is a Gaussian:
\[W(x,p)=\frac{1}{2\pi\sqrt{\det\mathbf{\sigma}}}\exp\biggl{(}-\frac{1}{2}(x-x_{0},p-p_{0})\mathbf{ \sigma}^{-1}(x-x_{0},p-p_{0})^{T}\biggr{)}\, \tag{39}\]
where \(\mathbf{\sigma}\) is the covariance matrix, \(x_{0}=\langle\hat{X}\rangle\) and \(p_{0}=\langle\hat{P}\rangle\). We also recall that for a symmetrically ordered operator \(\hat{O}(\hat{a},\hat{a}^{\dagger})\), such as the Peierls phase, the Wigner function can be used to compute expectation values as averages over the phase space:
\[\langle\hat{O}(\hat{a},\hat{a}^{\dagger})\rangle=\int dxdpW(x,p)O\Bigl{(}\frac {x+ip}{\sqrt{2}},\frac{x-ip}{\sqrt{2}}\Bigr{)}. \tag{40}\]
In the main text, therefore, we need to perform just Gaussian integrals to arrive at Eq. (16).
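As a small illustration of Eq. (39), the sketch below evaluates the Gaussian Wigner function on a phase-space grid from the first moments and the covariance matrix; the moment values used here are placeholders, not data from the figures.

```python
import numpy as np

def gaussian_wigner(x, p, x0, p0, sigma):
    """Gaussian Wigner function of Eq. (39) evaluated on a grid of points (x, p)."""
    inv = np.linalg.inv(sigma)
    dx, dp = x - x0, p - p0
    quad = inv[0, 0] * dx**2 + 2.0 * inv[0, 1] * dx * dp + inv[1, 1] * dp**2
    return np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(np.linalg.det(sigma)))

# placeholder moments: a displaced, slightly squeezed and thermal state
x0, p0 = 1.0, 0.0
sigma = np.array([[0.4, 0.0], [0.0, 0.8]])      # must satisfy det(sigma) >= 1/4
xs, ps = np.meshgrid(np.linspace(-4, 4, 201), np.linspace(-4, 4, 201))
W = gaussian_wigner(xs, ps, x0, p0, sigma)
print(W.sum() * (8 / 200) ** 2)                 # ~1: the Wigner function is normalized
```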
## Appendix B Photon mean-field in periodic boundary conditions
In this appendix, we expand on the case of periodic boundary conditions without specifying the geometry. Using the same pseudo-spin representation defined in the main text, in momentum space the light-matter Hamiltonian reads:
\[\hat{H}=\hat{H}_{\rm c}+\sum_{k,\alpha}H_{k}^{\alpha}(\hat{a},\hat{a}^{ \dagger})\hat{\sigma}_{k}^{\alpha}\, \tag{41}\]
where at each momentum sector \(k\) we have:
\[\mathbf{H}_{k}(\hat{a},\hat{a}^{\dagger})=-\Bigl{(}2t_{0}\cos(k)\cos \Bigl{(}\hat{\Phi}/2\Bigr{)},t_{1}+t_{2}\cos(k),t_{2}\sin(k),2t_{0}\sin(k)\sin \Bigl{(}\hat{\Phi}/2\Bigr{)}\Bigr{)}. \tag{42}\]
Note that this representation is possible because the cavity mode has zero momentum in the direction of the ladder. Focusing on the thermodynamic limit and on the matter state, we can work in the PMF approximation and restrict ourselves to coherent states for the cavity \(|\alpha_{0}\rangle\), with the identification of the mean-field parameter defined in the main text: \(\phi=4g\alpha_{0}/\sqrt{L}\) and \(R=1\). In this way, the mean-field electronic Hamiltonian with periodic boundary conditions is
\[\hat{H}_{\rm m}^{\rm PMF}=\sum_{k,\alpha}h_{k}^{\alpha}\hat{\sigma}_{k}^{ \alpha}. \tag{43}\]
The two bands are:
\[\epsilon_{k,a}=2t_{0}\cos(k)\cos(\phi)+(-1)^{a}\sqrt{t_{1}^{2}+t_{2}^{2}+2t_{1}t_{ 2}\cos(k)+4t_{0}^{2}\sin^{2}(k)\sin^{2}(\phi)}\, \tag{44}\]
with \(a=1,2\). For the square ladder case discussed extensively in the main text with \(t_{1}=t_{0}=1\), the chemical potential is \(\mu=0\) at every \(\phi\), and an indirect gap in the dispersion opens at \(\phi=\frac{2}{3}\pi\). The minimization of the total energy as a function of \(\phi\) then gives the PMF ground state.
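A minimal numerical sketch of this minimization is given below. It uses Eq. (44) exactly as printed, \(\mu=0\), \(t_{0}=t_{1}=\omega_{\rm c}=1\), \(t_{2}=0\), and \(E_{\rm ph}=\omega_{\rm c}(\phi/4g)^{2}\); the grid sizes are arbitrary, and the precise value of the transition coupling extracted this way is sensitive to the flux conventions and should be checked against Fig. 4.

```python
import numpy as np

def total_energy(phi, g, t0=1.0, t1=1.0, t2=0.0, omega_c=1.0, nk=2001):
    """PMF energy per rung: filled-band energy (mu = 0) plus cavity energy."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    root = np.sqrt(t1**2 + t2**2 + 2 * t1 * t2 * np.cos(k)
                   + 4 * t0**2 * np.sin(k)**2 * np.sin(phi)**2)
    base = 2 * t0 * np.cos(k) * np.cos(phi)
    bands = np.concatenate([base - root, base + root])
    e_matter = bands[bands < 0.0].sum() / nk            # fill all states below mu = 0
    return e_matter + omega_c * (phi / (4.0 * g))**2    # add E_ph = omega_c (phi/4g)^2

def optimal_flux(g, phis=np.linspace(0.0, np.pi, 401)):
    energies = [total_energy(phi, g) for phi in phis]
    return phis[int(np.argmin(energies))]

for g in (1.2, 1.6, 2.0):
    print(g, optimal_flux(g))   # the minimizing flux jumps at the first-order transition
```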
## Appendix C Perturbation theory
The Hamiltonian at \(g=0\) has a factorized ground state that reads:
\[\ket{\Psi_{0}}=\prod_{\frac{\pi}{3}<|k|<\frac{2\pi}{3}}\hat{\bar{c}}_{1,k}^{ \dagger}\prod_{|k|<\frac{\pi}{3}}\hat{\bar{c}}_{1,k}^{\dagger}\hat{\bar{c}}_{2,k}^{\dagger}\ket{0_{\rm m},0_{\rm c}}\, \tag{45}\]
where \(\ket{0_{\rm m},0_{\rm c}}\) is the state with zero electrons and photons, and \(\hat{\bar{c}}_{a,k}^{\dagger}\) are the creation operators that diagonalize the bare electronic Hamiltonian, with \(a=1,2\) the band index. Different points in momentum space can host \(N_{k}=0,1,2\) electrons, and this number is conserved. Starting from \(\ket{\Psi_{0}}\) and the expansion of the light-matter interaction in Eq. (23) we can compute perturbative corrections at small \(g\). In particular we have \(H\simeq H_{0}+V_{g}\) with:
\[\hat{V}_{g}=\frac{g}{\sqrt{L}}\sum_{k,\alpha}\hat{\sigma}_{k}^{\alpha}d_{p,k} ^{\alpha}(\phi=0). \tag{46}\]
The only non-zero matrix elements at first order are those with a single direct particle-hole excitation for the matter and one photon in the cavity:
\[f_{k}=\bra{PH_{k},1_{\rm c}}(\delta\hat{a}+\delta\hat{a}^{\dagger})\sum_{ \alpha}\hat{\sigma}_{k}^{\alpha}d_{p,k}^{\alpha}(\phi=0)\ket{\Psi_{0}}=2\sin( k)\theta\left(|k|-\frac{\pi}{3}\right)\theta\left(\frac{2\pi}{3}-|k|\right)\, \tag{47}\]
with \(\theta(x)\) the Heaviside function. Summing over all momenta we arrive at the expression in the text for the ground-state correction:
\[\ket{\Psi_{(1)}}=\ket{\Psi_{0}}-\frac{g}{\sqrt{L}}\sum_{k}\frac{2f_{k}}{ \omega_{k}+\omega_{\rm c}}\ket{\rm PH_{k},1_{\rm c}}. \tag{48}\]
The second-order expansion also involves the \(g^{2}\) contribution and populates the two-photon sector of the cavity, which is needed for the squeezing of the mode.
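As a simple consistency check on Eq. (48), the sketch below evaluates the leading-order mean photon number \(\sum_{k}|c_{k}|^{2}\) with \(c_{k}=-\frac{g}{\sqrt{L}}\frac{2f_{k}}{\omega_{k}+\omega_{\rm c}}\), using the square-ladder values \(\omega_{k}=2\) (at \(\phi=0\), \(t_{1}=1\)), \(\omega_{\rm c}=1\), and \(f_{k}\) from Eq. (47); these parameter choices are assumptions on our part.

```python
import numpy as np

def photon_number_first_order(g, L, omega_c=1.0, omega_k=2.0):
    """Leading-order photon number of the perturbed ground state of Eq. (48)."""
    k = 2.0 * np.pi * np.arange(-L // 2, L - L // 2) / L        # Brillouin-zone grid
    f_k = 2.0 * np.sin(k) * ((np.abs(k) > np.pi / 3) & (np.abs(k) < 2 * np.pi / 3))
    c_k = -(g / np.sqrt(L)) * 2.0 * f_k / (omega_k + omega_c)   # amplitudes of Eq. (48)
    return np.sum(np.abs(c_k) ** 2)

for L in (60, 300, 1500):
    print(L, photon_number_first_order(g=0.5, L=L))  # stays finite in the collective regime
```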
## Appendix D Details about DMRG simulations
For all the DMRG simulations performed here, the energy density difference between the final two DMRG sweeps has been kept below \(10^{-8}\) to ensure convergence. In order to maintain computational feasibility, the dimension of the photon Hilbert space has been truncated to a
maximum photon number of \(N_{\rm max}^{\rm ph}=63\). The photon Hilbert space must be large enough to describe coherent states found in the superradiant regime but also the strong squeezing. We have verified that this truncation level is sufficient to obtain converged results for all values of \(g\) and system sizes up to \(L=76\).
In most of the presented figures, the bond dimension used for the MPS ansatz is \(\chi=600\), sufficient to achieve converged results for system sizes up to \(L=76\) with a tolerance of \(10^{-8}\) on the energy density and a maximum truncation error of \(10^{-6}\). These are worst-case values, which are found in the normal phase where the fermions are gapless and entangled with the cavity. However, to better capture the thermodynamic limit, we have also analyzed larger system sizes up to \(L=136\). For these system sizes, we increased the bond dimension to \(\chi=1000\) to converge most observables, except for the non-Gaussianity of the photon state. This observable has been found to be particularly sensitive to a combination of numerical parameters including the number of DMRG sweeps and the bond dimension. This problem of convergence is particularly pronounced for values of \(g\) in the normal phase, where the entanglement in the system is higher. To account for this difficulty in the analysis, we have considered an empirical error of \(10^{-5}\) in the data for Fig. 5.
We then also comment on the non-linear nature of the Peierls phase. This is represented in our code by using the exact matrix elements, in the photon number basis \(\{|n\rangle\}\), of the displacement operator \(\hat{D}(ig/\sqrt{L})\), which read [75]:
\[\langle n|\,\hat{D}(\alpha)\,|m\rangle=\sqrt{\frac{n!}{m!}}\alpha^{m-n}\exp \biggl{(}-\frac{|\alpha|^{2}}{2}\biggr{)}L_{n}^{(m-n)}(|\alpha|^{2})\qquad \text{for}\qquad m\geq n\, \tag{49}\]
with \(L_{n}^{(m-n)}(x)\) a generalized Laguerre polynomial; for \(m<n\) one can just take the complex conjugate since \(\hat{D}\) is unitary. When one works at finite size or considers \(g/\sqrt{L}\) fixed ("single-particle" strong coupling), the matrix elements of the displacement operator should be evaluated carefully in a truncated Hilbert space. For example, the exponentiation of the matrix \(ig\hat{X}/\sqrt{L}\) as \(\langle n|\exp\Bigl{(}ig\hat{X}/\sqrt{L}\Bigr{)}\,|m\rangle\) in a truncated Hilbert space does not exactly correspond to \(\langle n|\,\hat{D}(ig/\sqrt{L})\,|m\rangle\). To be more quantitative, in Fig. 9 we plot the difference between the matrix elements obtained by exponentiating \(ig\hat{X}/\sqrt{L}\) and the exact ones from Eq. (49) at a small photon Hilbert space cutoff \(N_{\rm max}^{\rm ph}=9\). This illustrates the necessity of a large photonic cut-off in the numerical simulations.
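The sketch below illustrates this point numerically: the displacement operator is obtained either by exponentiating the generator inside a small truncated space, or by exponentiating it in a much larger space and truncating afterwards, which converges to the exact elements of Eq. (49) (also available in closed form via scipy.special.eval_genlaguerre). The cutoffs and the coupling value used here are placeholders.

```python
import numpy as np
from scipy.linalg import expm

def annihilation(n_max):
    """Annihilation operator truncated to n_max photons."""
    return np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)

def displacement_truncated(alpha, n_max):
    """exp(alpha a^+ - alpha* a) evaluated inside the truncated space."""
    a = annihilation(n_max)
    return expm(alpha * a.conj().T - np.conj(alpha) * a)

g_over_sqrtL, n_small, n_big = 0.5, 9, 200
alpha = 1j * g_over_sqrtL
D_naive = displacement_truncated(alpha, n_small)
D_conv = displacement_truncated(alpha, n_big)[: n_small + 1, : n_small + 1]
print(np.max(np.abs(D_naive - D_conv)))   # error of the naive small-cutoff truncation
```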
|
2310.06823 | NECO: NEural Collapse Based Out-of-distribution detection | Detecting out-of-distribution (OOD) data is a critical challenge in machine
learning due to model overconfidence, often without awareness of their
epistemological limits. We hypothesize that ``neural collapse'', a phenomenon
affecting in-distribution data for models trained beyond loss convergence, also
influences OOD data. To benefit from this interplay, we introduce NECO, a novel
post-hoc method for OOD detection, which leverages the geometric properties of
``neural collapse'' and of principal component spaces to identify OOD data. Our
extensive experiments demonstrate that NECO achieves state-of-the-art results
on both small and large-scale OOD detection tasks while exhibiting strong
generalization capabilities across different network architectures.
Furthermore, we provide a theoretical explanation for the effectiveness of our
method in OOD detection. Code is available at https://gitlab.com/drti/neco | Mouïn Ben Ammar, Nacim Belkhir, Sebastian Popescu, Antoine Manzanera, Gianni Franchi | 2023-10-10T17:53:36Z | http://arxiv.org/abs/2310.06823v3 | # Neco: Neural Collapse Based Out-of-Distribution Detection
###### Abstract
Detecting out-of-distribution (OOD) data is a critical challenge in machine learning due to model overconfidence, often without awareness of their epistemological limits. We hypothesize that "neural collapse", a phenomenon affecting in-distribution data for models trained beyond loss convergence, also influences OOD data. To benefit from this interplay, we introduce NECO, a novel post-hoc method for OOD detection, which leverages the geometric properties of "neural collapse" and of principal component spaces to identify OOD data. Our extensive experiments demonstrate that NECO achieves state-of-the-art results on both small and large-scale OOD detection tasks while exhibiting strong generalization capabilities across different network architectures. Furthermore, we provide a theoretical explanation for the effectiveness of our method in OOD detection. We plan to release the code after the anonymity period.
## 1 Introduction
In recent years, deep learning models have achieved remarkable success across various domains (OpenAI, 2023; Ramesh et al., 2021; Jumper et al., 2021). However, a critical vulnerability often plagues these models: they tend to exhibit unwarranted confidence in their predictions, even when confronted with inputs that deviate from the training data distribution. This issue gives rise to the challenge of Out-of-Distribution (OOD) detection for Deep Neural Networks (DNNs). OOD detection holds significant implications for safety. For instance, in medical imaging a DNN may fail to make an accurate diagnosis when presented with data that falls outside its training distribution (_e.g._, using a different scanner). A reliable DNN classifier should not only correctly classify known In-Distribution (ID) samples but also flag any OOD input as "unknown". OOD detection plays a crucial role in ensuring the safety of applications such as medical analysis (Schlegl et al., 2017), industrial inspection (Paul Bergmann and Stege, 2019), and autonomous driving (Kitt et al., 2010).
There are various approaches to distinguish ID and OOD data that fall in three main categories: confidence-based (Liu et al., 2020; Hendrycks and Gimpel, 2017; Hendrycks et al., 2022; Huang et al., 2021; Liang et al., 2018), features/logits-based (Sun and Li, 2022; Sun et al., 2021; Wang et al., 2022; Djurisic et al., 2022) and distance/density based (Ming et al., 2023; Lee et al., 2018; Sun et al., 2022). OOD detection can be approached in a supervised manner, primarily employing outlier exposure methods (Hendrycks et al., 2018), by training a model on OOD datasets. However, in this work, we focus on post-hoc (unsupervised) OOD detection methods. These methods do not alter the network training procedure, hence avoid harming performance and increasing the training cost. Consequently, they can be seamlessly integrated into production models. Typically, such methods leverage a trained network to transform its latent representations into scalar values that represent the confidence score in the given inputs prediction. The underlying presumption is that ID samples should yield high-confidence scores, while the confidence should notably drop for OOD samples. Post-hoc approaches make use of a model-learned representation, such as the model's logits or deep features which are typically employed for prediction, to compute the OOD score.
A series of recent studies (Ishida et al., 2020; Papyan V, 2020) have shed light on the prevalent practice of training DNNs well beyond the point of achieving zero error, aiming for zero loss. In the
Terminal Phase of Training (TPT), occurring after zero training set error is reached, a "Neural Collapse" (NC) phenomenon emerges, particularly in the penultimate layer and in the linear classifier of DNNs (Papyan V, 2020), and it is characterized by four main properties:
1. **Variability Collapse (NC1):** during the TPT, the within-class variation in activations becomes negligible as each activation collapses towards its respective class mean.
2. **Convergence to Simplex ETT (NC2):** the class-mean vectors converge to having equal lengths, as well as having equal-sized angles between any pair of class means. This configuration corresponds to a well-studied mathematical concept known as Simplex Equiangular Tight Frame (ETF).
3. **Convergence to Self-Duality (NC3):** in the limit of an ideal classifier, the class means and linear classifiers of a neural network converge to each other up to rescaling, implying that the decision regions become geometrically similar and the class means lie at the centers of their respective regions.
4. **Simplification to Nearest Class-Center (NC4):** The network classifier progressively tends to select the class with the nearest class mean for a given activation, typically based on standard Euclidean distance.
These NC properties provide valuable insights into how DNNs behave during the TPT. Recently it was demonstrated that collapsed models exhibit improved OOD detection performance (Haas et al., 2023). Additionally, they found that applying L2 normalization can accelerate the model's collapse process. However, to the best of our knowledge, no one has yet evidenced the following interplay between NC of ID and OOD data:
1. **ID/OOD Orthogonality (NC5):** As the training procedure advances, OOD and ID data tend to become increasingly more orthogonal to each other. In other words, the clusters of OOD data become more perpendicular to the configuration adopted by ID data (_i.e._, the Simplex ETF).
Building upon the insights gained from the aforementioned properties of NC, as well as the novel observation of ID/OOD orthogonality (NC5), we introduce a new OOD detection metric called "NECO", which stands for NEural Collapse-based Out-of-distribution detection. NECO computes the relative norm of a sample, i.e., the norm of its projection onto the subspace occupied by the Simplex ETF structure, which preserves information from ID data exclusively, normalised by the norm of the full feature vector.
We summarize our contributions as follows:
* We introduce and empirically validate a novel property of NC in the presence of OOD data.
* We proposed a novel OOD detection method **NECO**, a straightforward yet highly efficient post-hoc method that leverages the concept of NC. We demonstrate that **NECO** exhibits strong generalization across various network architectures. Furthermore, we offer a comprehensive theoretical analysis that sheds light on the underlying mechanisms of **NECO**.
* We establish through extensive evaluations on a range of OOD detection tasks that **NECO** achieves state-of-the-art performance compared to other OOD methods.
## 2 Related work
Neural Collapse is a set of intriguing properties that are exhibited by DNNs when they enter the TPT. NC, in its essence, represents the state at which the within-class variability of the penultimate-layer outputs collapses to a very small value. Simultaneously, the class means collapse to the vertices of a Simplex Equiangular Tight Frame (ETF). This empirically emergent structure simplifies the behaviour of the DNN classifier. Intuitively, these properties depict the tendency of the network to maximally separate class features while minimizing the separation within them. (Papyan V, 2020) have shown that the collapse property of the model induces generalization power and adversarial robustness, which persist across a range of canonical classification problems, on different neural network architectures (_e.g._, VGG (Simonyan & Zisserman, 2015), ResNet (He et al., 2016), and
DenseNet (Huang et al., 2017)) and on a variety of standard datasets (_e.g._, MNIST (Deng, 2012), CIFAR-10 and CIFAR-100 (Krizhevsky, 2012), and ImageNet (Russakovsky et al., 2015)). NC behavior has been empirically observed when using either the cross entropy loss (Papyan V, 2020) or the mean squared error (MSE) loss (Han et al., 2022). Many recent works attempt to theoretically analyse the NC behavior (Yang et al., 2022; Kothapalli, 2023; Ergen and Pilanci, 2021; Zhu et al., 2021; Tirer and Bruna, 2022), usually using a mathematical framework based on variants of the unconstrained features model, proposed by Mixon et al. (2020).
Haas et al. (2023) state that collapsed models exhibit higher performance in the OOD detection task. However, to our knowledge, no one has attempted to directly leverage the emergent properties of NC for the task of OOD detection.
OOD Detection has attracted a growing research interest in recent years. It can be divided into two pathways: supervised and unsupervised OOD detection. Due to the post-hoc nature of our method, we will focus on the latter. Post-hoc approaches can be divided into three main categories. Firstly, confidence-based methods: these utilise the network's final representation to derive a confidence measure as an OOD scoring metric (DeVries and Taylor, 2018; Huang and Li, 2021; Hendrycks and Gimpel, 2017; Liu et al., 2020; Hendrycks et al., 2022; Liang et al., 2018; Huang et al., 2021). Softmax score (Hendrycks and Gimpel, 2017) is the common baseline for this type of method, using the model's softmax prediction as the OOD score. Energy (Liu et al., 2020) elaborates on that principle by computing the energy (_i.e._, the logsumexp of the logits), with demonstrated advantages over the softmax confidence score both empirically and theoretically. ODIN (Liang et al., 2018) enhances the softmax score by perturbing the inputs and rescaling the logits. Secondly, distance/density-based methods (Abati et al., 2019; Lee et al., 2018; Sabokrou et al., 2018; Zong et al., 2018; Lee et al., 2018; Ming et al., 2023; Ren et al., 2021; Sun et al., 2022; Techapanurak et al., 2019; Zaeemzadeh et al., 2021; van Amersfoort et al., 2020). These approaches identify OOD samples by leveraging the estimated density on the ID training samples. Mahalanobis (Lee et al., 2018) utilizes a mixture of class-conditional Gaussians on the feature distribution. (Sun et al., 2022) uses a non-parametric nearest-neighbor distance as the OOD score. Finally, feature/logit-based methods utilise a combination of the information within the model's logits and features to derive the OOD score. (Wang et al., 2022) utilizes this combination to create a virtual logit to measure the OOD nature of the sample. ASH (Djurisic et al., 2022) utilizes feature pruning/filling while relying on sample statistics before passing the feature vector to the DNN classifier. Our method lies within the latter sub-category, bearing different degrees of similarity with some of these methods. Perhaps the most similar previous works to our method are techniques like NuSA (Cook et al., 2020) and ViM, as they leverage the principal/null space to compute their OOD metric. More details are presented in Section 4.1.
## 3 Preliminaries
### background and hypotheses
In this section, we will establish the notation used throughout this paper. We introduce the following symbols and conventions:
* We represent the training and testing sets as \(D_{l}=(\mathbf{x}_{i},y_{i})_{i=1}^{n_{l}}\) and \(D_{\tau}=(\mathbf{x}_{i},y_{i})_{i=1}^{n_{\tau}}\), respectively. Here, \(\mathbf{x}_{i}\) represents an image, \(y_{i}\in\llbracket 0,C\rrbracket\) denotes its associated class identifier, and \(C\) stands for the total number of classes. It is assumed that the data in both sets are independently and identically distributed (i.i.d.) according to their respective unknown joint distributions, denoted as \(\mathcal{P}_{l}\) and \(\mathcal{P}_{\tau}\).
* In the context of anomaly detection, we make the assumption that \(\mathcal{P}_{l}\) and \(\mathcal{P}_{\tau}\) exhibit a high degree of similarity. However, we also introduce another test dataset denoted as \(D_{\text{OOD}}=(\mathbf{x}_{i}^{\text{OOD}},y_{i}^{\text{OOD}})_{i=1}^{\text{ noco}}\), where the data is considered to be i.i.d. according to its own unknown joint distribution, referred to as \(\mathcal{P}_{\text{OOD}}\), which is distinct from both \(\mathcal{P}_{l}\) and \(\mathcal{P}_{\tau}\).
* The DNN is characterized by a vector containing its trainable weights, denoted as \(\mathbf{\omega}\). We use the symbol \(f\) to represent the architecture of the DNN associated with these weights, and \(f_{\mathbf{\omega}}(\mathbf{x}_{i})\) denotes the output of the DNN when applied to the input image \(\mathbf{x}_{i}\).
* To simplify the discussion, we assume that the DNN can be divided into two parts: a feature extraction component denoted as \(h_{\mathbf{\omega}}(\cdot)\) and a final layer, which acts as a classifier and is denoted as \(g_{\mathbf{\omega}}(\cdot)\). Consequently, for any input image \(\mathbf{x}_{i}\), we can express the DNN's output as \(f_{\mathbf{\omega}}(\mathbf{x}_{i})=(g_{\mathbf{\omega}}\circ h_{\mathbf{\omega}})(\mathbf{x}_{i})\).
* In the context of image classification, we consider the output of \(h_{\mathbf{\omega}}(\cdot)\) to be a vector, which we denote as \(\mathbf{h}_{i}=h_{\mathbf{\omega}}(\mathbf{x}_{i})\in\mathbb{R}^{D}\) for image \(\mathbf{x}_{i}\) with \(D\) the dimension of the feature space.
* We define the matrix \(\mathbf{H}\in M_{n_{l},D}(\mathbb{R})\) as containing all the \(h_{\mathbf{\omega}}(\mathbf{x}_{i})\) values where \(\mathbf{x}_{i}\) belongs to the training set \(D_{l}\). Specifically, \(\mathbf{H}=[h_{\mathbf{\omega}}(\mathbf{x}_{1})\quad\ldots\quad h_{\mathbf{\omega}}(\mathbf{x} _{n_{l}})]\) represents the feature space within the ID data.
* We introduce \(D_{l}^{c}\) as a dataset consisting of data points belonging to class \(c\), and \(\mathbf{H}^{c}\) represents the feature space for class \(c\in\llbracket 0,C\rrbracket\).
* For a given combination of dataset and DNN, we define the empirical global mean \(\mu_{G}=1/\text{card}(D_{l})\sum_{\mathbf{x}_{i}\in D_{l}}h_{\mathbf{\omega}}(\mathbf{x}_ {i})\in\mathbb{R}^{D}\) and the empirical class means \(\mu_{c}=1/\text{card}(D_{l}^{c})\sum_{\mathbf{x}_{i}\in D_{l}^{c}}h_{\mathbf{\omega}}( \mathbf{x}_{i})\in\mathbb{R}^{D}\), where \(\text{card}(\cdot)\) represents the number of elements in a dataset.
* In the context of a specific dataset and DNN configuration, we define the empirical covariance matrix of \(\mathbf{H}\) to refer to \(\Sigma_{T}\in M_{D\times D}(\mathbb{R})\). This matrix encapsulates the total covariance and can be further decomposed into two components: the between-class covariance, denoted as \(\Sigma_{B}\), and the within-class covariance, denoted as \(\Sigma_{W}\). This decomposition is expressed as \(\Sigma_{T}=\Sigma_{B}+\Sigma_{W}\).
* Similarly for a given combination of an OOD dataset and a DNN, we define the OOD empirical global mean \(\mu_{G}^{\text{OOD}}\) and the OOD empirical class means \(\mu_{c}^{\text{OOD}}\), and the OOD feature matrix \(\mathbf{H}^{\text{OOD}}\in M_{n_{\text{OOD}},D}(\mathbb{R})\).
In the context of unsupervised OOD detection with a post-hoc method, we train the function \(f_{\mathbf{\omega}}(\cdot)\) using the dataset \(D_{l}\). Following the training process, we evaluate the performance of \(f_{\mathbf{\omega}}\) on a combined dataset consisting of both \(D_{\text{OOD}}\) and \(D_{\tau}\). Our objective is to obtain a confidence score that enables us to determine whether a new test data point originates from \(D_{\text{OOD}}\) or \(D_{\tau}\).
### neural collapse
Throughout the training process of \(f_{\mathbf{\omega}}\), it has been demonstrated that the latent space, represented by the output of \(h_{\mathbf{\omega}}\), exhibits four distinct properties related to NC. In this section, we delve deeper into the first two, with the remaining two being detailed in Appendix D. **The first Neural Collapse (NC1)** property is related to the Variability Collapse of the DNN. As training progresses, the within-class variation of the activations diminishes to the point where these activations converge towards their respective class means, effectively making \(\Sigma_{W}\) approach zero. To evaluate this property during training, (Papyan V, 2020) introduced the following operator:
\[\mathrm{NC1}=\mathrm{Tr}\left[\frac{\Sigma_{W}\Sigma_{B}^{\dagger}}{C}\right] \tag{1}\]
Here, \([.]^{\dagger}\) signifies the Moore-Penrose pseudoinverse. While the authors of Papyan V (2020) state that the convergence of \(\Sigma_{W}\) towards zero is the key criterion for satisfying NC1, they also point out that \(\mathrm{Tr}\left[\Sigma_{W}\Sigma_{B}^{\dagger}\right]\) is commonly employed in multivariate statistics for predicting misclassification. This metric measures the inverse signal-to-noise ratio in classification problems. This formula is adopted since it scales the intra-class covariance matrix \(\Sigma_{W}\) (representing noise) by the pseudoinverse of the inter-class covariance matrix \(\Sigma_{B}\) (representing signal). This scaling ensures that NC1 is expressed in a consistent reference frame across all training epochs. When NC1 approaches zero, it indicates that the activations are collapsing towards their corresponding class means.
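For concreteness, a minimal sketch of how Eq. (1) can be evaluated from a matrix of penultimate-layer features and their labels is given below; the class-balanced averaging for \(\Sigma_{B}\) and the per-sample averaging for \(\Sigma_{W}\) are common choices that we assume here, and variable names are illustrative.

```python
import numpy as np

def nc1_metric(feats, labels):
    """NC1 = Tr[Sigma_W pinv(Sigma_B)] / C, Eq. (1), from features (N x D)."""
    classes = np.unique(labels)
    mu_g = feats.mean(axis=0)
    d = feats.shape[1]
    sigma_w, sigma_b = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        fc = feats[labels == c]
        mu_c = fc.mean(axis=0)
        dw = fc - mu_c
        sigma_w += dw.T @ dw / len(feats)        # within-class covariance (per sample)
        db = (mu_c - mu_g)[:, None]
        sigma_b += db @ db.T / len(classes)      # between-class covariance (per class)
    return np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / len(classes)
```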
**The second Neural Collapse (NC2)** property is associated with the phenomenon where the empirical class means tend to have equal norms and to spread in such a way as to equalize angles between any pair of class means as training progresses. Moreover, as training progresses, these class means tend to maximize their pairwise distances, resulting in a configuration akin to a Simplex ETF. This property manifests during training through the following conditions:
\[\left|\,\|\mu_{c}-\mu_{G}\|_{2}-\|\mu_{c^{\prime}}-\mu_{G}\|_{2}\,\right| \to 0 \tag{2}\]
\[\frac{\langle\mu_{c}-\mu_{G},\mu_{c^{\prime}}-\mu_{G}\rangle}{\| \mu_{c}-\mu_{G}\|_{2}\|\mu_{c^{\prime}}-\mu_{G}\|_{2}} \to\frac{C}{C-1}\delta_{cc^{\prime}}-\frac{1}{C-1} \tag{3}\]
Here, \(\|\cdot\|_{2}\) represents the L2 norm of a vector, \(|\cdot|\) denotes the absolute value, \(\langle\cdot,\cdot\rangle\) is the inner product, and \(\delta_{\cdot}\) is the Kronecker delta symbol.
The convergence to the Simplex ETF is assessed through two metrics, each verifying one of the following properties: the "equinormality" of the class/classifier means, and their "maximum equiangularity". Equinormality of the class means is measured through their norm variation, as follows:
\[\mathrm{EN}_{\text{class-means}}=\frac{\mathrm{std}_{\mathrm{c}}\left\{|| \mu_{c}-\mu_{G}||_{2}\right\}}{\mathrm{avg}_{\mathrm{c}}\left\{||\mu_{c}-\mu _{G}||_{2}\right\}}\, \tag{4}\]
where \(\mathrm{std}\) and \(\mathrm{avg}\) represent the standard deviation and average operators, respectively. The second property, maximum equiangularity, is verified through:
\[\mathrm{Equiangularity}_{\text{class-means}}=\text{Avg}_{\mathrm{c},c^{ \prime}}\left|\frac{\langle\mu_{c}-\mu_{G},\mu_{c^{\prime}}-\mu_{G}\rangle+ \frac{1}{C-1}}{\|\mu_{c}-\mu_{G}\|_{2}\|\mu_{c^{\prime}}-\mu_{G}\|_{2}}\right| \tag{5}\]
As training progresses, if this average over all pairs of class means approaches zero, maximum equiangularity is being achieved.
## 4 Out of Distribution Neural Collapse
Neural collapse in the presence of OOD data. NC has traditionally been studied in the context of ID scenarios. However, recent empirical findings (Haas et al., 2023) have shown that NC can also have a positive impact on OOD detection, especially for Lipschitz DNNs (Virmaux and Scaman, 2018). We observe that NC influences the behavior of OOD data as well, leading us to introduce a new property:
**(NC5) ID/OOD orthogonality:** This property suggests that, as training progresses, each of the vectors representing the empirical ID class means, tend to become orthogonal to the vector representing the empirical OOD data global mean. In mathematical terms, we express this property as follows:
\[\forall c,\ \frac{\langle\mu_{c},\mu_{G}^{\text{OOD}}\rangle}{\|\mu_{c}\|_{ 2}\|\mu_{G}^{\text{OOD}}\|_{2}}\to 0 \tag{6}\]
To support this observation, we examined the following metric:
\[\mathrm{OrthoDev}_{classes-OOD}=\text{Avg}_{c}\left|\frac{\langle\mu_{c}, \mu_{G}^{\text{OOD}}\rangle}{\|\mu_{c}\|_{2}\|\mu_{G}^{\text{OOD}}\|_{2}}\right| \tag{7}\]
This metric assesses the deviation from orthogonality between the ID class means and OOD mean. As training progresses, this deviation decreases towards zero if NC5 is satisfied.
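A minimal sketch of how Eq. (7) can be computed from ID features with labels and a batch of OOD features is given below; variable names are illustrative.

```python
import numpy as np

def ortho_dev(id_feats, id_labels, ood_feats):
    """OrthoDev of Eq. (7): mean |cosine| between ID class means and the OOD mean."""
    mu_ood = ood_feats.mean(axis=0)
    cosines = []
    for c in np.unique(id_labels):
        mu_c = id_feats[id_labels == c].mean(axis=0)
        cosines.append(abs(mu_c @ mu_ood) /
                       (np.linalg.norm(mu_c) * np.linalg.norm(mu_ood)))
    return float(np.mean(cosines))
```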
To validate this hypothesis, we conducted experiments using CIFAR-10 as the ID dataset and CIFAR-100 and SVHN (Netzer et al., 2011) as OOD datasets. We employed two different architectures, ResNet-18 (He et al., 2015) and ViT (Dosovitskiy et al., 2020), trained for 350 epochs and 6000 steps (with a batch size of 128), respectively. During training, we saved the network parameters at regular intervals (epochs/steps) and evaluated the metric of Eq. 7 for each saved network.
The results of these experiments on NC5 (Eq. 7) are shown in Figure 1, which illustrates the convergence of the OrthoDev. Both of these models were trained using ID data and were later subjected to evaluation in the presence of out-of-distribution (OOD) data. Remarkably, these models exhibited a convergence pattern characterized by a tendency to maximize their orthogonality with OOD data. This observation is substantiated by the consistently low values of the orthogonality deviation of Eq. 7. It implies that OOD data progressively becomes more orthogonal to ID data during the TPT. In Appendix D, we present additional experiments conducted on different combinations of ID and OOD datasets. We find this phenomenon intriguing and believe it can serve as the basis for a new OOD detection criterion, which we will introduce in the next subsection.
### Neural Collapse based Out of distribution detection (NECO) method
Based on this observation of orthogonality between ID and OOD samples, previous works (Wang et al., 2022; Cook et al., 2020) have utilized the null space for performing OOD detection. We now introduce notation and detail these methods. Given an image \(\mathbf{x}\) represented by feature vector \(h_{\mathbf{\omega}}(\mathbf{x})\), we impose that \(f_{\mathbf{\omega}}(\mathbf{x})=W\times h_{\mathbf{\omega}}(\mathbf{x})\), where \(W\) is the matrix of the last fully connected layer. Cook et al. (2020); Wang et al. (2022) have highlighted that features \(h_{\mathbf{\omega}}(\mathbf{x})\) can be decomposed into two components: \(h_{\mathbf{\omega}}(\mathbf{x})=h_{\mathbf{\omega}}(\mathbf{x})^{W}+h_{\mathbf{\omega}}(\mathbf{x})^{W^{\perp}}\). In this decomposition, \(f_{\mathbf{\omega}}(\mathbf{x})=W\times h_{\mathbf{\omega}}(\mathbf{x})^{W}\), and importantly, \(W\times h_{\mathbf{\omega}}(\mathbf{x})^{W^{\perp}}=0\). Hence, the component \(h_{\mathbf{\omega}}(\mathbf{x})^{W^{\perp}}\) does not directly impact classification but plays a role in influencing OOD detection. In light of this observation, NuSA introduced the following score: \(\text{NuSA}(\mathbf{x})=\frac{\sqrt{\|h_{\mathbf{\omega}}(\mathbf{x})\|^{2}-\|h_{\mathbf{\omega}}(\mathbf{x})^{W^{\perp}}\|^{2}}}{\|h_{\mathbf{\omega}}(\mathbf{x})\|}\), and ViM introduced \(\text{ViM}(\mathbf{x})=\|h_{\mathbf{\omega}}(\mathbf{x})^{W^{\perp}}\|\). To identify the null space, NuSA optimizes the decomposition after training. On the other hand, ViM conducts PCA on the latent space \(H\) (also after training), and decomposes it into a principal space, defined by the \(d\)-dimensional projection with the matrix \(P\in M_{d,D}(\mathbb{R})\) spanned by the \(d\) eigenvectors corresponding to the largest \(d\) eigenvalues of the covariance matrix of \(H\), and a null space, obtained by projecting on the remaining eigenvectors. In contrast, based on the implications of NC5, we propose a novel criterion that circumvents having to find the null space, specifically:
\[\text{NECO}(\mathbf{x})=\frac{\|Ph_{\mathbf{\omega}}(\mathbf{x})\|}{\|h_{\mathbf{\omega}}(\mathbf{x})\|}=\frac{\sqrt{h_{\mathbf{\omega}}(\mathbf{x})^{\top}P^{\top}Ph_{\mathbf{\omega}}(\mathbf{x})}}{\sqrt{h_{\mathbf{\omega}}(\mathbf{x})^{\top}h_{\mathbf{\omega}}(\mathbf{x})}} \tag{8}\]
Our hypothesis is that if the DNN is subject to the properties of NC1, NC2 and NC5, then it should be possible to separate any subset of ID classes from the OOD data. By projecting the features onto the first \(d\) principal components extracted from the ID data, we should obtain a latent space representation that is close to the null vector for OOD data and not null for ID data. Consequently, by taking the norm of this projection and normalizing it with the norm of the data, we can derive an effective OOD detection criterion. However, in the case of vision transformers, the penultimate layer representation cannot straightforwardly be interpreted as features. Consequently, the resulting norm needs to be calibrated in order to serve as a proper OOD scoring function. To calibrate our NECO score, we multiply it by the maximum logit (MaxLogit). This has the effect of injecting class-based information into the score in addition to the desired scaling. It is worth noting that this scaling is also useful when the penultimate layer size is smaller than the number of classes, since in this case, it is not feasible to obtain maximum OrthoDev between all classes. We refer the reader to Appendix E for empirical observations on the distribution of NECO under the presence of OOD data.
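As an illustration only, the following sketch shows one possible post-hoc computation of the NECO score of Eq. 8 with scikit-learn's PCA. The function names are ours, and details such as whether the features are mean-centered before projection, or how \(d\) is chosen, are implementation choices that we do not fix here.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_neco(id_train_features, d):
    """Fit the PCA whose top-d components span the principal space (ID data)."""
    pca = PCA(n_components=d)
    pca.fit(id_train_features)          # id_train_features: (N, D)
    return pca

def neco_score(features, logits, pca):
    """NECO score (Eq. 8), scaled by MaxLogit as described in the text.

    features : (N, D) penultimate-layer representations h(x)
    logits   : (N, C) classification logits f(x)
    pca      : fitted PCA object; centering is omitted here for simplicity
    """
    proj = features @ pca.components_.T                  # P h(x), shape (N, d)
    neco = np.linalg.norm(proj, axis=1) / np.linalg.norm(features, axis=1)
    return neco * logits.max(axis=1)                      # MaxLogit calibration
```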
### Theoretical justification
To gain a better understanding of the impact of NECO, we visualize the Principal Component Analysis (PCA) of the pre-logit space in Figure 2, with ID data (colored points) and OOD data (black). This analysis is conducted on the ViT model trained on CIFAR-10, focusing on the first two principal components. Notably, the ID data points form multiple clusters, one per class, which can be
Figure 1: Convergence to ID/OOD orthogonality for ViT-B (left), Resnet-18 (right) both trained on CIFAR-10 as ID and tested in the presence of OOD data. Dashed purple lines indicate the end of warm-up steps in the case of ViT and learning rate decay epochs for ResNet-18.
attributed to the influence of NC1, and the OOD data points are positioned close to the null vector. To elucidate our criterion, we introduce an important property:
**Theorem 4.1** (NC1+NC2+NC5 imply NECO).: _We consider two datasets living in \(\mathbb{R}^{D}\), \(\{D_{OOD},D_{\tau}\}\) and a DNN \(f_{\mathbf{\omega}}(\cdot)=(g_{\mathbf{\omega}}\circ h_{\mathbf{\omega}})(\cdot)\) that satisfy NC1, NC2 and NC5. There \(\exists\,d\ll D\) for PCA on \(D_{\tau}\) s.t. \(\text{NECO}(\mu_{G}^{\text{OOD}})=0\). Conversely, for \(\mathbf{x}\in D_{\tau}\) and considering \(\mathbf{x}\neq\vec{0}\) we have that \(\text{NECO}(\mathbf{x})\neq 0\)._
The proof for this theorem can be found in Appendix A.
**Remark 4.1**.: _We have established that \(\text{NECO}(\mu_{G}^{\text{OOD}})=0\), which does not imply that \(\text{NECO}(\mathbf{x})=0\) for all \(\mathbf{x}\in D_{OOD}\). However, in addition to NC5, we can put forth the hypothesis that NC2 is also occurring on the mix of ID/OOD data, while NC1 doesn't occur on the OOD data by themselves, resulting in an OOD data cluster that is equiangular and orthogonal to the Simplex ETF structure. We refer the reader to Appendix D for justification. Furthermore, the observations made in Figure 2 appear to support our hypothesis._
## 5 Experiments & Results
In this section, we compare our method with state-of-the-art algorithms, on small-scale and large-scale OOD detection benchmarks. Following prior work, we consider ImageNet-1K, CIFAR-10 and CIFAR-100 as the ID datasets. We use both Transformer-based and CNN-based models to benchmark our method. Detailed experimental settings are as follows.
OOD Datasets.For experiments involving ImageNet-1K as the inliers dataset (ID), we assess the model's performance on five OOD benchmark datasets: Textures (Cimpoi et al., 2014), Places365 (Zhou et al., 2016), iNaturalist (Horn et al., 2017), a subset of 10 000 images sourced from Huang and Li (2021a), ImageNet-O (Hendrycks et al., 2021b) and SUN (Xiao et al., 2010). For experiments where CIFAR-10 (resp. CIFAR-100) serves as the ID dataset, we employ CIFAR-100 (resp. CIFAR-10) alongside the SVHN dataset (Netzer et al., 2011) as OOD datasets. The standard dataset splits, featuring 50 000 training images and 10 000 test images, are used in these evaluations. Further details are provided in Appendix C.1. We refer the reader to Appendix E for additional testing results on the OpenOOD benchmark (Zhang et al., 2023).
Evaluation Metrics.We present our results using two widely adopted OOD metrics (Yang et al., 2021). Firstly, we consider FPR95, the false positive rate when the true positive rate (TPR) reaches \(95\%\), with smaller values indicating superior performance. Secondly, we use the AUROC metric, which is threshold-free and calculates the area under the receiver operating characteristic (ROC) curve. A higher value here indicates better performance. Both metrics are reported as percentages.
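For reference, a minimal sketch of how these two metrics can be computed from ID and OOD score arrays is given below; it assumes that ID samples are expected to receive the higher scores, and all names are ours.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def ood_metrics(scores_id, scores_ood):
    """AUROC and FPR95 (in percent) for an OOD score where ID scores are higher."""
    y_true = np.concatenate([np.ones(len(scores_id)), np.zeros(len(scores_ood))])
    y_score = np.concatenate([scores_id, scores_ood])
    auroc = roc_auc_score(y_true, y_score)
    fpr, tpr, _ = roc_curve(y_true, y_score)
    fpr95 = fpr[np.searchsorted(tpr, 0.95)]   # FPR at the first TPR >= 95%
    return auroc * 100, fpr95 * 100
```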
Experiment Details.We evaluate our method on a variety of neural network architectures, including Transformers and CNNs. ViT (Vision Transformer) (Dosovitskiy et al., 2020) is a Transformer-based image classification model which treats images as sequences of patches. We take the official pretrained weights on ImageNet-21K (Dosovitskiy et al., 2020) and fine-tune them on ImageNet-1K, CIFAR-10, and CIFAR-100. Swin (Liu et al., 2021) is also a transformer-based classification
Figure 2: Feature projections on the first 2 principal components of a PCA fitted on CIFAR-10 (ID) using the ViT penultimate layer representation. OOD data are ImageNet-O (left), Textures (middle) and SVHN (right). The figure shows how the NC1 property is satisfied by ID data, and that OOD data lie around the origin.
model. We use the officially released SwinV2-B/16 model, which is pre-trained on ImageNet-21K and fine-tuned on ImageNet-1K. From the realm of CNN-based models, we use ResNet-18 (He et al., 2015). When estimating the simplex ETF space, the entire training set is used. Further details pertaining to model training are provided in Appendix C.2. As a complementary experiment, we also assess our approach on DeiT (Touvron et al., 2021) in Appendix E.
Baseline Methods.In our evaluation, we compared our method against twelve prominent post-hoc baseline approaches, which we list, along with ample details pertaining to their implementation and associated hyper-parameters, in Appendix B.
Result on ImageNet-1K.In Table 1 we present the results for the ViT model in the first half. The best AUROC values are highlighted in bold, while the second and third best results are underlined. Our method demonstrates superior performance compared to the baseline methods. Across four datasets, we achieve the highest average AUROC and the lowest FPR95. Specifically, we attain an FPR95 of \(27.51\%\), surpassing the second-place method, ViM, by \(1.95\%\). The only dataset where we fall slightly behind is iNaturalist, with our AUROC being only \(0.33\%\) lower than the best-performing approach. In Table 1, we provide the results for the SwinV2 model in the second half. Notably, our method consistently outperforms all other methods in terms of FPR95 on all datasets. On average, we achieve the third-best AUROC and significantly reduce the FPR95 by \(7.47\%\) compared to the second-best performing method, Softmax score. We compare NECO with MaxLogit (Hendrycks et al., 2022) to illustrate the direct advantages of our scoring function. On average, we achieve an FPR95 reduction of \(5.62\%\) for ViT and \(7.99\%\) for Swin when transitioning from MaxLogit to NECO multiplied by the MaxLogit. This performance enhancement clearly underscores the value of incorporating the NC concept into the OOD detection framework. Moreover, NECO is straightforward to implement in practice, requiring simple post-hoc linear transformations and weight masking.
Results on CIFAR.Table 2 presents the results for the ViT model and ResNet-18, both with CIFAR-10 and CIFAR-100 as ID datasets, tested against different OOD datasets. In the first half of the table, we show the results using a ViT model. On the majority of the OOD dataset cases, we outperform the baselines both in terms of FPR95 and AUROC. Only ASH outperforms NECO on the use case CIFAR-100 vs SVHN. On average, we surpass all the baselines on both test sets. In the second half, we show the results using a ResNet-18 model. Similarly to the ViT results, on average we surpass
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Method} & \multirow{2}{*}{
\begin{tabular}{c} **ImageNet-O** \\ **AUROC** \\ \end{tabular} } & \multicolumn{1}{c}{**Textures**} & \multicolumn{1}{c}{**Naturalist**} & \multicolumn{1}{c}{**SUN**} & \multicolumn{1}{c}{**Places365**} & \multicolumn{1}{c}{**Average**} \\ & & AUROC & FPR\(\downarrow\)AUROC & FPR\(\downarrow\)AUROC & FPR\(\downarrow\)AUROC & FPR\(\downarrow\)AUROC & FPR\(\downarrow\)AUROC & FPR\(\downarrow\)AUROC & FPR\(\downarrow\) \\ \hline ViT- & Softmax score & 35.31 & 52.65 & 86.64 & 40.40 & 77.12 & 14.84 & 86.54 & 37.55 & 52.63 & 88.24 & 45.97 \\ B/16 & MaxLogit & 92.23 & 34.00 & 91.69 & 36.90 & 98.72 & 92.15 & 93.98 & 90.46 & 46.75 & 92.99 & 33.13 \\ Energy & 93.03 & 31.65 & 92.13 & 31.15 & 91.49 & 94.46 & 92.65 & 36.90 & 93.43 & 47.25 & 94.50 \\ Energy-RedRed & 50.38 & 31.25 & 20.86 & 45.90 & 20.41 & 91.95 & 36.36 & 90.14 & 42.45 & 93.39 & 30.37 \\ VIM & 94.01 & 28.95 & 91.63 & 83.22 & 92.90 & 20.22 & 92.34 & 90.87 & 44.12 & 92.56 & 29.46 \\ Residual & 92.10 & 83.83 & 84.71 & 52.34 & 92.49 & 29.78 & 90.46 & 40.91 & 85.60 & 53.93 & 90.85 & 37.66 \\ GradNorm & 84.07 & 84.35 & 86.29 & 93.76 & 97.45 & 86.76 & 93.49 & 83.66 & 94.87 & 37.34 & 32.64 \\ Mahalanobis & 84.04 & 30.90 & 91.69 & 93.79 & **93.75** & **94.15** & 93.18 & 38.38 & 88.66 & 46.69 & 93.08 & 30.99 \\ KL-Matching & 89.50 & 54.30 & 54.12 & 53.85 & 97.80 & 51.33 & 83.26 & 92.18 & 92.05 & 99.55 & 85.74 & 93.44 \\ ASH-B & 69.28 & 88.52 & 60.58 & 83.29 & 67.28 & 90.52 & 96.19 & 53.53 & 50.89 & 62.68 & 64.48 & 88.67 \\ ASH-P & 93.12 & 92.00 & 91.37 & 53.78 & 98.75 & 62.52 & 92.36 & 93.50 & 90.41 & 81.81 & 93.32 & 92.63 \\ ASH-S & 92.85 & 29.45 & 91.00 & 39.73 & 98.50 & 72.88 & 92.61 & **33.12** & 93.55 & **41.68** & 92.90 & 30.25 \\
**NECO (ours)** & **94.53 & **25.20** & **92.86** & **32.44** & 93.42 & 93.46 & 93.55 & 93.56 & 90.38 & 24.06 & **94.25** & **94.27** \\ \hline SwinV2GMmax score & 60.79 & 87.01 & 87.01 & 89.18 & 93.87 & 67.66 & 81.26 & 85.65 & 80.27 & 87.75 & 85.66 \\ MaxLogit & 61.34 & 87.95 & 83.06 & 59.55 & 87.40 & 78.17 & 68.50 & 77.43 & 69.41 & 76.75 & 67.18 \\ Energy-Red & 63.12 & 85.75 & 77.19 & 64.44 & 88.54 & 73.80 & 78.60 & 79.70 & 76.68 & 73.59 & 73.94 \\ Energy-Red & 68.38 & 83.85 & 84.56 & 89.66 & 92.31 & 87.91 & 84.62 & 69.27 & 81.41 & 70.19 & **81.46** & 92.03 \\ VIM & 69.06 & 83.85 & 80.16 & 81.87 & 54.90 & 52.74 & 73.92 & 73.76 & 72.76 & 72.79 & 68.22 \\ Residual & 66.52 & 83.80 & 73.76 & 65.00 & 83.23 & 97.70 & 71.03 & 76.41 & 69.90 & 78.53 & 73.47 & 74.72 \\ GradNorm & 37.95 & 93.95 & 33.49 & 93.31 & 82.90 & 50.13 & 91.79 & 96.29 & 83.09 & 95.73 & 33.62 & 95.26 \\ Mahalanobis & **71.87** & 80.65 & 84.51 & 63.35 & 89.81 & 57.10 & 80.28 & 75.39 & 78.52 & 77.10 & 80.92 & 71.80 \\ KL-Matching & 38.60 & 85.70 & 75.30 & 75.16 & 82.59 & 75.22 & 77.63 & 75.21 & 71.89 & 72.53 & 74.88 \\ ASH-B & 47.96 & 95.38 & 89.90 & 48.69 & 97.55 & 52.11 & 95.64 & 52.96 & 91.48 & 74.86 & 96.64 \\ ASH-P & 47.39 & 87.95 & 95.18 & 98.90 & 72.03 & 92.49 & 99.68 & 26.12 & 99.38 & 27.05 & 98.83 \\ ASH-S & 40.36 & 36.95 & 86.43 & 16.15 & 99.00 & 22.15 & 93.53 & 23.79 & 20.70 & 79.16 \\
**NECO (ours)** & 65.03 & **80.55** & 82.27 & **54.67** & **91.89** & **34.41** & 82.13 & **62.26** & **81.46** & **64.08** & **80.56** & **91.19** \\ \hline \hline \end{tabular}
\end{table}
Table 1: OOD detection for NECO vs baseline methods with ImageNet-1K as the ID dataset, using the ViT-B/16 and SwinV2 models. Both metrics AUROC and FPR95 are in percentage. The best AUROC values are highlighted in bold, while the second and third best results are underlined.
the baseline strategies by at least 1.28% in AUROC on the CIFAR-10 cases, and lowering the best baseline FPR95 by 8.67% on the CIFAR-100 cases. On average, our approach outperforms baseline methods in terms of AUROC. However, we notice that our method performs slightly worse on the CIFAR-100-ID/CIFAR-10-OOD task.
## 6 Conclusion
This paper introduces a novel OOD detection method that capitalizes on the Neural Collapse (NC) properties inherent in DNNs. Our empirical findings demonstrate that when combined with over-parameterized DNNs, our post-hoc approach, NECO, achieves state-of-the-art OOD detection results, surpassing the performance of most recent methods on standard OOD benchmark datasets. While many existing approaches focus on modelling the noise by either clipping it or considering its norm, NECO takes a different approach by leveraging the prevalent ID information in the DNN, which will only be enhanced with model improvements. The latent space contains valuable information for identifying outliers, and NECO incorporates an orthogonal decomposition method that preserves the equiangularity properties associated with NC.
We have introduced a novel NC property (NC5) that characterizes both ID and OOD data behaviour. Through our experiments, we have shed light on the substantial impact of NC on the OOD detection abilities of DNNs, especially when dealing with over-parameterized models. We observed that NC properties, including NC5, NC1, and NC2, tend to converge towards zero as expected when the network exhibits over-parameterization. This empirical observation provides insights into why our approach performs exceptionally well on a variety of models and on a broad range of OOD datasets (we refer the reader to the complementary results in Appendix E), hence demonstrating the robustness of NECO against OOD data.
While our approach excels with Transformers and CNNs, especially in the case of over-parametrized models, we observed certain limitations when applying it to DeiT (see the results in Appendix E). These limitations may be attributed to the distinctive training process of DeiT (_i.e._, including a distillation strategy), which necessitates a specific setup that we did not account for in order to prevent introducing bias to the NC phenomenon. We hope that this work has shed some light on the interactions between NC and OOD detection. Finally, based on our results and observations, this work raises new research questions on the training strategies of DNNs that lead to NC in favor of OOD detection.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**CIFAR-100**} & **SVHN** & **Average** & **CIFAR-10** & **SVHN** & **Average** \\ \hline \multirow{3}{*}{VT} & Softmax score & 86.74 & 70.964 & 0.959 & 0.900 & 3.999 & 21.314 & 91.29 & 39.956 & 91.71 & 31.715 \\ & MatLogt & 98.0 & 69.54 & 99.900 & 0.23 & 99.23 & 1.09 & 92.81 & 26.73 & 96.0 & 18.68 & 94.52 & 22.71 \\ & Energy & 98.63 & 5.93 & 29.92 & 0.21 & 92.25 & 3.09 & 92.68 & 26.37 & 96.68 & 15.72 & 94.68 & 21.05 \\ & Energy+RedA & 98.0 & 58.22 & 29.22 & 0.22 & 92.36 & 98.08 & 4.53 & 29.50 & 96.15 & 58.20 & 58.20 \\ & ViM & 98.1 & 99.50 & 0.82 & 99.16 & 2.84 & 92.60 & 95.56 & 25.33 & 91.71 & 26.22 \\ & Residual & 98.9 & 79.09 & 96.5 & 12.64 & 77.97 & 98.54 & 28.42 & 92.43 & 44.65 & 93.53 & 36.54 \\ & GradNormal & 98.1 & 10.92 & 98.08 & 0.98 & 97.15 & 97.10 & 92.00 & 94.85 & 16.69 & 93.21 & 41.45 \\ & Mahalanobis & 87.2 & 89.55 & 95.31 & 28.61 & 97.13 & 12.28 & 98.44 & 20.20 & 93.91 & 37.11 & 95.67 & 93.57 \\ & KL-Maching & 83.4 & 88.26 & 95.57 & 96.09 & 98.13 & 13.77 & 97.34 & 83.01 & 94.17 & 93.83 & 40.04 \\ & ASH-B & 95.1 & 17.49 & 99.09 & 48.11 & 12.11 & 97.05 & 93.01 & 92.99 & 53.97 & 93.53 & 53.49 \\ & ASH-P & 97.89 & 27.22 & 99.00 & 99.26 & 98.94 & 83.15 & 98.22 & **97.4166** & 94.18 & 91.24 \\ & ASH-S & 97.6 & 77.39 & 99.02 & 98.73 & 40.33 & 93.09 & 96.06 & 94.14 & 18.55 & 94.17 & 27.56 \\ & **NECO (ours)** & **98.54** & **8.11** & **99.33** & **99.42** & **44.27** & **95.12** & **13.99** & 96.05 & 84.52 & **95.11** & **94.13** \\ \hline \multirow{3}{*}{ResNet-18} & Softmax score & 86.07 & 67.41 & 91.467 & 86.51 & 67.59 & 75.35 & 83.09 & 77.30 & 88.61 & 76.33 & 84.35 \\ & MaLengi & 85.4 & 95.21 & 93.37 & 13.52 & 99.55 & 45.36 & 74.24 & 83.72 & 72.47 & 85.97 & 36.48 & 35.36 \\ & Energy+RedA & 98.6 & 93.67 & 99.67 & 99.17 & 99.42 & 42.25 & 85.36 & 76.76 & 90.79 & 75.83 & 87.27 \\ & Energy+RedA & 84.2 & 60.04 & 92.31 & 35.17 & 82.27 & 47.61 & 70.85 & 85.99 & 74.41 & 91.05 & 72.07 & 85.25 \\ & VFM & 85.1 & 163.76 & 94.27 & 82.72 & 98.92 & 65.61 & 60.90 & 85.99 & 86.21 & 82.31 & 26.24 & 72.70 \\ & Residual & 96.76 & 76.53 & 90.21 & 48.18 & 43.26 & 93.49 & 99.69 & 72.52 & 86.00 & 95.92 & 93.40 \\ & GradNormal & 80.63 & 97.88 & 72.83 & 49.95 & 96.54 & 72.64 & 64.69 & 94.58 & 95.85 & 85.11 \\ & Mahalanobis & 81.23 & 72.35 & 90.39 & 55.41 & 85.81 & 64.33 & 85.94 & 79.96 & 86.13 & 67.91 & 88.52 \\ & KL-Maching & 77.38 & 81.82 & 86.73 & 49.47 & 82.82 & 85.30 & 73.52 & 77.24 & 82.49 & 79.18 & 94.08 \\ & ASH-B & 73.89 & 84.45 & 83.94 & 47.72 & 78.84 & 54.61 & 74.10 & 84.80 & 76.18 & 82.01 \\ & ASH-B & 83.96 & 89.40 & 92.47 & 72.64 & 89.37 & 62.27 & **76.58** & **92.78** & 89.43 & 76.20 & 76.21 \\ & ASH-S & 82.62 & 83.98 & 91.54 & 39.94 & 87.01 & 51.66 & 72.17 & 85.20 & 82.18 & **74.29** & 76.18 & 88.00 \\ & **NECO (ours)** & **86.61** & **95.7** & **95.20** & **40.5** & **91.27** & **39.12** & 70.28 & 85.70 & **88.57** & **94.38** & **79.42** & **76.03** \\ \hline \hline \end{tabular}
\end{table}
Table 2: OOD detection for NECO vs baseline methods. The ID dataset are CIFAR-10/CIFAR-100, and OOD datasets are CIFAR-100/CIFAR-10 alongside SVHN. Both metrics AUROC and FPR95 are in percentage. The best method is emphasized in bold, and the 2nd and 3rd ones are underlined. |
2310.19903 | A Multi-agent Reinforcement Learning Study of Emergence of Social
Classes out of Arbitrary Governance: The Role of Environment | There are several theories in economics regarding the roots or causes of
prosperity in a society. One of these theories or hypotheses -- named geography
hypothesis -- mentions that the reason why some countries are prosperous and
some others are poor is the geographical location of the countries in the world
as makes their climate and environment favorable or unfavorable regarding
natural resources. Another competing hypothesis states that man-made
institutions particularly inclusive political institutions are the reasons why
some countries are prosperous and some others are poor. On the other hand,
there is a specific political theory developed for the long-term social
development in Iran -- named Arbitrary Rule and Aridisolatic Society which
particularly emphasizes on the role of aridity to shape arbitrary political and
economical institutions in Iran, without any functional social classes in the
society. In this paper, by extending the AI-Economist -- a recently developed
two-level multi-agent reinforcement learning environment -- I show that when
the central planner is ruling the environment by arbitrary rules, the society
evolves through different paths in different environments. In the environment
having band-like vertical isolated patches of natural resources, all mobile
agents are equally exploited by the central planner and the central planner is
also not gaining any income, while in the society having more uniformly
distributed natural resources, the productivity and Maximin are higher and the
society generates a heterogeneous stratified social structure. All these
findings provide a partial answer to the above debate and reconcile the role of
geography and political institutions on the long-term development in a region. | Aslan S. Dizaji | 2023-10-27T13:31:53Z | http://arxiv.org/abs/2310.19903v1 | A Multi-agent Reinforcement Learning Study of Emergence of Social Classes out of Arbitrary Governance: The Role of Environment
###### Abstract
There are several theories in economics regarding the roots or causes of prosperity in a society. One of these theories or hypotheses -named geography hypothesis -mentions that the reason why some countries are prosperous and some others are poor is the geographical location of the countries in the world as makes their climate and environment favorable or unfavorable regarding natural resources. Another competing hypothesis states that man-made institutions particularly inclusive political institutions are the reasons why some countries are prosperous and some others are poor. On the other hand, there is a specific political theory developed for the long-term social development in Iran -named Arbitrary Rule and Aridisolatic Society which particularly emphasizes on the role of aridity to shape arbitrary political and economical institutions in Iran, without any functional social classes in the society. In this paper, by extending the AI-Economist -a recently developed two-level multi-agent reinforcement learning environment -I show that when the central planner is ruling the environment by arbitrary rules, the society evolves through different paths in different environments. In the environment having band-like vertical isolated patches of natural resources, all mobile agents are equally exploited by the central planner and the central planner is also not gaining any income, while in the society having more uniformly distributed natural resources, the productivity and Maximin are higher and the society generates a heterogeneous stratified social structure. All these findings provide a partial answer to the above debate and reconcile the role of geography and political institutions on the long-term development in a region.
## 1 Introduction
There are at least three general theories regarding the roots of prosperity in a society. The first theory, named the geography hypothesis, mentions that the reason why some countries are prosperous and some others are poor is the location of the countries in the world, which makes them particularly amenable to using and extracting natural resources or makes them unsuited for it. The second hypothesis, called the culture hypothesis, mentions that the reason why some countries are rich and some others are poor lies in the inherent features of the people living in those countries -their culture, religion, or ethnicity -which make them particularly responsive to the ideas of work ethic, progress, and innovation, or not. Finally, the third theory mentions that the reason behind the prosperity of some countries and the poverty of others is that the leaders of poor countries are ignorant about how to rule their countries to guide their nations toward prosperity. All these theories are backed with
some historical data, but at the same time each of them can be refuted by counter-examples (Acemoglu and Robinson, 2012).
More recently, a new class of general theories regarding the nature and cause of prosperity in a country has emerged, which points towards the importance of inclusive economic and particularly political institutions in generating a prosperous or less prosperous future for a country. This theory, which is backed by Daron Acemoglu and colleagues, mentions that inclusive institutions -which level the economic and political landscape for all groups in the society and so motivate them to participate in fair economic and political activities -make a nation or a country prosperous (Acemoglu et al., 2011; Acemoglu and Robinson, 2012; Acemoglu et al., 2015; Acemoglu and Wolitzky, 2020; Acemoglu and Robinson, 2022a,b).
On the other hand, there is a specific theory for the long-term social development in Iran, named Arbitrary Rule and Aridisolatic Society, developed by Homa Katouzian (Katouzian, 2003), which indicates that the nature of social institutions in Iran is completely different from their counterparts in the west. As an instance, using historical data, Katouzian indicates that in Iran the state is the only functional entity of the society which is completely independent from all urban social classes, while there is not any functional urban social class whose identity is independent from the state, and as a result, all urban classes are more or less empirical. Moreover, while there is a large body of laws, due to the lack of functional urban classes, there are no non-violable binding rules between the state and the society, which makes the government essentially arbitrary. Katouzian attributes the emergence of the arbitrary governance to the aridity in the vast region of the Iranian plateau. The aridity generates a large number of small isolated villages whose individual surplus is not sufficient to found a feudal base, but the sum of the whole surplus of these villages could make an arbitrary governance with large transportation facilities across the country and infrastructure in the urban areas. Then the arbitrary rule could make all urban social classes dependent on itself while perpetuating its power across the country until the point that some internal or external conditions ignite a revolution. At this point, since the arbitrary governance is independent from any social classes, all urban classes are more or less against the government.
Here, I bring the summary of the Katouzian's theory using his own words: "To sum up, aridity did play a basic role in shaping the structure of the Iranian political economy and its institutional features, but it did so (to borrow Tolstoy's words) in its own peculiar way: (a) it served to create autonomous village units of production, none of which could produce a sufficiently large surplus to provide a feudal power base and (b) but, given the expanses of the region, the collective surplus of all these isolated and autonomous villages taken together was so large that, once taken by an external force, it could be used as the economic source of a countrywide despotic power. The despotic apparatus could then impose itself and its arbitrary will on all the social classes, and prevent the subsequent fragmentation of politician power until such time that a combination of internal and/or external pressures would destroy it, and -sooner or later -replace it by another despotic apparatus. The size of the direct and indirect collective agricultural surplus was so large as to enable these despotic states to spend on transport, communications, military and bureaucratic organizations, and so on, which both maintained their hold on the land and prevented the later emergence of feudal autonomy in agriculture, or bourgeois citizenship in towns." (Katouzian, 2003)
In this paper, I intend to reconcile the two theories posed by Homa Katouzian and Daron Acemoglu by showing the interplay of natural environment and the resultant institutions. I perform this by extending the AI-Economist framework (Zheng et al., 2022), a recently developed two-level multi-agent reinforcement learning environment. In this framework, one single agent is a rational social planner who designs a particular mechanism or policy, generally with the goal of optimizing a particular kind of social welfare function in the society. The other agents are a set of rational economic agents who behave in response to the implemented mechanism or policy, generally following their own self-interest. This framework has been used to model the tax-gaming behavior of agents -optimizing their labors, trading, and building, while the central social planner maximizes productivity or equality in the society (Zheng et al., 2022). I explained and used the extension of the AI-Economist, the Modified AI-Economist, in an accompanying paper (Dizaji, 2023) in which I show the impacts of the governing systems or institutions on the origin of morality, prosperity or equality, and fairness in the society. Here, using the same framework but considering two parallel environments -one comprised of band-like isolated patches of natural resources and the other of uniformly distributed natural resources -I intend to show that if the central planner is an arbitrary ruler, each environment evolves through a different path. The band-like environment finally converges to a state in which all the
agents become powerless in front of the naked power of the arbitrary governance, while the central planner's net total tax revenue also goes to zero. On the other hand, the uniform environment converges to a final situation in which the society becomes composed of distinct, stratified social classes, and the central planner is also able to continue collecting non-zero taxes. In the band-like environment, while the final Equality is higher (basically all the agents are equally exploited by the central government), the Productivity and Maximin are lower than in the uniform environment. This interesting result is obtained despite the fact that the total amounts of natural resources in the band-like environment are slightly larger than in the uniform environment. Furthermore, the arbitrary nature of the central planner is devised by letting it return arbitrary amounts of the collected taxes to wealthier agents. Overall, this paper is another manifestation of the power of multi-agent reinforcement learning to model social and economical phenomena (Trott et al., 2021; Zheng et al., 2022; Zhang et al., 2022; Leibo et al., 2019, 2021; Johanson et al., 2022).
## 2 Modified AI-Economist
For a complete description of the AI-Economist, please refer to the original paper (Zheng et al., 2022) and Appendix A. Here, the three major modifications made to the original framework for the purpose of this paper are explained.
First, one new resource material -iron -is added to the environment, and now with three building materials, the number of possible house types is diversified to three: a red house is made of wood and stone, a blue house is made of wood and iron, and a green house is made of stone and iron. Also, three different build-skills for three different house types are introduced. However, the move and gather skills and labors, as well as the trade labor, are equal and fixed across all materials and agents. Also, in the modified version of the AI-Economist, the social planner is able to observe the complete public map of the environment (Fig. 1).
Second, for the purpose of this paper, there are two different environments. The first one is composed of band-like vertical isolated patches of unique natural resources placed in the whole environment (Fig. 2(A)), and the second one is composed of uniformly distributed natural resources placed in the environment (Fig. 3(A)).
Third, the central planner collects the taxes as in the original framework of the AI-Economist, but here it returns arbitrary partial amounts of the net total tax revenue to the wealthier agents of the environment using the following two formulas. In the following, \(urn\) refers to a uniform random number, \(nti\) refers to the net total income of all agents, \(noa\) refers to the number of all agents, \(nowa\) refers to the number of wealthier agents, \(ri\) refers to a random integer, and finally \(nttr\) refers to the net total tax revenue of the central planner.
\[Last\;Income[Agent]>(0.7+0.1*urn)*\frac{nti}{noa} \tag{1}\]
\[Return\;Tax[Agent]=\frac{noa}{nowa}*(1+(-1)^{ri}*urn*(1-\frac{nowa}{noa}))*\frac{nttr}{noa} \tag{2}\]
In the original framework of the AI-Economist, the net total tax revenue of the central planner is equally divided across all the agents, while in this modified framework, the net total tax revenue of the social planner is somewhat arbitrarily divided among a group of pre-selected wealthier agents -those having incomes in the previous tax period above partially random limits (Eq. 1). Thus the central planner here is arbitrary and non-inclusive, and its discriminative power is in favor of the wealthier agents in the society. Simultaneously, Eq. 2 is devised by assuming that, as an episode progresses, slowly all the agents earn incomes above the pre-specified random limits, and thus they are included in the tax return scheme of the central planner. In that case, all the agents get an equal share of the net total tax revenue. Overall, these formulas -in an ideal situation -can model the emergence of equality before the law (Acemoglu and Wolitzky, 2020). In the next section, it is shown that this assumption is only partially confirmed, as evidenced by the interesting and distinct patterns of convergence in the two environments.
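For illustration, the following minimal NumPy sketch implements the redistribution rule of Eqs. 1 and 2. All names are my own, the actual implementation lives in the repository referenced in the Broader Impact section, and details such as whether \(urn\) and \(ri\) are drawn once per tax period or once per agent are assumptions made here.

```python
import numpy as np

def arbitrary_tax_return(last_incomes, net_total_tax_revenue, rng):
    """Sketch of the arbitrary tax-return rule of Eqs. 1 and 2.

    last_incomes : array (noa,) of agent incomes in the previous tax period
    net_total_tax_revenue : total taxes collected by the planner (nttr)
    rng : numpy random Generator, e.g. np.random.default_rng()
    """
    noa = len(last_incomes)
    nti = last_incomes.sum()
    # Eq. 1: agents above a partially random income limit count as "wealthier"
    limit = (0.7 + 0.1 * rng.uniform()) * nti / noa
    wealthy = last_incomes > limit
    nowa = wealthy.sum()
    returns = np.zeros(noa)
    if nowa > 0:
        # Eq. 2: arbitrary (randomly signed) share of the revenue per wealthy agent
        urn = rng.uniform()
        sign = (-1) ** rng.integers(0, 2)
        returns[wealthy] = (noa / nowa) * (1 + sign * urn * (1 - nowa / noa)) \
                           * net_total_tax_revenue / noa
    return returns
```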
Figure 2: Sample plots obtained from running the Modified AI-Economist in the band-like environment with equality times productivity as the objective function of the central planner. (A) The environment across five time-points of an episode, (B) the movement of the agents across an episode, (C) the budgets of three resources plus coin and labor of the agents across an episode, (D) and the trades of three resources of the agents across an episode. As it is clear, at some point during an episode, the agents cannot earn more incomes (panel (C), the fourth plot from the left) and the trades of resources also largely decrease (panel (D)).
Figure 1: A schematic figure showing the two environments of the Modified AI-Economist used in this paper. In all simulations of this paper, there are 5 agents in the environment which simultaneously cooperate and compete to gather and trade three natural resources, using them to build houses and earn incomes, and at the end of each tax period, pay their taxes to the central planner. The central planner optimizes its own reward function which could be a combination of equality and productivity in the society, and returns arbitrary partial amounts of the total collected taxes to the mobile agents.
## 3 Results
Fig. 4 shows the amounts of three natural resources in the environment across an episode. It is clear from this plot that the level of natural resources in the two environments remains constant, while this level is slightly higher in the band-like environment compared to the uniform environment for two out of three natural resources. Fig. 5 shows that the Productivity and Maximin are higher in the uniform compared to the band-like environment, while the Equality is lower. Finally, Fig. 6 shows the amounts of tax return by the central planner to all agents across an episode for all 8 runs of this paper depicted in Fig. 9. The top-row shows the plots for the band-like environment, in which the amounts of tax return go to zero for all agents as an episode progresses for three out of four simulations. Moreover, the bottom-row shows the plots for the uniform environment, in which the amounts of tax return do not go to zero across an episode for all four simulations. All these plots (from Fig. 2 to Fig. 6) speak to the following facts: in the band-like environment with isolated vertical patches of unique natural resources, as an episode continues the agents earn less and less income until the point that their net total income reaches zero. Simultaneously, the net total tax revenue of the central planner also decreases and becomes equal to zero. Basically, in this environment at the end of one episode, all the agents are equally exploited by the central planner and there is not any difference among them. The central planner is also not gaining anything further from the agents, and we could say that the system has failed. On the other hand, in the uniform environment, the net total income of all agents does not reach zero, and as a result, the central planner's net total tax revenue also does not reach zero across an episode. Basically, until the end of an episode, the agents keep their
Figure 3: Sample plots obtained from running the Modified AI-Economist in the uniform environment with equality times productivity as the objective function of the central planner. (A) The environment across five time-points of an episode, (B) the movement of the agents across an episode, (C) the budgets of three resources plus coin and labor of the agents across an episode, (D) and the trades of three resources of the agents across an episode. As it is clear, the agents are able to earn more incomes (panel (C), the fourth plot from the left) and the trades of resources are also unchanged across an episode (panel (D)).
stratified social structure while the central planner is also able to continue collecting the taxes. This is the reason why in the uniform environment, the Equality is lower than the band-like environment, while both Productivity and Maximin are higher. The whole point of this paper is that aridity as it is manifested by the band-like environment is more prone to generate exploitative central planner and a society without any functional social classes, while more favorable environment such as the uniform environment is inclined to generate less exploitative central planner with more stratified social classes. Overall, the model used in this paper is a small step toward reconciling the interplay of environments and institutions on the long-term development of a region.
Figure 4: The level of natural resources across the two environments, band-like and uniform. As it is clear from the plots, this level remains almost constant across one episode and overall it is slightly higher for band-like environment for two out of three natural resources compared to the uniform environment.
Figure 5: Productivity, Equality, and Maximin across the two environments, band-like and uniform. As it is clear from the plots, the Productivity and Maximin are higher while the Equality is lower in the uniform compared to the band-like environment.
## 4 Final Remarks
Current LimitationsThere are at least three limitations to the current study. The first limitation comes from the fact that, for each set of input parameters of the Modified AI-Economist, only one simulation has been run to generate one set of results. Then the similar runs are pooled together to have average results across different conditions. Thus it is reasonable to run a simulation with a given set of parameters multiple times and then average the results. The second limitation is the number of episodes for which each training has been run, which is equal to 5000. While even with this amount of episodes, the average reward plots across training iterations (Fig. 10) show that almost all simulations have converged, it is wise to try more RL iterations. Finally, the third limitation comes from the fact that the discriminative nature of the central planner in this paper is only due to the arbitrary tax return scheme; however, the central planner has an objective function treating all mobile agents equally, which has been kept unchanged across the two environments. One immediate modification to the current modeling is to partially include the agents in the objective function of the central planner to see how this changes the dynamics in the two environments.
Future DirectionsBesides the above modifications, two other important extensions of this project can be envisioned. The first one is to test other values for the constants in Eq. 1 or other modeling frameworks for Eqs. 1 and 2. This way, we could make sure that the results obtained here are robust. The second direction is to model punishment in this framework. One important feature of arbitrary governance is that there is not any non-violable set of laws between the central planner and the mobile agents, even considering the case of punishment, so it would be interesting to model this phenomenon in the general framework of the AI-Economist.
## Broader Impact
As I discussed in the accompanying paper (Dizaji, 2023), for works similar to the current paper, it is possible to envision many policy implications; however, we should be cautious about interpreting these results more than what is appropriate, due to the many simplifying assumptions inherent in any mathematical modeling. Overall, the limited modeling framework used in this paper shows that, in the discussion of economic development, we should consider the roles of geography and political institutions together, and also compare the results of Productivity, Equality, and Maximin cautiously across different environments.
Figure 6: Amount of tax return to all agents across 20 years of one simulation for all 8 runs of this paper with the order of Fig. 9 from left-to-right and top-to-bottom. The top-row shows the plots for 4 simulations of the band-like environment, while the bottom-row shows the plots for 4 simulations of the uniform environment. As it is clear from the plots, in three out of four plots of the band-like environment, at some point during an episode, the amounts of tax returns to all agents are getting zero due to the reason that the net total income of all agents and thus the central planner’s net total tax revenue are both getting zero. This situation does not happen for the agents in the uniform environment, and in all plots until the end of an episode, they keep their socially stratified structure.
For further information, please refer to the Ethics section of the original AI-Economist paper (Zheng et al., 2022). Particularly, that section emphasizes the full transparency of the code of a project with a similar scope. As a result, I provide an open Github repository ([https://github.com/aslansd/modified-ai-economist](https://github.com/aslansd/modified-ai-economist)) containing all the required codes, simulations, and notebooks to generate the runs and plots of this paper.
## Acknowledgments and Disclosure of Funding
_AutocurriculaLab_ has been funded in March 2022 and since then has been supported by multiple agencies. Hereby, I acknowledge their supports.
|
2302.02854 | NA-SODINN: a deep learning algorithm for exoplanet image detection based
on residual noise regimes | Supervised deep learning was recently introduced in high-contrast imaging
(HCI) through the SODINN algorithm, a convolutional neural network designed for
exoplanet detection in angular differential imaging (ADI) datasets. The
benchmarking of HCI algorithms within the Exoplanet Imaging Data Challenge
(EIDC) showed that (i) SODINN can produce a high number of false positives in
the final detection maps, and (ii) algorithms processing images in a more local
manner perform better. This work aims to improve the SODINN detection
performance by introducing new local processing approaches and adapting its
learning process accordingly. We propose NA-SODINN, a new deep learning binary
classifier based on a convolutional neural network (CNN) that better captures
image noise correlations in ADI-processed frames by identifying noise regimes.
Our new approach was tested against its predecessor, as well as two
SODINN-based hybrid models and a more standard annular-PCA approach, through
local receiving operating characteristics (ROC) analysis of ADI sequences from
the VLT/SPHERE and Keck/NIRC-2 instruments. Results show that NA-SODINN
enhances SODINN in both sensitivity and specificity, especially in the
speckle-dominated noise regime. NA-SODINN is also benchmarked against the
complete set of submitted detection algorithms in EIDC, in which we show that
its final detection score matches or outperforms the most powerful detection
algorithms.Throughout the supervised machine learning case, this study
illustrates and reinforces the importance of adapting the task of detection to
the local content of processed images. | Carles Cantero, Olivier Absil, Carl-Henrik Dahlqvist, Marc Van Droogenbroeck | 2023-02-06T15:22:52Z | http://arxiv.org/abs/2302.02854v2 | # NA-SODINN: a deep learning algorithm for exoplanet image detection based on residual noise regimes
###### Abstract
Context:Supervised machine learning was recently introduced in high-contrast imaging (HCI) through the SODINN algorithm, a convolutional neural network designed for exoplanet detection in angular differential imaging (ADI) data sets. The benchmarking of HCI algorithms within the Exoplanet Imaging Data Challenge (EIDC) showed that (i) SODINN can produce a high number of false positives in the final detection maps, and (ii) algorithms processing images in a more local manner perform better.
Aims:This work aims to improve the SODINN detection performance by introducing new local processing approaches and adapting its learning process accordingly.
Methods:We propose NA-SODINN, a new deep learning architecture that better captures image noise correlations by training an independent SODINN model per noise regime over the processed frame. The identification of these noise regimes is based on a novel technique, named PCA-pmaps, which allows to estimate the distance from the star in the image from which background noise starts to dominate over residual speckle noise. NA-SODINN is also fed with local discriminators, such as S/N curves, which complement spatio-temporal feature maps when training the model.
Results:Our new approach is tested against its predecessor, as well as two SODINN-based hybrid models and a more standard annular-PCA approach, through local ROC analysis of ADI sequences from VLT/SPHERE and Keck/NIRC-2 instruments. Results show that NA-SODINN enhances SODINN in both the sensitivity and specificity, especially in the speckle-dominated noise regime. NA-SODINN is also benchmarked against the complete set of submitted detection algorithms in EIDC, in which we show that its final detection score matches or outperforms the most powerful detection algorithms, reaching a performance similar to that of the Regime Switching Model algorithm.
Conclusions:Throughout the supervised machine learning case, this study illustrates and reinforces the importance of adapting the task of detection to the local content of processed images.
## 1 Introduction
The direct imaging of exoplanets through 10-m class ground-based telescopes is now a reality of modern astrophysics (e.g., Bohn et al. 2021; Chauvin et al. 2017; Keppler et al. 2018; Marois et al. 2008b, 2010; Rameau et al. 2013; Wagner et al. 2016). Reaching this milestone is the result of significant advances in the field of high-contrast imaging (HCI). For instance, extreme adaptive optics (AO) are routinely used during observations to correct image degradation caused by the Earth's atmosphere (Snik et al. 2018). In the same way, dedicated HCI instruments, such as Subaru/SCExAO (Lozi et al. 2018) or VLT/SPHERE (Beuzit et al. 2019), make use of state-of-the-art coronagraphs (Soummer 2005; Mawet et al. 2009) in order to block out the starlight to mitigate the huge flux ratio (or contrast) between a host star and its companions. Despite all these approaches, a high contrast image is still affected by speckle noise, due to residual aberrations that arise in the optical train of the telescope and instrument (Males et al. 2021). Speckles are scattered starlight blobs in the image which can mimic the expected signal of an exoplanet in both shape and contrast. Therefore, beyond dedicated instrumental developments, powerful image post-processing algorithms are needed to disentangle true companions from speckles. In order to help algorithms to achieve this goal, different observing strategies have been proposed, the most popular being angular differential imaging (ADI, Marois et al. 2006). An ADI data set consists of a sequence of high contrast images acquired in pupil-stabilized mode, where the instrument derotator tracks the telescope pupil instead of the field, in such a way that the instrument and optics in the telescope stay aligned while the image rotates in time due to the Earth rotation. As a result, speckles associated with the telescope and instrument optical train remain mostly fixed in the focal plane while the astrophysical signal rotates around the star as a function of the parallactic angle.
Currently, there exists a plethora of post-processing detection algorithms that work on ADI sequences. Most of these algorithms belong to the PSF-subtraction family, which aims to model the speckle field and subtract it from each frame in the ADI sequence, de-rotate the residual images according to the parallactic angles, and finally collapse them into a final frame (Marois et al. 2008a), commonly referred to as processed frame. Examples of these techniques are the locally optimized combination of images (LOCI, Lafreniere et al. 2007)
and its variants TLOCI (Marois et al., 2014) and MLOCI (Wahhaj et al., 2015), principal component analysis (PCA, Soummer et al., 2012; Amara and Quanz, 2012), the low-rank plus sparse decomposition (LLSG, Gomez Gonzalez et al., 2016), and the non-negative matrix factorization (NMF, Ren et al., 2018). PSF subtraction is usually followed by a detection algorithm, which can be either based on an S/N map (Mawet et al., 2014) or on a more advanced technique, such as the standardized trajectory intensity mean (STIM, Pairet et al., 2019) or the regime-switching model (RSM, Dahlqvist et al., 2020). Another family of algorithms, based on an inverse problem approach, relies on directly modeling the expected planetary signal and tracking it along the ADI sequence. This is typically done by estimating the contrast of the potential planetary signal via maximum likelihood estimation. Examples of these methods include ANDROMEDA (Cantalloube et al., 2015), the forward model matched filter (FMMF, Ruffio et al., 2017), the exoplanet detection based on patch covariances (PACO, Flasseur et al., 2018), or the temporal reference analysis of planets (TRAP, Samland et al., 2021). Recently, a new post-processing approach based on machine learning has emerged in HCI. In particular, SODIRF and SODINN (Gomez Gonzalez et al., 2018) are two binary classifiers that use a random forest and a convolutional neural network, respectively, to distinguish between companion signatures and residual noise in processed frames. More recently, Gebhard et al. (2022) proposed a modified version of the half-sibling regression by Scholkopf et al. (2016) using a ridge regression with generalized cross-validation.
Most of these techniques were benchmarked in the context of the Exoplanet Imaging Data Challenge (EIDC, Cantalloube et al., 2020, 2022), the first platform designed for a fair and common comparison of processing algorithms for exoplanet detection and characterization in high-contrast imaging. From the whole set of conclusions provided by the first EIDC phase (Cantalloube et al. 2020), we rely on two of them to motivate this paper. First, we observed that detection algorithms that exploit the local behaviour of image noise obtained the highest detection score in the challenge leader-board. Second, we found that supervised machine learning algorithms produced a relatively high number of false positives, compared with more standard algorithms. Thereby, with the aim of enhancing the supervised machine learning models, we explore, in this paper, a new local noise approach, through which they can better exploit noise statistics in the ADI data set. This approach relies on the existence of two noise regimes in the processed frame: a speckle-dominated residual noise regime close to the star, and a background-dominated noise regime further away. Our goal is to spatially define these regimes in the processed frame through the study of their statistical properties and then adapt the SODINN neural network to work separately in each of them in order to improve its detection performance. Therefore, in Sect. 2 we first revisit noise statistics in HCI and present a novel statistical method that allows us to empirically delimit noise regimes in processed frames. Then, in Sect. 3, we introduce the NA-SODINN detection algorithm, a neural network architecture optimized to work on noise regimes. Our deep learning method is also fed with local discriminators, such as S/N curves that contain additional physically-motivated features and help the trained model to better disentangle an exoplanet signature from speckle noise. In Sect. 4, NA-SODINN is evaluated through local ROC analysis on a series of ADI data sets obtained with various instruments. During the evaluation, NA-SODINN is benchmarked against other state-of-the-art HCI detection algorithms. Section 5 concludes the paper.
## 2 Noise regimes in processed ADI images
The term _local_ is often used in image processing to describe a process applicable to a smaller portion of the image, such as the neighborhood of a pixel, in which pixel values exhibit a certain amount of correlation. In HCI, defining image locality thus implies a good comprehension of the physical information captured in the image. A common manner to define locality is linked to the understanding of noise distribution along the image field-of-view, and how this can prevent the detection of exoplanets. For example, after some pre-processing steps (including background subtraction), a high-contrast image is composed of three independent components: (1) residual starlight in the form of speckles, (2) the signal of possible companions, and (3) the statistical noise associated with all light sources within the field-of-view, generally dominated by background noise in infrared observations. In these raw images, exoplanets are hidden because starlight speckles and/or background residuals dominate at all angular separations, and act as a noise source for the detection task. According to their origin, starlight speckles can be classified as instrumental speckles (Hinkley et al., 2007; Goebel et al., 2016), which are generally long-lived and therefore referred to as quasi-static speckles, and atmospheric speckles, which have a much shorter lifetime (Males et al., 2021). Speckle intensity is known to follow a modified Rician probability distribution (Soummer et al., 2007). Here, the locality of the noise is driven by the distance to the host star (Marois et al., 2008a), which already gives an indication of how local noise will be defined in a processed image. Consequently, a large fraction of post-processing algorithms currently work and process noise on concentric annuli around the star. For example, the annular-PCA algorithm (Absil et al., 2013; Gomez Gonzalez et al., 2016) performs PSF subtraction with PCA on concentric annuli. Nevertheless, more sophisticated local approaches have recently been proposed in the literature. For instance, both the TRAP algorithm (Samland et al., 2021) and the half-sibling regression algorithm (Gebhard et al., 2022) take into account the symmetrical behavior of speckles around the star when defining pixel predictors for the model.
In this section, we aim to introduce an alternative local processing, well-suited for the SODINN framework as explained later in the paper, based on the spatial division of the processed frame into (at least) two noise regimes. For illustrative purposes, we make use, in this section, of two ADI sequences chosen from the set of nine ADI sequences used in EIDC (Cantalloube et al., 2020) (see Table A.1 for more information about the EIDC data sets). Our two ADI sequences, referred to as _sph2_ and _nirc3_, were respectively obtained with the VLT/SPHERE instrument (Beuzit et al., 2019) and the Keck/NIRC-2 instrument (Serabyn et al., 2017). They have the advantage of not containing any confirmed or injected companion, which makes them appropriate for algorithm development and tests that rely on the injection of exoplanet signatures in the image.
### Spatial noise structure after ADI processing
Performing PSF subtraction on each high-contrast image in an ADI sequence generates a sequence of residual images where speckle noise is significantly reduced, and partly whitened (Mawet et al., 2014). After de-rotating these residual images based on their parallactic angle, and combining them into a final frame, the remaining speckles are further attenuated and whitened. This final frame is commonly referred to as _processed frame_. Because of the different post-processing steps and the whitening operator that removes correlation effects, the majority of HCI detection algorithms make use of the central limit theorem to state that residual noise in processed frames follows a Gaussian distribution, an assumption that even today has not been proven experimentally. From practice, it is known that this Gaussian assumption leads to high false positive detection rates (Marois et al. 2008a; Mawet et al. 2014), since residual speckle noise in processed frames is never perfectly Gaussian, and still dominates at small angular separations. Pairet et al. (2019) found experimentally that the tail decay of residual noise close to the star is better explained by a Laplacian distribution than by a Gaussian distribution. Later, Dahlqvist et al. (2020) reached the same conclusion by applying a Gaussian and a Laplacian fit to the residuals of PCA-, NMF-, and LLSG-processed frames. These experimental results suggest the presence of two residual noise regimes in the processed frame: a non-Gaussian noise regime close to the star, dominated by residual speckle noise, and a Gaussian regime further away, dominated by background noise.
### Identification of noise regimes
Based on the current understanding of the local statistics of noise in a processed frame, we now aim to spatially delimit both noise regimes in the image. To do so, we try to find the best approximation of the radial distance from the star at which residual speckle noise becomes negligible compared to background noise, which is uniform over the whole field-of-view (Fig. 1).
#### 2.2.1 Paving the image field-of-view
In order to find the radius at which background noise starts to dominate in the image, we study the evolution of noise statistics as a function of angular separation. We first pave the full image field-of-view through concentric annuli of \(\lambda/D\) width (Fig. 1). Each annulus contains pixels that are expected to be drawn from the same parent population (Marois et al. 2008a). Note that, in the presence of residual speckles, pixels that contain information from the same speckle are all spatially correlated. When background noise dominates over residual speckle noise, we can instead assume that all pixels in an annulus are independent, since photon noise occurs on a pixel-wise basis. In HCI, a common procedure to guarantee the independence of pixel samples when performing statistical analysis is to work by integrating pixel intensities on non-overlapping circular apertures of \(\lambda/D\) diameter within the annulus (Mawet et al. 2014), as shown in Fig. 2. This procedure is based on the characteristic spatial scale of residual speckles (\(\sim\lambda/D\) size). However, Bonse et al. (2022) have recently shown that, in the presence of speckle noise, this independence assumption on non-overlapping apertures is incorrect. Instead, they propose to (i) only consider the central pixel value in each circular aperture, to produce a more statistically independent set of pixels, and (ii) possibly repeat the experiment with various spatial arrangements of the non-overlapping apertures to reduce statistical noise in the measured quantities. We follow this recommendation and therefore, for the rest of this study, we define our annulus samples by only taking the central pixel value of each non-overlapping circular aperture (Fig. 2).
One limitation in using non-overlapping apertures is the small sample statistics problem, especially at small angular distances (Mawet et al. 2014). Small samples weaken the significance of any statistical analysis, so that the derived conclusions are not statistically robust. In order to avoid this issue, we propose to use the concept of a rolling annulus (Fig. 2) that always contains a minimum number of independent pixels \(N\). It can be understood as an annular window around the star for which the inner boundary moves in \(1\lambda/D\) steps, while the outer boundary is set to achieve the criterion on the minimum number
Figure 1: Processed frame from \(sph2\) data set with both speckle-dominated and background-dominated residual noise regimes and their annular split (black circle). The best approximation of this split is what we aim to find in this section.
Figure 2: Rolling annulus with \(N=100\) over the processed frame of Fig. 1. The first rolling annulus (in red), the ninth (in blue) and the eighteenth (in green) are displayed over the central pixel pavement in the image. The full list of rolling annuli is shown on the black line below.
of independent pixels. An example of this process with \(N=100\) pixels is shown in Fig. 2, where the first rolling annulus that achieves the condition, composed of all central pixels of the non-overlapping apertures between 1 and \(6\lambda/D\), is displayed in red color over the processed frame. Then, the rolling annulus moves away from the star changing its boundaries as illustrated with the black line at the bottom of Fig. 2. For example, the ninth rolling annulus (in blue) with \(N=100\) is located between 9 and 10 \(\lambda/D\), and the eighteenth rolling annulus (in green) is at \(18\lambda/D\) distance, achieving the \(N=100\) condition without the need to expand the region to another annulus. In this paper, we set \(N=100\), considered to be the minimum number of samples required to reach a reliable statistical power and significance for our analysis.
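To make this construction concrete, the short Python sketch below groups the \(1\lambda/D\)-wide annuli into rolling annuli that each contain at least \(N\) independent samples, approximating the number of non-overlapping apertures at radius \(r\) (in \(\lambda/D\) units) by \(2\pi r\). The function name and the integer rounding are illustrative choices rather than the exact implementation used in this work.

```python
import numpy as np

def rolling_annuli(r_max, n_min=100):
    """Group 1-lambda/D-wide annuli into rolling annuli holding at least
    `n_min` independent samples (one central pixel per non-overlapping
    lambda/D aperture, approximately 2*pi*r apertures at radius r)."""
    n_apertures = [int(2 * np.pi * r) for r in range(1, r_max + 1)]
    annuli = []
    for start in range(r_max):               # inner edge moves by 1 lambda/D
        stop, total = start, 0
        while stop < r_max and total < n_min:
            total += n_apertures[stop]
            stop += 1
        if total >= n_min:
            annuli.append((start + 1, stop))  # (inner, outer) radii in lambda/D
    return annuli

# With n_min=100 this reproduces the boundaries quoted above, e.g.
# rolling_annuli(20)[0] == (1, 6) and rolling_annuli(20)[8] == (9, 10).
print(rolling_annuli(20)[:3])
```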
#### 2.2.2 Statistical moments
Once the processed frame is paved, we first study the evolution of different statistical moments as a function of the angular separation to the star: the variance (amount of energy/power), the skewness (distribution symmetry), and the excess kurtosis (distribution tails). Figure 3 shows this evolution for the case of the _sph2_ (top row) and _nirc3_ (bottom row) data sets, on which we apply annular-PCA to produce the processed frames. We observe that the variance decreases as the rolling annulus moves away from the star. This trend is common to both data sets and is what we would expect in physical terms, as the intensity of residual speckles varies rapidly with angular separation, especially at short distance. We also see that this behaviour is damped when using a larger number of principal components (PCs), which leads to a more effective speckle subtraction. Regarding the skewness analysis, we adopt the convention of Bulmer (1979), which states that a distribution is symmetrical when its skewness ranges from \(-0.5\) to \(0.5\). For both data sets, we clearly observe a loss of symmetry at small angular separations. The presence of speckles can provoke this distribution asymmetry due to their higher intensity values in comparison with the background. Looking now at the excess kurtosis in Fig. 3, we observe a strong leptokurtic1 trend for the entire set of PCs at small angular separations, and for both data sets. This matches the fact that a Laplacian distribution better fits the tail decay of residual noise (Pairet et al., 2019), since it is, by definition, leptokurtic. At higher angular separations instead, we observe differences between both data sets. In the _sph2_ processed frames, we detect one mesokurtic regime approximately between 6-13\(\lambda/D\) followed by a weaker leptokurtic regime approximately between 14-19\(\lambda/D\). For _nirc3_, we only observe one mesokurtic regime at large distance from the star, beyond about 3-6\(\lambda/D\) (Fig. 3).
Footnote 1: In statistics, a leptokurtic distribution has a kurtosis greater than the kurtosis of a normal distribution (mesokurtic).
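For reference, the moments used in this analysis can be obtained from the central-pixel sample of one rolling annulus with standard SciPy estimators, as in the minimal sketch below (Fisher's convention is used, so a Gaussian sample has zero excess kurtosis).

```python
import numpy as np
from scipy.stats import skew, kurtosis

def annulus_moments(samples):
    """Variance, skewness and excess kurtosis of the central-pixel
    sample extracted from one rolling annulus."""
    samples = np.asarray(samples, dtype=float)
    return (np.var(samples, ddof=1),
            skew(samples),
            kurtosis(samples, fisher=True))
```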
#### 2.2.3 Normality test combination analysis
Another way to explore the spatial distribution of noise is to use hypothesis testing. Assuming that residual speckle noise is non-Gaussian by nature, while background noise is Gaussian (see Sect. 2.1), we can assess the probability of the null hypothesis \(H_{0}\)
Figure 3: Statistical moments evolution based on a rolling annulus which paves the full annular-PCA processed frame. The top and bottom rows refer, respectively, to the _sph2_ and _nirc3_ ADI sequences. Coloured curves on each subplot refer to different numbers of principal components.
that the data are normally distributed, _i.e._, explained solely by background noise. We rely on a combination of a series of normality tests, making use of four of the most powerful tests: the Shapiro-Wilk test (\(sw\), Shapiro & Wilk 1965), the Anderson-Darling test (\(ad\), Anderson & Darling 1952), the D'Agostino-K2 test (\(ak\), D'Agostino & Pearson 1973), and the Lilliefors test (\(li\), Lilliefors 1967). This choice is motivated by the fact that they have been well-tested in many studies, including Monte-Carlo simulations (Yap & Sim 2011; Marmolejo-Ramos & Gonzalez-Burgos 2013; Ahmad & Khan 2015; Patricio et al. 2017; Wijekularathna et al. 2019; Uhm & Yi 2021). It is worthwhile to remark that the goal is not to benchmark the robustness of all these tests. Our purpose, instead, is to collect a larger amount of statistical evidence for the same hypothesis, which can then be combined to increase the statistical power when making a decision regarding the null hypothesis. Moreover, regarding the statistical requirements, the only constraints to be verified before using these tests are the independence and sufficient size of the sample. In terms of sample size, Jensen-Clem et al. (2017) show that normality tests can exhibit lower statistical power with sample sizes under 100 observations. Here, the independence and size constraints are met by the proposed approach to pave the field-of-view, using the central pixels of non-overlapping apertures within rolling annuli of at least \(N=100\) apertures. Additionally, we follow for this analysis the recommendation of Bonse et al. (2022) to perform our statistical tests with various spatial arrangements of the non-overlapping apertures. We leverage the fact that different aperture arrangements within the same annulus contain valuable noise diversity that can directly benefit the analysis when making a decision about the null hypothesis.
Our analysis is thus composed as follows. Given a processed frame, we test the null hypothesis \(H_{0}\) in a specific rolling annulus through the following consecutive steps:
1. Randomly select a normality test \(t\) from \(\mathcal{T}=\{sw,ad,ak,li\}\).
2. Randomly select an angular displacement \(\theta_{i}\) of the circular apertures for each individual annulus \(i\) within the rolling annulus. Assuming \(N_{ann}\) individual annuli, \(\Theta=\{\theta_{i}\}_{i\in\{1,\ldots,N_{ann}\}}\) thus represents a random aperture arrangement.
3. Define the sample of central pixels \(X(\Theta)\).
4. Using the selected statistical test \(t\), compute the p-value associated with the null hypothesis for the sample \(X(\Theta)\), denoted as \(p(t,\Theta)\).
5. Repeat steps 1-4 \(m\) times. Because the \(m\) p-values computed in step 4 are not statistically independent, we use the harmonic mean as proposed by Vovk & Wang (2020) to combine them into a global p-value denoted \(\bar{p}\) (a minimal computational sketch of this combination is given after this list).
6. Compare \(\bar{p}\) with a predefined significance threshold \(\alpha\), and reject \(H_{0}\) if \(\bar{p}<\alpha\).
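The sketch below illustrates steps 1-6 for a single rolling annulus. It assumes a user-supplied callable, here named `sample_draw`, that returns the central-pixel sample \(X(\Theta)\) for a fresh random aperture arrangement, and it combines the \(m\) p-values with the plain harmonic mean (the calibration factors discussed by Vovk & Wang 2020 are omitted). The Anderson-Darling test is left out because SciPy's implementation does not return a p-value directly.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

rng = np.random.default_rng()

def combined_pvalue(sample_draw, m=30, alpha=0.01):
    """Harmonic-mean combination of normality-test p-values."""
    tests = {
        'sw': lambda x: stats.shapiro(x).pvalue,
        'ak': lambda x: stats.normaltest(x).pvalue,   # D'Agostino K2
        'li': lambda x: lilliefors(x, dist='norm')[1],
    }
    pvals = []
    for _ in range(m):
        t = rng.choice(list(tests))      # step 1: random normality test
        x = sample_draw()                # steps 2-3: random arrangement Theta
        pvals.append(tests[t](x))        # step 4: p-value of H0
    p_bar = len(pvals) / np.sum(1.0 / np.asarray(pvals))   # step 5
    return p_bar, p_bar < alpha          # step 6: True means H0 is rejected
```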
By repeating steps 1-6 for each rolling annulus in the processed frame and for various numbers of principal components in our annular-PCA post-processing algorithm, we can build what we call a _PCA p-value map_, or PCA-pmap for short. Figures 4 and 5 show examples of PCA-pmaps for the _sph2_ and _nirc3_ data sets, respectively. For both, we only considered the first 29 principal components to produce the annular-PCA space (\(y\)-axis in figures). Each cell in a PCA-pmap shows, through the number in white and its background color, the combined p-value \(\bar{p}\) computed in step 5 with \(m=30\). P-values below the pre-defined threshold \(\alpha\) are marked with yellow stars on the figures. In order to minimize the Type I error (false rejection of the null hypothesis), we selected a conservative threshold value \(\alpha=0.01\) in Figures 4 and 5. In the case of _sph2_ (Fig. 4), we clearly observe the presence of three noise regimes: a first regime dominated by non-Gaussian noise due to residual speckles between \(1-5\lambda/D\) distance, a second regime where noise is more consistent with Gaussian statistics, probably dominated by background noise between \(5-12\lambda/D\), and finally, a third regime with non-Gaussian noise beyond \(12\lambda/D\), where speckles are dominating again as we approach the limit of the well-corrected area produced by the SPHERE adaptive optics (Cantalloube et al. 2019). This would also explain the slightly leptokurtic behavior observed at those separations in Fig. 3. For the _nirc3_ data set (Fig. 5), we see two noise regimes, with speckle noise dominating approximately between \(1-3\lambda/D\) distance, and background noise dominating beyond \(3\lambda/D\).
In addition to the detection of noise regimes, we leverage the fact that a PCA-pmap can also be used as a method to choose an optimal PCA-space according to the residual noise, instead of other commonly used metrics, such as the cumulative explained variance ratio (CEVR, Gomez Gonzalez et al. 2018). For each rolling annulus in Figs. 4 and 5, we plot the 90% CEVR with a white dashed curve, a common variance limit used in the literature to capture relevant information in the data. Hence, this curve indicates, for each annulus, the lowest number of principal components that should be used to avoid adding useless independent information. In order to complement the added value of the CEVR over the PCA-pmap, we provide information on the exoplanet signature evolution along the PCA-space. By injecting a number of fake companions in each rolling annulus, with random coordinates and random flux levels between 1 and 3 times the estimated noise level, and computing their S/N, we can estimate for which principal component the companion S/N is maximum. Beyond this principal component, self-subtraction of the exoplanetary signal starts to increase more rapidly than noise suppression (Gonzalez et al. 2017). We indicate the principal component where the exoplanet S/N is maximum on average through white circles in Figs. 4 and 5. By comparing the plausibility of the null hypothesis together with the CEVR metric and the principal component at which the S/N of the exoplanet is maximum, we can define the PCA-space in our ADI sequence based on a more complex analysis of noise. We make use of this particular approach in PCA-pmaps later in Sect. 4.
### Field-of-view splitting strategy
At this point, we can see that, for both _sph2_ and _nirc3_, similar estimations of the noise regimes are reached using the two proposed methods: the study of statistical moments and the PCA-pmaps. Figure 3 provides a first insight into the spatial structure of residual noise and thereby brings us closer to estimating the radius split (Fig. 1) in the processed frame. Indeed, the significant increase of the variance together with the leptokurtic behaviour and the positively skewed trend at small angular separations suggest that this regime is still dominated by residual starlight speckles. On the other hand, PCA-pmaps contain more statistical diversity through the combination of p-values, with which very similar regime estimations are reached. Thus, both analyses are statistically complementary. From now on, we elect to use PCA-pmaps to define noise regimes as a baseline, since they can also be used for other purposes.
The noise analysis described above suggests that there can be more than two noise regimes in the processed frame, depending on the structure of the data. Despite the fact that this is not enough to extract a general conclusion for all HCI instruments, it suggests that noise regions should be defined on a case-by-case basis. Regarding the nature of residual noise in a processed
frame, our tests do not necessarily mean that residual speckle noise is non-Gaussian in the innermost, individual annuli. Instead, compound distributions could be at the origin of the non-Gaussian noise behavior in the innermost rolling annuli. Compound distributions refer to the sampling of random variables that are not independent and identically distributed. For large angular separations (e.g., the green annulus in Fig. 2), we generally observe that the variance is approximately the same for all central pixels. Because they can be considered as independent random variables, we can apply the central limit theorem to state that these samples follow a Gaussian distribution, as expected for background noise. However, for small angular separations (red annulus in Fig. 2) where residual speckle noise dominates over background noise, the samples are taken from distributions that might be Gaussian, but with different variances. If they are Gaussian and their variance follows an exponential distribution, then according to Gneiting (1997), the compound distribution follows a Laplacian, as observed by Pairet et al. (2019). This explanation, which is not a proof, would reconcile the belief that residual speckle noise should be locally Gaussian. Because of small sample statistics, there is however no proper way to test this interpretation on individual annuli in the innermost regions. Likewise, the compound distribution problem could also explain why we observe a non-Gaussian behavior in the outermost annuli of the _sph2_ processed frame. At those separations, our rolling annuli contain samples drawn at exactly the same radial distance, which would lead us to assume that the variances of the underlying distributions are all identical. However, due to different physical reasons, such as the possible presence of a wind-driven halo or of telescope spiders, there is no guarantee for a perfect circular symmetry inside this speckle-dominated annulus. In such a scenario, the compound distribution problem combined with the
Figure 4: PCA-pmap of the _sph2_ ADI sequence, showing the combined p-value \(\bar{p}\) both as a color code and as values, as a function of the distance to the star through the rolling annulus (\(x\) axis) and the number of principal components used in the PCA-based PSF subtraction (\(y\) axis). Yellow star markers indicate when the null hypothesis \(H_{0}\) (Gaussian noise) is rejected. The white dashed line shows the 90% CEVR at each rolling annulus. White circles in bold highlight the principal component that maximizes the S/N of fake companion recoveries.
variability in the variances of the corresponding samples leads to a non-Gaussian behavior as well. For all these reasons, we believe that splitting the processed frame field-of-view in different noise regimes is duly motivated and, in the next sections, we detail how we have implemented this splitting to improve the detection of exoplanets.
## 3 Implementation
So far, we have focused on understanding the spatial structure of residual noise in the processed frame, which has allowed us to empirically define the regions dominated by speckle and background noise. Now, we aim to use this local noise approach in order to help post-processing algorithms to enhance their detection performance. Most HCI algorithms have the potential of being applied separately on different noise regimes. Here, we are particularly interested in the case of deep learning. Neural networks
Figure 5: Same as Fig. 4, for the _nirc3_ ADI sequence.
are good candidates to capture image noise dependencies due to their ability to recognize hidden underlying relationships in the data, and make complex decisions. In order to maximize the added value of working in noise regimes and show its benefits for the detection task, we propose to revisit SODINN (Gomez Gonzalez et al., 2018), the first supervised deep learning algorithm for exoplanet imaging. In this section, we first provide a brief overview of SODINN, and then present our novel NA-SODINN algorithm, an adaptation of SODINN working on noise regimes, aided with additional handcrafted features.
### Baseline model: the SODINN algorithm
SODINN stands for _Supervised exOplanet detection via Direct Imaging with deep Neural Network_. It is a binary classifier that uses a convolutional neural network (CNN) to distinguish between two classes of square image sequences: sequences that contain an exoplanet signature (\(c_{+}\), the positive class), and sequences that contain only residual noise (\(c_{-}\), the negative class). Figure 6 (bottom) shows an example sequence for each class, where the individual images are produced with various number of principal components. Gomez Gonzalez et al. (2018) refers to these image sequences as Multi-level Low-rank Approximation Residual (MLAR) samples.
The first step in SODINN is to build a training data set composed of thousands of different \(c_{+}\) and \(c_{-}\) MLAR sequences. A \(c_{+}\) sequence is formed through three consecutive steps that are summarized in Fig. 6. (i) First, a PSF-like source is injected at a random pixel within a given annulus of the ADI sequence. The flux of this injection is the result of multiplying the normalized off-axis PSF by a scale factor randomly chosen from a pre-estimated flux range that corresponds to a pre-defined range of S/N in the processed frame. (ii) Singular value decomposition (SVD, Halko et al., 2011) is then used on this synthetic ADI sequence to perform PSF subtraction for different numbers of singular vectors (or principal components), thereby producing a series of processed frames. (iii) Finally, square patches are cropped around the injection coordinates for each processed frame. This forms a series of \(c_{+}\) MLAR sequences, where each sequence contains the injected companion signature for different numbers of principal components. The patch size is usually defined as two times the FWHM of the PSF. Likewise, we construct a \(c_{-}\) sequence by extracting MLAR sequences for pixels where no fake companion injection is performed. The number and order of singular vectors are the same as those used for the \(c_{+}\) sequences. For the case of \(c_{-}\) sequences, SODINN must deal with the fact that, using only one ADI sequence, we obtain a single realization of the residual noise, so that the number of \(c_{-}\) sequences we can grab per annulus is not enough to train the neural network without producing over-fitting. SODINN solves this problem by increasing the number of \(c_{-}\) sequences in a given annulus through the use of data augmentation techniques, such as random rotations, shifts, and averaging. This procedure of generating \(c_{+}\) and \(c_{-}\) sequences is repeated thousands of times for each annulus in the field-of-view. When the entire field-of-view is covered, MLAR sequences of the same class from all annuli are mixed and the balanced training set (same amount of \(c_{+}\) and \(c_{-}\) samples) is built.
The training set is then used to train the SODINN neural network. This produces a detection model that is specific to the ADI sequence from which the MLAR sequences were generated. The SODINN network architecture is composed of two concatenated convolutional blocks. The first block contains a convolutional-LSTM (Shi et al., 2015) layer with 40 filters, and kernel and stride size of (1,1), followed by a spatial 3D dropout (Srivastava et al., 2014) and a MaxPooling-3D (Boureau et al., 2010). The second block contains the same layers, except that it now has 80 filters, and a kernel and stride size of (2,2). These first two blocks extract the feature maps capturing all spatio-temporal correlations between pixels of MLAR sequences. After that, the feature maps are flattened and sent to a fully connected dense layer of 128 hidden units. Then, a rectified linear unit (ReLU, Nair & Hinton, 2010) is applied to the output of this layer followed by a dropout regularization layer. Finally, the output layer of the network consists of a sigmoid unit. The network weights are initialized randomly using a Xavier uniform initializer, and are learned by back-propagation with a binary cross-entropy cost function. SODINN uses an Adam optimizer with a step size of 0.003, and mini-batches of 64 training samples. An early stopping condition monitors the validation loss. The number of epochs is usually set to 15, with which SODINN generally reaches \(\sim 99\%\) validation accuracy (Gomez Gonzalez et al., 2018).
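For reference, the architecture described above can be written compactly with the Keras functional API. The sketch below follows only the description given in the text; the dropout rates, padding choices, and input patch size are assumptions and may differ from the original implementation.

```python
from tensorflow.keras import layers, models, optimizers

def build_sodinn(n_pcs, patch_size):
    """Minimal sketch of the SODINN binary classifier."""
    inputs = layers.Input(shape=(n_pcs, patch_size, patch_size, 1))
    # Block 1: ConvLSTM with 40 filters, (1, 1) kernel and stride
    x = layers.ConvLSTM2D(40, kernel_size=(1, 1), strides=(1, 1),
                          return_sequences=True)(inputs)
    x = layers.SpatialDropout3D(0.25)(x)           # rate assumed
    x = layers.MaxPooling3D(pool_size=(2, 2, 2))(x)
    # Block 2: same structure with 80 filters, (2, 2) kernel and stride
    x = layers.ConvLSTM2D(80, kernel_size=(2, 2), strides=(2, 2),
                          return_sequences=True)(x)
    x = layers.SpatialDropout3D(0.25)(x)
    x = layers.MaxPooling3D(pool_size=(2, 2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation='relu')(x)    # Xavier uniform is the Keras default
    x = layers.Dropout(0.5)(x)                     # rate assumed
    outputs = layers.Dense(1, activation='sigmoid')(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=optimizers.Adam(learning_rate=0.003),
                  loss='binary_crossentropy', metrics=['accuracy'])
    return model
```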
Once the detection model is trained and validated, it is finally used to find real exoplanets in the same ADI sequence. Because the input of the model is an MLAR structure, we first map the entire field-of-view by creating MLAR samples (with no injection) centered on each pixel. The goal of the trained model is therefore to assign a probability value for each of these new MLAR sequences to belong to the \(c_{+}\) class. Computing a probability for each individual pixel leads to a probability map, from which exoplanet detection can be performed by choosing a detection threshold.
### Model adaptation: the NA-SODINN algorithm
In SODINN, the training set is built by mixing all MLAR sequences from the same class, generated on every annulus in the field-of-view. In the presence of different noise regimes, this way of proceeding can complicate the training of the model, as the statistics of an MLAR sequence generated in the speckle-dominated regime differ from those of a sequence of the same class generated in the background-dominated regime. In order to deal with this, we train an independent SODINN detection model per noise regime instead of a unique model for the full frame field-of-view. Thereby, each detection model is only trained with MLAR sequences whose statistical properties come from the same (or a similar) probability distribution function. Therefore, our region
Figure 6: SODINN labeling stage. _Top_: steps for generating MLAR samples (see text for more details). \(N_{f}\) is the number of frames in the ADI sequence and \(N_{pc}\) is the number of principal components in the cube of processed frames and therefore in the final MLAR sequence. _Bottom_: example of an MLAR sequence of each class.
of interest in the field-of-view is now smaller. This means that the number of pixels available to generate MLAR sequences is reduced and, therefore, that we are losing noise diversity in comparison with a model that is trained on the full frame. However, this diversity loss comes with the benefit of better capturing the statistics of noise within the same noise regime, which improves the training.
In order to compensate for the noise diversity loss associated with the training on individual noise regimes, we attempt to reinforce the training by means of new handcrafted features. An interesting discriminator between the \(c_{+}\) and \(c_{-}\) classes, which is also physically motivated, comes from their behavior in terms of signal-to-noise ratio (S/N). The most accepted and used S/N definition in the HCI literature is from Mawet et al. (2014). It states that, given a \(1\lambda/D\) wide annulus in a processed frame at distance \(r\) (in \(\lambda/D\) units) from the star, paved with \(N=2\pi r\) non-overlapping circular apertures (see Fig. 2), the S/N for one of these apertures is defined as
\[\mathrm{S/N}=\frac{\bar{x}_{t}-\bar{x}_{N-1}}{\sigma_{N-1}\sqrt{1+\frac{1}{N- 1}}}\,, \tag{1}\]
where \(\bar{x}_{t}\) is the aperture flux photometry in the considered test aperture, \(\bar{x}_{N-1}\) the average intensity over the remaining \(N-1\) apertures in the annulus, and \(\sigma_{N-1}\) their standard deviation. In order to maximize the S/N, image processing detection algorithms need to be tuned through finding the optimal configuration of their parameters (see _e.g._, Dahlqvist et al. 2021b). Here, rather than optimizing the algorithm parameters, we use the fact that we can leverage the behavior of the S/N versus some of the algorithm parameters in our deep learning approach. This is especially the case for the number of principal components used in the PSF subtraction. We define an S/N curve as the evolution of the S/N computed for a given circular aperture as a function of the number of principal components (Gonzalez et al. 2017). Fig. 7 shows an example of 1,000 S/N curves generated from the _sph2_ ADI sequence. We clearly see in Fig. 7 that, in the presence of an exoplanet signature (blue curves), the S/N curve first increases and then decreases, which leads to the appearance of a peak at a given number of principal components. This behavior, capturing the competition between noise subtraction and signal self-subtraction, was already documented elsewhere (_e.g._, Gonzalez et al. 2017). The peak in the S/N curve indicates the number of principal components for which the contrast between the companion and the residual noise in the annulus is maximum.
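The sketch below evaluates Eq. (1) for one test aperture and assembles the corresponding S/N curve across processed frames obtained with increasing numbers of principal components. The aperture photometry itself (e.g., with VIP) is assumed to be performed elsewhere; only the measured aperture fluxes enter here.

```python
import numpy as np

def snr_mawet(test_flux, other_fluxes):
    """S/N of one aperture following Eq. (1) of Mawet et al. (2014)."""
    n = len(other_fluxes) + 1                  # apertures in the annulus
    mean_bg = np.mean(other_fluxes)
    std_bg = np.std(other_fluxes, ddof=1)
    return (test_flux - mean_bg) / (std_bg * np.sqrt(1.0 + 1.0 / (n - 1)))

def snr_curve(fluxes_per_pc, test_index):
    """S/N of the same test aperture as a function of the number of PCs.
    `fluxes_per_pc` is a list of 1-D arrays, one per processed frame,
    holding the fluxes of all apertures in the annulus."""
    curve = []
    for fluxes in fluxes_per_pc:
        others = np.delete(fluxes, test_index)
        curve.append(snr_mawet(fluxes[test_index], others))
    return np.array(curve)
```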
For a given 1-FWHM circular aperture, the MLAR sequence (no matter the class) and the S/N curve are linked from a physical point of view. Actually, the evolution of the S/N as a function of the number of principal components can be readily extracted from intermediate products used in the production of the training data set. Therefore, the information conveyed through the S/N curve is already partly contained in the MLAR patches. But while the MLAR sequence contains localized information on the signal and noise behavior, the S/N curve conveys an annulus-wise information, obtained through aperture photometry. Indeed, each aperture S/N estimation depends on the noise in the rest of the annulus (Eq. 1), so that it also contains information that connects with other circular apertures at the same angular separation from the star. This dependency is not captured in MLAR sequences. S/N curves make this rich summary statistics directly available to the neural network to improve the neural network training. One complication in using S/N curves in the training relates to data augmentation, which is mandatory to build up a sufficiently large training data set for SODINN. Because these augmentation operations modify the intensity and distribution of pixels in the MLAR sequence, there is no direct way to compute the associated S/N curve of an augmented MLAR sequence through Eq. 1. To deal with this, we make simplifying assumptions for each augmentation operation in SODINN: (i) image rotations do not affect the S/N curve as the same pixels are kept in the final sequence, (ii) averaging two sequences can be approximated as averaging their S/N curves, and (iii) image shifts do not affect the S/N curve as long as the shift is sufficiently small.
By adding the noise regimes approach and the S/N curves to SODINN, we are building a new detection algorithm. We refer to this novel framework, depicted in Fig. 8, as _Noise-Adaptive SODINN_, or NA-SODINN for short. Like its predecessor, NA-SODINN is composed of the same three steps: (i) producing the training set from an ADI sequence, (ii) training a detection model with this training set, and (iii) applying the model to find companions in the same ADI sequence. However, in the first step, NA-SODINN generates as many training sets as detected residual noise regimes. Each of these sets is composed of MLAR sequences and their corresponding S/N curves generated from the corresponding noise regime, including data augmentation. In the second step, NA-SODINN trains an independent detection model for each regime by using its corresponding training set. For each MLAR sequence in the training set, the feature maps created through the convolutional blocks are now concatenated with the respective S/N curve after the flatten layer (Fig. 8). In the last step, NA-SODINN performs inference in individual noise regimes, applying the trained model of each regime to infer the probability map of that regime (Fig. 8). Finally, NA-SODINN builds the final probability detection map by joining all regime probability maps inferred with each detection model. Thus, our NA-SODINN algorithm is conceived to keep the main characteristics of the pioneering SODINN algorithm (Gomez Gonzalez et al. 2018), such as its architecture, and adapt its optimization process to our local noise approach.
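In terms of architecture, the only structural change with respect to SODINN is the second input branch carrying the S/N curve, which is concatenated with the flattened convolutional features. The Keras sketch below shows one per-regime NA-SODINN model under the same assumptions as the SODINN sketch above (dropout layers are omitted for brevity).

```python
from tensorflow.keras import layers, models

def build_na_sodinn(n_pcs, patch_size):
    """Sketch of one per-regime NA-SODINN model with two inputs."""
    mlar_in = layers.Input(shape=(n_pcs, patch_size, patch_size, 1))
    snr_in = layers.Input(shape=(n_pcs,))      # one S/N value per PC
    x = layers.ConvLSTM2D(40, (1, 1), return_sequences=True)(mlar_in)
    x = layers.MaxPooling3D((2, 2, 2))(x)
    x = layers.ConvLSTM2D(80, (2, 2), strides=(2, 2), return_sequences=True)(x)
    x = layers.MaxPooling3D((2, 2, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Concatenate()([x, snr_in])      # inject the S/N curve here
    x = layers.Dense(128, activation='relu')(x)
    out = layers.Dense(1, activation='sigmoid')(x)
    return models.Model([mlar_in, snr_in], out)
```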
Figure 7: S/N curves generated from the _sph2_ cube of processed frames at a \(8\lambda/D\) distance from the star. Curves in blue contain the exoplanet signature and curves in red just residual noise. The flux of injections is randomly selected from a range that is between one and three times the level of noise. Dotted curves over populations show the mean of each class.
## 4 Model evaluation
Now that NA-SODINN has been introduced, we aim to thoroughly evaluate its detection ability. In the first part of this section, we explain the evaluation strategy and benchmark NA-SODINN with respect to its predecessor SODINN using the same _sph2_ and _nirc3_ ADI sequences. Then, in the second part, we apply NA-SODINN to the first phase of EIDC (Cantalloube et al., 2020), providing probability maps for each ADI sequence in the data challenge and running the same statistical analysis to compare the NA-SODINN performance with the other HCI algorithms.
### Performance assessment
The evaluation of HCI detection algorithms consists in quantifying how well they minimize the false positive rate (FPR) while maximizing the true positive rate (TPR) at different detection thresholds applied to the final detection map. This information is summarized by a curve in the Receiver Operating Characteristics (ROC) space, where each point in the curve captures both metrics at a given threshold value (Gomez Gonzalez et al., 2018; Dahlqvist et al., 2020). In order to produce ROC curves for various versions of SODINN applied on a given ADI sequence \(D\), we first build the evaluation set \(\mathcal{D}_{eval}=\{D_{1},D_{2},D_{3},\ldots,D_{s}\}\) containing \(s\) synthetic data sets \(D_{i}\), where each synthetic data set is a copy of \(D\) with one fake companion injection per noise regime. Here, we limit the number of injected companions to one at a time, as having more than one companion per data cube is unnecessary due to our approach of detecting exoplanets locally. The coordinates of these injections are randomly selected within the considered noise regime boundaries, and their fluxes are randomly set within a pre-defined range of fluxes that corresponds to an S/N range between one and two in the processed frame. Hence, each algorithm provides \(s\) final detection maps, from which true positive (TP) and false positive (FP) indicators are computed across the whole noise regime field-of-view at different detection thresholds. Then, all these indicators are averaged and the corresponding ROC curve for the considered noise regime is produced. Instead of using the FPR as in standard ROC curves, here we use the mean number of FPs within the whole field-of-view, which is more representative of the HCI detection task and facilitates the interpretation of our performance simulations.
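A minimal sketch of this ROC construction is given below. For each threshold it checks whether the injection is recovered (at least one pixel above the threshold inside the injection aperture mask) and counts the detection blobs outside that aperture as false positives; treating every outside blob as an FP is a simplification of the procedure described above.

```python
import numpy as np
from scipy import ndimage

def roc_points(detection_maps, injection_masks, thresholds):
    """TPR and mean number of FPs averaged over the synthetic data sets."""
    mean_fps, tpr = [], []
    for tau in thresholds:
        tps, fps = [], []
        for dmap, mask in zip(detection_maps, injection_masks):
            above = dmap >= tau
            tps.append(np.any(above & mask))           # injection recovered?
            _, n_blobs = ndimage.label(above & ~mask)  # blobs outside aperture
            fps.append(n_blobs)
        tpr.append(np.mean(tps))
        mean_fps.append(np.mean(fps))
    return np.array(mean_fps), np.array(tpr)
```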
We perform the proposed ROC curve analysis on both \(sph2\) and \(nirc3\) ADI sequences with \(s=100\) for each. For this assessment, a detection is defined as a blob in the final detection map with at least one pixel above the threshold inside a circular aperture of diameter equal to the FWHM centered at the position of each injection of both \(\mathcal{D}_{eval}^{sph2}\) and \(\mathcal{D}_{eval}^{nirc3}\). With the aim of benchmarking NA-SODINN, we include in this evaluation the annular-PCA algorithm (Absil et al., 2013), as implemented in the VIP Python package (Gonzalez et al., 2017; Christiaens et al., 2023), the SODINN framework by Gomez Gonzalez et al. (2018), and two hybrid detection models. These hybrid models
Figure 8: Illustration of the three steps within the NA-SODINN algorithm working flow. _Left: generation of the training set_. NA-SODINN uses the annular-PCA algorithm to perform PSF subtraction and produce the cube of processed frames. Then, it detects residual noise regimes by applying the PCA-pmap technique to this cube and builds both the training and inference data sets for each regime, which are composed of both MLAR samples and S/N curves. _Middle: model training_. NA-SODINN trains as many detection models as detected noise regimes using their respective training data sets (note that for the sake of simplicity, we have not duplicated the central deep neural network). This case contains two regimes, the speckle- and background-dominated noise regimes, so that two models are trained. _Right: detection map_. Finally, NA-SODINN uses each trained model to assign to each pixel of the corresponding noise regime field-of-view a probability of belonging to the \(c_{+}\) class.
are modifications of SODINN to include only one of the two additional features introduced in NA-SODINN: the adaptation to noise regimes, or the addition of S/N curves in the training. Hereafter, we refer to them respectively as _SODINN+Split_ and _SODINN+S/N_. In the same spirit as an ablation study, these two hybrid models are included in our evaluation in order to provide information about the added value of each approach separately for the task of detection.
An important aspect to consider when comparing algorithms in ROC space is to optimally choose their model parameters. In the case of annular-PCA, we use five principal components for each annulus as a good compromise to get a high S/N for injected companions, especially in the speckle-dominated regime. For the various versions of SODINN, we need to define two main parameters: the list of principal components \(\mathcal{PC}=(pc_{1},pc_{2},\ldots,pc_{m})\) that are used to produce each sample in both the MLAR sequence and the S/N curve, and the level of injected fluxes used for making \(c_{+}\) class samples (see Sect. 3.1). For SODINN, we used the criterion based on the cumulative explained variance ratio (CEVR), as proposed by Gomez Gonzalez et al. (2018), to define the range of \(\mathcal{PC}\). For NA-SODINN and the hybrid models, we instead rely on the novel PCA-pmaps technique presented in Sect. 2, and we choose a list of \(m=13\) principal components centered around the principal component where the maximum S/N is reached (\(pc_{peak}\) hereafter, denoted by a white circle in the PCA-pmap). By comparing \(pc_{peak}\) with the principal component where the 90% CEVR is reached in the PCA-pmaps of both the \(sph2\) and \(nirc3\) ADI sequences (Figs. 4 and 5), we observe that at some angular separations the S/N peak is not well captured by the CEVR metric. This suggests that the use of the CEVR as a figure of merit for choosing the \(\mathcal{PC}\) list is not always optimal for the training. Regarding the injected fake companion fluxes, we choose for all SODINN-based models a range of fluxes that corresponds to an S/N between one and three in the PCA-processed frame. This range of fluxes does not generally lead to class overlapping, where \(c_{+}\) and \(c_{-}\) class samples would look too similar. However, in order to avoid FPs in the final detection map, the user may consider higher flux ranges in those data sets where the level of noise is higher. Finally, to build the ROC curve, we consider a list of S/N thresholds ranging from 0.1 to 4.4 in steps of 0.01 for annular-PCA, while for the SODINN-based models we use a list of probability thresholds from 0.09 to 0.99 in steps of 0.01. All SODINN-based models are trained on balanced training sets containing around \(10^{5}\) samples for each class.
Figures 9 and 10 display a series of ROC spaces (one for each detected noise regime), respectively for the \(sph2\) and \(nirc3\) ADI sequences. Each of these ROC spaces displays one ROC curve per algorithm, which informs about its detection performance on that specific noise regime for different thresholds. We observe from both figures that NA-SODINN outperforms both its predecessor and the hybrid models, especially for the noise regimes dominated by residual speckle noise. In the case of the \(sph2\) noise regime comprised between 12-19 \(\lambda/D\), corresponding to the outer edge of the SPHERE well-corrected region, we observe that SODINN presents a significant number of FPs. We attribute this trend to the fact that this noise regime contains a significant number of residual speckles with intensities similar to those used for the injected fake companions during training, causing a class overlap situation. This is partly due to the much larger number of independent statistical samples at these larger separations, which increases the chances of finding stronger outliers in the noise. In order to overcome this problem, the user can increase the S/N range hyper-parameter used to generate \(c_{+}\) sequences, at the expense of decreasing the ability to find
Figure 9: ROC analysis per noise regime for the \(sph2\) data set showing the performance of SODINN, NA-SODINN, annular-PCA, and hybrid SODINN models. The values plotted alongside each curve highlight some of the selected thresholds.
faint companions. Despite the complexity of this noise regime, NA-SODINN manages to reduce the FPs to sufficiently low levels while at the same time improving the TPR, which is always above 90% at every threshold. This behavior is further illustrated in Figs. B.1 and B.2 of Appendix B, where the NA-SODINN and SODINN probability maps are compared at different threshold levels. Regarding hybrid models, we generally observe that they land between the SODINN and NA-SODINN detection performance, with SODINN+S/N generally being the best hybrid model. These results thus suggest that both working with separate noise regimes and adding S/N curves in the neural network significantly enhance the detection performance of SODINN. When these approaches are used in synergy, as in NA-SODINN, the improvement is even more significant.
### NA-SODINN in EIDC
By design, the Exoplanet Imaging Data Challenge (EIDC, Cantalloube et al., 2020) can be used as a laboratory to compare and evaluate new detection algorithms against other state-of-the-art HCI detection algorithms. For instance, Dahlqvist et al. (2021) used the EIDC to highlight the improvement of the automated version of their RSM algorithm. Here, we use the first sub-challenge of the EIDC to generalize the ROC analysis presented above, and evaluate how NA-SODINN performs with respect to the state-of-the-art HCI algorithms that entered the data challenge. Besides the _sph2_ and _nirc3_ data sets used so far, the first EIDC sub-challenge includes seven additional ADI sequences in which a total of 20 planetary signals with different contrasts and position coordinates were injected. Two of these seven ADI sequences are from the SPHERE instrument (Beuzit et al., 2019), identified as _sph1_ and _sph3_, two more from the NIRC-2 instrument (Serabyn et al., 2017), identified as _nirc1_ and _nirc2_, and the remaining three from the LMIRCam instrument (Skrutskie et al., 2010), with _lmr1_, _lmr2_ and _lmr3_ ID names. For each of these nine data sets, EIDC provides a pre-processed temporal cube of images, the parallactic angle variation corrected for true north, a non-coronagraphic PSF of the instrument, and the pixel-scale of the detector. Each algorithm entering the EIDC had to provide a detection map for each ADI sequence. The following standard metrics are then used to assess the detection performance on each submitted detection map (a minimal computational sketch of these metrics is given after the list):
* True Positive Rate: TPR = \(\frac{TP}{TP+FN}\),
* False Positive Rate: FPR = \(\frac{FP}{FP+TN}\),
* False Discovery Rate: FDR = \(\frac{FP}{FP+TP}\),
* F1-score: F1 = \(\frac{2TP}{2TP+FP+FN}\).
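As referenced above, the sketch below computes these four metrics from the confusion-matrix counts of one detection map, together with the area under a metric-versus-threshold curve obtained with the trapezoidal rule; any additional normalisation applied by the EIDC is not reproduced here.

```python
import numpy as np

def eidc_metrics(tp, fp, tn, fn):
    """TPR, FPR, FDR and F1-score from the confusion-matrix counts."""
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fdr = fp / (fp + tp) if (fp + tp) else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0
    return tpr, fpr, fdr, f1

def auc_over_thresholds(thresholds, metric_values):
    """Area under a metric-versus-threshold curve (trapezoidal rule)."""
    return np.trapz(metric_values, thresholds)
```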
We apply our NA-SODINN framework to the EIDC and, as in the ROC analysis, we use PCA-pmaps as a tool for both estimating residual noise regimes and choosing the list of principal components \(\mathcal{PC}\) at each angular separation. For the injection fluxes, we use a range that corresponds to an S/N between one and four in the processed frame. Each model is trained with balanced training sets that contain around \(10^{5}\) samples per class. Because all three LMIRCam cubes contain more than 3,000 frames (Table A.1), we decided to reduce this number to around 250-300 frames to limit the computational time. To do that, we average a certain number of consecutive frames along the time axis in the sequence. Figure 11 shows a grid of all resulting NA-SODINN probability maps from the EIDC ADI sequences, where we observe, by visual inspection, that NA-SODINN finds most of the injected fake companions, while producing only faint false positives that all fall below our default detection threshold \(\tau=0.9\). In order to quantify this information, we follow the same approach as in Cantalloube et al. (2020) by considering the area under the curve (AUC) for the TPR, FPR, and FDR as a function of the threshold, which allows us to mitigate the arbitrariness of the threshold selection by considering their evolution over a pre-defined range. The \(\text{AUC}_{\text{TPR}}\) should be as close as possible to one and the \(\text{AUC}_{\text{FPR}}\) and \(\text{AUC}_{\text{FDR}}\) as close as possible to zero. The F1-score ranges between zero and one, where one corresponds to a perfect algorithm, and is computed only at a single threshold \(\tau\) that is chosen by the participant.
Figure C.1 shows the result of this analysis for all NA-SODINN probability maps of Fig. 11, in which the TPR, FPR, and FDR metrics (and their respective AUCs) are computed for probability threshold values ranging from zero to one. Here, we mainly see that the \(\text{AUC}_{\text{FDR}}\) is generally higher along the range of thresholds for NIRC-2 and LMIRCam than for SPHERE data sets, the \(\text{AUC}_{\text{FPR}}\) is close to zero for all data sets, and the \(\text{AUC}_{\text{TPR}}\) is almost perfect for SPHERE data sets. To compute the F1-score, we choose a \(\tau=0.9\) probability threshold. From our tests with NA-SODINN, we consider this value as
Figure 10: Same as Fig. 9 for the _nirc3_ data set.
the minimum probability threshold for which one can rely on the significance of detections, maximizing TPs while minimizing FPs. Thus, any pixel signal above this threshold \(\tau\) on each probability map of Fig. 11 is considered as a detection for the computation of the F1-score. Finally, through the AUC\({}_{\rm{TPR}}\), AUC\({}_{\rm{FDR}}\) and F1-score metrics obtained with the NA-SODINN algorithm, we are able to update the general EIDC leader-board (Cantalloube et al. 2020). Figure 12 shows how NA-SODINN ranks compared to the algorithms originally submitted to the EIDC, for each considered metric. We clearly observe that NA-SODINN ranks at the top, or close to the top, for each of the EIDC metrics, with results generally on par with the RSM algorithm by Dahlqvist et al. (2020). In particular, NA-SODINN provides the highest area under the true positive curve, while preserving a low false discovery rate.
Figure 11: NA-SODINN probability maps obtained on the whole set of EIDC ADI sequences (Table A.1). For the submitted probability threshold \(\tau\) = 0.90, we highlight with green circles the correct detection of injected companions (true positives), and with red circles the non-detection of injected companions (false negatives). No false positive is reported in our maps, as all the remaining non-circled peaks in the probability maps are below the threshold. Large white circles delineate the noise regimes in each case.
## 5 Conclusions
In this paper, we explore the possibility of enhancing exoplanet detection in the field of HCI by training a supervised classification model that takes into account the noise structure in the PCA-processed frame. SODINN (Gomez Gonzalez et al., 2018), the pioneering deep learning detection algorithm in HCI, is adapted to learn from different noise regimes in the processed frame and from local discriminators between the exoplanet and noise, such as S/N curves. With these two approaches working in synergy, we build a new detection algorithm, referred to as Noise-Adaptive SODINN, or NA-SODINN for short. Although our findings related to the spatial structure of noise distributions are showcased by adapting the SODINN detection framework, we believe that other algorithms dealing with processed frames could be adapted in a similar way.
The NA-SODINN detection capabilities are tested through two distinct analyses. First, we perform a performance assessment based on ROC curves using two ADI sequences provided by the VLT/SPHERE and Keck/NIRC-2 instruments. Here, NA-SODINN is evaluated with respect to annular-PCA, the original SODINN, and two SODINN-based hybrid models that use only one of the two proposed approaches, _i.e._, the noise regime splitting or the S/N curves addition. We find that hybrid models improve the detection performance of SODINN in all noise regimes, which demonstrates the interest of the local noise approaches considered in this paper. Moreover, we find that NA-SODINN reaches even higher detection performance, especially in the speckle noise regime, by combining both approaches in the same framework. Next, in order to benchmark NA-SODINN against other state-of-the-art HCI algorithms, we apply NA-SODINN to the first phase of the Exoplanet Imaging Data Challenge (Cantalloube et al., 2020), a community-wide effort meant to offer a platform for a fair and common comparison of exoplanetary detection algorithms. In this analysis, we observe that NA-SODINN is ranked at the top (first or second position) of the challenge leader-board for all considered evaluation metrics, providing in particular the highest true positive rate among all entries, while still keeping a low false discovery rate. Our new NA-SODINN framework therefore opens the door to more accurate searches of new and/or non-confirmed worlds in individual HCI data sets, as well as in large HCI surveys.
###### Acknowledgements.
The authors would like to thank the Python open-source scientific community, and in particular the developers of the Keras deep learning library (Abadi et al., 2015) and the VIP high-contrast imaging package (Gonzalez et al., 2017; Christiaens et al., 2023). The authors acknowledge stimulating discussions with Faustine Cantalloube, Rakesh Nath, Markus Bonse, and Emily O. Garvin, as well as the whole Exoplanet Imaging Data Challenge team. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 819155), and from the Wallonia-Brussels Federation (grant for Concerted Research Actions).
## References
* Abadi et al. (2015) Abadi, M., Agarwal, A., Barham, P., et al. 2015, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, software available from tensorflow.org
* Absil et al. (2013) Absil, O., Milli, J., Mawet, D., et al. 2013, A&A, 559, 1
* Ahmad & Khan (2015) Ahmad, F. & Khan, R. A. 2015, Pakistan Journal of Statistics and Operation Research, 11, 331
* Amara & Quanz (2012) Amara, A. & Quanz, S. P. 2012, MNRAS, 427, 948
* Anderson & Darling (1952) Anderson, T. W. & Darling, D. A. 1952, The Annals of Mathematical Statistics, 23, 193
* Beuzit et al. (2019) Beuzit, J.-L., Vigan, A., Mouillet, D., et al. 2019, A&A, 631, 1
* Bohn et al. (2021) Bohn, A. J., Ginski, C., Kenworthy, M. A., et al. 2021, A&A, 648, 1
Figure 12: Updated EIDC leader-board after the NA-SODINN submission. Ranking based on the F1-score (top), the AUC of the TPR (middle) and the AUC of the FDR (bottom). Colors refer to HCI detection algorithm families: PSF-based subtraction techniques providing residual maps (red) or detection maps (orange), inverse problems (blue) and supervised machine learning (green). The light, medium and dark tonalities correspond to SPHERE, NIRC2, and LMIRCam data sets, respectively.
* (19) Bonse, M., Garvin, E., Gebhard, T., et al. 2022, Bulletin of the American Astronomical Society, 54
* (20) Boureau, Y.-L., Ponce, J., & LeCun, Y. 2010, in International Conference on Machine Learning (ICML), Haifa, Israel, 111-118
* (21) Bulmer, M. G. 1979, Principles of Statistics (Dover Publications, Mineola, New York, USA)
* (22) Cantalloube, F., Christiaens, V., Cantero, C., et al. 2022, in Proceedings of SPIE, Vol. 12185, Adaptive Optics Systems VIII, ed. D. Schmidt, L. Schreiber, & E. Vernet (SPIE), 8-24
* (23) Cantalloube, F., Dohlen, K., Milli, J., Brandner, W., & Vigan, A. 2019, The Messenger, 176, 25
* (24) Cantalloube, F., Gomez-Gonzalez, C., Absil, O., et al. 2020, in Proceedings of SPIE, Vol. 11448, Adaptive Optics Systems VII (SPIE), 1-36
* (25) Cantalloube, F., Mouillet, D., Mugnier, L. M., et al. 2015, A&A, 582, 1
* (26) Chauvin, G., Desidera, S., Lagrange, A.-M., et al. 2017, A&A, 605, L1
* (27) Christiaens, V., Gonzalez, C. A. G., Farkas, R., et al. 2023, Journal of Open Source Software, 8
* (28) D'Agostino, R., & Pearson, E. S. 1973, Biometrika, 60, 613
* (29) Dahlqvist, C.-H., Cantalloube, F., & Absil, O. 2020, A&A, 633, 1
* (30) Dahlqvist, C.-H., Cantalloube, F., & Absil, O. 2021a, A&A, 656, 1
* (31) Dahlqvist, C.-H., Louppe, G., & Absil, O. 2021b, A&A, 646, 1
* (32) Flasseur, O., Denis, L., Thiebaut, E., & Langlois, M. 2018, A&A, 618, 1
* (33) Gebhard, T. D., Bonse, M. J., Quanz, S. P., & Scholkopf, B. 2022, A&A, 666, 1
* (34) Gneiting, T. 1997, Journal of Statistical Computation and Simulation, 59, 375
* (35) Goebel, S. B., Guyon, O., Hall, D. N. B., Jovanovic, N., & Atkinson, D. E. 2016, in Proceedings of SPIE, Vol. 9909, Adaptive Optics Systems V (SPIE), 417-425
* (36) Gomez Gonzalez, C., Absil, O., Absil, P.-A., et al. 2016, A&A, 589, 1
* (37) Gomez Gonzalez, C., Absil, O., & Van Droogenbroeck, M. 2018, A&A, 613, 1
* (38) Gonzalez, C. G., Wertz, O., Absil, O., et al. 2017, Astronomical Journal, 154, 7:1
* (39) Halko, N., Martinsson, P.-G., Shkolnisky, Y., & Tygert, M. 2011, SIAM Journal on Scientific Computing, 33, 2580
* (40) Hinkley, S., Oppenheimer, B. R., Soummer, R., et al. 2007, AJ, 654, 633
* (41) Jensen-Clem, R., Mawet, D., Gomez Gonzalez, C. A., et al. 2017, Astronomical Journal, 155, 19
* (42) Keppler, M., Benisty, M., Muller, A., et al. 2018, A&A, 617, 1
* (43) Lafreniere, D., Marois, C., Doyon, R., Nadeau, D., & Artigau, E. 2007, AJ, 660, 770
* (44) Lilliefors, H. W. 1967, Journal of the American Statistical Association, 62, 399
* (45) Lozi, J., Guyon, O., Jovanovic, N., et al. 2018, in Proceedings of SPIE, Vol. 10703, Adaptive Optics Systems VI, ed. D. Schmidt, L. Schreiber, & L. M. Close (SPIE)
* (46) Males, J. R., Fitzgerald, M. P., Belikov, R., & Guyon, O. 2021, Publications of the Astronomical Society of the Pacific, 133, 1
* (47) Marmolejo-Ramos, F. & Gonzalez-Burgos, J. 2013, Methodology, 9, 137
* (48) Marois, C., Correia, C., Galicher, R., et al. 2014, in Proceedings of SPIE, Vol. 9148, Adaptive Optics Systems IV, ed. E. Marchetti, L. M. Close, & J.-P. Veran (SPIE)
* (49) Marois, C., Lafreniere, D., Doyon, R., Macintosh, B., & Nadeau, D. 2006, AJ, 641, 556
* (50) Marois, C., Lafreniere, D., Macintosh, B., & Doyon, R. 2008a, AJ, 673, 647
* (51) Marois, C., Macintosh, B., Barman, T., et al. 2008b, Science, 322, 1348
* (52) Marois, C., Zuckerman, B., Konopacky, Q. M., Macintosh, B., & Barman, T. 2010, Nature, 468, 1080
* (53) Mawet, D., Milli, J., Wahhaj, Z., et al. 2014, AJ, 792, 1
* (54) Mawet, D., Serabyn, E., Liewer, K., et al. 2009, AJ, 709, 53
* (55) Nair, V. & Hinton, G. 2010, in International Conference on Machine Learning (ICML), Haifa, Israel, 807-814
* (56) Pairet, B., Cantalloube, F., Gomez Gonzalez, C. A., Absil, O., & Jacques, L. 2019, MNRAS, 487, 2262
* (57) Patricio et al. 2017, Communications in Statistics - Simulation and Computation, 46, 7535
* (58) Rameau, J., Chauvin, G., Lagrange, A.-M., et al. 2013, AJ, 772, L15:1
* (59) Ren, B., Pueyo, L., Zhu, G. B., Debes, J., & Duchene, G. 2018, AJ, 852, 1
* (60) Ruffoij, J. B., Macintosh, B., Wang, J. J., et al. 2017, AJ, 842, 1
* (61) Samland, M., Bouwman, J., Hogg, D. W., et al. 2021, A&A, 646, 1
* (62) Scholkopf, B., Hogg, D. W., Wang, D., et al. 2016, Proceedings of the National Academy of Sciences (PNAS), 113, 7391
* (63) Serabyn, E., Huby, E., Matthews, K., et al. 2017, Astronomical Journal, 153, 1
* (64) Shapiro, S. S., & Wilk, M. B. 1965, Biometrika, 52, 591
* (65) Shi, X., Chen, Z., Wang, H., et al. 2015, in Advances in Neural Information Processing Systems (NeurIPS), Vol. 1, 802-810
* (66) Skrutskie, M. F., Jones, T., Hinz, P., et al. 2010, in Proceedings of SPIE, Vol. 7735, Ground-based and Airborne Instrumentation for Astronomy III, ed. I. S. McLean, S. K. Ramsay, & H. Takami (SPIE)
* (67) Snik, F., Absil, O., Baudoz, P., et al. 2018, in Proceedings of SPIE, Vol. 10706, Advances in Optical and Mechanical Technologies for Telescopes and Instrumentation III, ed. R. Geyl & R. Navarro (SPIE)
* (68) Soummer, R. 2005, The Astrophysical Journal Letters, 618, 161
* (69) Soummer, R., Ferrari, A., Aime, C., & Jolissaint, L. 2007, AJ, 669, 642
* (70) Soummer, R., Pueyo, L., & Larkin, J. 2012, The Astrophysical Journal Letters, 755, 1
* (71) Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, L., & Salakhutdinov, R. 2014, Journal of Machine Learning Research, 15, 1929
* Simulation and Computation, 1
* (73) Vovk, V. & Wang, R. 2020, Biometrika, 107, 791
* (74) Wagner, K., Apai, D., Kasper, M., et al. 2016, Science, 353, 673
* (75) Wahhajh, Z., Cieza, L. A., Mawet, D., et al. 2015, A&A, 581, 1
* Simulation and Computation, 51, 757
* (77) Yap, B. W. & Sim, C. H. 2011, Journal of Statistical Computation and Simulation, 81, 2141
## Appendix A EIDC data sets
## Appendix B Probability maps for selected EIDC data sets
Figure 11: Evaluation example of _nrc3_ data set where both SODINN and NA-SODINN probability maps are displayed for the two noise regimes at \(\tau\) =0.50, 0.80 and 0.99 probability thresholds. Green, red and cyan squares in the maps refer to the TPs, FPs and FNs, respectively
Figure 20: Evaluation example of \(sph2\) data set where both SODINN and NA-SODINN probability maps are displayed for the three noise regimes at \(\tau=\)0.50, 0.80 and 0.99 probability thresholds. Green, red and cyan squares in the maps refer to the TPs, FPs and FNs, respectively. |
2306.01672 | First-Principles Property Assessment of Hybrid Formate Perovskites | Hybrid organic inorganic formate perovskites, AB(HCOO)$_3$, is a large family
of compounds which exhibit variety of phase transitions and diverse properties.
Some examples include (anti)ferroelectricity, ferroelasticity,
(anti)ferromagnetism, and multiferroism. While many properties of these
materials have already been characterized, we are not aware of any study that
focuses on comprehensive property assessment of a large number of formate
perovskites. Comparison of the materials property within the family is
challenging due to systematic errors attributed to different techniques or the
lack of data. For example, complete piezoelectric, dielectric and elastic
tensors are not available. In this work, we utilize first-principles density
functional theory based simulations to overcome these challenges and to report
structural, mechanical, dielectric, piezoelectric, and ferroelectric properties
for 29 formate perovskites. We find that these materials exhibit elastic
stiffness in the range 0.5 to 127.0 GPa , highly anisotropic linear
compressibility, including zero and even negative values; dielectric constants
in the range 0.1 to 102.1; highly anisotropic piezoelectric response with the
longitudinal values in the range 1.18 to 21.12 pC/N, and spontaneous
polarizations in the range 0.2 to 7.8 $\mu$C/cm$^2$. Furthermore, we propose
and computationally characterize a few formate perovskites, which have not been
reported yet. | Abduljelili Popoola, Partha Sarathi Ghosh, Maggie Kingsland, Ravi Kashikar, Derrick DeTellem, Yixuan Xu, Shengqian Ma, Sarath Witanachchi, Sergey Lisenkov, Inna Ponomareva | 2023-06-02T16:45:47Z | http://arxiv.org/abs/2306.01672v1 | # First-Principles Property Assessment of Hybrid Formate Perovskites
###### Abstract
Hybrid organic inorganic formate perovskites, AB(HCOO)\({}_{3}\), are a large family of compounds which exhibit a variety of phase transitions and diverse properties. Some examples include (anti)ferroelectricity, ferroelasticity, (anti)ferromagnetism, and multiferroism. While many properties of these materials have already been characterized, we are not aware of any study that focuses on a comprehensive property assessment of a large number of formate perovskites. Comparison of material properties within the family is challenging due to systematic errors attributed to different techniques or the lack of data. For example, complete piezoelectric, dielectric and elastic tensors are not available. In this work, we utilize first-principles density functional theory based simulations to overcome these challenges and to report structural, mechanical, dielectric, piezoelectric, and ferroelectric properties for 29 formate perovskites. We find that these materials exhibit elastic stiffness in the range 0.5 to 127.0 GPa, highly anisotropic linear compressibility, including zero and even negative values; dielectric constants in the range 0.1 to 102.1; highly anisotropic piezoelectric response with the longitudinal values in the range 1.18 to 21.12 pC/N, and spontaneous polarizations in the range 0.2 to 7.8 \(\mu\)C/cm\({}^{2}\). Furthermore, we propose and computationally characterize a few formate perovskites, which have not been reported yet.
Hybrid organic inorganic perovskites (HOIP) are receiving a lot of attention presently owing to the rapid progress in their synthesis and characterization. They have the chemical formula ABX\({}_{3}\), where A is typically an organic molecule, B is a metallic cation, while the X site could be a halogen or a molecular linker. They exhibit a variety of phase transitions and a rich range of properties, such as ferromagnetism, (di/ferro)electricity, non-linear optical properties, caloric effects, ferroelasticity, multiferroicity, among others [1; 2]. Among chemically diverse HOIPs, AB(HCOO)\({}_{3}\) is one of the largest families, where one can find most of the aforementioned properties. The structural phase transitions in these materials are primarily driven by the hydrogen bond stabilization and often occur close to or even above room temperature, which is a highly desirable feature [1]. For example, most ethyl ammonium metal formate perovskites exhibit a transition in the range 293 to 400 K [3; 4; 5; 6]. Magnetic properties of formates are mostly determined by weak magnetic interactions mediated by the formate linker, causing them to exhibit magnetic ordering at low temperatures only, typically below 50 K [1]. Furthermore, the [AZE][M(HCOO)\({}_{3}\)] (AZE = azetidinium; M = Mn\({}^{2+}\), Cu\({}^{2+}\) and Zn\({}^{2+}\)) family was reported to have extraordinarily large dielectric constants higher than 10\({}^{4}\) in the vicinity of room temperature [7; 8; 9]. Oftentimes, the value exhibits a strong frequency dependence, which resembles the behavior of ferroelectric relaxors [10]. Many formates undergo transitions into polar space groups and, therefore, are possible candidates for ferroelectricity, which is defined by the presence of spontaneous electric polarization reversible by an electric field. However, the value of spontaneous polarization is typically below 5 \(\mu\)C/cm\({}^{2}\), which makes its experimental measurement very challenging [11]. Table 2 provides polarization values from the literature and the conditions under which they were reported/computed. The simultaneous realization of ferroelectricity, ferromagnetism and/or ferroelasticity in some hybrid formates classifies them as multiferroics. It was shown that DMA-Zn(HCOO)\({}_{3}\) becomes multiferroic on substitution of Zn with transition metals such as Ni, Mn, Co and Fe [2; 12; 13]. DMA-Co(HCOO)\({}_{3}\) is another hybrid in which multiferroicity has been observed [14]. All abbreviations for A sites used in this study are listed in Table 1.
Mechanical properties have been investigated for several members of formate families and are reviewed in Ref.[35]. Some representative data from the literature for Young's and elastic moduli are compiled in Table 3. The exotic negative linear compressibility, defined as an increase in lattice parameter(s) under hydrostatic pressure, has been computationally predicted in HAZ-M(HCOO)\({}_{3}\) (M = Mn,Fe,Co) [36; 37] and NH\({}_{4}\)Zn(HCOO)\({}_{3}\)[38]. Negative linear compressibility finds applications in pressure sensors and actuators, and possibly in design of artificial muscles[39].
Evidence of pyroelectricity has been reported in some hybrid formate perovskites. For instance, the pyroelectric coefficient was measured to exhibit a maximum of 5.16\(\times\)10\({}^{-2}\) C/m\({}^{2}\) K under a poling electric field of 7.7 kV/cm at 192 K for DMA-Mn(HCOO)\({}_{3}\)[19]. In another instance, the pyroelectric current was reported and used to study the order-disorder transition under different pressures in DMA-Co(HCOO)\({}_{3}\)[21]. Some other hybrid formate perovskites in which the pyroelectric current has been measured include DMA-Mg(HCOO)\({}_{3}\)[43], DMA-Mn(HCOO)\({}_{3}\), DMA-Mn\({}_{0.5}\)Ni\({}_{0.5}\)(HCOO)\({}_{3}\)[17; 18],
Gua-Cu(HCOO)\({}_{3}\)[33], CH\({}_{3}\)NH\({}_{2}\)NH\({}_{2}\)Mn(HCOO)\({}_{3}\)[28] and DMA-Zn(HCOO)\({}_{3}\)[22]. The dependence of pyroelectric current on applied magnetic field has also been demonstrated in DMA-Ni(HCOO)\({}_{3}\)[20].
Although the aforementioned studies highlight the outstanding progress that has been made in the characterization of these materials, the survey also reveals a scarcity of such investigations, especially in light of the fact that the formate subgroup hosts at least 64 known members [1; 44; 45]. It should also be recognized that many such characterizations, spontaneous polarization for example, are rather challenging experimentally. On the other hand, computational investigation is an inexpensive, reliable and efficient tool to overcome these challenges and achieve a comprehensive assessment of structural, piezoelectric, dielectric and elastic properties for a wide range of materials in the formate family.
Therefore, in this study, we aim: (i) to predict structural parameters, polarization, piezoelectric coefficients, dielectric constants and elastic stiffness of 29 formate
\begin{table}
\begin{tabular}{c c c c c c c} Material & NH\({}_{2}\)NH\({}_{3}\) & C\({}_{2}\)H\({}_{5}\)NH\({}_{3}\) & C(NH\({}_{2}\))\({}_{3}\) & (CH\({}_{3}\))\({}_{2}\)NH\({}_{2}\) & CH\({}_{3}\)NH\({}_{3}\) & NH\({}_{2}\)CHNH\({}_{2}\) \\ \hline abbreviation & HAZ & EA & Gua & DMA & MA & FA \\ \end{tabular}
\end{table}
Table 1: Abbreviations for A sites used in the study, following Stroppa's book[1]
\begin{table}
\begin{tabular}{c c c c c} Material & Type & Literature & Conditions & P (\(\mu\)C/cm\({}^{2}\)) \\ \hline A-Mn & Comp. & 1.80[15] & T = 0 K & – \\ P-Mn & Comp. & 1.00[15] & T = 0 K & – \\ AF-Mn & Comp. & 5.10[15] & T = 0 K & – \\ PF-Mn & Comp. & 5.90[15] & T = 0 K & – \\ \hline DMA-Mn & Exp. & 0.30[16] & T = 150 K; B = 9 T (during growth) & 7.52 \\ & Exp. & 2.70 – 3.61[17; 18] & T = 150 K; B = 0 – 5 T; E = 5 kV/cm & – \\ & Exp. & 0.8–2.4[19] & T = 184 K; E = 3.1-7.7 kV/cm & – \\ DMA-Ni & Exp & 0.42 – 0.52[20] & T = 150 K; B = 0 – 10 T & – \\ DMA-Co & Exp. & 0.30[21] & T = 125 K & 7.44 \\ DMA-Zn & Exp. & 0.45[22] & T = 125 K & 7.77 \\ \hline NH\({}_{4}\)Mn & Exp. & 0.97[23] & T = 140 K & 2.45 \\ NH\({}_{4}\)Mg & Exp. & 1.15[3] & T = 93 K & – \\ NH\({}_{4}\)Zn & Exp. & 0.02 – 0.93[24] & T = 120 – 248 K & 2.43 \\ & Exp. & 4.00[25] & T = 273 K; P = 1.44 GPa & – \\ & Exp. & 1.03[23; 26] & T = 163 K & – \\ NH\({}_{4}\)Sc & Comp. & 3.71[27] & T = 0 K & – \\ NH\({}_{4}\)Ti & Comp. & 2.46[27] & T = 0 K & – \\ NH\({}_{4}\)V & Comp. & 2.40[27] & T = 0 K & – \\ NH\({}_{4}\)Cr & Comp. & 2.51[27] & T = 0 K & – \\ NH\({}_{4}\)Mn & Comp. & 2.38[27] & T = 0 K & – \\ NH\({}_{4}\)Fe & Comp. & 2.37[27] & T = 0 K & – \\ NH\({}_{4}\)Co & Comp. & 2.36[27] & T = 0 K & – \\ NH\({}_{4}\)Ni & Comp. & 2.17[27] & T = 0 K & 2.27 \\ NH\({}_{4}\)Cu & Comp. & 2.20[27] & T = 0 K & – \\ NH\({}_{4}\)Zn & Comp. & 2.30[27] & T = 0 K & 2.43 \\ \hline CH\({}_{3}\)NH\({}_{2}\)NH\({}_{2}\)Mn & Exp. & 0.14[28] & T = 150 K; B = 10 T & – \\ \hline NH\({}_{3}\)(CH\({}_{2}\))\({}_{4}\)NH\({}_{3}\)Mg\({}_{2}\) & Exp. & 1.51[3] & T = 93 K & – \\ \hline HAZ-Mn & Exp. (estimated) & 3.58[29] & T = 110 K & 2.17 \\ HAZ-Co & Exp. (estimated) & 2.61[29] & T = 405 K & 2.53 \\ HAZ-Zn & Exp. (estimated) & 3.48[29] & T = 110 K & 2.32 \\ HAZ-Mg & Exp. (estimated) & 3.44[29] & T = 400 K & 2.53 \\ & Comp. & 2.63[30] & T = 150 – 375 K & 2.53 \\ \hline EA-Mg & Exp. (estimated) & 3.43[3] & T = 93 K & 1.26 \\ \hline Gua-Cr & Comp. & 0.22[31] & T = 0 K & – \\ Gua-Cu\({}_{0.5}\)Mn\({}_{0.5}\) & Comp. & 9.90[32] & T = 0 K & – \\ Gua-Cu\({}_{0}\) & Comp. & 0.11 – 0.37[33; 34] & T = 0 K & 0.21 \\ \end{tabular}
\end{table}
Table 2: Polarizations from literature and those computed in this work. A = NH\({}_{3}\)CH\({}_{2}\)CH\({}_{3}\), P = PH\({}_{3}\)CH\({}_{2}\)CH\({}_{3}\), AF = NH\({}_{3}\)CH\({}_{2}\)CF\({}_{3}\), PF = PH\({}_{3}\)CH\({}_{2}\)CF\({}_{3}\).
compounds using first-principles density functional theory (DFT) based simulations; (ii) to provide a comprehensive comparative assessment of the aforementioned properties; (iii) to catalog the properties which could aid screening of promising materials.
## I Computational Methodology
Table 4 lists the hybrid formates that we have investigated and the associated experimental references from which the structures have been retrieved, along with the temperatures at which the structures were recorded. The bottom part of the panel lists some of the HOIPs in the same family which did not become part of this study. The experimental structures were used to initialize DFT based computations as implemented in the VASP package [58; 59; 60; 61]. Technically, all experimental structures were first fully relaxed using the Perdew-Burke-Ernzerhof (PBE) version of the generalized gradient approximation for the exchange-correlation functional [62]. In order to model hydrogen bonds, we used the zero-damping D3 dispersion corrections [63; 64], which were previously shown to provide good agreement with experimental structures [65; 66; 67; 68; 69]. The electron-ion interactions are treated with the projected augmented wave (PAW) potentials [70]. We used a plane wave cutoff energy in the range 700-850 eV and a non-Gamma-centered k-point mesh which corresponds to k-point densities in the range 0.19-0.57 Å\({}^{-1}\). Note that the k-point density and cutoff energy for each material are given in Table S1 in the supplementary material. The Hubbard correction as proposed by Dudarev _et al._[71] is introduced to account for the Coulomb repulsion between localized d-electrons of transition metals. Unit cell parameters and atomic positions were relaxed until stress and forces were less than 0.1 GPa and 1 meV/\(\AA\), respectively. The energy convergence criterion for self-consistent calculations was \(10^{-6}\) eV. The crystal polarization is evaluated by the Berry phase method developed by King-Smith and Vanderbilt [72; 73]. We computed the _intrinsic_ piezoelectric constants \(e_{ij}\) and \(d_{ij}\) (in matrix notations) defined as the linear response of the polarization to the applied strain and stress, respectively. The \(d_{ij}\) coefficients were obtained from \(d_{ij}=e_{ik}(C^{-1})_{kj}\), where \(C\) is the single crystal elastic constant matrix. The constants \(e_{ik}\) and \(C_{ij}\) were computed using the finite difference method as implemented in VASP [74]. Hubbard U corrections were employed for transition metal atoms. The following values, computed using the linear response ansatz of Cococcioni _et al._[75] within the PAW approach in VASP, were utilized: 6.5 eV for Mn, 7.2 eV for Fe, 4.6 eV for Co and 5.1 eV for Ni.
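As an illustration of the tensor algebra involved in the last step, the sketch below converts piezoelectric stress constants \(e_{ik}\) (in C/m\({}^{2}\)) to strain constants \(d_{ij}\) (in pC/N) through the inverse of the elastic stiffness matrix, \(d_{ij}=e_{ik}(C^{-1})_{kj}\). It is a minimal Python/NumPy example with illustrative placeholder values, not the actual VASP post-processing used in this work.

```python
import numpy as np

# Illustrative 3x6 piezoelectric stress matrix e (C/m^2) and 6x6 stiffness C (GPa).
# The numbers below are placeholders, not results of this work.
e = np.zeros((3, 6))
e[2, 0], e[2, 1], e[2, 2], e[0, 4] = -0.02, 0.03, 0.11, -0.19

C = np.diag([40.0, 45.0, 50.0, 10.0, 12.0, 14.0])  # GPa, Voigt notation
C[0, 1] = C[1, 0] = 20.0
C[0, 2] = C[2, 0] = 18.0
C[1, 2] = C[2, 1] = 22.0

# d_ij = e_ik (C^-1)_kj; (C/m^2) * (1/GPa) = 1e-9 C/N = 1000 pC/N
d = e @ np.linalg.inv(C) * 1e3
print(np.round(d, 2))  # 3x6 matrix of d_ij in pC/N
```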
## II Results and Discussion
### Structure
The ground state structural parameters are reported in Table 5. Comparison with experimental data, where available, was also provided in the table. We find that, in most cases, the lattice parameters are within 1% of experimental (see Supplementary material, Table S1). The pictorial representation of how experimental lattice parameters compare with computational ones is given in Fig. 1. The figure reveals good agreement between experiment and computations. We thus conclude that our computational approach provide reliable structural predictions. The ground state structures are available from Ref.[76].
Note, that we also augmented our list of HOIPs with the following structures DMA-Zn, DMA-Co, HONH\({}_{3}\)-Fe and Pna2\({}_{1}\) phase of HAZ-Mg, which so far have not been reported experimentally. Such structures were obtained by replacing Mn in DMA-Mn with Zn or Co, Mn in HONH\({}_{3}\)-Mn with Fe, and Zn in HAZ-Zn with Mg, followed by full structural relaxation. These hypothetical structures are underscored in Table 4. Majority of the fully relaxed HOIPs structures retained their experimental space groups. However, there were some exceptions. The experimental structures of MA-Co are available in both Pnma and P2\({}_{1}\)/c phases whereas experimental structures of MA-(Mn,Zn) are available only in Pnma phase. Our computations predicted that MA
\begin{table}
\begin{tabular}{l c c c} Material & Type & Young’s Moduli (GPa) & Elastic Moduli (GPa) & Ref. \\ \hline DMA-Ni & Exp. & 24.5 & [35] \\ DMA-Mn & Exp. & 19.0 & [35] \\ DMA-Co & Exp. & 21.5 & [35] \\ DMA-Zn & Exp. & 19.0 & [35] \\ \hline Gua-Cu & Exp. & 15.0 – 21.0 & [40] \\ Gua-Zn & Exp. & 24.0 – 29.0 & [40] \\ Gua-Mn & Exp. \& Comp. & 23.5(6) – 28.6(4) & [41] \\ \hline AZE-Cu & Exp. \& Comp. & 11.5(4) – 12.6(3) & [41] \\ \hline HAZ-Zn & Exp. & 24.5 – 26.5 & [42] \\ HAZ-Mn & Exp. & 24.5 – 28.6 & [42] \\ \hline NH\({}_{4}\)Zn & Exp. \& Comp. & 18.2 – 34.4 & [38] \\ \end{tabular}
\end{table}
Table 3: Elastic properties from the literature. AZE = (CH\({}_{2}\))\({}_{3}\)NH\({}_{2}\)
(Mn,Zn,Co) are mechanically unstable in the Pnma phase while the P2\({}_{1}\)/c phase of MA-Co is mechanically stable. To ensure mechanical stability of MA-(Mn,Zn) we deformed the Pnma structure along the eigenvector associated with the negative value of \(C_{44}\) and subjected such a deformed structure to full structural relaxation, which resulted in a P2\({}_{1}\)/c structure. It is therefore plausible that these materials may undergo another structural phase transition to the P2\({}_{1}\)/c phase at low temperatures. Both P2\({}_{1}\)/c and Pnma phases of MA-(Mn,Zn,Co) are reported in Table 5 and Ref. [76]. However, dielectric and mechanical properties were calculated from the P2\({}_{1}\)/c phase of the structures.
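The mechanical-stability screening described above can be reproduced generically by checking that the 6×6 stiffness matrix is positive definite (the Born criterion) and, if it is not, extracting the eigenvector of the negative eigenvalue as the unstable deformation direction. The sketch below uses a made-up stiffness matrix and is only meant to illustrate that procedure; it is not the matrix of any material in this work.

```python
import numpy as np

def born_stability(C):
    """Return (stable, eigenvalues, unstable_modes) for a 6x6 Voigt stiffness matrix."""
    C = 0.5 * (C + C.T)                      # enforce symmetry
    w, v = np.linalg.eigh(C)                 # eigenvalues in ascending order
    unstable = [v[:, i] for i in range(6) if w[i] <= 0.0]
    return bool(np.all(w > 0.0)), w, unstable

# Hypothetical stiffness matrix (GPa) with a soft shear mode, for demonstration only
C = np.diag([45.0, 50.0, 55.0, -1.5, 9.0, 11.0])
C[0, 1] = C[1, 0] = 20.0
C[0, 2] = C[2, 0] = 15.0
C[1, 2] = C[2, 1] = 18.0

stable, eigvals, modes = born_stability(C)
print("Born stable:", stable)
print("eigenvalues (GPa):", np.round(eigvals, 2))
if not stable:
    # The strain pattern (Voigt components) to follow before re-relaxing the structure
    print("unstable strain mode:", np.round(modes[0], 3))
```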
It has been reported in previous experimental studies that at low temperature, DMA-Zn crystallizes in space group _Cc_ with no partial occupancy at N position and possesses crystal structure similar to DMA-Mn [77].
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline material & space group & a(Å) & b(Å) & c(Å) & \(\beta\) (\({}^{\circ}\)) & V (Å\({}^{3}\)) & \(\mu_{B}\) & P (\(\mu\)C/cm\({}^{2}\)) & ref \\ \hline Gus-Mn (AFM) & Pnma & 8.42 (8.52) & 11.91 (11.98) & 9.12 (9.06) & 90 & 915 (925) & 4.7 & non-polar & [46] \\ Gus-Fe (AFM) & Pnma & 8.35 (8.42) & 11.78 (11.85) & 8.98 (8.95) & 90 & 883 (892) & 3.8 & non-polar & [46] \\ Gus-Co (AFM) & Pnma & 8.28 (8.33) & 11.64 (11.75) & 8.99 (8.91) & 90 & 866 (873) & 2.8 & non-polar & [46] \\ Gus-Ni (AFM) & Pnma & 8.24 (8.26) & 11.63 (11.64) & 8.91 (8.83) & 90 & 883 (850) & 1.8 & non-polar & [46] \\ Gus-Cu (AFM) & Pan2\({}_{1}\) & 8.50 (8.52) & 9.07 (9.03) & 12.77 (11.35) & 90 & 869 (874) & 0.6 & (0, 0, 2.21) (0.11-0.37) & [33, 34, 46] \\ Gua-Zn & Pnma & 8.27 (8.35) & 11.66 (11.73) & 8.99 (8.91) & 90 & 868 (872) & 0.0 & non-polar & [46] \\ \hline HONH\({}_{3}\)-Mn (AFM) & P2\({}_{1}\)2\({}_{1}\)2\({}_{1}\) & 7.70 (7.81) & 8.04 (7.96) & 13.06 (13.17) & 90 & 819 (819) & 4.5 & non-polar & [51] \\ HONH\({}_{3}\)-Co (AFM) & P2\({}_{1}\)2\({}_{1}\)2\({}_{1}\) & 7.67 (7.68) & 7.82 (7.76) & 13.00 (13.02) & 90 & 780 (776) & 2.7 & non-polar & [51] \\ HONH\({}_{3}\)-Ni (AFM) & P2\({}_{1}\)2\({}_{1}\) & 7.59 (7.62) & 7.98 (7.78) & 12.80 (12.73) & 90 & 773 (755) & 1.8 & non-polar & [51] \\ HONH\({}_{3}\)-Fe (AFM) & P2\({}_{1}\)2\({}_{1}\) & 7.70 (8.0) & 8.00 & 13.05 & 90 & 802 & 3.6 & non-polar & [51] \\ \hline HONH\({}_{3}\)-Zn & P2\({}_{1}\)2\({}_{1}\) & 7.65 (7.69) & 7.83 (7.74) & 13.18 (13.02) & 90 & 790 (770) & 0.0 & non-polar & [51] \\ HONH\({}_{3}\)-Mg & P2\({}_{1}\)2\({}_{1}\) & 7.67 (7.69) & 7.83 (7.79) & 12.73 (11.86) & 90 & 770 (770) & 0.0 & non-polar & [51] \\ \hline HAZ-Co (AFM) & Pan2\({}_{1}\) & 8.63 (8.63) & 7.74 (7.76) & 11.46 (11.55) & 90 & 765 (776) & 2.7 & (0, 2, 8.21) (2.61 at 405 K) & [53] \\ HAZ-Mn (FM) & Pan2\({}_{1}\) & 8.99 (8.93) & 7.83 (8.28) & 116 (11.69) & 90 & 820 (817) & 4.7 & (0, 0, 2.51) (1.58 at 110 K) & [59] \\ HAZ-Zn & Pan2\({}_{1}\) & 8.65 (8.66) & 7.75 (7.72) & 11.49 (11.48) & 90 & 771 (768) & 0.0 & (0, 0, 2.59) (26.3-48 at 0-110 K) & [29, 30] \\ HAZ-Mg & P2\({}_{1}\)2\({}_{1}\) & 8.87 & 7.63 & 11.45 & 90 & 775 & 0.0 & (0, 0, 2.90) (3.44 at 400 K) & [29] \\ HAZ-Mg & P2\({}_{1}\)2\({}_{1}\) & 8.00 (7.89) & 13.90 (13.75) & 7.28 (7.93) & 90 & 809 (809) & 809 (809) & non-polar & [29] \\ \hline NH\({}_{4}\)-Co (AFM) & P\({}_{3}\) & 12.54 (12.59) & 12.54 (12.59) & 8.34 (8.22) & 90 & 1136 (1128) & 2.6 & (0, 0, 2.35) & [23] \\ NH\({}_{4}\)-Fe (AFM) & P\({}_{6}\) & 12.58 (12.62) & 12.58 (12.62) & 8.57 (8.36) & 90 & 1174 (1153) & 3.8 & (0, 0, 2.41) & [23] \\ NH\({}_{4}\)-Mn (AFM) & P\({}_{6}\) & 12.55 (12.67) & 12.55 (12.67) & 8.71 (8.54) & 90 & 1198 (1197) & 4.5 & (0, 0, 2.45) & [23] \\ NH\({}_{4}\)-Zn & P\({}_{3}\) & 12.56 (12.59) & 12.56 (12.59) & 8.38 (8.20) & 90 & 1144 (1126) & 0.0 (0, 0, 2.43) & (09.3-10 as 120–163 K) & [23, 24, 26] \\ \hline MA-Co (AFM) & Pnma & 8.18 (8.28) & 11.67 (11.67) & 8.28 (8.15) & 90 & 790 (789) & 2.8 & non-polar & [49] \\ MA-Co (AFM) & P2\({}_{1}\)/c & 8.25 (8.18) & 11.69 (11.67) & 8.28 (8.15) & 93.6 (91.9) & 878 (789) & 2.8 & non-polar & [49] \\ MA-Mn (AFM) & P\({}_{8}\) & 8.39 (8.68) & 11.93 (11.95) & 8.42 (8.17) & 90 & 843 (847) & 4.7 & non-polar & [48] \\ MA-Mn (AFM) & P\({}_{1}\)/c & 8.41 & 8.47 & 14.12 & 121.9 & 853 & 4.7 & non-polar & [50] \\ MA-Ni (AFM) & P\({}_{8}\) & 8.15 (8.18) & 11.62 (11.52) & 8.24 (8.08) & 90 & 781 (762) & 1.8 & non-polar & [50] \\ MA-Zn & P\({}_{8}\) & 8.31 (8.41) & 11.69(11.71) & 
8.17 (8.10) & 90 & 794 (798) & 0.0 & non-polar & [48] \\ MA-Zn & P\({}_{1}\)/c & 8.27 & 13.89 & 8.26 & 122.4 & 802 & 0.0 & non-polar & [51] \\ \hline DMA-Co (AFM) & Ce & 14.14 & 8.44 & 8.62 & 121.8 & 859
However, no structural file has been provided. Therefore, we have initiated our calculation for DMA-Zn by replacing Mn with Zn in experimentally reported DMA-Mn [47]. In case of HAZ-Mg, previous experimental study reports a non-polar crystal structure P2\({}_{1}\)2\({}_{1}\)2\({}_{1}\)[29], but a recent DFT study [78] shows entropy driven effects are responsible for stabilizing the structure in Pna2\({}_{1}\) space group. Therefore, we have initiated our calculation for Pna2\({}_{1}\) phase of HAZ-Mg by replacing Zn with Mg in experimentally reported Pna2\({}_{1}\) phase of HAZ-Zn [29].
NH\({}_{4}\)-Co experimentally is reported in P6\({}_{3}\) space group at low temperature. However, relaxed structure in the same space group was found to be mechanically unstable so further relaxation resulted in P3 space group.
For all structures with transition metal atoms we computed energies for different magnetic orderings and selected the one with the lowest energy as the ground state. It should be noted that in agreement with previous studies[36; 37; 79], we find only very small differences in energy between structures with different magnetic orderings. The magnetic orderings are given in Table 5.
### Polarization
An inherent periodicity of the crystal lattice makes polarization, \(\mathbf{P}\), a multivalued quantity. To overcome this challenge the polarization is typically computed along a distortion path that connects the polar structure to a nonpolar one[80]. However, in the case of HOIPs the nonpolar high-symmetry structure is typically associated with partial occupancy and therefore cannot be used as a reference point. One approach to construct a nonpolar phase was suggested in Ref.[81]. Another approach is to model experiments, where the polarization is obtained from the measurement during its reversal. Such an approach was used in Refs.[79; 68], where the polarization reversal was achieved by creating an inverted structure and generating a roto-distortion path between the structure and its inversion. The inversion was applied with respect to the inversion center of the high-symmetry experimental structure, where available, or with respect to the B-site. The roto-distortion path consists of a distortion of the framework and a rotation of the A-site molecule. We used the same approach for EA-M, HAZ-M, DMA-M as these compounds have an inversion center in their high-temperature phase. An example of the polarization evolution along such a path is given in Fig. 2 (b) and Fig. 2 (c).
Figure 1: Comparison between computational and experimental lattice parameters. Only the structures where space group is the same for both computations and experiment are compared.
For Gua-Cu, rotations of the Gua molecules resulted in metallic structures, which did not allow for polarization calculations. So we created a non-polar structure using pseudosymmetry module of Bilbao Crystallographic server[82] and generated a distortion path between the polar and nonpolar structures. The polarization along such a path is given in Fig. 2(e). For NH\({}_{4}\)-M family, the high temperature high symmetry structure is P6\({}_{3}\)22 and does not have an inversion center. In this case, we used U2 axis of P6\({}_{3}\)22 to generate the structure with reversed polarization direction. Technically, we applied the following transformation \(x\to y\), \(y\to x\) and \(z\rightarrow-z\) on the Wykoff positions of NH\({}_{4}\)-M in P6\({}_{3}\) phase. Note that for NH\({}_{4}\)-Co, we report polarization for P6\({}_{3}\) phase, although it was found to be mechanically unstable in calculations. An example of polarization along such path is given in Fig. 2 (f).
Polarizations along the roto-distortion paths for all polar materials studied are given in Fig. S1 of the Supplementary material, while the associated structures are given in Ref.[76]. The figures also report the energy along the path. The energies are not likely to be physical as no optimization has been performed. However, they do reveal two minima, that is, a double-well potential. The typical barrier height is below 200 meV/atom, which is considered surmountable[83]. The comparison of our results with experimental and computationally predicted values available from the literature can be found in Table 2 and Fig. 3. We find excellent agreement between our computational data and computational data from the literature. However, there exist discrepancies with experimental data. These could be attributed to the difference in temperature, in some cases in phase, and the difference in the direction of measurement. In our case we report the value along the polar direction. The data reveal that the polarization values for the formate family are in the range of 0.2-7.8 \(\mu\)C/cm\({}^{2}\) with the largest values found in DMA-M. The values are a factor of ten lower than the ones for prototypical oxide ferroelectrics including BaTiO\({}_{3}\) and PbTiO\({}_{3}\)[84].
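Because the Berry-phase polarization is only defined modulo the polarization quantum, tracing it along the roto-distortion path requires selecting, at every image, the branch that keeps the curve continuous. A minimal one-component sketch of this bookkeeping is given below; the path values and the quantum are invented for illustration only.

```python
import numpy as np

def unwrap_polarization(p_raw, quantum):
    """Shift each value by integer multiples of the quantum so that the
    polarization varies continuously along the distortion path."""
    p = np.array(p_raw, dtype=float)
    for i in range(1, len(p)):
        n = np.round((p[i] - p[i - 1]) / quantum)  # nearest branch jump
        p[i] -= n * quantum
    return p

# Hypothetical raw Berry-phase values (uC/cm^2) along a path, folded onto one branch
quantum = 30.0
raw = [-2.5, -1.2, 0.1, 1.4, 2.6 - quantum, 3.9 - quantum]

print(np.round(unwrap_polarization(raw, quantum), 2))
# -> [-2.5 -1.2  0.1  1.4  2.6  3.9], a smooth branch suitable for reading off delta-P
```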
### Piezoelectric response
The independent components of piezoelectric tensors, e\({}_{ij}\) and d\({}_{ij}\), which are allowed by symmetry are given in Table 6 and Table 7, respectively. Figure 4 provides comparative picture. For the formates with Pna2\({}_{1}\) space group, we mostly find largest values for e\({}_{15}\) and d\({}_{15}\) components of the tensor. For materials in Cc and P2\({}_{1}\)2\({}_{1}\)2\({}_{1}\) space groups, the largest components are e\({}_{35}\) (d\({}_{35}\)) and e\({}_{36}\) (d\({}_{36}\)) respectively, and can reach 0.26 C/m\({}^{2}\) (25.36 pC/N) and 0.18 C/m\({}^{2}\) (14.64 pC/N) in DMA-Zn and HAZ-Mg, respectively. The longitudinal coefficients along the crystallographic directions, e\({}_{ii}\) and d\({}_{ii}\), \(i=\)1, 2, 3, range from 0.01 to 0.14 C/m\({}^{2}\) and 0.01 to 11.46 pC/N, respectively, with the largest of these values belonging to DMA-Zn. The transverse coefficients e\({}_{ij}\) and d\({}_{ij}\), \(i,j=\)1, 2, 3 are in the range 0.00 to 0.20 C/m\({}^{2}\) and 0.07 to 9.15 pC/N, respectively, with the largest values belonging to DMA-Zn.
The directional dependence of the longitudinal piezoelectric stress and strain responses was analyzed using MTex[85] and is presented in Fig. 5 and Fig. 6, respectively, for a representative material in each family. For all materials we find response to be highly anisotropic. The longitudinal piezoelectric stress coefficient can reach 0.22 C/m\({}^{2}\) in DMA-Co in \(\langle\frac{1}{2},0,\frac{\sqrt{3}}{2}\rangle\) direction, while the strain coefficient can reach 12.93 pC/N in the vicinity of \(\langle 1/2,0,1\rangle\) direction. 3D visualizations of the piezoelectric stress/strain surfaces for the rest of the materials are given in Fig. S2 and Fig. S3 in the Supplementary material.
Thus, our data indicate that the intrinsic piezoelectric strain response in the formate family can reach 26.7 pC/N (in HAZ-Mn) for the shear stress component and 21.12 pC/N (in DMA-Zn along \(\langle\frac{1}{2},0,\frac{\sqrt{3}}{2}\rangle\) direction) for the longitudinal one. DMA family exhibits the best values.
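The directional surfaces in Figs. 5 and 6 follow from projecting the full third-rank tensor onto a unit vector, \(d_{\rm long}(\mathbf{n})=n_{i}n_{j}n_{k}d_{ijk}\). A small sketch of that projection, starting from a Voigt-notation \(d\) matrix with placeholder coefficients, is shown below; the factor of 1/2 used when expanding the shear columns is the usual convention and should be treated as an assumption here.

```python
import numpy as np

VOIGT = {(0, 0): 0, (1, 1): 1, (2, 2): 2,
         (1, 2): 3, (2, 1): 3, (0, 2): 4, (2, 0): 4, (0, 1): 5, (1, 0): 5}

def d_tensor_from_voigt(d_voigt):
    """Expand a 3x6 piezoelectric strain matrix (pC/N) to the full d_ijk tensor,
    assuming d_(i,m) = 2*d_ijk for the shear columns m = 4..6."""
    d = np.zeros((3, 3, 3))
    for j in range(3):
        for k in range(3):
            m = VOIGT[(j, k)]
            factor = 1.0 if m < 3 else 0.5
            d[:, j, k] = factor * d_voigt[:, m]
    return d

def d_longitudinal(d_voigt, n):
    n = np.asarray(n, float) / np.linalg.norm(n)
    d = d_tensor_from_voigt(d_voigt)
    return np.einsum("i,j,k,ijk->", n, n, n, d)

# Placeholder Voigt matrix with Pna2_1-like non-zero pattern (pC/N); values are illustrative
dv = np.zeros((3, 6))
dv[2, 0], dv[2, 1], dv[2, 2] = -6.0, 1.0, 7.0
dv[0, 4], dv[1, 3] = 12.0, 1.0

for direction in ([0, 0, 1], [1, 0, 1], [0.5, 0, np.sqrt(3) / 2]):
    print(direction, "->", round(float(d_longitudinal(dv, direction)), 2), "pC/N")
```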
### Dielectric response
The symmetry-allowed components of the dielectric tensor are reported in Table 9. The typical value is 5. However, computations predict Gua-M to exhibit distinctively high values, up to 100.00, comparable in order of magnitude to the dielectric constants of BaTiO\({}_{3}\)[86]. The comparative view of the dielectric constants is given in Fig. 7, which confirms that the Gua-M family exhibits the largest response. The nature of such an unusual response deserves further investigation.
### Mechanical Properties
Mechanical properties describe the material's response to external mechanical stimuli, such as pressure, stress or strain. The independent components of the stiffness tensors are given in Table 8. They satisfy the Born conditions for elastic stability [87; 88] as checked by VASPKIT[89]. The typical diagonal elements are in the range 3.3 to 127.0 GPa. A comparative view of the stiffness tensor components among all the formates is given in Fig. 8. We computed the average bulk modulus (\(B\)), Young's modulus (\(E\)), shear modulus (\(G\)), Poisson's ratio (\(\nu\)) and Cauchy's pressure (CP) for bulk polycrystals within the Hill approximation as implemented in VASPKIT[89; 90; 91; 92; 93; 94; 95] and reported them in Table 10. The values compare well with the experimental results listed in Table 3.
Poisson's ratio, defined as the ratio of transverse compressive strain to longitudinal tensile strain, and Pugh's ratio, commonly expressed as the \(B/G\) ratio, can be used to characterize the ductility or brittleness of crystals. The former typically ranges from 0.0 to 0.5. The ductility-brittleness borderline is usually drawn at a Poisson ratio of 0.26 and a Pugh ratio of 1.75 [96; 97]. As shown in Fig. 9, most of the formates studied in this work are ductile and therefore are able to withstand large stresses and exhibit malleability.
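For readers who want to reproduce such screening from a published stiffness matrix, the sketch below computes Voigt-Reuss-Hill polycrystalline averages and the derived Pugh and Poisson ratios from a 6×6 \(C_{ij}\) matrix. It follows the standard textbook formulas rather than the VASPKIT implementation, and the input matrix is a placeholder.

```python
import numpy as np

def hill_averages(C):
    """Voigt-Reuss-Hill averages (B, G, E, nu, B/G) from a 6x6 stiffness matrix in GPa."""
    S = np.linalg.inv(C)
    Bv = (C[0,0] + C[1,1] + C[2,2] + 2*(C[0,1] + C[0,2] + C[1,2])) / 9.0
    Gv = (C[0,0] + C[1,1] + C[2,2] - (C[0,1] + C[0,2] + C[1,2])
          + 3*(C[3,3] + C[4,4] + C[5,5])) / 15.0
    Br = 1.0 / (S[0,0] + S[1,1] + S[2,2] + 2*(S[0,1] + S[0,2] + S[1,2]))
    Gr = 15.0 / (4*(S[0,0] + S[1,1] + S[2,2]) - 4*(S[0,1] + S[0,2] + S[1,2])
                 + 3*(S[3,3] + S[4,4] + S[5,5]))
    B, G = 0.5*(Bv + Br), 0.5*(Gv + Gr)
    E = 9*B*G / (3*B + G)
    nu = (3*B - 2*G) / (2*(3*B + G))
    return B, G, E, nu, B/G

# Placeholder orthorhombic-like stiffness matrix (GPa), for demonstration only
C = np.diag([45.0, 50.0, 55.0, 8.0, 10.0, 12.0])
C[0,1] = C[1,0] = 20.0; C[0,2] = C[2,0] = 15.0; C[1,2] = C[2,1] = 18.0

B, G, E, nu, pugh = hill_averages(C)
print(f"B={B:.1f} GPa, G={G:.1f} GPa, E={E:.1f} GPa, nu={nu:.2f}, B/G={pugh:.2f}")
# nu > 0.26 or B/G > 1.75 places a material on the ductile side of the border
```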
Figure 10 shows the directional dependence of the linear compressibility, defined as the linear expansion or compression of a material upon application of isotropic pressure. Interestingly, the data predict that a few formates have negative values (indicated by red color) and some exhibit nearly zero values. For example, HONH\({}_{3}\)-Ni, MA-Co and NH\({}_{4}\)-Mn exhibit negative values along the \(\langle 0,1,0\rangle\), \(\langle\)-0.6447,0,-0.7644\(\rangle\) and \(\langle 0,0,1\rangle\) directions, respectively. The directional dependence of the linear compressibility for the other materials is presented in the supplementary material, Fig. S4. Previously, negative linear compressibility was predicted for HAZ-Co, HAZ-Mn, HAZ-Fe and NH\({}_{4}\)-Zn [36; 37; 38] and explained on the basis of the strut-hinge model[98; 99].
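Such compressibility surfaces can be generated from the compliance matrix alone: under a unit hydrostatic pressure the Voigt strain is \(-S\,(1,1,1,0,0,0)^{T}\), and the length change along a unit vector \(\mathbf{n}\) is \(\mathbf{n}\cdot\varepsilon\cdot\mathbf{n}\). The sketch below implements this with a placeholder stiffness matrix; a negative output signals negative linear compressibility along that direction.

```python
import numpy as np

def linear_compressibility(C, n):
    """beta(n) in 1/GPa from a 6x6 stiffness matrix C (GPa) and a direction n."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    S = np.linalg.inv(C)
    eps_voigt = S @ np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # strain per unit pressure
    eps = np.array([[eps_voigt[0],   eps_voigt[5]/2, eps_voigt[4]/2],
                    [eps_voigt[5]/2, eps_voigt[1],   eps_voigt[3]/2],
                    [eps_voigt[4]/2, eps_voigt[3]/2, eps_voigt[2]]])
    return n @ eps @ n   # beta(n) = -d(ln l)/dp; positive means normal shrinking

# Placeholder stiffness matrix (GPa) chosen only to illustrate anisotropy
C = np.diag([40.0, 55.0, 30.0, 9.0, 11.0, 13.0])
C[0,1] = C[1,0] = 22.0; C[0,2] = C[2,0] = 8.0; C[1,2] = C[2,1] = 25.0

for n in ([1, 0, 0], [0, 1, 0], [0, 0, 1]):
    print(n, "beta =", round(float(linear_compressibility(C, n)), 4), "1/GPa")
```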
## III Conclusion and Outlook
In summary, we have used DFT computations to assess structural, electric, piezoelectric, and mechanical properties of 29 hybrid formate perovskites. We predict that the ground state phase of most MA-M (M = Co, Mn,
Figure 3: Comparison of our computational polarization values with experimental and computational results from the literature[3; 16; 21; 22; 23; 24; 25; 29]. Note, ”est.” indicates that the polarization was estimated from the separation between positive and negative charge.
Figure 2: Structural evolution along roto-distotion path schematically shown by overlapping structures along the path (a). Variation of polarization and energy along the path for a representative of each family, as given in the legend (b)-(f)
Figure 4: Comparative view of the components of the (a) piezoelectric stress and (b) piezoelectric strain tensors
Figure 5: Piezoelectric stress surface for a representative from each family, as indicated in the titles.
Figure 6: Piezoelectric strain surface for a representative from each family, as indicated in the titles.
Figure 7: Comparative view of the components of the dielectric tensor.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} & e\({}_{15}\) & e\({}_{24}\) & e\({}_{31}\) & e\({}_{32}\) & e\({}_{33}\) & & & & & \\ Gua-Cu & \(-0.011\) & \(0.069\) & \(-0.017\) & \(0.018\) & \(0.051\) & & & & \\ \hline & e\({}_{15}\) & e\({}_{24}\) & e\({}_{31}\) & e\({}_{32}\) & e\({}_{33}\) & & & & \\ EA-Mg & \(-0.261\) & \(-0.014\) & \(-0.035\) & \(-0.045\) & \(-0.015\) & & & & \\ \hline & e\({}_{11}\) & e\({}_{12}\) & e\({}_{13}\) & e\({}_{15}\) & e\({}_{24}\) & e\({}_{26}\) & e\({}_{31}\) & e\({}_{32}\) & e\({}_{33}\) & e\({}_{35}\) \\ DMA-Co & \(0.065\) & \(0.071\) & \(0.160\) & \(0.077\) & \(0.012\) & \(-0.009\) & \(-0.020\) & \(0.025\) & \(0.107\) & \(0.175\) \\ DMA-Mn & \(0.124\) & \(0.129\) & \(0.165\) & \(0.058\) & \(0.041\) & \(0.006\) & \(-0.004\) & \(0.073\) & \(0.122\) & \(0.187\) \\ DMA-Zn & \(0.112\) & \(0.062\) & \(0.196\) & \(0.114\) & \(0.019\) & \(-0.017\) & \(0.020\) & \(0.001\) & \(0.141\) & \(0.258\) \\ \hline & e\({}_{14}\) & e\({}_{25}\) & e\({}_{36}\) & & & & & & \\ HONH\({}_{3}\)-Mn & \(-0.124\) & \(0.003\) & \(-0.211\) & & & & & & \\ HONH\({}_{3}\)-Co & \(-0.102\) & \(-0.027\) & \(-0.031\) & & & & & & \\ HONH\({}_{3}\)-Ni & \(-0.226\) & \(0.103\) & \(0.247\) & & & & & & \\ HONH\({}_{3}\)-Fe & \(-0.065\) & \(-0.004\) & \(-0.135\) & & & & & & \\ \hline HONH\({}_{3}\)-Zn & \(-0.097\) & \(-0.130\) & \(-0.218\) & & & & & & \\ HONH\({}_{3}\)-Mg & \(-0.005\) & \(-0.002\) & \(-0.193\) & & & & & & \\ \hline & e\({}_{15}\) & e\({}_{24}\) & e\({}_{31}\) & e\({}_{32}\) & e\({}_{33}\) & & & & \\ HAZ-Co & \(-0.185\) & \(0.138\) & \(-0.012\) & \(0.035\) & \(-0.054\) & & & & \\ HAZ-Mn & \(-0.143\) & \(0.060\) & \(0.004\) & \(0.037\) & \(-0.067\) & & & & \\ HAZ-Zn & \(-0.194\) & \(0.104\) & \(-0.021\) & \(0.028\) & \(-0.050\) & & & & \\ HAZ-Mg (Pna\({}_{21}\)) & \(-1.172\) & \(0.090\) & \(-0.032\) & \(0.022\) & \(-0.088\) & & & \\ & e\({}_{14}\) & e\({}_{25}\) & e\({}_{36}\) & & & & & & \\ HAZ-Mg (P2\({}_{1}\)2\({}_{1}\)2\({}_{1}\)) & \(-0.190\) & \(-0.074\) & \(-0.177\) & & & & & \\ \hline & e\({}_{14}\) & e\({}_{15}\) & e\({}_{31}\) & e\({}_{33}\) & & & & & \\ NH\({}_{4}\)-Co & \(0.001\) & \(-0.057\) & \(0.017\) & \(0.011\) & & & & & \\ NH\({}_{4}\)-Fe & \(0.078\) & \(-0.046\) & \(0.037\) & \(0.023\) & & & & & \\ NH\({}_{4}\)-Zn & \(0.055\) & \(-0.049\) & \(0.031\) & \(0.019\) & & & & & \\ NH\({}_{4}\)-Mn & \(0.034\) & \(-0.069\) & \(-0.013\) & \(-0.015\) & & & & \\ \end{tabular}
\end{table}
Table 6: Piezoelectric stress constants \(e_{ij}\) in C/m\({}^{2}\). Materials which do not have experimentally reported structure are underscored.
\begin{table}
\begin{tabular}{c c c c c c c c c c} & d\({}_{15}\) & d\({}_{24}\) & d\({}_{31}\) & d\({}_{32}\) & d\({}_{33}\) & & & & \\ Gua-Cu & \(-1.05\) & \(7.36\) & \(-1.41\) & \(0.39\) & \(1.23\) & & & & \\ \hline & d\({}_{15}\) & d\({}_{24}\) & d\({}_{31}\) & d\({}_{32}\) & d\({}_{33}\) & & & & & \\ EA-Mg & \(-40.55\) & \(-1.24\) & \(-0.14\) & \(-0.84\) & \(0.01\) & & & & \\ \hline & d\({}_{11}\) & d\({}_{12}\) & d\({}_{13}\) & d\({}_{15}\) & d\({}_{24}\) & d\({}_{26}\) & d\({}_{31}\) & d\({}_{32}\) & d\({}_{33}\) & d\({}_{35}\) \\ DMA-Co & \(-2.77\) & \(0.18\) & \(6.52\) & \(7.21\) & \(0.59\) & \(-0.57\) & \(-6.19\) & \(1.19\) & \(7.37\) & \(15.05\) \\ DMA-Mn & \(-1.29\) & \(1.58\) & \(7.11\) & \(8.16\) & \(4.04\) & \(2.19\) & \(-8.56\) & \(2.81\) & \(11.29\) & \(23.16\) \\ DMA-Zn & \(-2.34\) & \(-1.39\) & \(9.15\) & \(11.85\) & \(1.05\) & \(-1.32\) & \(-7.10\) & \(-0.50\) & \(11.46\) & \(25.36\) \\ \hline & d\({}_{14}\) & d\({}_{25}\) & d\({}_{36}\) & & & & & & & \\ HONH\({}_{3}\)-Mn & \(-8.06\) & \(0.33\) & \(-11.01\) & & & & & & \\ HONH\({}_{3}\)-Co & \(-5.47\) & \(-2.35\) & \(-1.25\) & & & & & & \\ HONH\({}_{3}\)-Ni & \(-9.65\) & \(9.97\) & \(11.38\) & & & & & & \\ HONH\({}_{3}\)-Fe & \(-3.66\) & \(-0.75\) & \(-5.85\) & & & & & & \\ HONH\({}_{3}\)-Zn & \(-6.03\) & \(-13.16\) & \(-10.86\) & & & & & & \\ HONH\({}_{3}\)-Mg & \(-0.22\) & \(-0.16\) &
Figure 8: Comparative view of the components of the stiffness tensor (C\({}_{ij}\)).
Figure 10: 3D plots of linear compressibility for a representative material in each family as given in the titles. Green and red colors correspond to positive and negative values, respectively.
Figure 9: Pugh (a) and Poisson (b) ratios of formates studied in this work.
Zn) formates is different from the low temperature phase reported experimentally, which suggests additional phase transitions at very low temperatures. The spontaneous polarizations range from 0.2 to 7.8 \(\mu\)C/cm\({}^{2}\) with the largest values being in the DMA-M family. They are expected to be reversible by the electric field as the upper estimate for the energy barrier is 200 meV/atom. We also find polarization values often exceeding experimentally reported ones, which we attribute to the difference in the direction of measurement. Thus, our study could guide the optimization of materials performance. Dielectric constants are typically around 5.0. Nevertheless, the Gua family exhibits outstandingly large values in the range 4.6 - 102.1, which, however, need to be further validated. Intrinsic piezoelectric strain and stress constants are in the range 0.1 - 25.8 \(\mu\)C/cm\({}^{2}\) and 0.1 - 26.7 pC/N, respectively. The responses were also found to be highly anisotropic. Components of the elastic stiffness tensor range from 0.3 to 127.0 GPa. On the basis of the Pugh and Poisson ratios we found most of the materials to be ductile. Computations predict that the linear compressibility is highly anisotropic and many materials (e.g. HONH\({}_{3}\)-Ni, NH\({}_{4}\)-Mn, Gua-Ni and MA-Co) exhibit either zero or even negative values. All computational data are available from Ref. [76].
Our study reveals that additional investigations are needed to validate and explain the outstanding dielectric response of the Gua-M formates and the large piezoelectric response of the DMA-M formates, along with the large negative compressibility values for HONH\({}_{3}\)-Ni and NH\({}_{4}\)-Mn. Investigation of the origin of negative and/or nearly zero values of the compressibility is also required.
\begin{table}
\begin{tabular}{c c c c c} & \(\epsilon_{11}\) & \(\epsilon_{22}\) & \(\epsilon_{33}\) & \\ \hline Gua-Mn & 5.31 & 79.19 & 34.42 & \\ Gua-Fe & 4.62 & 102.12 & 30.26 & \\ Gua-Co & 5.26 & 67.72 & 30.55 & \\ Gua-Ni & 4.94 & 73.59 & 35.51 & \\ Gua-Cu & 6.79 & 6.85 & 6.26 & \\ Gua-Zn & 5.33 & 5.20 & 5.68 & \\ \hline & \(\epsilon_{11}\) & \(\epsilon_{22}\) & \(\epsilon_{33}\) & \\ EA-Mg & 4.82 & 4.63 & 4.67 & \\ \hline & \(\epsilon_{11}\) & \(\epsilon_{22}\) & \(\epsilon_{33}\) & \(\epsilon_{13}\) & Expt & Ref. \\ DMA-Co & 4.92 & 4.61 & 5.35 & 0.33 & \\ DMA-Mn & 4.93 & 4.53 & 5.53 & 0.41 & 3 – 6 & [100] \\ DMA-Zn & 5.50 & 4.98 & 6.00 & 0.41 & 8 – 10 & [101] \\ \hline & \(\epsilon_{11}\) & \(\epsilon_{22}\) & \(\epsilon_{33}\) & & \\ HONH\({}_{3}\)-Mn & 5.90 & 6.04 & 5.21 & & \\ HONH\({}_{3}\)-Co & 5.84 & 5.93 & 4.95 & & \\ HONH\({}_{3}\)-Ni & 5.56 & 6.48 & 5.08 & & \\ HONH\({}_{3}\)-Fe & 5.14 & 5.29 & 4.46 & & \\ HONH\({}_{3}\)-Zn & 6.26 & 6.04 & 5.15 & & \\ HONH\({}_{3}\)-Mg & 4.84 & 5.01 & 4.43 & & \\ \hline & \(\epsilon_{11}\) & \(\epsilon_{22}\) & \(\epsilon_{33}\) & \(\epsilon_{13}\) & \\ MA-Co (P\(2_{1}\)/c) & 5.21 & 5.34 & 5.89 & 0.22 & \\ MA-Zn (P\(2_{1}\)/c) & 5.17 & 5.83 & 6.06 & \(-0.20\) & \\ \hline MA-Mn (P\(2_{1}\)/c) & 4.62 & 4.85 & 5.29 & \(-0.13\) & \\ MA-Ni (Pnma) & 5.21 & 13.69 & 5.11 & & \\ \hline & \(\epsilon_{11}\) & \(\epsilon_{22}\) & \(\epsilon_{33}\) & & \\ \hline HAZ-Co & 4.87 & 5.05 & 5.03 & & \\ HAZ-Mn & 4.66 & 4.84 & 4.83 & & \\ HAZ-Zn & 5.30 & 5.41 & 5.49 & & \\ HAZ-Mg (P\(na2_{1}\)) & 4.31 & 4.65 & 4.50 & & \\ HAZ-Mg (P\(2_{1}\)\(2_{1}\)\(2_{1}\)) & 5.17 & 4.75 & 9.29 & & \\ \hline & \(\epsilon_{11}\) & \(\epsilon_{22}\) & \(\epsilon_{33}\) & \(\epsilon_{13}\) & \\ FA-Mn & 4.36 & 4.72 & 5.07 & 0.27 & \\ \hline & \(\epsilon_{11}\) & \(\epsilon_{22}\) & \(\epsilon_{33}\) & & \\ NH\({}_{4}\)-Co & 5.38 & 5.38 & 6.04 & & \\ NH\({}_{4}\)-Fe & 4.77 & 4.77 & 5.28 & & \\ NH\({}_{4}\)-Mn & 5.51 & 5.51 & 5.94 & & \\ NH\({}_{4}\)-Zn & 5.28 & 5.28 & 6.17 & & \\ \end{tabular}
\end{table}
Table 9: Dielectric constants. Materials which do not have experimentally reported structure are underscored.
## IV Acknowledgment
The work is supported by the National Science Foundation under the grant EPMD-2029800.
|
2302.13781 | Distribution in the Geometrically Growing System and Its Evolution | Recently, we developed a theory of a geometrically growing system. Here we
show that the theory can explain some phenomena of power-law distribution
including classical demographic and economic and novel pandemic instances,
without introduction of delicate economic models but only on the statistical
way. A convexity in the low-size part of the distribution is one peculiarity of
the theory, which is absent in the power-law distribution. We found that the
distribution of the geometrically growing system could have a trend to flatten
in the evolution of the system so that the relative ratio of size within the
system increases. The system can act as a reverse machine to covert a diffusion
in parametric space to a concentration in the size distribution. | Kim Chol-jun | 2023-02-24T01:30:26Z | http://arxiv.org/abs/2302.13781v1 | # Distribution in the Geometrically Growing System and Its Evolution
###### Abstract
Recently, we developed a theory of a geometrically growing system. Here we show that the theory can explain some phenomena of power-law distribution, including classical demographic and economic instances and the novel pandemic instance, without introducing delicate economic models but only in a statistical way. A convexity in the low-size part of the distribution is one peculiarity of the theory, which is absent in the power-law distribution. We found that the distribution of the geometrically growing system could have a trend to flatten in the evolution of the system so that the relative ratio of sizes within the system increases. The system can act as a reverse machine to convert a diffusion in parametric space into a concentration in the size distribution.
keywords: power-law; firm size distribution; the COVID-19 pandemic; population in city; spectral hardening
JEL code: C11; O1
Significance: Most economic systems that seem to show the power-law distribution are analyzed by Gibrat's model, alias a geometrically growing system, which seems to give the log-normal distribution. We showed that the system can give an asymptotic power law if the correlation between parameters is considered. In this paper we show that the system can lead to spectral hardening provided there is a diffusion, or an increment of the variances, along with the growth of the system.
## 1 Introduction
First, we explain the problem and some terminology. A system is composed of members and we call each member an item. The item has a measurable property, which is called a size. The population in a city and the firm size can be regarded as sizes, while the city and the firm stand for items within a country, which is in turn the system. The power law, alias Zipf's law or the Pareto distribution, states that the probability of an item is inversely proportional to a power of the size of the item: \(p(Z)=\frac{M}{Z^{\gamma}}\), where \(p(Z)\) stands for the frequency of an item of size \(Z\), \(\gamma\) for the exponent of the power and \(M\) for the normalization constant.
Historically, Pareto (1896) showed that the distribution of income follows the power law. Estoup (1916) and Zipf (1932) observed the power law in word frequency in a novel, and Auerbach (1913) and Zipf (1949) indicated the law for the population size of cities. Many diverse things show the power-law distribution, for reviews of which we can refer to many works (e.g. see Mitzenmacher, 2004; Newman, 2005). In fact, the author was interested in the cosmic ray spectrum, which seems a typical power law. Salpeter (1955) found that the mass distribution of stars follows the power law.
Several generative models for the power-law distribution have been proposed, which we can categorize into some groups. The first models are based on a preferential attachment or "rich-get-richer" process (Barabasi & Albert, 1999; Simon, 1955; Yule, 1924). The second ones pursue the scale invariance, which is a peculiarity of the power-law distribution (Bak, Tang & Wiesenfeld, 1987; Sneppen et al., 1995). The third ones begin with demanded optimization (Mandelbrot, 1953). And other composite models derive the power-law distribution from specially assumed elementary distributions of the parameters (Gabaix, 1999; Gibrat, 1931; Miller, 1957; Reed & Jorgensen, 2004). Those models show many possibilities for generating the power-law distribution. However, all of them are based on special assumptions. Though a postulation is the start of logic, it should preferably have generality, and the logic should preferably cover a wider range of sizes in data.
## 2 The formalism for the distribution in the geometrically growing system
Recently, we developed a theory of a geometrically growing system (GGS) (Chol-jun, 2022) on the basis of statistically maximally plausible assumptions, i.e. the normality of the distribution of parameters. If the size of each item in a system grows geometrically or proportionately, we call the system geometrically growing. A GGS can be modeled by
\[Z=(1+\alpha)^{t}Z_{0}, \tag{1}\]
where \(Z\) is the size of an item in the system, \(\alpha\) is the growth rate (hereafter simply, growth), \(t\) stands for the age of growth1 and \(Z_{0}\) is the initial size of the item.
Footnote 1: In Chol-jun (2022) the age was denoted by \(n\).
We can assume the normal distribution for not only \(\alpha\) but also \(t\), which is statistically maximally plausible. Here we can introduce a correlation \(R\) between \(\alpha\) and \(t\) without loss of generality because a correlation can be present even in a completely arbitrary configuration of \(\alpha\) and \(t\).2 If the correlation is positive (\(R>0\)), then the log-size (\(Y=\log Z\)) at the upper limit can be approximated by
Footnote 2: The instance of the COVID-19 pandemic in Chol-jun (2022) showed a systematic correlation: the countries that had later outbreak of the pandemic show relatively lower growth, i.e. the positive correlation is obtained, which might be because they could have a warning or preparation.
\[Y_{3}=(Ax_{3}+B)^{2}+C, \tag{2}\]
while if the correlation is zero or negative (\(R\leqslant 0\)), it is approximated by3
Footnote 3: \(F\) and \(G\) are interchanged in comparison with Chol-jun (2022).
\[Y_{4}=Fx_{4}+G, \tag{3}\]
where \(x_{3},x_{4}\) are variables following the standard normal distribution \(N(0,1)\) and, if \(\mu_{t},\sigma_{t},\mu_{\alpha},\sigma_{\alpha},\mu_{i}\) and \(\sigma_{i}\) stand for the means and standard deviations of \(t,\alpha\) and \(Y_{0}=\log Z_{0}\), the parameters are given as follows:
\[A=a,\qquad B=\frac{\sqrt{4a^{2}b^{2}+d^{2}}}{2a},\qquad C=c-\texttt{sgn}(R) \frac{d^{2}}{4a^{2}}, \tag{4}\]
\[F=\sqrt{2a^{4}+4a^{2}b^{2}+d^{2}},\qquad G=\texttt{sgn}(R)(a^{2}+b^{2})+c, \tag{5}\]
where
\[a=\sqrt{\sigma_{\alpha}\sigma_{t}|R|},\qquad b=\frac{|\mu_{ \alpha}\sigma_{t}+\texttt{sgn}(R)\mu_{t}\sigma_{\alpha}|}{2\sqrt{\sigma_{\alpha }\sigma_{t}}},\qquad c=\mu_{i}+\mu_{t}\mu_{\alpha}-\texttt{sgn}(R)b^{2},\] \[\qquad\qquad\qquad\qquad d=\sqrt{\sigma_{i}^{2}+(\mu_{t}^{2} \sigma_{\alpha}^{2}+\mu_{\alpha}^{2}\sigma_{t}^{2})(1-|R|)+\sigma_{t}^{2} \sigma_{\alpha}^{2}(1-R^{2})}, \tag{6}\]
and \(\texttt{sgn}(R)\) stands for the sign of \(R\) and \(|\cdot|\) for the absolute value.
Thus, if \(R>0\), the log-size behaves such as a \(\chi^{2}\) variable while for the case of \(R\leqslant 0\), as a normal variable. We can derive the probability density function (PDF) of size for both case:
\[p_{Z_{3}}(Z) =\frac{\exp\left[-\frac{1}{2}\left(\frac{\sqrt{(\log Z-C)}-B}{A} \right)^{2}\right]}{2\sqrt{2\pi}AZ\sqrt{(\log Z-C)}}\qquad\qquad\qquad\qquad \qquad\qquad\qquad\text{for $R>0$}, \tag{7}\] \[p_{Z_{4}}(Z) =\frac{1}{\sqrt{2\pi}FZ}\exp\left[-\frac{(\log Z-G)^{2}}{2F^{2}}\right] \qquad\qquad\qquad\qquad\qquad\text{for $R\leqslant 0$}. \tag{8}\]
We call \(p_{Z_{3}}(Z)\) the log-completely squared chi (\(\chi\)) distribution with 1 degree of freedom (shortly, log-CS\({}_{1}\) or log-CS) while \(p_{Z_{4}}(Z)\) is well-known log-normal distribution. What is interesting is that the asymptotic exponent, i.e. the asymptotic slope in log-log scale diagram of the PDF, of the log-CS tends toward a constant:
\[\gamma_{3\infty}=\lim_{Y\rightarrow+\infty\;(Z\rightarrow+\infty)\text{ for }R>0}\frac{d(\log(p_{Z_{3}}(Z)))}{d(\log Z)}=-\left(1+\frac{1}{2A^{2}}\right)=-\left(1+\frac{1}{2\sigma_{\alpha}\sigma_{t}R}\right), \tag{9}\]
which says that the log-CS has an asymptotic power-law behavior \(p(Z)=\frac{M}{Z^{|\gamma_{3\infty}|}}\). Notably, the asymptotic exponent depends only on the variances of the age and the growth. By the way, the exponent \(\gamma_{3\infty}\) is negative so that we usually consider only its absolute value.
For the log-normal distribution, local slope is determined by the variance (more exactly, the standard deviation) \(F\), which in turn depends on the means and variances of the parameters.
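A quick way to see the asymptotic power-law behavior of Eq. (9) is to simulate the GGS directly: draw correlated normal \((\alpha,t)\) pairs, form \(Z=(1+\alpha)^{t}Z_{0}\), and compare the empirical tail slope of the histogram with \(-\left(1+1/(2\sigma_{\alpha}\sigma_{t}R)\right)\). The sketch below does this with arbitrary illustrative parameter values, not values fitted to any data set, and the agreement is only approximate since the prediction is asymptotic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative GGS parameters (assumed for the demo)
mu_a, sig_a = 0.05, 0.02      # growth alpha
mu_t, sig_t = 200.0, 60.0     # age t
R = 0.5                       # positive correlation
mu_i, sig_i = 0.0, 0.1        # log of the initial size

cov = [[sig_a**2, R*sig_a*sig_t], [R*sig_a*sig_t, sig_t**2]]
alpha, t = rng.multivariate_normal([mu_a, mu_t], cov, size=2_000_000).T
logZ = t * np.log1p(alpha) + rng.normal(mu_i, sig_i, size=t.size)

# Empirical log-log slope of the size PDF in the far tail vs. the predicted exponent
hist, edges = np.histogram(logZ, bins=400, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
tail = (centers > np.quantile(logZ, 0.99)) & (hist > 0)
# d(log p_Z)/d(log Z) = d(log p_Y)/dY - 1 for Y = log Z
slope = np.polyfit(centers[tail], np.log(hist[tail]), 1)[0] - 1.0
print("empirical tail exponent :", round(slope, 2))
print("predicted gamma_3inf    :", round(-(1.0 + 1.0/(2*sig_a*sig_t*R)), 2))
```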
## 3 Statistics of the COVID-19 pandemic and its evolution
The propagation of the pandemic can be considered as a typical example of the geometrically growing system. In spite of seasonal rises and falls, the lock-down measures, the administration of vaccines and the appearance of variants, the propagation of the pandemic had been accelerating for over 2 years since the outbreak. The power-law distribution of the infected in countries was reported in the early stage when the pandemic was propagating between countries (Beare & Toda, 2020; Blasius, 2020). However, once the propagation between countries had been saturated, the distribution should deviate from the power law.
Chol-jun (2022) showed that the distribution of accumulated infected in countries in May 2021 could be approximated by the log-CS excellently. Note that the approximation is not a best-fitting but derived from the history of the pandemic. In fact, distributions of the age and growth are similar to the normal and their correlation turned out to be positive.2 Figure 1 shows the consistency between observation and the log-CS approximation at several stages of the pandemic: as examples, in late February 2020 (the early stage), late July 2020 (the saturation of propagation between countries) and early February 2022 (the propagation of the Omicron variant).4 Especially for July 2020, the observation curve is weaving around the log-CS approximation. Distribution in early February 2022 seems to be getting distorted by the unprecedented quick propagation of the Omicron variant.
Footnote 4: Data for COVID-19 propagation is available, for example, at the website of Our World in Data [https://ourworldindata.org/coronavirus/](https://ourworldindata.org/coronavirus/).
In the above count and probability histograms we could hardly sense a change of the slope of the probability
Figure 1: The distribution of the accumulated infected cases of the COVID-19 pandemic in countries and its evolution. The distribution at several stages of propagation and the log-CS\({}_{X1}\) approximation in (a) count histogram and (b) the probability density. The evolution of (c) the tail exponent of the distribution in probability (see (b)) and (d) the variance in log-normal approximation.
curve. If we try to use a maximum likelihood (ML) estimation of the power-law exponent (Newman, 2005)
\[\hat{\gamma}=1+N\left(\sum_{i=1}^{N}\log\frac{z_{i}}{z_{\min}}\right)^{-1}, \tag{10}\]
where \(z_{i}\) stands for the size of the data items and \(N\) for the number of the items, we should set \(z_{\min}\), i.e. the lowest allowable size or truncation size. However, the distribution is not the power law over the whole domain of size but only in the tail part of large sizes. We could set \(z_{\min}\) as the modal (most probable) size in the probability histogram (Fig. 1(b)) and determine the tail exponent. Figure 1(c) shows that the tail exponent appears to decrease over the whole past period in spite of local rises.
We could propose another proxy for the tail exponent: the variance of the log-normal distribution. In fact, the distribution can be approximated by the log-normal (Eq. 8) as well. This distribution does not have an asymptotic exponent and its local slope is determined by the variance \(F\) (Eq. 5). We can estimate this variance also in the maximum likelihood approach:
\[\hat{F}^{2}=\frac{1}{N}\sum_{i=1}^{N}(\log z_{i}-\hat{G})^{2},\qquad\hat{G}= \frac{1}{N}\sum_{i=1}^{N}\log z_{i}. \tag{11}\]
This approach has an advantage over the above evaluation of the tail exponent because the selection of an optimal truncation size \(z_{\min}\) does not matter. In both the log-CS and log-normal distributions the slope increases for bigger sizes, so that the tail exponent could be evaluated as greater for a greater truncation size and vice versa even though the distribution itself remains the same. A greater variance corresponds to a smaller local slope or tail exponent. In the evolution of the COVID-19 pandemic, the variance seems to increase (Fig. 1(d)), which is consistent with the decreasing tail exponent mentioned above.
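Both proxies in Eqs. (10) and (11) are a few lines of code; the sketch below evaluates them for an arbitrary array of sizes (here synthetic Pareto-like data with a known exponent), with \(z_{\min}\) chosen by the caller, e.g. as the modal size of the histogram.

```python
import numpy as np

def tail_exponent(z, z_min):
    """Maximum-likelihood power-law exponent of Eq. (10) for sizes above z_min."""
    z = np.asarray(z, float)
    z = z[z >= z_min]
    return 1.0 + z.size / np.sum(np.log(z / z_min))

def lognormal_params(z):
    """Maximum-likelihood standard deviation F and mean G of log z, Eq. (11)."""
    logz = np.log(np.asarray(z, float))
    G = logz.mean()
    F = np.sqrt(np.mean((logz - G) ** 2))
    return F, G

# Synthetic test data whose tail exponent is 2.5 by construction
rng = np.random.default_rng(1)
z = (rng.pareto(1.5, size=50_000) + 1.0) * 100.0

print("gamma_hat:", round(tail_exponent(z, z_min=200.0), 2))   # ~2.5
F, G = lognormal_params(z)
print("F_hat:", round(F, 2), " G_hat:", round(G, 2))
```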
## 4 Statistics for population in city and in country
The distribution of population in city is a classical instance of the power-law. The growth of population over centuries shows an exponential or geometrical profile though sometimes was so saturated that expressed by the logistic function. Therefore, we can express the evolution of population by the geometrically growing system (GGS).
First, we analyzed the population in cities within a country using data in stellarium-0.21.1.5 Analyzing the population in cities of the U.S., Gabaix (1999) indicated that the populations of the biggest cities follow the power-law distribution and that the Zipf exponent is almost unity. We obtained a Zipf exponent of 1.29 for the U.S. (Fig. 2(a)), which differs from 1 according to Gabaix (1999), and 0.639 for Iraq, for example. We perform the best-fitting analysis for the population in cities of the U.S. with various approximations: the log-CS, the log-normal and the power-law (Fig. 2(b)). We infer the best-fit parameters by a Markov Chain Monte Carlo (MCMC) method, especially making use of the Metropolis-Hastings algorithm (Hastings, 1970; Metropolis et al., 1953). \(R^{2}\) with the best-fit parameters is evaluated: \(R^{2}=0.9956\) for the log-CS, \(R^{2}=0.9905\) for the power-law and \(R^{2}=0.9833\) for the log-normal approximation. Therefore, we can prefer the log-CS as the closest approximation in considering the population in cities within a country. Ioannides & Skouras (2013) claimed that most cities in the U.S. obey a log-normal, but the upper tail and therefore most of the population obeys a power law. In fact, the log-CS and log-normal distributions are almost indistinguishable except at infinity (Chol-jun, 2022).
Footnote 5: We use the dataset for cities compiled as observation locations on the globe in stellarium-0.21.1, which is an open-source astronomical software. The software is available at Stellarium Github webpage [https://github.com/stellarium/stellarium/releases/](https://github.com/stellarium/stellarium/releases/). The dataset covers \(\sim\)24,000 cities with their location, population and other information gathered between 2006 and 2019. Because of discontinuity in lower population, we limit cities to the population over 20,000.
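The MCMC fitting mentioned above can be reproduced with a plain Metropolis-Hastings random walk over the distribution parameters. The sketch below fits a log-normal to raw sizes by sampling its log-likelihood; it is a schematic re-implementation with arbitrary proposal widths and synthetic data, not the exact procedure used for Fig. 2.

```python
import numpy as np

def log_likelihood(params, z):
    """Log-likelihood (up to a constant) of a log-normal with scale F and location G."""
    F, G = params
    if F <= 0:
        return -np.inf
    logz = np.log(z)
    return np.sum(-np.log(F * z) - 0.5 * ((logz - G) / F) ** 2)

def metropolis_hastings(z, start, step=(0.05, 0.05), n_iter=20_000, seed=2):
    rng = np.random.default_rng(seed)
    chain, current = [np.array(start, float)], log_likelihood(start, z)
    for _ in range(n_iter):
        proposal = chain[-1] + rng.normal(0.0, step)
        new = log_likelihood(proposal, z)
        # accept with probability min(1, exp(new - current))
        if np.log(rng.uniform()) < new - current:
            chain.append(proposal); current = new
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)

# Synthetic "city sizes" drawn from a known log-normal to test the sampler
rng = np.random.default_rng(3)
z = np.exp(rng.normal(11.0, 1.3, size=5_000))

chain = metropolis_hastings(z, start=(1.0, 10.0))
F_est, G_est = chain[5_000:].mean(axis=0)       # discard burn-in
print(f"F ~ {F_est:.2f} (true 1.3), G ~ {G_est:.2f} (true 11.0)")
```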
Next, we perform the best-fitting analysis for the distribution of population in countries and areas over the world, making use of World Population Prospects (WPP) 2015 dataset6 with the approximations (Fig. 2(c)). With the best-fit parameters inferred by the MCMC method, \(R^{2}\) is evaluated: \(R^{2}=0.9851\) for the log-CS, \(R^{2}=0.9380\) for the power-law and \(R^{2}=0.9920\) for the log-normal approximation. Therefore, in this case we can prefer the log-normal as the closest approximation.
Footnote 6: The data is available at the website of World Population Prospects [https://population.un.org/wpp/](https://population.un.org/wpp/).
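The Metropolis–Hastings fitting used for these comparisons can be sketched as follows. This is only an illustration of the accept/reject step and of the \(R^{2}\) evaluation: the `model` callable, the initial guess `theta0`, the proposal scale `step` and the unit error scale are all placeholders, not the exact choices behind the figures.

```python
import numpy as np

def metropolis_fit(x, y, model, theta0, step, n_iter=50_000, rng=None):
    """Metropolis-Hastings sampling of model parameters with a Gaussian-error
    likelihood and flat priors; returns the highest-posterior sample and the
    R^2 of the corresponding fit."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)

    def log_post(t):
        # log-posterior up to a constant: Gaussian errors with unit scale
        return -0.5 * np.sum((y - model(x, t)) ** 2)

    lp = log_post(theta)
    best, best_lp = theta.copy(), lp
    for _ in range(n_iter):
        proposal = theta + rng.normal(0.0, step, size=theta.shape)  # symmetric proposal
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:                     # accept/reject step
            theta, lp = proposal, lp_prop
            if lp > best_lp:
                best, best_lp = theta.copy(), lp
    ss_res = np.sum((y - model(x, best)) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return best, 1.0 - ss_res / ss_tot
```

One would call `metropolis_fit` with `x` the (logarithmic) size bins, `y` the (logarithmic) frequencies, and `model` one of the candidate forms (power-law, log-normal or log-CS), comparing the returned \(R^{2}\) values as in the text.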
On the other hand, we try to apply our GGS approach. We expect a correlation between the age and the growth2. For that, we have inspected the ages of countries in the Korean Great Encyclopedia7. We date the starting epoch of a country by the appearance of its first administration, e.g. the first dynasty or city-state, the independence and so on. However, for many African and American countries, we should consider that the
Figure 2: The distribution of population in cities and countries and its evolution. (a) A plot of log(Rank) vs. log(Population) for cities of the U.S. The linear regression gives log(Rank) = 6.1946 - 0.6387 log(Population). (b) The best-fitting to the distribution of population in cities of the U.S. The log-CS appears the closest. (c) The population in countries and areas over the world in 2015. The evolution of (d) the tail exponent and (e) the variance in the log-normal approximation for population in countries and areas over the world.
establishment of colonies greatly changed the composition of the population in those countries. We are also afraid that, in extracting typical dates, we may rely on missing or distorted official records of the real history of many countries and areas. The growth rate is evaluated for each country under the assumption that the population originated from a single couple, Adam and Eve, the same approximation applied to the COVID-19 pandemic in Chol-jun (2022). Surprisingly, the growth rate and the age show an almost exactly inverse relation, whose exponent is close to unity (Fig. 3(a)). This gives a negative correlation between the age and the growth and, of course, we could then expect the world population to follow the log-normal distribution.
Also, from Britannica8 we extracted the date of the first habitation by a tribe or of the first immigration. This kind of age, which could be called the "habitation age," is much older than the previous "administration age." But a negative correlation is still obtained (Fig. 3(b)). We also considered another kind of age, extrapolated from the current growth of population: the origin of the age is set so that the initial population was again a couple. We call such an age the "extrapolation age." Figure 3(c) shows the relation between the age and the growth obtained by extrapolation from the period 1950-2015 of the WPP dataset. Again a negative correlation is found. For any kind of age, the inverse relation between the age and the growth still holds, and its exponent is near unity.
Footnote 8: Encyclopedia Britannica Ultimate. Reference Suite. Chicago: Encyclopaedia Britannica, 2014.
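A minimal sketch of how these growth rates and the log-log relations of Fig. 3 can be reproduced is given below, assuming the "Adam and Eve" initial condition \(z_{0}=2\); the input arrays are hypothetical, and the published correlation coefficients may have been computed on a different scale, so this is only illustrative.

```python
import numpy as np

def growth_from_age(population, age, z0=2.0):
    """Annual growth rate alpha implied by geometric growth z = z0 (1 + alpha)^t,
    assuming the population started from an initial couple z0 = 2."""
    z = np.asarray(population, dtype=float)
    t = np.asarray(age, dtype=float)
    return (z / z0) ** (1.0 / t) - 1.0

def loglog_slope_and_corr(age, growth):
    """Slope and Pearson correlation of log(growth) vs log(age),
    analogous to the relations shown in Fig. 3(a)-(c)."""
    x = np.log10(np.asarray(age, dtype=float))
    y = np.log10(np.asarray(growth, dtype=float))
    slope, _ = np.polyfit(x, y, deg=1)
    return slope, np.corrcoef(x, y)[0, 1]
```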
We analyze this fact. From Eq. (1) we can derive
\[\log z-\log z_{0}=t\log(1+\alpha)\approx t\cdot\alpha. \tag{12}\]
If \(t\alpha=b\) and \(b\) is determined only up to a factor of \(k\), then \(t\) or \(\alpha\) are also determined only up to a factor of \(k\), and in the log-log diagram of \(t\) vs. \(\alpha\) a stripe of width \(\log k\) appears (Fig. 3(d)). This stripe certainly has slope \(-1\), so a negative correlation between \(\alpha\) and \(t\) is obtained. For a positive correlation to arise, this stripe must cover the whole range of \(t\) in the dataset. Therefore, it must hold that \(k\geqslant\sqrt{\frac{t_{\rm max}}{t_{\rm min}}}\), which we can rewrite using Eq. (12):
\[\frac{\left(\log z-\log z_{0}\right)_{\rm max}}{\left(\log z-\log z_{0} \right)_{\rm min}}\geq\frac{t_{\rm max}}{t_{\rm min}}, \tag{13}\]
Figure 3: The relation between the age and the growth for the population of countries and areas over the world, assuming the extreme initial condition. (a) The relation between an “administration age” and the corresponding growth. The slope is \(-0.9943\) in log-log scale and the correlation between the age and the growth is \(R=-0.6191\). (b) For a “habitation age,” the slope is \(-0.9281\) and the correlation is \(R=-0.3599\). (c) For an “extrapolation age,” the slope is \(-0.9771\) and the correlation is \(R=-0.3911\). (d) The difference in size forms a stripe, which corresponds to a negative correlation.
where \(t_{\max}\) and \(t_{\min}\) could be appropriate extremes of the dataset, e.g. of the \(3\sigma\) region. This might be a necessary condition for a positive correlation between \(\alpha\) and \(t\). To summarize, we can state a theorem.
**Theorem 1**.: _The geometrically growing system can have a positive correlation between the growth and the age only if Eq. (13) is satisfied._
From the theorem we could derive another conclusion:
**Corollary 1**.: _If new items of the lowest size are continuously born within the geometrically growing system, the system should be approximated by the log-normal. On the other hand, the system can be approximated by the log-CS after the creation of new items of the lowest size has stopped._
For the countries over the world, if we take the initial condition \(z_{0}=2\) (a couple) and consider that the maximum and minimum populations of countries are now \(z_{\max}=10^{9}\), \(z_{\min}=10^{3}\) with \(t_{\max}=5000\) and \(t_{\min}=50\), then we obtain 100 on the r.h.s. and only about 3 on the l.h.s. of Eq. (13). However, if we considered a non-constant initial condition (as in real circumstances), the l.h.s. could become so much greater that the log-CS approximation would be applicable. For the case of the COVID-19 pandemic, \(t_{\max}-t_{\min}\approx 120\) and \(z_{\min}=1,z_{0}=1\), so we can apply the log-CS soon after the saturation of propagation between countries. Note that Eq. (13) is not a sufficient condition.
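Spelling this out with the assumed values \(z_{0}=2\), \(z_{\max}=10^{9}\), \(z_{\min}=10^{3}\), \(t_{\max}=5000\), \(t_{\min}=50\):

\[\frac{\left(\log z-\log z_{0}\right)_{\max}}{\left(\log z-\log z_{0}\right)_{\min}}=\frac{\log(10^{9}/2)}{\log(10^{3}/2)}\approx 3.2\qquad\text{while}\qquad\frac{t_{\max}}{t_{\min}}=\frac{5000}{50}=100,\]

so the necessary condition (13) indeed fails by a wide margin under this constant initial condition.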
We inspect the evolution of the population distribution over the world. The WPP 2015 dataset provides the population of countries and areas over the period 1950-2015. As mentioned above, the upper slope of the distribution can be evaluated by both methods: the tail exponent and the variance in the log-normal approximation. Figures 2(d) and 2(e) show the same flattening trend of the slope as in the case of the COVID-19 pandemic. This means that the variances of the parameters, such as the age and the growth, grow in such a way that the variance of the size distribution grows and the tail exponent decreases.
The tail exponent for the population of cities within a country also evolves. Citing previous works, Gabaix & Ioannides (2004) indicated that the tail exponent for the U.S. decreased in the period from 1900 to 1990, implying a greater concentration. Gonzales-Val (2010) confirmed a monotonic decrease of the tail exponent with time, provided the truncation number of cities is kept at 10,000. Interestingly, observing data for the dynamics of cities in the central and eastern European (CEE) countries during 1970-2007 (Necula et al., 2010), we find that in most countries the exponent has an almost inverse relation with the population itself: if the population increases, the exponent decreases and vice versa (Fig. 4). The exponent for European cities in the Middle Ages seems to decrease after 1500 (Bairoch, Batou & Chevre, 1987; Gonzales-Val, 2019), and only then was Zipf's law reported to emerge for cities in Europe (Dittmar, 2011).
Considering countries quite different with respect to wealth, size and geography, Pinto, Lopes & Machado (2012) claimed that countries with higher wealth levels reveal higher values of the exponent, while most African countries show smaller values. In our simulation, the oldest Asian countries such as Iraq and China appear to have the smallest exponents. However, their reasoning is not so obvious: if it were right, the exponent over the world should increase as the world economy develops, yet the exponent is surely decreasing. Our approach gives a more natural explanation: younger countries may have a greater exponent and vice versa. In fact, most of the more developed countries are younger while the older countries are underdeveloped, so wealthier countries can appear to have a higher exponent, which, however, is coincidental rather than inevitable.
Gabaix & Ioannides (2004) related urbanization to economic factors, e.g. economic integration and international trade. Then why is the exponent increasing in some countries in spite of economic progress? Necula et al. (2010) proposed political factors as determinants of urbanization. We can give a statistical explanation that precedes, or subsumes, all the economic, political or other factors: for example, the exponent depends on the variances of the parameters.
## 5 Statistics of firm size
The power-law distribution appears widely in economic and financial phenomena (e.g. see Farmer & Geanakoplos, 2008). A power-law distribution of firm size, which can be measured by diverse properties, was reported long ago (Ijiri & Simon, 1977; Zipf, 1949). Making use of the Economic Census 1997, Axtell (2001) showed the power-law distribution of firm size measured by employees and by revenue. The firm size is a quantity apt to grow geometrically; in fact, we commonly evaluate the growth of a firm in terms of proportionality rather than additivity.
We analyse the data in Axtell (2001), where numerical data for the size of firms expressed by the number of employees (the employment size) were shown explicitly. In fact, the distribution has a convex form in the low-size part, which favors the log-CS over pure power-law modeling. Giovanni, Levchenko
Figure 4: Evolution of the population and the tail exponent for CEE countries. (a) Belarus, (b) Bulgaria, (c) Hungary, (d) Poland, (e) Romania, (f) Russia, (g) Ukraine, (h) Baltic states (Estonia, Latvia and Lithuania). The data are extracted from Necula et al. (2010), where the exponent is calculated by two methods: ordinary least squares (OLS) estimation for linear regression and maximum likelihood estimation (MLE). For some countries one of the two estimations of the exponent shows an abrupt rise and fall, so we neglect it. For most countries, except Bulgaria and Poland, we can observe an inverse relation between the population and the tail exponent in their evolution.
& Ranciere (2010) analyzed French firms and obtained a similar convex profile of the distribution. We neglect 0-size firms as Axtell did. We perform the best-fitting with the various approximations by the MCMC method (Fig. 5(a)). For the power-law fitting, \(R^{2}\) is the same as obtained by Axtell: \(R^{2}=0.9932\). The log-CS fitting gives a greater value: \(R^{2}=0.9987\). As expected, the dataset can also be approximated by the log-normal: \(R^{2}=0.9952\). This says that the log-CS could be the closest to the real dataset.
As mentioned above, the geometrically growing system can be modeled alternatively by either the log-CS or the log-normal, depending on the correlation between the growth and the age. Data for employment dynamics by firm age, 1987-2005, from the Census Bureau Business Dynamics Statistics and the Longitudinal Business Database, show that young firms have higher employment growth rates, if they survive, than older firms (Haltiwanger, Jarmin & Miranda, 2009, 2010). This might be because the growth of older or larger firms seems to be saturated due to market limitations, while these limitations do not affect younger firms, so the latter appear to have higher growth in spite of a higher establishment exit rate. Analyzing data from the EFIGE survey, which sampled French, Italian and Spanish firms in the period from 2001 to 2008, Navaretti, Castellani & Pieri (2012) showed that younger firms have a higher probability of experiencing high growth rates both in the short run (e.g. over 1 year) and in the long run (i.e. over their existing age). Therefore, we could expect a negative correlation between firm age and firm growth. This should lead to a log-normal fit of the distribution, though the real dataset seems closer to the log-CS. This might originate from a non-normal distribution of the age and the growth.
We trace the evolution of the distribution. We inspect data from the Census Bureau Business Dynamics Statistics (BDS), 1977-2014.9 Though the data show a lower slope in the lower-size part in contrast to the higher-size part, which indicates a convexity of the distribution (Fig. 5(b)), we can evaluate the tail exponent by linear regression, excluding both the lowest- and highest-size bins, because the highest bin has an inappropriate upper limit (infinity). Figure 5(c) shows clearly that the exponent decreases as time goes on. Therefore, the tail exponent evolves towards lower values, i.e. the distribution is flattening. Our approach gives the same reason as before: the variances of the age or the growth increase in such a way that the variance of the size increases and the distribution of size flattens.
Footnote 9: The data is available at website of Small Business Administration [https://www.sba.gov/sites/default/files/advocacy/](https://www.sba.gov/sites/default/files/advocacy/)
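A minimal sketch of the tail-exponent estimate described above is given here; the bin centres and probabilities are hypothetical placeholders, and one bin is dropped at each end as in the text.

```python
import numpy as np

def tail_exponent(bin_centers, probabilities, drop=1):
    """Tail exponent from a log-log linear regression on a probability histogram,
    excluding `drop` (>= 1) bins at each end (lowest- and highest-size bins)."""
    x = np.log10(np.asarray(bin_centers, dtype=float)[drop:-drop])
    y = np.log10(np.asarray(probabilities, dtype=float)[drop:-drop])
    slope, _ = np.polyfit(x, y, deg=1)
    return -slope      # the tail exponent is minus the log-log slope
```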
## 6 Conclusion and discussion
In this paper, we consider some special properties of the size distribution of the geometrically growing system (GGS) in pandemic, demographic and economic phenomena.
First, the distribution has a convexity in the lower-size part. This is not surprising: it represents only the modal (most probable) size, which is common to almost all distributions but absent in the power-law. In fact, the log-CS has an additional concavity and a singularity. In the above approximations the log-CS or the log-normal, both or either, dominate over the power-law. This means that the demographic, pandemic and economic distributions can be explained properly by the GGS. In fact, the difference between the log-CS and the log-normal is not great in most cases and not important. What matters is that both of them represent the distribution of the GGS, and the convexity appears in both of them. However, the profile of those distributions may change if the distribution of
Figure 5: Distribution of the employment size of firms and evolution of the tail exponent. (a) The log-CS, log-normal and power-law fittings to the distribution of the employment size of firms in the Census/SBA 1997 dataset. The evolution of (b) the histogram and (c) the tail exponent for firms’ employment size in the BDS 1977-2014 dataset. The tail exponent is evaluated by linear regression, not on a count histogram such as (b) but on a probability histogram such as (a).
parameters deviates from the normal. For the log-CS or the log-normal, the divergence problem that occurs for the pure power-law with certain exponents never appears.
If new items of low size are continuously born and flourish, this convexity gets fainter and the distribution appears closer to the power-law. However, once the number of items in the system is saturated, the number of low-size items decreases during the evolution, unless they are isolated from the ensemble and never grow, and a kind of roll-over in the lower-size part develops. In the early stage of such a period, the distribution of the GGS should be represented only by the log-normal, while long after the saturation of the number of items the log-CS fitting can become possible.
Second, while the parameters such as the age and the growth diffuse, the tail exponent decreases and the distribution flattens in the evolution of the system, which is often called spectral hardening. As mentioned above, the slope of the distribution depends, fully or partly, on the variances of the parameters. In many systems, such as Brownian motion, the variances of the parameters grow with time. The diversity of economic actions and the variance of economic growth are also accelerating with time. The second law of thermodynamics dictates only that matter should spread out by diffusion. However, matter is collecting and agglomerating throughout the universe. Although, from the physical viewpoint, this can be explained by gravitation and so on, from the statistical viewpoint the geometrically growing system can act as a reverse machine that converts the diffusion in parameter space into the concentration in the distribution of size.
The flattening distribution in turn implies an enlargement of the relative ratio in size between the highest- and lowest-size items within the system. This could explain urbanization, monopolization and so on. Urbanization proceeded already in ancient times, for example in ancient Rome. Urbanization can occur not only by migration due to economic and political reasons, but also by the stochastic nature of the growth itself, e.g., by different birth (or death) rates, or by a combination of all the former factors. If we apply this property to the wealth distribution, which has a geometrically growing trend and follows a power-law, we can expect an aggravation of the "rich-get-richer" process and of monopolization in the economic regime by its very nature, as long as money begets money.
It is interesting that the concentration might be compatible with, or even driven by, the diffusion. The growth in the GGS can give rise to diffusion in parameter space, which in turn leads to centralization in the space of sizes. The "rich-get-richer" phenomenon never implies that the ranking in the system is fixed, that is, that the richest or biggest item keeps its first rank forever. The rank is determined by the growth rate, the diversity of which can change with time. The richest occupation or the biggest city has alternated from era to era, as we have seen.
Properties of the geometrically growing system can be found in many other phenomena. We hope that our approach can contribute to the analysis of such problems.
## Conflict of interest
The author has no conflicts to disclose.
## Data availability
Data used in this paper are available at the website addresses indicated or by corresponding with the author.
|
2304.00315 | On the limiting problems for two eigenvalue systems and variations | Let $\Omega$ be a bounded, smooth domain. Supposing that $\alpha(p) +
\beta(p) = p$, $\forall\, p \in \left(\frac{N}{s},\infty\right)$ and
$\displaystyle\lim_{p \to \infty} \alpha(p)/{p} = \theta \in (0,1)$, we
consider two systems for the fractional $p$-Laplacian and a variation on the
first system. The first system is the following. $$\left\{\begin{array}{ll}
(-\Delta_p)^{s}u(x) = \lambda \alpha(p) \vert u \vert^{\alpha(p)-2} u \vert
v(x_0)\vert^{\beta(p)} & {\rm in} \ \ \Omega,\\ (-\Delta_p)^{t}v(x) = \lambda
\beta(p) \left(\displaystyle\int_{\Omega}\vert u \vert^{\alpha(p)} d x\right)
\vert v(x_0) \vert^{\beta(p)-2} v(x_0) \delta_{x_0} & {\rm in} \ \ \Omega,\\ u=
v=0 & {\rm in} \ \mathbb{R}^N\setminus\Omega, \end{array}\right. $$ where $x_0$
is a point in $\overline{\Omega}$, $\lambda$ is a parameter, $0<s\leq t<1$,
$\delta_x$ denotes the Dirac delta distribution centered at $x$ and $p>N/s$. A
variation on this system is obtained by considering $x_0$ to be a point where
the function $v$ attains its maximum.
The second one is the system $$\left\{\begin{array}{ll} (-\Delta_p)^{s}u(x) =
\lambda \alpha(p) \vert u(x_1) \vert^{\alpha(p)-2} u(x_1) \vert v(x_2)
\vert^{\beta(p)} \delta_{x_1} & {\rm in} \ \ \Omega,\\ (-\Delta_p)^{t}v(x) =
\lambda \beta(p) \vert u(x_1) \vert^{\alpha(p)} \vert v(x_2) \vert^{\beta(p)-2}
v(x_2) \delta_{x_2} & {\rm in} \ \ \Omega,\\ u= v=0 & {\rm in} \
\mathbb{R}^N\setminus\Omega, \end{array}\right. $$ where $x_1,x_2\in \Omega$
are arbitrary, $x_1\neq x_2$. Although we do not consider it here, a variation
similar to that on the first system can be solved by practically the same
method we apply.
We obtain solutions for the systems (including the variation on the first
system) and consider the asymptotic behavior of these solutions as
$p\to\infty$. We prove that they converge, in the viscosity sense, to solutions
of problems on $u$ and $v$. | Hamilton P Bueno, Aldo H S Medeiros | 2023-04-01T13:55:55Z | http://arxiv.org/abs/2304.00315v1 | # On the limiting problems for two eigenvalue systems and variations
###### Abstract.
Let \(\Omega\) be a bounded, smooth domain. Supposing that \(\alpha(p)+\beta(p)=p\), \(\forall\,p\in\left(\frac{N}{s},\infty\right)\) and \(\lim\limits_{p\to\infty}\alpha(p)/p=\theta\in(0,1)\), we consider two systems for the fractional \(p\)-Laplacian and a variation on the first system. The first system is the following.
\[\left\{\begin{array}{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u|^{\alpha(p) -2}u|v(x_{0})|^{\beta(p)}&\mbox{in}\ \ \Omega,\\ (-\Delta_{p})^{t}v(x)=\lambda\beta(p)\left(\int_{\Omega}|u|^{\alpha(p)}{\rm d }x\right)|v(x_{0})|^{\beta(p)-2}v(x_{0})\delta_{x_{0}}&\mbox{in}\ \ \Omega,\\ u=v=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega,\end{array}\right.\]
where \(x_{0}\) is a point in \(\overline{\Omega}\), \(\lambda\) is a parameter, \(0<s\leq t<1\), \(\delta_{x}\) denotes the Dirac delta distribution centered at \(x\) and \(p>N/s\).
A variation on this system is obtained by considering \(x_{0}\) to be a point where the function \(v\) attains its maximum. In this case, we denote \(x_{0}=x_{v}\).
The second one is the system
\[\left\{\begin{array}{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u(x_{1})|^{ \alpha(p)-2}u(x_{1})|v(x_{2})|^{\beta(p)}\delta_{x_{1}}&\mbox{in}\ \ \Omega,\\ (-\Delta_{p})^{t}v(x)=\lambda\beta(p)|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta( p)-2}v(x_{2})\delta_{x_{2}}&\mbox{in}\ \ \Omega,\\ u=v=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega,\end{array}\right.\]
where \(x_{1},x_{2}\in\Omega\) are arbitrary points, \(x_{1}\neq x_{2}\). Although we do not consider it here, a variation similar to that on the first system can be solved by practically the same method we apply.
We obtain solutions for the systems (including the variation on the first system) and consider the asymptotic behavior of these solutions as \(p\to\infty\). We prove that they converge, in the viscosity sense, to solutions of problems on \(u\) and \(v\).
Key words and phrases:fractional systems, variational methods, viscosity solutions 2020 Mathematics Subject Classification: 35R11, 35A15, 35D40
## 1. Introduction
In this paper we deal with different systems for the fractional \(p\)-Laplacian and study the behavior of their solutions \((u_{p},v_{p})\) as \(p\) goes to infinity: we prove that these solutions converge, in the viscosity sense, to solutions \((u_{\infty},v_{\infty})\) of related systems.
Let \(\Omega\subset\mathbb{R}^{N}\) be a bounded, smooth domain and, for each \(x\in\Omega\), let \(\delta_{x}\) be the Dirac mass concentrated at \(x\). Consider also functions \(\alpha,\beta\colon\left(\frac{N}{s},\infty\right)\to(1,\infty)\) satisfying
\[(h_{1})\ \alpha(p)+\beta(p)=p,\,\forall\,p\in\left(\frac{N}{s}, \infty\right);\] \[(h_{2})\ \lim\limits_{p\to\infty}\frac{\alpha(p)}{p}=\theta\in(0,1).\]
For each \(p>\frac{N}{s}\), we consider the system
\[\left\{\begin{array}{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u|^{\alpha(p)-2}u| v(x_{0})|^{\beta(p)}&\mbox{in}\ \ \Omega,\\ (-\Delta_{p})^{t}v(x)=\lambda\beta(p)\left(\int_{\Omega}|u|^{\alpha(p)}{\rm d }x\right)|v(x_{0})|^{\beta(p)-2}v(x_{0})\delta_{x_{0}}&\mbox{in}\ \ \Omega,\\ u=v=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega,\end{array}\right.\]
where \(x_{0}\) is a point in \(\overline{\Omega}\), \(\lambda\) is a parameter, \(0<s\leq t<1\) and \((-\Delta_{p})^{r}\) denotes the \(r\)-fractional \(p\)-Laplacian operator, which is defined, for any \(p>1\), by
\[(-\Delta_{p})^{r}\phi(x)=\lim_{\varepsilon\to 0}\int_{\mathbb{R}^{N}\setminus B_{\varepsilon}(x)}\frac{|\phi(x)-\phi(y)|^{p-2}(\phi(x)-\phi(y))}{|x-y|^{N+rp}}\,{\rm d}y \tag{1}\]
for any \(\phi\in C_{0}^{\infty}(\Omega)\), which is a dense subspace of \(W_{0}^{r,p}(\Omega)\). We also recall that
\[\left\langle(-\Delta_{p})^{r}u,\varphi\right\rangle:=\int_{\mathbb{R}^{N}} \int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p-2}(u(x)-u(y))(\varphi(x)-\varphi(y) )}{|x-y|^{N+rp}}{\rm d}x{\rm d}y\]
is the expression of \((-\Delta_{p})^{r}\) as an operator from \(W_{0}^{r,p}(\Omega)\) into its dual. (The definition of the space \(W_{0}^{r,p}(\Omega)\) will be given in the sequence.)
We first prove that, for each \(p>N/s\), this system has a unique solution. Then we consider the behavior of a sequence of these solutions as \(p\to\infty\) and prove that they converge uniformly to \((u_{\infty},v_{\infty})\), which are viscosity solutions of a related system. (Precise statements are given in the sequence.)
As a variation on system \((P_{p}^{1})\), we consider the system
\[\left\{\begin{array}{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u|^{\alpha(p) -2}u|v(x_{v})|^{\beta(p)}&\mbox{in}\ \ \Omega,\\ (-\Delta_{p})^{t}v(x)=\lambda\beta(p)\left(\int_{\Omega}|u|^{\alpha(p)}{\rm d }x\right)|v(x_{v})|^{\beta(p)-2}v(x_{v})\delta_{x_{v}}&\mbox{in}\ \ \Omega,\\ u=v=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega,\end{array}\right.\]
where \(x_{v}\) is a maximum point of \(v\) in \(\overline{\Omega}\). Observe that the first equation in \((P_{\infty}^{1})\) can be replaced by \((-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u|^{\alpha(p)-2}u\|v\|_{\infty}^{\beta( p)}\) in \(\Omega\). To solve the above system we apply the same method used to handle problem \((P_{p}^{1})\), see Remark 8.
We also handle the system
\[\left\{\begin{array}{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u(x_{1})|^{ \alpha(p)-2}u(x_{1})|v(x_{2})|^{\beta(p)}\delta_{x_{1}}&\mbox{in}\ \ \Omega,\\ (-\Delta_{p})^{t}v(x)=\lambda\beta(p)|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta( p)-2}v(x_{2})\delta_{x_{2}}&\mbox{in}\ \ \Omega,\\ u=v=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega,\end{array}\right.\]
where \(x_{1},x_{2}\in\Omega\) are arbitrary points, \(x_{1}\neq x_{2}\).
Of course, we could also consider the case where \(x_{u}\) and \(x_{v}\) are points of maxima of \(u\) and \(v\), respectively, since our reasoning also solves this case.
In Sections 2-5 we handle system \((P_{p}^{1})\), while system \((P_{\infty}^{1})\) is considered in Remark 8. Finally, in Section 6 we deal with problem \((P_{p}^{2})\).
## 2. Background, setting and description of results
Due to the appropriate Sobolev embedding, the solutions \((u,v)\) of both problems \((P_{p}^{1})\) and \((P_{p}^{2})\) must be continuous.
Since both equations in the system have the same homogeneity, \((P_{p}^{1})\) and \((P_{p}^{2})\) are actually eigenvalue problems. The eigenvalue problem for the \(s\)-fractional \(p\)-Laplacian operator was studied by Lindgren and Lindqvist in the pioneering paper
[9]. Precisely, they studied the problem
\[\left\{\begin{array}{ll}(-\Delta_{p})^{s}u=\lambda_{1}(s,p)|u|^{p-2}u(x)&\mbox{in } \ \Omega,\\ u=0&\mbox{in }\mathbb{R}^{N}\setminus\Omega.\end{array}\right. \tag{2}\]
The authors proved that the minimum of the Rayleigh quotient associated with (2), that is,
\[\lambda_{1}(s,p)=\inf_{u\in W^{s,p}_{0}(\Omega)\setminus\{0\}}\frac{[u]^{p}_{s,p}}{\|u\|_{p}^{p}}=\frac{[\phi_{p}]^{p}_{s,p}}{\|\phi_{p}\|_{p}^{p}},\]
is attained by a function that does not change sign in \(\Omega\).
In the case \(p=\infty\) of the same paper, Lindgren and Lindqvist denoted
\[\lambda_{1}(s,\infty)=\inf\left\{\frac{\left\|\frac{u(x)-u(y)}{|x-y|^{s}} \right\|_{\infty}}{\|u\|_{\infty}}\,:\,u\in W^{s,\infty}_{0}(\Omega)\setminus \{0\}\right\}\]
and showed that
\[\lambda_{1}(s,\infty)=\frac{1}{R^{s}}\qquad\mbox{and}\qquad\lim_{p\to\infty} \sqrt[p]{\lambda_{1}(s,p)}=\lambda_{1}(s,\infty),\]
where \(R=\max_{x\in\Omega}\,\mbox{dist}(x,\mathbb{R}^{N}\setminus\Omega)=\|\mbox{ dist}(\cdot,\mathbb{R}^{N}\setminus\Omega)\|_{\infty}\).
The results obtained in relation with Eq. (2) were extended by Del Pezzo and Rossi in [3] to the case of systems of the form
\[\left\{\begin{array}{ll}(-\Delta_{p})^{r}u(x)=\lambda\alpha(p)|u(x)|^{ \alpha(p)-2}u(x)|v(x)|^{\beta(p)}&\mbox{in }\ \ \Omega,\\ (-\Delta_{p})^{s}v(x)=\lambda\beta(p)|u(x)|^{\alpha(p)}|v(x)|^{\beta(p)-2}v( x)&\mbox{in }\ \ \Omega,\\ u=v=0&\mbox{in }\mathbb{R}^{N}\setminus\Omega,\end{array}\right. \tag{3}\]
when assumptions \((h_{1})\) and \((h_{2})\) are fulfilled. If for each \(p\in(\frac{N}{s},\infty)\) we denote
\[\lambda_{1,p}=\inf\left\{\frac{\frac{1}{p}[u]^{p}_{r,p}+\frac{1}{p}[v]^{p}_{ s,p}}{\int_{\Omega}|u|^{\alpha(p)}|v|^{\beta(p)}\,\mathrm{d}x}\,:\,(u,v)\in W ^{s,p}(\Omega),\ \ uv\neq 0\right\}\]
the authors showed that \(\lambda_{1,p}\) is a _principal eigenvalue_ (that is, an eigenvalue associated with an eigenfunction that does not change sign) and
\[\lambda_{1,p}^{\frac{1}{p}}\to\Lambda_{1,\infty}=\left[\frac{1}{R}\right]^{\theta r+(1-\theta)s}\ \ \mbox{as }\ p\to\infty. \tag{4}\]
More recently, Mihailescu, Rossi and Stancu-Dumitru [11] studied the system
\[\left\{\begin{array}{ll}-\Delta_{p}u(x)=\lambda\alpha(p)|u(x_{1})|^{\alpha( p)-2}u(x_{1})|v(x_{2})|^{\beta(p)}\delta_{x_{1}}&\mbox{in }\ \ \Omega,\\ -\Delta_{p}v(x)=\lambda\beta(p)|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta(p)-2}v (x_{2})\delta_{x_{2}}&\mbox{in }\ \ \Omega,\\ u=v=0&\mbox{on }\partial\Omega,\end{array}\right. \tag{5}\]
where \(x_{1},x_{2}\in\Omega\) are arbitrary points, \(x_{1}\neq x_{2}\). If \(x_{1}\) and \(x_{2}\) are points of maxima of \(u\) and \(v\), respectively, using arguments like those in [1, 5, 7], it can be proved that (5) is the limit, as \(r\to\infty\), of the problem
\[\left\{\begin{array}{ll}-\Delta_{p}u=\lambda\alpha(p)\|u\|_{r}^{\alpha(p)-r} |u|^{r}\|v\|_{r}^{\beta(p)}&\mbox{in }\ \ \Omega,\\ -\Delta_{p}v=\lambda\beta(p)\|u\|_{r}^{\alpha(p)}\|v\|_{r}^{\beta(p)-r}|v|^{r} &\mbox{in }\ \ \Omega,\\ u=v=0&\mbox{on }\partial\Omega,\end{array}\right. \tag{6}\]
which can be solved by classical minimization procedures.
As in [3], they proved that system (5) has a principal eigenvalue and studied the asymptotic behavior of the principal eigenvalues and corresponding positive eigenfunctions \(u_{p}\) and \(v_{p}\) as \(p\) goes to infinity. Mihailescu, Rossi and Stancu-Dumitru proved that they converge to \(u_{\infty}\) and \(v_{\infty}\), both viscosity solutions of the equation \(-\Delta_{\infty}w=0\) in \(\Omega\setminus\{x_{1},x_{2}\}\).
The main goal of this work is to study system \((P^{1}_{p})\). Note that this system is related to both systems (3) and (5). In the last section of this article, we make clear that the method used to solve system \((P^{1}_{p})\) also applies to system \((P^{2}_{p})\), thus generalizing system (5) from [11] to the fractional \(p\)-Laplacian operator.
Due to the presence of the Dirac mass \(\delta_{x}\), it is more natural to compare the present work with [11]. We note that the integral form of the fractional \(p\)-Laplacian is more difficult to handle than that of the \(p\)-Laplacian. Also, in [11], one has the convergence
\[\|\nabla u\|_{L^{p}(\Omega)}\to\|\nabla u\|_{L^{\infty}(\Omega)},\ \ \mbox{for all}\ \ u\in W^{1,p}_{0}(\Omega)\]
in the \(p\)-Laplacian case, which does not happen when we are dealing with the Gagliardo semi-norm. Furthermore, a direct calculation with the distance function \(\mbox{dist}(x,\mathbb{R}^{N}\setminus\Omega)\) shows that \(|\nabla\mbox{dist}(x,\mathbb{R}^{N}\setminus\Omega)|=1\), but this is not valid in our case, making it more difficult to estimate the solutions of system \((P^{2}_{p})\). Moreover, the presence of the integral term in \((P^{1}_{p})\) changes the equation that the viscosity solutions \(u_{\infty}\) and \(v_{\infty}\) satisfy, see Theorem 4.
In turn, we will show that the eigenvalues of \((P^{1}_{p})\) converge, as \(p\to\infty\), to the same value \(\Lambda_{1,\infty}\) given by (4), a result obtained in [3].
We introduce the notation used while handling problem \((P^{1}_{p})\). In the last section of this article, we consider problem \((P^{2}_{p})\) and make the necessary adjustments.
For each \(0<r<1\) and \(p\in[1,\infty]\), we consider the Sobolev spaces \(W^{r,p}(\Omega)\)
\[W^{r,p}(\Omega)=\left\{u\in L^{p}(\Omega)\,:\,\int_{\Omega}\int_{\Omega} \frac{|u(x)-u(y)|^{p}}{|x-y|^{N+rp}}\mbox{d}x\mbox{d}y<\infty\right\},\]
and also the spaces
\[W^{r,p}_{0}(\Omega)=\left\{u\in L^{p}(\mathbb{R}^{N})\,:\,u=0\ \mbox{in}\ \ \mathbb{R}^{N}\setminus\Omega\ \mbox{and}\ [u]_{r,p}<\infty\right\},\]
where
\[[u]^{p}_{r,p}=\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{|u(x)-u(y)|^{p} }{|x-y|^{N+rp}}\mbox{d}x\mbox{d}y.\]
We recall that, for \(0<s\leq t<1\) and \(1<p<\infty\), there exists a constant \(C>0\) depending only on \(s\), \(N\) and \(p\) such that
\[\|f\|_{W^{s,p}(\Omega)}\leq C\|f\|_{W^{t,p}(\Omega)},\ \ \mbox{for all}\ \ f\in W^{t,p}(\Omega).\]
In particular, \(W^{t,p}_{0}(\Omega)\hookrightarrow W^{s,p}_{0}(\Omega)\), for more details see [4]. So, we can consider only the space \(W^{s,p}_{0}(\Omega)\).
For each \(0<s\leq t<1\), \(x_{0}\in\Omega\) fixed and \(p\in[1,\infty]\), we denote \(X_{s,t,p}(\Omega)=W^{s,p}_{0}(\Omega)\times W^{t,p}_{0}(\Omega)\) and
\[X^{*}_{s,t,p}(\Omega)=\left\{(u,v)\in X_{s,t,p}(\Omega)\,:\,\left(\int_{ \Omega}|u|^{\alpha(p)}\mbox{d}x\right)v(x_{0})\neq 0\right\}.\]
If \(C_{0}(\overline{\Omega})\) stands for the space \(\left\{u\in C(\Omega)\,:\,u=0\ \mbox{in}\ \mathbb{R}^{N}\setminus\Omega\right\}\), it is well-known that the immersion \(W^{s,p}_{0}(\Omega)\hookrightarrow C_{0}(\overline{\Omega})\) is compact for any \(p\in\left(\frac{N}{s},\infty\right)\). The compactness of this immersion is consequence of the following Morrey's type inequality
(see [4])
\[\sup_{y\neq x}\frac{|u(x)-u(y)|}{|x-y|^{s-\frac{N}{p}}}\leq C[u]_{s,p},\ \ \forall u\in W_{0}^{s,p}(\Omega), \tag{7}\]
which holds whenever \(p>\frac{N}{s}\). If \(p\) is sufficiently large, the positive constant \(C\) in (7) can be chosen uniformly with respect to \(p\) (see [8], Remark 2.2).
Thus, denoting
\[X_{0}(\Omega)=C_{0}(\overline{\Omega})\times C_{0}(\overline{\Omega}),\]
we have the compact immersion
\[X_{s,t,p}(\Omega)\hookrightarrow X_{0}(\Omega)\]
for any \(p\in\left(\frac{N}{s},\infty\right)\).
For \(p\in\left(\frac{N}{s},\infty\right)\) and \(u,v\in X_{s,t,p}^{*}\), we define
\[Q_{s,t,p}(u,v)=\frac{\frac{1}{p}[u]_{s,p}^{p}+\frac{1}{p}[v]_{t,p}^{p}}{ \left(\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)|v(x_{0})|^{\beta(p)}}\]
and
\[\Lambda_{1}(p)=\inf_{(u,v)\in X_{s,t,p}^{*}(\Omega)}Q_{s,t,p}(u,v).\]
Straightforward calculations show that
\[\frac{\mathrm{d}}{\mathrm{d}t}\bigg{|}_{t=0}\left(\frac{1}{p}[u+t\varphi]_{r, p}^{p}\right)=\big{\langle}(-\Delta_{p})^{r}u,\varphi\big{\rangle},\ \ \forall\varphi\in W_{0}^{r,p}(\Omega). \tag{8}\]
If \(0<m<\infty\), then
\[\frac{\mathrm{d}}{\mathrm{d}t}\bigg{|}_{t=0}|(u+t\varphi)(x)|^{m}=m|u(x)|^{m- 2}u(x)\varphi(x),\ \ \forall\,\varphi\in L^{m}(\Omega). \tag{9}\]
We also have, for all \(1<\alpha<\infty\) and \(\varphi\in L^{\alpha}(\Omega)\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\bigg{|}_{t=0}\left(\int_{\Omega}|(u+t\varphi)( x)|^{\alpha}\mathrm{d}x\right)|v(x_{0})|^{\beta}=\alpha\left(\int_{\Omega}|u(x)|^{ \alpha-2}u(x)\varphi(x)\mathrm{d}x\right)|v(x_{0})|^{\beta}. \tag{10}\]
**Definition 1**.: _A pair \((u,v)\in X_{s,t,p}(\Omega)\) is a weak solution to \((P_{p}^{1})\) if_
\[\langle(-\Delta_{p})^{s}u,\varphi\rangle+\big{\langle}(-\Delta_{ p})^{t}v,\psi\big{\rangle}= \lambda\left[\alpha(p)|u|^{\alpha(p)-2}u(x)|v(x_{0})|^{\beta(p)} \varphi(x)\right. \tag{11}\] \[+\left.\beta(p)\left(\int_{\Omega}|u(x)|^{\alpha(p)}\mathrm{d}x \right)|v(x_{0})|^{\beta(p)-2}v(x_{0})\psi(x_{0})\right]\]
_for all \((\varphi,\psi)\in X_{s,t,p}(\Omega)\)._
The functional at the left-hand side of (11) is the Gateaux derivative of the Frechet differentiable functional \((u,v)\mapsto\frac{1}{p}[u]_{s,p}^{p}+\frac{1}{p}[v]_{t,p}^{p}\). However, the functional at the right-hand side of (11) is merely related to the right-hand Gateaux-derivative of the functional \((u,v)\mapsto\lambda\left(\int_{\Omega}|u(x)|^{\alpha(p)}\mathrm{d}x\right)|v(x _{0})|^{\beta(p)}\), thus motivating the definition of \(Q_{p}\) and \(\Lambda_{1}(p)\). It is noteworthy that minimizing that integral term is enough to minimize the whole system.
By applying minimization methods, our first result shows that the problem \((P_{p}^{1})\) has a principal eigenvalue - and therefore, a weak solution - for each \(p\in\left(\frac{N}{s},\infty\right)\)
Its proof simply adapts Theorem 1 in [11]. We sketch the proof for the convenience of the reader in Section 3.
**Theorem 1**.: _For each \(p\in\left(\frac{N}{s},\infty\right)\) we have_
1. \(\Lambda_{1}(p)>0\)_;_
2. _there exists_ \((u_{p},v_{p})\in X_{s,t,p}^{*}(\Omega)\) _such that_ \[\Lambda_{1}(p)=Q_{s,t,p}(u_{p},v_{p}),\] _with_ \(u_{p},v_{p}>0\) _and_ \(\left(\int_{\Omega}|u_{p}|^{\alpha(p)}\mathrm{d}x\right)|v_{p}(x_{0})|^{ \beta(p)}=1\)_._
The next step is to look for an operator that will motivate the study of the problem \((P_{p}^{1})\) as \(p\to\infty\). So, for each \(0<s\leq t<1\) and \(p\in\left(\frac{N}{s},\infty\right)\) we denote
\[S_{p} =\left\{(u,v)\in X_{s,t,p}(\Omega)\,:\,\left(\int_{\Omega}|u|^{ \alpha(p)}\mathrm{d}x\right)|v(x_{0})|^{\beta(p)}=1\right\}\] \[S_{\infty} =\left\{(u,v)\in X_{s,t,\infty}(\Omega)\,:\,\|u\|_{\infty}^{ \theta}|v(x_{0})|^{1-\theta}=1\right\},\]
where \(\theta\) was defined in \((h_{2})\).
Furthermore, for each \(0<s\leq t<1\) and \(p\in\left(\frac{N}{s},\infty\right]\), we define the functions \(\chi_{S_{p}}:X_{0}(\Omega)\to[0,\infty]\) and \(F_{p}\colon X_{0}(\Omega)\to[0,\infty]\) by
\[\chi_{S_{p}}(u,v)=\left\{\begin{array}{ll}0,&\mbox{if}\quad(u,v)\in S_{p}; \\ \infty,&\mbox{otherwise}\end{array}\right. \tag{12}\]
and
\[F_{p}(u,v)=\left\{\begin{array}{ll}G_{p}(u,v)+\chi_{S_{p}}(u,v),&\mbox{if} \quad(u,v)\in X_{s,t,p}^{*}(\Omega);\\ \infty,&\mbox{otherwise},\end{array}\right. \tag{13}\]
with \(G_{p}\) defined by
\[G_{p}(u,v)=\left\{\begin{array}{ll}Q_{s,t,p}(u,v)^{\frac{1}{p}},&\mbox{if} \quad p\in(\frac{N}{s},\infty),\\ \frac{\max\left\{|u|_{s},|v|_{t}\right\}}{\|u\|_{\infty}^{ \theta}|v(x_{0})|^{1-\theta}},&\mbox{if}\quad p=\infty,\end{array}\right. \tag{14}\]
where, for \(0<\sigma<1\),
\[|u|_{\sigma}=\sup_{y\neq x}\frac{|u(x)-u(y)|}{|x-y|^{\sigma}}.\]
The method we apply is known as \(\Gamma\)-convergence, but all we use are the properties listed in Theorem 2. Once again, the next result follows from a straightforward adaptation of the proof of [11, Theorem 2].
**Theorem 2**.: _The function \(F_{\infty}\) satisfies the following properties._
1. _If_ \(\{(u_{p},v_{p})\}\) _is a sequence such that_ \((u_{p},v_{p})\to(u,v)\) _in_ \(X_{0}(\Omega)\)_, then_ \[F_{\infty}(u,v)\leq\liminf_{p\to\infty}F_{p}(u_{p},v_{p}).\]
2. _For each_ \((u,v)\in X_{0}(\Omega)\)_, there exists a sequence_ \(\{(U_{p},V_{p})\}\subset X_{0}(\Omega)\) _such that_ \((U_{p},V_{p})\to(u,v)\) _in_ \(X_{0}(\Omega)\) _and_ \[F_{\infty}(u,v)\geq\limsup_{p\to\infty}F_{p}(U_{p},V_{p}).\]
Thus, as a consequence of Theorem 2-\((i)\), we have
\[F_{\infty}(u,v)\leq\liminf_{p\to\infty}F_{p}(u_{p},v_{p}).\]
Applying this inequality to the solutions \((u_{p},v_{p})\) given by Theorem 1, we obtain the estimate
\[F_{\infty}(u_{\infty},v_{\infty})\leq\liminf_{p\to\infty}\Lambda_{1}(p)^{\frac{1}{p}}=\frac{1}{R^{s\theta+(1-\theta)t}}=\max\{|u_{\infty}|_{s},|v_{\infty}|_{t}\}, \tag{15}\]
where the last equality will be shown in the proof of Theorem 3. As a consequence of Theorem 2-\((ii)\) and (15), we can analyze problem \((P^{1}_{p})\) as \(p\to\infty\).
Therefore, considering Theorems 1 and 2, we study the behavior of the eigenvalues and eigenfunctions of problem \((P^{1}_{p})\) as \(p\to\infty\).
**Theorem 3**.: _Let \(\{p_{n}\}\) be a sequence converging to \(\infty\) and \((u_{p_{n}},v_{p_{n}})\) the solution of \((P^{1}_{p})\) given in Theorem 1. Passing to a subsequence if necessary, \(\{(u_{p_{n}},v_{p_{n}})\}_{n\in\mathbb{N}}\) converges uniformly to \((u_{\infty},v_{\infty})\in C^{0,s}_{0}(\overline{\Omega})\times C^{0,t}_{0}( \overline{\Omega})\). Furthermore_
1. \(u_{\infty}\geq 0\)_,_ \(v_{\infty}\geq 0\) _and_ \(\|u_{\infty}\|_{\infty}^{\theta}|v_{\infty}(x_{0})|^{1-\theta}=1\)_;_
2. \(\lim\limits_{n\to\infty}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}=\Lambda_{1,\infty}=\dfrac{1}{R^{s\theta+(1-\theta)t}}=\max\big\{|u_{\infty}|_{s},|v_{\infty}|_{t}\big\}\)_._
We recall the definition of a solution in the viscosity sense by considering the problem
\[\left\{\begin{array}{ll}\mathcal{L}_{\sigma,p}u=0&\mbox{in}\ \ \Omega,\\ u=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega,\end{array}\right. \tag{16}\]
for all \(p\in(1,\infty]\).
**Definition 2**.: _Let \(u\in C(\mathbb{R}^{N})\) satisfy \(u=0\) in \(\mathbb{R}^{N}\setminus\Omega\). The function \(u\) is a **viscosity supersolution** of (16) if_
\[(\mathcal{L}_{\sigma,p}\varphi)(x_{0})\leq 0\]
_for each pair \((x_{0},\varphi)\in\Omega\times C^{1}_{0}(\mathbb{R}^{N})\) such that_
\[\varphi(x_{0})=u(x_{0})\qquad\mbox{and}\qquad\varphi(x)\leq u(x)\ \ \forall x\in \mathbb{R}^{N}.\]
_On its turn, \(u\) is a **viscosity subsolution** of (16) if_
\[(\mathcal{L}_{\sigma,p}\varphi)(x_{0})\geq 0\]
_for all pair \((x_{0},\varphi)\in\Omega\times C^{1}_{0}(\mathbb{R}^{N})\) such that_
\[\varphi(x_{0})=u(x_{0})\qquad\mbox{and}\qquad\varphi(x)\geq u(x)\ \ \forall x\in\mathbb{R}^{N}.\]
_The function \(u\) is a **viscosity solution** to the problem (16) if \(u\) is both a viscosity super- and subsolution to problem (16)._
Finally, in Section 5, we prove that the solutions \(u_{\infty}\) and \(v_{\infty}\) given by Theorem 3 are viscosity solutions.
**Theorem 4**.: _Let \(0<s\leq t<1\). Then, the functions \(u_{\infty}\) and \(v_{\infty}\), given by Theorem 3, are viscosity solutions of the system_
\[\left\{\begin{array}{ll}\max\big{\{}\mathcal{L}_{s,\infty}u, \mathcal{L}_{s,\infty}^{-}u-\Lambda_{1,\infty}|u(x)|^{\theta}|v_{\infty}(x_{0 })|^{1-\theta}\big{\}}=0&\mbox{in}\ \ \Omega,\\ \mathcal{L}_{t,\infty}v=0&\mbox{in}\ \ \Omega\setminus\{x_{0}\},\\ u=v=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega,\\ v(x_{0})=v_{\infty}(x_{0}).\end{array}\right. \tag{17}\]
## 3. Some remarks on the proofs of Theorems 1 and 2
Since the proofs of Theorems 1 and 2 are simple adaptations of that one given in [11], we only sketch them for the convenience of the reader. For details, see [11, Theorem 1 and Theorem 2].
_Sketch of proof of Theorem 1._ Estimating the denominator in the definition of \(Q_{s,t,p}\) by means of the Young and Sobolev inequalities, one obtains \(\Lambda_{1}(p)>0\). Given a minimizing sequence \(\{(u_{n},v_{n})\}\) for \(Q_{s,t,p}\), by defining
\[U_{n}(x)=\frac{u_{n}(x)}{\left(\int_{\Omega}|u_{n}|^{\alpha(p)} \mathrm{d}x\right)^{\frac{1}{p}}|v_{n}(x_{0})|^{\frac{\beta(p)}{p}}}\]
and
\[V_{n}(x)=\frac{v_{n}(x)}{\left(\int_{\Omega}|u_{n}|^{\alpha(p)} \mathrm{d}x\right)^{\frac{1}{p}}|v_{n}(x_{0})|^{\frac{\beta(p)}{p}}},\]
we have that \((U_{n},V_{n})\in X_{s,t,p}(\Omega)\) satisfies \(\left(\int_{\Omega}|U_{n}(x)|^{\alpha(p)}\mathrm{d}x\right)|V_{n}(x_{0})|^{\beta(p)}=1\). Furthermore,
\[\lim_{n\to\infty}Q_{s,t,p}(U_{n},V_{n})=\lim_{n\to\infty}Q_{s,t,p}(u_{n},v_{n})=\Lambda_{1}(p),\]
guaranteeing the existence of \((u_{p},v_{p})\in X_{s,t,p}(\Omega)\) such that
\[\left(\int_{\Omega}|u_{p}|^{\alpha(p)}\mathrm{d}x\right)|v_{p}(x_{0})|^{\beta (p)}=1.\]
and
\[Q_{s,t,p}(u_{p},v_{p})=\Lambda_{1}(p).\]
For any \((\phi,\psi)\in X_{s,t,p}(\Omega)\), considering
\[g(t)=Q_{s,t,p}(u_{p}+t\phi,v_{p}+t\psi),\]
it follows that there exists \(t_{0}>0\) such that \(g(t)\geq g(0)=\Lambda_{1}(p)\) for \(|t|<t_{0}\). Since \(g\in C^{1}((-t_{0},t_{0}),\mathbb{R})\), we have \(g^{\prime}(0)=0\), from which it follows that \((u_{p},v_{p})\) is a weak solution to system \((P^{1}_{p})\). An argument similar to [9, Lemma 22] proves that \(u_{p}>0\) and \(v_{p}>0\) in \(\Omega\), showing that \(\Lambda_{1}(p)\) is a principal eigenvalue of system \((P^{1}_{p})\).
Sketch of proof of Theorem 2.: In order to prove \((i)\), suppose that \((u_{p},v_{p})\to(u,v)\in X_{0}(\Omega)\). Passing to a subsequence, we assume that \(\lim_{p\to\infty}F_{p}(u_{p},v_{p})=\liminf_{p\to\infty}F_{p}(u_{p},v_{p})\). It is not difficult to discard the case \((u,v)\notin X_{s,t,\infty}^{*}(\Omega)\cap S_{\infty}\). So, we consider the case \((u,v)\in X_{s,t,\infty}^{*}(\Omega)\cap S_{\infty}\), which implies \(\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}=1\). We can assume that \(F_{p}(u_{p},v_{p})\leq C<\infty\), since otherwise \((i)\) is valid. So, for \(p\) large enough, we have \((u_{p},v_{p})\in S_{p}\) and, if \(k>\frac{N}{s}\), then
\[\left(\int_{\Omega}\int_{\Omega}\frac{|u_{p}(x)-u_{p}(y)|^{k}}{|x-y|^{\left( \frac{N}{p}+s\right)k}}+\frac{|v_{p}(x)-v_{p}(y)|^{k}}{|x-y|^{\left(\frac{N}{ p}+t\right)k}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}\]
\[\leq 2^{\frac{1}{k}}|\Omega|^{2\left(\frac{1}{k}-\frac{1}{p}\right)}p^{\frac{ 1}{p}}\left[\frac{1}{p}[u_{p}]_{s,p}^{p}+\frac{1}{p}[v_{p}]_{t,p}^{p}\right]^ {\frac{1}{p}}.\]
Thus,
\[F_{p}(u_{p},v_{p}) =Q_{s,t,p}(u_{p},v_{p})=\left[\frac{1}{p}[u_{p}]_{s,p}^{p}+\frac{ 1}{p}[v_{p}]_{t,p}^{p}\right]^{\frac{1}{p}}\] \[\geq 2^{-\frac{1}{k}}|\Omega|^{2\left(\frac{1}{p}-\frac{1}{k} \right)}p^{-\frac{1}{p}}\left(\int_{\Omega}\int_{\Omega}\frac{|u_{p}(x)-u_{p}( y)|^{k}}{|x-y|^{\left(\frac{N}{p}+s\right)k}}+\frac{|v_{p}(x)-v_{p}(y)|^{k}}{|x-y| ^{\left(\frac{N}{p}+t\right)k}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}.\]
As \(p\to\infty\), it follows from the uniform convergence and Fatou's Lemma that
\[\liminf_{p\to\infty}F_{p}(u_{p},v_{p})\geq 2^{-\frac{1}{k}}|\Omega|^{-\frac{2}{k}} \left(\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^{k}}{|x-y|^{sk}}+\frac{|v(x)- v(y)|^{k}}{|x-y|^{tk}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}.\]
Making \(k\to\infty\), we obtain
\[\liminf_{p\to\infty}F_{p}(u_{p},v_{p})\geq\max\left\{|u|_{s},|v|_{t}\right\}= F_{\infty}(u,v), \tag{18}\]
concluding the proof of \((i)\).
Now we deal with the second claim. Take any \((u,v)\in X_{0}(\Omega)\) and initially suppose that \((u,v)\notin X_{s,t,\infty}^{*}(\Omega)\cap S_{\infty}\). Then \(F_{\infty}(u,v)=\infty\). Consider then a sequence of values \(p\to\infty\) and, for any \(p\in\left(\frac{N}{s},\infty\right)\) in the sequence, define \(u_{p}:=u\)
and \(v_{p}:=v\). Of course we have \((u_{p},v_{p})\to(u,v)\) as \(p\to\infty\) in \(X_{0}(\Omega)\). It is not difficult to discard the cases \(\left(\int_{\Omega}|u_{p}|^{\alpha(p)}\mathrm{d}x\right)|v_{p}(x_{0})|^{\beta(p) }\neq 1\). If, however, \((u,v)\in X_{s,t,\infty}^{*}(\Omega)\cap S_{\infty}\), consider then a sequence of values \(p\to\infty\) and, for any \(p\in\left(\frac{N}{s},\infty\right)\) in the sequence, define
\[U_{p}(x)=\frac{u(x)}{\left(\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)^{\frac{1}{p}}|v(x_{0})|^{\frac{\beta(p)}{p}}}\qquad\text{and}\qquad V_{p}(x)=\frac{v(x)}{\left(\int_{\Omega}|u|^{\alpha(p)}\mathrm{d}x\right)^{\frac{1}{p}}|v(x_{0})|^{\frac{\beta(p)}{p}}}.\]
Then \((U_{p},V_{p})\in S_{p}\) and
\[\limsup_{p\to\infty}F_{p}(U_{p},V_{p})=\max\left\{|u|_{s},|v|_{t}\right\}=F_{ \infty}(u,v),\]
completing the proof of \((ii)\).
## 4. Proof of Theorem 3
Let us denote
\[R=\max_{x\in\overline{\Omega}}\mathrm{dist}(x,\mathbb{R}^{N}\setminus\Omega)= \|\mathrm{dist}(.,\mathbb{R}^{N}\setminus\Omega)\|_{L^{\infty}(\Omega)}.\]
For a fixed \(x_{1}\in\Omega\) we consider the functions \(\phi_{R}\colon\overline{B_{R}(x_{1})}\to[0,R]\) and \(\psi_{R}\colon\overline{B_{R}(x_{0})}\to[0,R]\) given by
\[\phi_{R}(x)=R^{(\theta-1)t-s\theta}\left(R-|x-x_{1}|\right)_{+}^{s}\quad \text{and}\quad\psi_{R}(x)=R^{(\theta-1)t-s\theta}\left(R-|x-x_{0}|\right)_{+} ^{t}.\]
Of course we have \(\phi_{R}\in C_{0}^{0,s}(\overline{B_{R}(x_{1})})\) and \(\psi_{R}\in C_{0}^{0,t}(\overline{B_{R}(x_{0})})\). Furthermore,
\[\|\phi_{R}\|_{\infty}=R^{(\theta-1)(t-s)},\quad|\psi_{R}(x_{0})|=R^{\theta(t-s)}\quad\text{and}\quad|\phi_{R}|_{s}=|\psi_{R}|_{t}=R^{(\theta-1)t-s\theta}.\]
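These identities follow from a direct computation, sketched here only as a check of the exponents: since \(r\mapsto r^{\sigma}\) is \(\sigma\)-Hölder with constant \(1\) and \(x\mapsto(R-|x-x_{1}|)_{+}\) is \(1\)-Lipschitz,

\[\|\phi_{R}\|_{\infty}=R^{(\theta-1)t-s\theta}\,R^{s}=R^{(\theta-1)(t-s)},\qquad|\psi_{R}(x_{0})|=R^{(\theta-1)t-s\theta}\,R^{t}=R^{\theta(t-s)},\]

while the \(s\)- and \(t\)-Hölder seminorms of the truncated distance factors equal \(1\), so that \(|\phi_{R}|_{s}=|\psi_{R}|_{t}=R^{(\theta-1)t-s\theta}\).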
We can extend \(\phi_{R}\) and \(\psi_{R}\) to \(\overline{\Omega}\) by putting \(\phi_{R}=0\) in \(\mathbb{R}^{N}\setminus\overline{B_{R}(x_{1})}\) and \(\psi_{R}=0\) in \(\mathbb{R}^{N}\setminus\overline{B_{R}(x_{0})}\), so that \(\phi_{R},\psi_{R}\in C_{0}^{0,s}(\overline{\Omega})\), maintaining their Hölder seminorms. Additionally, we still have \(\phi_{R},\psi_{R}\in W_{0}^{1,m}(\Omega)\hookrightarrow W_{0}^{s,m}(\Omega)\) for all \(s\in(0,1)\) and \(m\geq 1\). For details, see [7, 9].
**Lemma 5**.: _For any fixed \(0<s\leq t<1\) we have_
\[\Lambda_{1,\infty}=\inf_{(u,v)\in X_{s,t,\infty}^{*}(\Omega)}\frac{\max\left\{|u|_{s},|v|_{t}\right\}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}=\frac{1}{R^{s\theta+(1-\theta)t}}.\]
Proof.: We note that we have
\[\|\phi_{R}\|_{\infty}^{\theta}|\psi_{R}(x_{0})|^{1-\theta}=R^{\theta(\theta-1 )(t-s)+\theta(1-\theta)(t-s)}=1\]
and therefore
\[\Lambda_{1,\infty}=\inf_{(u,v)\in X_{s,t,\infty}^{*}(\Omega)}\frac{\max\left\{|u|_{s},|v|_{t}\right\}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}\leq\frac{\max\left\{|\phi_{R}|_{s},|\psi_{R}|_{t}\right\}}{\|\phi_{R}\|_{\infty}^{\theta}|\psi_{R}(x_{0})|^{1-\theta}}=\frac{1}{R^{s\theta+(1-\theta)t}}.\]
Also note that, given \((u,v)\in X_{s,t,p}^{*}(\Omega)\), then \(u=0=v\) in \(\mathbb{R}^{N}\setminus\Omega\). Since \(u\) is continuous, there exists \(x_{1}\in\overline{\Omega}\) such that
\[\|u\|_{\infty}=|u(x_{1})|.\]
The compactness of \(\overline{\Omega}\) guarantees the existence of \(y_{x_{0}},y_{x_{1}}\in\partial\Omega\) such that
\[|x_{0}-y_{x_{0}}|=\mathrm{dist}(x_{0},\mathbb{R}^{N}\setminus\Omega)\quad\text {and}\quad|x_{1}-y_{x_{1}}|=\mathrm{dist}(x_{1},\mathbb{R}^{N}\setminus\Omega).\]
Thus, since \(u(y_{x_{1}})=v(y_{x_{0}})=0\), it follows
\[\|u\|_{\infty}^{\theta}=|u(x_{1})-u(y_{x_{1}})|^{\theta}\leq|u|_{s}^{\theta}|x_{1 }-y_{x_{1}}|^{s\theta}\leq|u|_{s}^{\theta}\,R^{s\theta}.\]
On the other hand,
\[|v(x_{0})|^{1-\theta}=|v(x_{0})-v(y_{x_{0}})|^{1-\theta}\leq|v|_{t}^{1-\theta}| x_{0}-y_{x_{0}}|^{t(1-\theta)}\leq|v|_{t}^{1-\theta}\,R^{t(1-\theta)}.\]
So, for any \((u,v)\in X_{s,t,p}^{*}(\Omega)\), we have
\[\frac{1}{R^{s\theta+t(1-\theta)}}=\frac{1}{R^{s\theta}\,R^{(1- \theta)t}} \leq\frac{|u|_{s}^{\theta}|v|_{t}^{1-\theta}}{\|u\|_{\infty}^{ \theta}|v(x_{0})|^{1-\theta}}\leq\frac{\left(\max\left\{|u|_{s},|v|_{t}\right\} \right)^{\theta}\left(\max\left\{|u|_{s},|v|_{t}\right\}\right)^{1-\theta}}{ \|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}\] \[=\frac{\max\left\{|u|_{s},|v|_{t}\right\}}{\|u\|_{\infty}^{\theta }|v(x_{0})|^{1-\theta}}.\]
Therefore,
\[\Lambda_{1,\infty}=\inf_{(u,v)\in X_{s,t,\infty}^{*}(\Omega)}\frac{\max\left\{|u|_{s},|v|_{t}\right\}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^{1-\theta}}\geq\frac{1}{R^{s\theta+(1-\theta)t}},\]
concluding the proof.
The next result is pivotal in our analysis of the asymptotic behavior of solutions in problems driven by the fractional \(p\)-Laplacian.
**Lemma 6**.: _Let \(u\in C_{0}^{0,\sigma}(\overline{\Omega})\) be extended as zero outside \(\Omega\). If \(u\in W^{\sigma,q}(\Omega)\) for some \(q>1\), then \(u\in W_{0}^{\sigma,p}(\Omega)\) for all \(p\geq q\) and_
\[\lim_{p\to\infty}[u]_{\sigma,p}=|u|_{\sigma}.\]
The proof of Lemma 6 can be found in [6, Lemma 7].
Proof of Theorem 3.: Of course we have
\[\Lambda_{1}(p_{n})\leq\frac{\frac{1}{p_{n}}[\phi_{R}]_{s,p_{n}}^{p_{n}}+\frac{1}{p_{n}}[\psi_{R}]_{t,p_{n}}^{p_{n}}}{\left(\int_{\Omega}|\phi_{R}|^{\alpha(p_{n})}\mathrm{d}x\right)|\psi_{R}(x_{0})|^{\beta(p_{n})}}.\]
Thus,
\[\limsup_{n\to\infty}\sqrt[p_{n}]{\Lambda_{1}(p_{n})} \leq\limsup_{n\to\infty}\left(\frac{\frac{1}{p_{n}}\left([\phi_{R}]_{s,p_{n}}^{p_{n}}+[\psi_{R}]_{t,p_{n}}^{p_{n}}\right)}{\left(\int_{\Omega}|\phi_{R}|^{\alpha(p_{n})}\mathrm{d}x\right)|\psi_{R}(x_{0})|^{\beta(p_{n})}}\right)^{\frac{1}{p_{n}}}\] \[\leq\limsup_{n\to\infty}\left(\frac{2}{p_{n}}\right)^{\frac{1}{p_{n}}}\frac{\max\left\{[\phi_{R}]_{s,p_{n}},[\psi_{R}]_{t,p_{n}}\right\}}{\left(\left(\int_{\Omega}|\phi_{R}|^{\alpha(p_{n})}\mathrm{d}x\right)|\psi_{R}(x_{0})|^{\beta(p_{n})}\right)^{\frac{1}{p_{n}}}}\] \[=\frac{\max\left\{|\phi_{R}|_{s},|\psi_{R}|_{t}\right\}}{\|\phi_{R}\|_{\infty}^{\theta}|\psi_{R}(x_{0})|^{1-\theta}}\leq\frac{1}{R^{s\theta+(1-\theta)t}},\]
proving that the sequence \(\left\{\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right\}_{n\in\mathbb{N}}\) is bounded in \(\mathbb{R}\), that is, there exists \(M_{0}>0\) such that
\[\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\leq M_{0}\quad\text{ for all }\ n\in\mathbb{N}. \tag{19}\]
Theorem 1 guarantees that we can take \((u_{p_{n}},v_{p_{n}})\) so that
\[u_{p_{n}}>0,\ v_{p_{n}}>0\quad\text{and}\quad\left(\int_{\Omega}|u_{p_{n}}|^{ \alpha(p_{n})}\mathrm{d}x\right)|v_{p_{n}}(x_{0})|^{\beta(p_{n})}=1.\]
Therefore
\[\Lambda_{1}(p_{n})=\frac{1}{p_{n}}[u_{p_{n}}]_{s,p_{n}}^{p_{n}}+\frac{1}{p_{n}}[v_{p_{n}}]_{t,p_{n}}^{p_{n}}\geq\frac{1}{p_{n}}\max\Big\{[u_{p_{n}}]_{s,p_{n}}^{p_{n}},[v_{p_{n}}]_{t,p_{n}}^{p_{n}}\Big\},\]
what yields
\[[u_{p_{n}}]_{s,p_{n}}\leq p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}. \tag{20}\]
For a fixed \(m_{0}>\frac{N}{s}\), denoting the diameter of \(\Omega\) by \(\mathrm{diam}(\Omega)\), it follows from (19) and (20) that
\[|u_{p_{n}}|_{s-\frac{N}{m_{0}}} =\sup_{x\neq y}\frac{|u_{p_{n}}(x)-u_{p_{n}}(y)|}{|x-y|^{s-\frac{N}{m_{0}}}}=\sup_{x\neq y}\frac{|u_{p_{n}}(x)-u_{p_{n}}(y)|}{|x-y|^{s-\frac{N}{p_{n}}}}\left|x-y\right|^{\frac{N}{m_{0}}-\frac{N}{p_{n}}}\] \[\leq(\mathrm{diam}(\Omega))^{\frac{N}{m_{0}}-\frac{N}{p_{n}}}\sup_{x\neq y}\frac{|u_{p_{n}}(x)-u_{p_{n}}(y)|}{|x-y|^{s-\frac{N}{p_{n}}}}\] \[\leq C\left(\mathrm{diam}(\Omega)\right)^{\frac{N}{m_{0}}-\frac{N}{p_{n}}}\,[u_{p_{n}}]_{s,p_{n}}\] \[\leq C\left(\mathrm{diam}(\Omega)\right)^{\frac{N}{m_{0}}-\frac{N}{p_{n}}}\,p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})},\]
with the constant \(C\) not depending on \(p_{n}\). We conclude that the sequence \(\{u_{p_{n}}\}\) is uniformly bounded in \(C_{0}^{0,s-\frac{N}{m_{0}}}(\overline{\Omega})\), and the same reasoning is valid for \(\{v_{p_{n}}\}\), showing that \(\{v_{p_{n}}\}_{n\in\mathbb{N}}\) is uniformly bounded in \(C_{0}^{0,t-\frac{N}{m_{0}}}(\overline{\Omega})\).
Passing to subsequences if necessary, there exist \(u_{\infty}\in C_{0}^{0,s-\frac{N}{m_{0}}}(\overline{\Omega})\) and \(v_{\infty}\in C_{0}^{0,t-\frac{N}{m_{0}}}(\overline{\Omega})\) such that
\[u_{p_{n}}\to u_{\infty}\quad\text{and}\quad v_{p_{n}}\to v_{\infty}\ \text{ uniformly in }\ \Omega.\]
We also observe that
\[\|u_{\infty}\|_{\infty}^{\theta}|v_{\infty}(x_{0})|^{1-\theta}= \lim_{n\to\infty}\left(\left(\int_{\Omega}|u_{p_{n}}|^{\alpha(p_{n})}\mathrm{ d}x\right)|v_{p_{n}}(x_{0})|^{\beta(p_{n})}\right)^{\frac{1}{p_{n}}}=1.\]
Fix \(k>\frac{N}{s}\). By applying Fatou's Lemma, Hölder's inequality and (20), we obtain
\[\int_{\Omega}\int_{\Omega}\frac{|u_{\infty}(x)-u_{\infty}(y)|^{k}}{|x-y|^{sk}}\mathrm{d}x\mathrm{d}y \leq\liminf_{n\to\infty}\int_{\Omega}\int_{\Omega}\frac{|u_{p_{n}}(x)-u_{p_{n}}(y)|^{k}}{|x-y|^{\left(\frac{N}{p_{n}}+s\right)k}}\mathrm{d}x\mathrm{d}y\] \[\leq\liminf_{n\to\infty}|\Omega|^{2\left(\frac{p_{n}-k}{p_{n}}\right)}\left(\int_{\Omega}\int_{\Omega}\frac{|u_{p_{n}}(x)-u_{p_{n}}(y)|^{p_{n}}}{|x-y|^{N+sp_{n}}}\mathrm{d}x\mathrm{d}y\right)^{\frac{k}{p_{n}}}\] \[\leq|\Omega|^{2}\liminf_{n\to\infty}[u_{p_{n}}]_{s,p_{n}}^{k} \tag{21}\] \[\leq|\Omega|^{2}\liminf_{n\to\infty}\left(p_{n}^{\frac{1}{p_{n}}}\ \sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right)^{k}\] \[\leq|\Omega|^{2}\left(\frac{1}{R^{s\theta+(1-\theta)t}}\right)^{k}.\]
Thus,
\[|u_{\infty}|_{s}=\lim_{k\to\infty}\left(\int_{\Omega}\int_{\Omega}\frac{|u_{\infty}(x)-u_{\infty}(y)|^{k}}{|x-y|^{sk}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}\leq\lim_{k\to\infty}|\Omega|^{\frac{2}{k}}\,\frac{1}{R^{s\theta+(1-\theta)t}}=\frac{1}{R^{s\theta+(1-\theta)t}}.\]
Analogously,
\[|v_{\infty}|_{t}=\lim_{k\to\infty}\left(\int_{\Omega}\int_{\Omega}\frac{|v_{\infty}(x)-v_{\infty}(y)|^{k}}{|x-y|^{tk}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}\leq\lim_{k\to\infty}|\Omega|^{\frac{2}{k}}\,\frac{1}{R^{s\theta+(1-\theta)t}}=\frac{1}{R^{s\theta+(1-\theta)t}}\]
and therefore
\[\max\big{\{}|u_{\infty}|_{s},|v_{\infty}|_{t}\big{\}}\leq\frac{1}{R^{s\theta+ (1-\theta)t}}.\]
It follows from Lemma 5 that
\[\frac{1}{R^{s\theta+(1-\theta)t}}=\inf_{(u,v)\in X^{*}_{s,t,\infty}(\Omega)} \frac{\max\big{\{}|u|_{s},|v|_{t}\big{\}}}{\|u\|_{\infty}^{\theta}|v(x_{0})|^ {1-\theta}}\leq\max\big{\{}|u_{\infty}|_{s},|v_{\infty}|_{t}\big{\}}\leq \frac{1}{R^{s\theta+(1-\theta)t}},\]
thus producing
\[\max\big{\{}|u_{\infty}|_{s},|v_{\infty}|_{t}\big{\}}=\frac{1}{R^{s\theta+(1- \theta)t}}.\]
On its turn, inequality (21) yields
\[\max\Bigg{\{}\left(\int_{\Omega}\int_{\Omega}\frac{|u_{\infty}(x)-u_{\infty}( y)|^{k}}{|x-y|^{sk}}\mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}},\left(\int_{ \Omega}\int_{\Omega}\frac{|v_{\infty}(x)-v_{\infty}(y)|^{k}}{|x-y|^{tk}} \mathrm{d}x\mathrm{d}y\right)^{\frac{1}{k}}\Bigg{\}}\]
\[\leq|\Omega|^{\frac{2}{k}}\liminf_{n\to\infty}\left(p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right).\]
Thus, as \(k\to\infty\) we obtain
\[\frac{1}{R^{s\theta+(1-\theta)t}}=\max\big{\{}|u_{\infty}|_{s},|v_{\infty}|_{t}\big{\}} \leq\liminf_{n\to\infty}\left(p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right)\] \[\leq\limsup_{n\to\infty}\left(p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right)\leq\frac{1}{R^{s\theta+(1-\theta)t}},\]
from which it follows that
\[\lim_{n\to\infty}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}=\lim_{n\to\infty}\left(p_{n}^{\frac{1}{p_{n}}}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\right)=\frac{1}{R^{s\theta+(1-\theta)t}}=\Lambda_{1,\infty}.\qed\]
## 5. Proof of Theorem 4
The next result shows that weak solutions are also viscosity solutions. Its proof can be obtained by adapting the arguments given by Lindgren and Lindqvist in [9, Proposition 1].
**Proposition 7**.: _The functions \(u_{p}\) and \(v_{p}\) given by Theorem 1 are viscosity solutions to the problems_
\[\left\{\begin{array}{ll}\mathcal{L}_{s,p}u=\Lambda_{1}(p)\alpha(p)|u|^{ \alpha(p)-1}v(x_{0})&\mathrm{in}\ \ \Omega,\\ u=0&\mathrm{in}\ \mathbb{R}^{N}\setminus\Omega,\end{array}\right.\]
_and_
\[\left\{\begin{array}{ll}\mathcal{L}_{t,p}v=0&\mathrm{in}\ \Omega\setminus\{x_{0}\},\\ v=0&\mathrm{in}\ \mathbb{R}^{N}\setminus\Omega,\\ v(x_{0})=v_{p}(x_{0}),\end{array}\right.\]
_respectively._
Proof of Theorem 4.: We start showing that \(v_{\infty}\) is a viscosity solution to the problem
\[\left\{\begin{array}{ll}\mathcal{L}_{t,\infty}v=0&\text{in}\ \ \Omega\setminus\{x_{0}\},\\ v=0&\text{in}\ \mathbb{R}^{N}\setminus\Omega,\\ v(x_{0})=v_{\infty}(x_{0}).\end{array}\right. \tag{22}\]
According to Theorem 3 we have \(v_{\infty}=0\) in \(\mathbb{R}^{N}\setminus\Omega\), and the condition \(v(x_{0})=v_{\infty}(x_{0})\) is automatic. So, we need only show that \(v_{\infty}\) solves the equation in \(\Omega\setminus\{x_{0}\}\) in the viscosity sense. Fix \((z_{0},\varphi)\in(\Omega\setminus\{x_{0}\})\times C_{0}^{1}(\mathbb{R}^{N}\setminus\{x_{0}\})\) satisfying
\[\varphi(z_{0})=v_{\infty}(z_{0})\qquad\text{and}\qquad\varphi(x)\leq v_{ \infty}(x),\ \ \forall x\in\mathbb{R}^{N}\setminus\{x_{0},z_{0}\}.\]
Theorem 3 also guarantees the existence of a sequence \(\{(u_{p_{n}},v_{p_{n}})\}_{n\in\mathbb{N}}\in C_{0}^{0,s}(\overline{\Omega}) \times C_{0}^{0,t}(\overline{\Omega})\) such that \(u_{p_{n}}\to u_{\infty}\) and \(v_{p_{n}}\to v_{\infty}\) uniformly in \(\Omega\). Thus, there exists a sequence \(\{x_{p_{n}}\}_{n\in\mathbb{N}}\) so that \(x_{p_{n}}\to z_{0}\) and \(v_{p_{n}}(x_{p_{n}})=\varphi(x_{p_{n}})\). Since \(x_{0}\neq z_{0}\), we can assume the existence of \(n_{0}\geq 0\) and a ball \(B_{\rho}(z_{0})\) such that
\[x_{p_{n}}\in B_{\rho}(z_{0})\subset\Omega\setminus\{x_{0}\},\quad\forall n\geq n_{0}.\]
Since \(v_{p_{n}}\) weakly satisfies
\[(-\Delta_{p_{n}})^{t}v_{p_{n}}(x)=\Lambda_{1}(p_{n})\beta(p_{n})\left(\int_{\Omega}|u_{p_{n}}|^{\alpha(p_{n})}\mathrm{d}x\right)|v_{p_{n}}(x_{0})|^{\beta(p_{n})-2}v_{p_{n}}(x_{0})\delta_{x_{0}}\]
in \(\Omega\), then also in \(\Omega\setminus\{x_{0}\}\), Proposition 7 yields that \(v_{p_{n}}\) is a viscosity solution to the problem
\[\left\{\begin{array}{ll}\mathcal{L}_{t,p_{n}}v=0&\text{in}\ \ \Omega\setminus\{x_{0}\},\\ v=0&\text{in}\ \mathbb{R}^{N}\setminus\Omega,\\ v(x_{0})=v_{p_{n}}(x_{0}).\end{array}\right. \tag{23}\]
By standard arguments, we obtain a sequence \(\{z_{n}\}_{n\in\mathbb{N}}\subset B_{\rho}(z_{0})\) such that \(z_{n}\to z_{0}\) and
\[\sigma_{n}:=\min_{B_{\rho}(z_{0})}\left(v_{p_{n}}-\varphi\right)=v_{p_{n}}(z_{n})-\varphi(z_{n})<v_{p_{n}}(x)-\varphi(x),\ \ \forall x\neq z_{n}.\]
Now, define \(\Psi_{n}:=\varphi+\sigma_{n}\). We have
\[\Psi_{n}(z_{n})=\varphi(z_{n})+\sigma_{n}=v_{p_{n}}(z_{n})\qquad\text{and}\qquad\Psi_{n}(x)=\varphi(x)+\sigma_{n}<v_{p_{n}}(x),\ \forall x\in B_{\rho}(z_{0})\setminus\{z_{n}\}.\]
Since \(v_{p_{n}}\) satisfies (23) in \(\Omega\setminus\{x_{0}\}\),
\[(\mathcal{L}_{t,p_{n}}\Psi_{n})(z_{n})\leq 0,\qquad\forall n\geq n_{0}.\]
Thus, defining
\[(A_{p_{n},t}(\varphi(z_{n})))^{p_{n}-1}:=2\int_{\mathbb{R}^{N}}\frac{|\varphi (z_{n})-\varphi(y)|^{p_{n}-2}(\varphi(z_{n})-\varphi(y))^{+}}{|z_{n}-y|^{N+tp_ {n}}}\mathrm{d}y\]
and
\[(B_{p_{n},t}(\varphi(z_{n})))^{p_{n}-1}:=2\int_{\mathbb{R}^{N}}\frac{|\varphi (z_{n})-\varphi(y)|^{p_{n}-2}(\varphi(z_{n})-\varphi(y))^{-}}{|z_{n}-y|^{N+tp_ {n}}}\mathrm{d}y,\]
we have (since \(\Psi_{n}\) and \(\varphi\) differ only by the constant \(\sigma_{n}\), the quantities \(A_{p_{n},t}\) and \(B_{p_{n},t}\) are unchanged when \(\varphi\) is replaced by \(\Psi_{n}\))
\[(A_{p_{n},t}(\varphi(z_{n})))^{p_{n}-1}-(B_{p_{n},t}(\varphi(z_{n})))^{p_{n}-1} =2\int_{\mathbb{R}^{N}}\frac{|\varphi(z_{n})-\varphi(y)|^{p_{n}-2}(\varphi(z_{n})-\varphi(y))}{|z_{n}-y|^{N+tp_{n}}}\mathrm{d}y\] \[\leq 0,\quad\forall n\geq n_{0}. \tag{24}\]
Applying [7, Lemma 3.9] (see also [8, Lemma 6.1]), we obtain
\[\lim_{n\to\infty}A_{p_{n},t}(\varphi(z_{n}))=\left(\mathcal{L}_{t,\infty}^{+} \varphi\right)(z_{0})\qquad\text{and}\qquad\lim_{n\to\infty}B_{p_{n},t}( \varphi(z_{n}))=\left(-\mathcal{L}_{t,\infty}^{-}\varphi\right)(z_{0}).\]
As \(n\to\infty\) in (24) we get
\[\left(\mathcal{L}_{t,\infty}\varphi\right)(z_{0})=\left(\mathcal{L}_{t,\infty}^{+}\varphi\right)(z_{0})+\left(\mathcal{L}_{t,\infty}^{-}\varphi\right)(z_{0})\leq 0,\]
showing that \(v_{\infty}\) is a viscosity supersolution of (22). Analogously, we obtain that \(v_{\infty}\) is a viscosity subsolution of the same equation, and thus a viscosity solution of (22).
Now we show that \(u_{\infty}\) is a viscosity solution to the problem
\[\left\{\begin{array}{ll}\max\left\{\mathcal{L}_{s,\infty}u,\mathcal{L}_{s, \infty}^{-}u+\Lambda_{1,\infty}|u(x)|^{\theta}|v_{\infty}(x_{0})|^{1-\theta} \right\}=0&\mbox{in}\ \ \Omega,\\ u=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega.\end{array}\right. \tag{25}\]
The same reasoning used before implies that, for a given \((z_{0},\varphi)\in\Omega\times C_{0}^{1}(\mathbb{R}^{N})\), we find a sequence \(\{u_{p_{n}}\}_{n\in\mathbb{N}}\) in \(C_{0}^{0,s}(\overline{\Omega})\) such that \(u_{p_{n}}\to u_{\infty}\) uniformly in \(\Omega\) and a sequence \(\{x_{p_{n}}\}_{n\in\mathbb{N}}\) satisfying \(x_{p_{n}}\to z_{0}\) and \(u_{p_{n}}(x_{p_{n}})=\varphi(x_{p_{n}})\). Thus, there exist \(n_{0}\geq 0\) and a ball \(B_{\rho}(z_{0})\) so that
\[x_{p_{n}}\in B_{\rho}(z_{0})\subset\Omega,\ \ \forall n\geq n_{0}.\]
As before, we obtain that \(u_{p_{n}}\) is a viscosity solution to the problem
\[\left\{\begin{array}{ll}\mathcal{L}_{s,p_{n}}u_{p_{n}}=\Lambda_{1}(p_{n}) \alpha(p_{n})|u_{p_{n}}|^{\alpha(p_{n})-1}v_{p_{n}}(x_{0})&\mbox{in}\ \ \Omega,\\ u=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega.\end{array}\right.\]
Considering, as before, a sequence \(\{z_{n}\}_{n\in\mathbb{N}}\subset B_{\rho}(z_{0})\) such that \(z_{n}\to z_{0}\) and defining \(\Psi_{n}\) as in the previous proof, we obtain
\[\left(\mathcal{L}_{s,p_{n}}\Psi_{n}\right)(z_{n})\leq\Lambda_{1}(p_{n})\alpha (p_{n})|\Psi_{n}(z_{n})|^{\alpha(p_{n})-1}v_{p_{n}}(x_{0})\ \ \forall n\geq n_{0},\]
which is equivalent to the inequality
\[\left(A_{p_{n},s}(\varphi(z_{n}))\right)^{p_{n}-1}-\left(B_{p_{n},s}(\varphi( z_{n}))\right)^{p_{n}-1}\leq\left(C_{p_{n}}(\varphi(z_{n}))\right)^{p_{n}-1} \ \ \forall n\geq n_{0},\]
where
\[\left(C_{p_{n}}(\varphi(z_{n}))\right)^{p_{n}-1}:=\Lambda_{1}(p_{n})\alpha(p_{n})|\varphi(z_{n})+\sigma_{n}|^{\alpha(p_{n})-1}v_{p_{n}}(x_{0})\]
and the other terms are analogous to that of the previous case, just changing \(t\) for \(s\).
Observe that a direct calculation yields
\[\lim_{n\to\infty}C_{p_{n}}(\varphi(z_{n})) =\lim_{n\to\infty}\left(\sqrt[p_{n}]{\Lambda_{1}(p_{n})}\sqrt[p_{n }]{\alpha(p_{n})}|\varphi(z_{n})+\sigma_{n}|^{\frac{\alpha(p_{n})}{p_{n}-1}} v_{p_{n}}(x_{0})^{\frac{\beta(p_{n})}{p_{n}-1}}\right)\] \[=\Lambda_{1,\infty}|\varphi(z_{0})|^{\theta}v_{\infty}(x_{0})^{1-\theta}\]
So, letting \(n\to\infty\) in the previous inequality, we obtain
\[\left(\mathcal{L}_{s,\infty}\varphi\right)(z_{0})=\left(\mathcal{L}_{s,\infty}^{+}\varphi\right)(z_{0})+\left(\mathcal{L}_{s,\infty}^{-}\varphi\right)(z_{0})\leq\Lambda_{1,\infty}|\varphi(z_{0})|^{\theta}v_{\infty}(x_{0})^{1-\theta}\]
and therefore
\[\max\left\{\mathcal{L}_{s,\infty}u,\mathcal{L}_{s,\infty}^{-}u-\Lambda_{1, \infty}|u(x)|^{\theta}|v_{\infty}(x_{0})|^{1-\theta}\right\}\leq 0\ \ \mbox{in}\ \ \Omega,\]
that is, \(u_{\infty}\) is a viscosity supersolution to problem (25). Analogously, \(u_{\infty}\) is a viscosity subsolution to the same problem. We are done.
**Remark 8**.: _We observe that the system_
\[\left\{\begin{array}{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u|^{\alpha(p)-2}u |v(x_{v})|^{\beta(p)}&\mbox{in}\ \ \Omega,\\ (-\Delta_{p})^{t}v(x)=\lambda\beta(p)\left(\int_{\Omega}|u|^{\alpha(p)}{\rm d }x\right)|v(x_{v})|^{\beta(p)-2}v(x_{v})\delta_{x_{v}}&\mbox{in}\ \ \Omega,\\ u=v=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega,\end{array}\right.\]
_where \(x_{v}\) is a maximum point of \(v\) in \(\overline{\Omega}\), can be treated in the same setting given in Section 2, applying the same procedure used to solve system \((P^{1}_{p})\)._
## 6. On the system \((P^{2}_{p})\)
In this section we consider the functional system \((P^{2}_{p})\).
\[\left\{\begin{array}{ll}(-\Delta_{p})^{s}u(x)=\lambda\alpha(p)|u(x_{1})|^{ \alpha(p)-2}u(x_{1})|v(x_{2})|^{\beta(p)}\delta_{x_{1}}&\mbox{in}\ \ \Omega,\\ (-\Delta_{p})^{t}v(x)=\lambda\beta(p)|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta (p)-2}v(x_{2})\delta_{x_{2}}&\mbox{in}\ \ \Omega,\\ u=v=0&\mbox{in}\ \mathbb{R}^{N}\setminus\Omega,\end{array}\right.\]
where \(x_{1},x_{2}\in\Omega\) are arbitrary points, \(x_{1}\neq x_{2}\). Observe that both equations are functional, so their treatment recalls that used to deal with the second equation in system \((P^{1}_{p})\).
**Definition 3**.: _A pair \((u,v)\in X_{s,t,p}(\Omega)\) is a weak solution to \((P^{2}_{p})\) if_
\[\langle(-\Delta_{p})^{s}u,\varphi\rangle+\langle(-\Delta_{p})^{t}v,\psi\rangle=\lambda\left[\alpha(p)|u(x_{1})|^{\alpha(p)-2}u(x_{1})|v(x_{2})|^{\beta(p)}\varphi(x_{1})\right. \tag{26}\] \[\left.+\beta(p)|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{\beta(p)-2}v(x_{2})\psi(x_{2})\right]\]
_for all \((\varphi,\psi)\in X_{s,t,p}(\Omega)\)._
The denominator in the definition of \(Q_{s,t,p}\) should be changed into \(|u(x_{1})|^{\alpha(p)}\,|v(x_{2})|^{\beta(p)}\), maintaining the definition of \(\Lambda_{1}(p)\). The first result, which is similar to Theorem 1, is the following.
**Theorem 9**.: _For each \(p\in\left(\frac{N}{s},\infty\right)\) we have_
* \(\Lambda_{1}(p)>0\)_;_
* _there exist_ \((u_{p},v_{p})\in X_{s,t,p}^{*}(\Omega)\) _such that_ \(u_{p}>0\)_,_ \(v_{p}>0\) _and_ \[|u_{p}(x_{1})|^{\alpha(p)}|v_{p}(x_{2})|^{\beta(p)}=1\qquad\mbox{and}\qquad \Lambda_{1}(s,p)=Q_{s,t,p}(u_{p},v_{p}).\]
Its proof is also similar to that of Theorem 1. For details, see the proof sketched in Section 3 or [11, Theorem 1].
The next step is to prove a result similar to Theorem 2. Changing the definition of \(S_{p}\) and \(S_{\infty}\) into
\[S_{p}=\left\{(u,v)\in X_{s,t,p}(\Omega)\,:\,|u(x_{1})|^{\alpha(p)}|v(x_{2})|^{ \beta(p)}=1\right\}\]
and
\[S_{\infty}=\left\{(u,v)\in X_{s,t,p}\,:\,|u(x_{1})|^{\theta}|v(x_{2})|^{1- \theta}=1\right\}\]
and also the denominator in \(G_{p}\) into \(|u(x_{1})|^{\theta}|v(x_{2})|^{1-\theta}\), we obtain the version of Theorem 2 with the same statement.
Up to this point, the points \(x_{1},x_{2}\in\Omega\) were taken arbitrarily. Now, we consider sequences \(u_{n}:=u_{p_{n}}\) and \(v_{n}:=v_{p_{n}}\) given by Theorem 9. Since \(u_{n},v_{n}>0\), we can
take \(x_{1}\) as a maximum \(x_{n}\) of \(u_{n}\) and \(x_{2}\) as a maximum \(y_{n}\) of \(v_{n}\). Observe that we do not suppose that the maxima \(x_{n}\) and \(y_{n}\) are unique. However, we will prove that the sequence \((x_{n},y_{n})\) has a subsequence that converges to \((x_{\infty},y_{\infty})\) and the equality \(|u_{\infty}(x_{\infty})|^{\theta}|v_{\infty}(y_{\infty})|^{1-\theta}=1\) still holds true.
**Theorem 10**.: _Let \(\{p_{n}\}\) be a sequence converging to \(\infty\) and \((u_{p_{n}},v_{p_{n}})\) the solution of \((P_{p}^{2})\) given in Theorem 9. Denote by \(x_{n}:=x_{u_{p_{n}}}\) and \(y_{n}:=x_{v_{p_{n}}}\) sequences of maximum points of \(u_{p_{n}}\) and \(v_{p_{n}}\), respectively. Passing to a subsequence if necessary, \(\{(u_{p_{n}},v_{p_{n}})\}_{n\in\mathbb{N}}\) converges uniformly to \((u_{\infty},v_{\infty})\in C_{0}^{0,s}(\overline{\Omega})\times C_{0}^{0,t}(\overline{\Omega})\), while the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge to \(x_{\infty}\in\Omega\) and \(y_{\infty}\in\Omega\), respectively, which are maximum points of \(u_{\infty}\) and \(v_{\infty}\). Furthermore_
1. \(u_{\infty}\geq 0\)_,_ \(v_{\infty}\geq 0\) _and_ \(|u_{\infty}(x_{\infty})|^{\theta}|v_{\infty}(y_{\infty})|^{1-\theta}=1\)_;_
2. \(\lim_{n\to\infty}\sqrt[p_{n}]{\Lambda_{1}(p_{n})}=\dfrac{1}{R^{s\theta+(1-\theta)t}}\)_;_
3. \(\max\big{\{}|u_{\infty}|_{s},|v_{\infty}|_{t}\big{\}}=\dfrac{1}{R^{s\theta+(1 -\theta)t}}\)_;_
4. _If_ \(s=t\)_, then_ \[0\leq u_{\infty}(x)\leq\dfrac{\big{(}\mathrm{dist}(x,\mathbb{R}^{N}\setminus \Omega)\big{)}^{s}}{R^{s}}\quad\text{and}\quad 0\leq v_{\infty}(x)\leq\dfrac{\big{(} \mathrm{dist}(x,\mathbb{R}^{N}\setminus\Omega)\big{)}^{s}}{R^{s}}.\]
Its proof can be obtained by mimicking the method used to prove Theorem 3. Comparing this result with the one in [11], we first note that our result brings information about the sequences of maximum points of \(u_{p_{n}}\) and \(v_{p_{n}}\), which is absent in that paper.
Finally, the analogue of Theorem 4 is the following. Once again, its proof is obtained by adapting that of Theorem 4.
**Theorem 11**.: _The functions \(u_{\infty}\) and \(v_{\infty}\), given by Theorem 10, are viscosity solutions of the problems_
\[\left\{\begin{array}{ll}\mathcal{L}_{s,\infty}u=0&\text{in}\ \ \Omega \setminus\{x_{1}\},\\ u=0&\text{in}\ \mathbb{R}^{N}\setminus\Omega,\\ u(x_{1})=u_{\infty}(x_{1})&\end{array}\right.\quad\quad\text{and}\quad\quad \left\{\begin{array}{ll}\mathcal{L}_{t,\infty}v=0&\text{in}\ \ \Omega \setminus\{x_{2}\},\\ v=0&\text{in}\ \mathbb{R}^{N}\setminus\Omega,\\ v(x_{2})=v_{\infty}(x_{2}),\end{array}\right.\]
_respectively._
|
2305.17540 | Learning from Children: Improving Image-Caption Pretraining via
Curriculum | Image-caption pretraining has been quite successfully used for downstream
vision tasks like zero-shot image classification and object detection. However,
image-caption pretraining is still a hard problem -- it requires multiple
concepts (nouns) from captions to be aligned to several objects in images. To
tackle this problem, we go to the roots -- the best learner, children. We take
inspiration from cognitive science studies dealing with children's language
learning to propose a curriculum learning framework. The learning begins with
easy-to-align image caption pairs containing one concept per caption. The
difficulty is progressively increased with each new phase by adding one more
concept per caption. Correspondingly, the knowledge acquired in each learning
phase is utilized in subsequent phases to effectively constrain the learning
problem to aligning one new concept-object pair in each phase. We show that
this learning strategy improves over vanilla image-caption training in various
settings -- pretraining from scratch, using a pretrained image or/and
pretrained text encoder, low data regime etc. | Hammad A. Ayyubi, Rahul Lokesh, Alireza Zareian, Bo Wu, Shih-Fu Chang | 2023-05-27T17:59:54Z | http://arxiv.org/abs/2305.17540v2 | # Learning from Children:
###### Abstract
Image-caption pretraining has been quite successfully used for downstream vision tasks like zero-shot image classification and object detection. However, image-caption pretraining is still a hard problem - it requires multiple concepts (nouns) from captions to be aligned to several objects in images. To tackle this problem, we go to the roots - the best learner, children. We take inspiration from cognitive science studies dealing with children's language learning to propose a curriculum learning framework. The learning begins with easy-to-align image caption pairs containing one concept per caption. The difficulty is progressively increased with each new phase by adding one more concept per caption. Correspondingly, the knowledge acquired in each learning phase is utilized in subsequent phases to effectively constrain the learning problem to aligning one new concept-object pair in each phase. We show that this learning strategy improves over vanilla image-caption training in various settings - pretraining from scratch, using a pre-trained image or/and pretrained text encoder, low data regime etc. Code available at: [https://github.com/hayyubi/cur_vl.git](https://github.com/hayyubi/cur_vl.git).
## 1 Introduction
Recently, there has been a tremendous interest in employing image-caption pretraining for downstream vision tasks like zero-shot object classification (Radford et al., 2021) and zero-shot object detection (Zareian et al., 2021; Li et al., 2022). The idea is to learn a common semantic space where the visual embeddings of objects in images lie close to the textual embeddings of the concepts (objects' name/tag/label) in captions they refer to. This learned semantic space is later exploited for zero-shot object recognition by finding the concept embedding nearest to the objects' embeddings.
Despite the recent success, image-caption pretraining is a complex problem as it entails aligning multiple concepts in a caption with multiple objects in an image, as shown in fig. 1. Different methods have tried to solve this problem from various angles - CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) by using more data, ALBEF (Li et al., 2021) by using more complex network architecture, Florence (Yuan et al., 2021) and CoCa (Yu et al., 2022) by using more tasks and ERNIE-ViL 2.0 (Shan et al., 2022) by using more data augmentations (views).
We propose an alternative approach based on a novel learning strategy that is architecture agnostic and does not require any additional data or compute. We take inspiration from cognitive science research studying how children learn language (concepts) in early stages by just observing their surroundings (images). Specifically, we refer to two studies showing that children learn rapidly if the object of interest is unambiguous (Pereira et al., 2014) and by applying co-referential statistics across multiple scenes (Smith and Yu, 2008).
We implement these two ideas via a curriculum
Figure 1: Top: Normal image-caption pretraining. Bottom: Proposed Curriculum Learning Framework. The curriculum eases the learning problem by requiring the model to align only one concept-object pair at a time.
learning approach (demonstrated in fig. 1):
1. We train the model in multiple phases of increasing difficulty with each phase containing one more concept in the caption than the previous one. Moreover, each phase contains only one new concept, the rest seen in prior phases.
2. In each phase, we leverage the concept-object association learned in prior phases to recognize the seen concepts and focus on aligning the new/unseen concept (section 2.2.2).
These two strategies effectively reduce the problem of aligning multiple object-concept pairs per training sample to aligning only one such pair.
To the best of our knowledge, no prior work has applied curriculum learning to image-caption pretraining in this way. Srinivasan et al. (2022) apply a curriculum based on the difficulty of negative samples in the contrastive loss, whereas Liu et al. (2021) design the curriculum based on the granularity of text: from words to phrases to sentences.
Although our proposed approach can be applied to any multimodal network architecture, we pick OVR-CNN (Zareian et al., 2021) due to its simplicity. We pretrain it with the proposed curriculum learning approach and evaluate on the downstream task of zero-shot object detection. We demonstrate that curriculum learning outperforms vanilla image-caption pretraining on a variety of architectural settings - with and without a pretrained image encoder and/or a pretrained text encoder. We even show superior performance in low-data settings, suggesting our method can be leveraged in low-resource scenarios as well.
## 2 Method
We propose a curriculum learning framework to improve image caption pretraining. In this work, we apply it to OVR-CNN as its architecture is simpler and easier/faster to train/evaluate. We begin the description of our approach with a brief background on OVR-CNN. Next, we discuss how we modify it to implement the proposed curriculum learning framework.
### OVR-CNN Background
OVR-CNN is a dual-encoder (separate visual and text encoder) multimodal architecture. First, it pretrains the encoders using image-caption pairs and later utilizes them for the downstream task of object detection. We only discuss the pretraining procedure, as that is the only component we utilize.
OVR-CNN's visual encoder is ResNet-50 (He et al., 2016) and its text encoder is either BERT (Devlin et al., 2019) or GloVE (Pennington et al., 2014). The visual encoder takes an image \(I\) of dimensions \(w\times h\) as input and outputs a feature map of \(w/32\times h/32\) regions. Each region's feature is a vector that is projected into the language space using a projection layer. This gives the visual embedding \(e_{i}^{I}\) for each region \(i\). The tokenized caption \(C\) is input to the text encoder, which outputs an embedding \(e_{j}^{C}\) for each token \(j\).
The token-image region pair is aligned via weak supervision. Specifically, a global alignment score between image and caption, \(\langle I,C\rangle_{G}\) is calculated using a locally weighted average alignment score of image regions and tokens as follows:
\[\langle I,C\rangle_{G}=\frac{1}{n_{C}}\sum_{j=1}^{n_{C}}\sum_{i=1}^{n_{I}}a_{i, j}\langle e_{i}^{I},e_{j}^{C}\rangle_{L} \tag{1}\]
where \(\langle.,.\rangle_{L}\) is the dot product of two vectors, \(n_{I}\) and \(n_{C}\) are the number of image regions and caption tokens respectively, and
\[a_{i,j}=\frac{\exp\langle e_{i}^{I},e_{j}^{C}\rangle_{L}}{\sum_{i^{\prime}=1}^ {n_{I}}\exp\langle e_{i^{\prime}}^{I},e_{j}^{C}\rangle_{L}} \tag{2}\]
The model is trained using contrastive learning by maximizing the global alignment score, \(\langle I,C\rangle_{G}\), between positive image-caption pairs and minimizing it between negative pairs sampled from the same training batch.
\[\mathcal{L}=-\log\frac{\exp\langle I,C\rangle_{G}}{\sum_{\{I^{\prime},C^{ \prime}\}\in N_{I,C}}\exp\langle I^{\prime},C^{\prime}\rangle_{G}+\exp\langle I,C\rangle_{G}} \tag{3}\]
where, \(\mathcal{N}_{I,C}=\{I,C^{\prime}|C^{\prime}\in\mathcal{B}_{C}\}\cup\{I^{ \prime},C|I^{\prime}\in\mathcal{B}_{I}\}\) and \(\mathcal{B}_{C},\mathcal{B}_{I}\) are batch captions and batch images respectively. This learning objective aligns paired image and caption together and also provides weak supervision for image-regions and caption-tokens association.
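To make Eqs. (1)-(3) concrete, the following is a minimal PyTorch-style sketch of the alignment score and the batch contrastive objective. It is not the authors' released implementation: the tensor shapes, the per-batch mean reduction, and counting the positive pair only once in the denominator of Eq. (3) are assumptions made here for illustration.

```python
import torch

def global_alignment_score(e_I, e_C):
    # e_I: (n_I, d) projected region/object embeddings; e_C: (n_C, d) token embeddings
    local = e_I @ e_C.t()                    # <e_i^I, e_j^C>_L for all pairs, (n_I, n_C)
    a = local.softmax(dim=0)                 # Eq. (2): attention over regions for each token
    return (a * local).sum() / e_C.shape[0]  # Eq. (1): token-averaged weighted alignment

def contrastive_loss(img_embs, cap_embs):
    # img_embs / cap_embs: lists of B per-sample tensors with shapes (n_I, d) and (n_C, d)
    B = len(img_embs)
    S = torch.stack([torch.stack([global_alignment_score(img_embs[i], cap_embs[j])
                                  for j in range(B)]) for i in range(B)])  # (B, B)
    loss = 0.0
    for i in range(B):
        # Eq. (3): negatives pair image i with every other caption and caption i with every other image
        denom = S[i, :].exp().sum() + S[:, i].exp().sum() - S[i, i].exp()
        loss = loss - (S[i, i] - denom.log())
    return loss / B
```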
### Curriculum Learning Framework
OVR-CNN facilitates object-concept alignment through coarse image-region and concept alignment. However, as an object can span multiple image regions or multiple objects can span an image region, this strategy can be noisy. To eliminate this noise and focus on the contribution of our curriculum framework to object-concept alignment, we train the model using object region features instead of image region features. To this end, object
region bounding boxes are used to ROI-pool [14] the image region features. The resulting feature vector \(e_{o}^{I}\), for each object \(o\), is used to replace \(e_{i}^{I}\) in eqs. (1) and (2).
#### 2.2.1 Curriculum Design
The learning is divided into \(1,2,3\ldots k\) phases. Each phase \(p\) is trained with only those image-caption pairs having \(p\) concepts per caption. To divide the data into phases, we use spacy1 to PoS (Part of Speech) tag the captions. Depending upon the number of nouns in each caption, the caption and its paired image are grouped into the corresponding phase. Empirically, this strategy of designing the curriculum also imparts an additional property to the data - at most one new concept is introduced per caption in each phase (as demonstrated in fig. 2(b)).
Footnote 1: [https://spacy.io/usage](https://spacy.io/usage)
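A minimal sketch of this phase assignment is given below. It assumes a spaCy English pipeline (e.g., `en_core_web_sm`) is installed; whether proper nouns are counted and whether any additional caption filtering is applied are not specified in the text, so those choices here are illustrative.

```python
import spacy
from collections import defaultdict

nlp = spacy.load("en_core_web_sm")  # any PoS-capable pipeline works; the model choice is an assumption

def assign_phases(captions, k=4):
    """Map phase p -> indices of captions containing exactly p nouns (concepts)."""
    phases = defaultdict(list)
    for idx, caption in enumerate(captions):
        n_nouns = sum(tok.pos_ == "NOUN" for tok in nlp(caption))
        if 1 <= n_nouns <= k:  # phase p trains on captions with exactly p concepts
            phases[n_nouns].append(idx)
    return phases
```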
#### 2.2.2 Curriculum Aware Alignment Loss
To recognize the concepts in captions previously seen in prior phases and focus on aligning the new/unseen concept, we formulate a novel Curriculum Aware Alignment Loss (\(\mathcal{L_{C}}\)). Specifically, we first calculate the previously learned object-concept alignment \(a_{o,j}\) from modified eq. (2), using either the trained model from the last iteration (\(\mathcal{L}_{\mathcal{CR}}\)) or the trained model from the last phase (\(\mathcal{L}_{\mathcal{CP}}\)). Next, \(a_{o,j}\) is used to compute:
\[a^{{}^{\prime}}_{o,j}=\frac{\exp\langle e_{o}^{I},e_{j}^{C}\rangle_{L}\exp\left(-\max_{o}(a_{o,j})\cdot\frac{t}{T}\right)}{\sum_{o^{\prime}=1}^{n_{I}}\exp\langle e_{o^{\prime}}^{I},e_{j}^{C}\rangle_{L}\exp\left(-\max_{o}(a_{o,j})\cdot\frac{t}{T}\right)}\]
where, \(t\) is the current iteration number and \(T\) is the total number of iterations in training.
For a concept \(j\), which is already closely aligned to an object \(o\), \(\max_{o}(a_{o,j})\) is high. This leads to a low value of \(a^{{}^{\prime}}_{o,j}\), resulting in less attention being paid to concept \(j\) in the current training iteration/phase. Vice versa for a concept that is not well aligned with any object. \(a^{{}^{\prime}}_{o,j}\) effectively redistributes the attention of learning to focus more on concepts that are not well aligned with any object. The term \(t/T\) has a low value in the beginning of training and gradually scales to \(1\) by the end. This allows the network to ignore prior knowledge in the beginning while utilizing it in the latter stages.
We use \(a^{{}^{\prime}}_{o,j}\) to replace \(a_{o,j}\) in modified eq. (1), and then use eq. (3) to compute \(\mathcal{L_{C}}\).
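A possible implementation of this reweighting is sketched below. Since the damping factor depends only on the concept index \(j\), the sketch applies it after the softmax over objects rather than inside a renormalization over objects (where a purely \(j\)-dependent factor would cancel); this reading, together with the tensor shapes, is an assumption rather than a statement of the exact released implementation.

```python
import torch

def curriculum_attention(local, a_prev, t, T):
    """Down-weight concepts that are already well aligned to some object.

    local:  (n_O, n_C) similarities <e_o^I, e_j^C>_L from the current model.
    a_prev: (n_O, n_C) attention a_{o,j} from the last iteration (L_CR) or last phase (L_CP).
    t, T:   current and total numbers of training iterations.
    """
    a = local.softmax(dim=0)                               # attention over objects, as in Eq. (2)
    damp = torch.exp(-a_prev.max(dim=0).values * (t / T))  # per-concept factor, shape (n_C,)
    return a * damp.unsqueeze(0)                           # focus shifts toward unaligned concepts
```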
## 3 Experiments
### Pretraining Dataset and Implementation Details
We use the COCO Captions dataset [1] for pretraining. It contains 118,287 images with five captions per image. To obtain bounding box regions for objects in images, we use the COCO Objects [13] dataset, as it uses the same set of images as COCO Captions. We divide the data into \(k=4\) phases using the strategy discussed in section 2.2.1. Figure 2(a) shows the number of captions assigned to each phase. As shown in fig. 2(b), the majority of captions in each phase \(p\) have at least \(p-1\) concepts previously seen, allowing the curriculum to introduce at most one new concept per training sample. Further, as more concepts are introduced with each passing phase, the percentage of captions per phase actually introducing a new concept decreases (as depicted in fig. 2(c)). By phase 4, this percentage reduces to \(<5\%\). Additional phases of training may not contain enough captions actually introducing a new concept in the curriculum sense, making these phases similar to regular image-caption training. Hence, we limit the curriculum to 4 phases.
We train the model using SGD optimizer, with a batch size of 32 for 4 epochs in each phase, a learning rate of 0.005, a step scheduler, and the loss \(\mathcal{L_{C}}\).
Figure 2: Curriculum Statistics. (a) #Captions Vs Phase (b) The number next to bar shows % of captions per phase with at least #shaded concepts seen previously. (c) % Captions per phase introducing 1 new concept
### Downstream Task, Dataset and Transfer
We evaluate the performance of the model on the zero-shot object detection task on the COCO Objects val split (4836 images; 33374 instances of 65 object classes). The task involves predicting object bounding boxes besides classifying these object regions into a label (concept). However, our method is aimed only at improving the alignment of object regions to concepts. As such, we eliminate any performance noise from bounding box predictions by only evaluating the classification accuracy of object regions given ground-truth object bounding boxes.
Transfer to Downstream Task: We extract object features from the image and object bounding boxes using the visual backbone and use them to find the closest class label vector (obtained via the language backbone).
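This transfer step amounts to a nearest-neighbour search in the shared embedding space. The sketch below assumes the object features and class-name embeddings are precomputed; the use of cosine similarity (rather than a raw dot product) is our choice, since the text only specifies "closest class label vector".

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(obj_feats, class_embs):
    # obj_feats: (n_obj, d) ROI-pooled object features projected into the language space
    # class_embs: (n_cls, d) label embeddings from the language backbone
    scores = F.normalize(obj_feats, dim=-1) @ F.normalize(class_embs, dim=-1).t()
    return scores.argmax(dim=-1)  # predicted class index for each ground-truth box
```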
### Baseline and Evaluation
Our baseline is OVR-CNN, a regular image-caption pretrained model. However, since our method uses object region features instead of image patch features for multimodal alignment (section 2.2), we also pretrain OVR-CNN with object regions to obtain OVR-CNN\({}_{O}\). It is transferred to downstream task similar to our proposed model (section 3.2).
Our proposed curriculum framework outperforms the baseline in various settings, as shown in table 1. The accuracy numbers reported are averaged across three seeds. This demonstrates that our proposed learning strategy works across encoders trained from scratch or pretrained ones.
**Performance Gain Analysis.** We analyze model performance on object classes introduced during pretraining in phase 1 and phase 2 separately. As reported in table 2, the improvement on phase 2 objects is substantially larger (+9.5 accuracy points, compared to +1.5 points on phase 1 objects). This illustrates that our curriculum strategy improves alignment of multiple concepts in a caption by focusing on one at a time.
**Low Data Setting.** Our model outperforms the baseline even when both use 50%, 25% or 10% of the data (fig. 3), indicating its utility when data is scarce.
**Region proposals instead of ground-truth object regions.** We use an RPN (Girshick, 2015) trained class-agnostically on Visual Genome (Krishna et al., 2016) to generate object regions. The superior performance of our model against the baseline, reported in table 3, demonstrates that our approach is effective even when ground-truth object regions are not available.
**Loss Ablation.** From table 4, we can conclude that our curriculum design works (Ours + \(\mathcal{L}\) > OVR-CNN\({}_{O}\) + \(\mathcal{L}\)); our proposed curriculum aware loss works (Ours + \(\mathcal{L}\) < Ours + \(\mathcal{L}_{\mathcal{CR}}\)) irrespective of curriculum (OVR-CNN\({}_{O}\) + \(\mathcal{L}\) < OVR-CNN\({}_{O}\) + \(\mathcal{L}_{\mathcal{CR}}\)); curriculum aware loss works better when previous knowledge is taken from the last phase instead of the last iteration (Ours + \(\mathcal{L}_{\mathcal{CP}}\) > Ours + \(\mathcal{L}_{\mathcal{CR}}\)).
**Qualitative Analysis.** We also provide a qualitative analysis to shed more light on the cases where our approach does and does not work. From Figure 4, we find that our model performs better than OVR-CNN\({}_{O}\) in certain cases, especially when the objects are from Phase 2 - "snowboard", "cup", "skis" etc. This provides further evidence towards our claim that our approach improves the alignment of Phase 2 objects.
**Comparison on the traditional mAP metric for object detection.** As mentioned before, we have focused our experiments on evaluating object-
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & \begin{tabular}{c} Pretrained \\ Visual BB \\ \end{tabular} &
\begin{tabular}{c} Language \\ Backbone \\ \end{tabular} & Accuracy \\ \hline OVR-CNN\({}_{O}\) & ✗ & GloVE & 20.62\({}_{\pm 0.86}\) \\ Ours & ✗ & GloVE & **21.64\({}_{\pm 1.02}\)** \\ \hline OVR-CNN\({}_{O}\) & ✗ & BERT & 22.73\({}_{\pm 0.06}\) \\ Ours & ✗ & BERT & **23.74\({}_{\pm 0.48}\)** \\ \hline OVR-CNN\({}_{O}\) & ✓ & BERT & 34.46\({}_{\pm 0.11}\) \\ Ours & ✓ & BERT & **35.49\({}_{\pm 0.21}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Curriculum learning vs. baseline in various settings with/without pretrained encoders. BB: backbone
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & \(P_{1}\) Obj. & \(P_{2}\) Obj. \\ \hline OVR-CNN\({}_{O}\) & 49.36 & 16.61 \\ Ours & **50.9** & **26.1** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Phase wise top-5 accuracy. \(P_{i}\) Obj.: Phase \(i\) Objects.
\begin{table}
\begin{tabular}{l c} \hline \hline Model & Accuracy \\ \hline OVR-CNN\({}_{O}\) & 13.10 \\ Ours & **17.45** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Zero-shot accuracy using RPN-generated region proposals instead of ground-truth object regions.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & Curriculum & Loss & Accuracy \\ \hline Ours & ✓ & \(\mathcal{L}_{\mathcal{CR}}\) & 21.58 \\ Ours & ✓ & \(\mathcal{L}_{\mathcal{CP}}\) & **22.57** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation of proposed loss.
Figure 3: Performance comparison in low data setting.
loss requires modifications for use with dual encoder architectures that don't use cross-modal attention. Additionally, we use an off-the-shelf Part-of-Speech tagger to divide the data into different phases. As such, the correctness of this division is dependent on the quality of tagger. A poor tagger can negatively impact the curriculum design. Moreover, our approach doesn't apply to possible image-captions dataset which contain only short captions, containing possibly only one noun.
## Acknowledgement
This work was supported by the U.S. DARPA GAILA Program No.HR00111990058. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on.
|
2306.11461 | Neutrino Mixing Phenomenology: $A_4$ Discrete Flavor Symmetry with
Type-I Seesaw Mechanism | We study a neutrino mass model with $A_4$ discrete flavor symmetry using a
type-I seesaw mechanism. The inclusion of extra flavons in our model leads to
the deviations from exact tribimaximal mixing pattern resulting in a nonzero
$\theta_{13}$ consistent with the recent experimental results and a sum rule
for light neutrino masses is also obtained. In this framework, a connection is
established among the neutrino mixing angles: reactor mixing
angle($\theta_{13}$), solar mixing angle($\theta_{12}$) and atmospheric mixing
angle ($\theta_{23}$). This model also allows us a prediction of Dirac CP-phase
and Jarlskog parameter $J$. The octant of the atmospheric mixing angle
$\theta_{23}$ occupies the lower octant. Our model prefers normal hierarchy
(NH) than inverted hierarchy (IH). We use the parameter space of our model of
neutrino masses to study the neutrinoless double beta decay parameter $m_{ee}$.
Keywords: Discrete flavor symmetry, Type-I seesaw mechanism, Tribimaximal
mixing, Dirac CP-phase, Jarlskog parameter, Neutrinoless double beta decay | Animesh Barman, Ng. K. Francis, Hrishi Bora | 2023-06-20T11:35:30Z | http://arxiv.org/abs/2306.11461v2 | # Neutrino Mixing Phenomenology: \(A_{4}\) Discrete Flavor Symmetry with Type-I Seesaw Mechanism
###### Abstract
We study a neutrino mass model with \(A_{4}\) discrete flavor symmetry using a type-I seesaw mechanism. The inclusion of extra flavons in our model leads to the deviations from exact tribimaximal mixing pattern resulting in a nonzero \(\theta_{13}\) consistent with the recent experimental results and a sum rule for light neutrino masses is also obtained. In this framework, a connection is established among the neutrino mixing angles- reactor mixing angle(\(\theta_{13}\)), solar mixing angle(\(\theta_{12}\)) and atmospheric mixing angle (\(\theta_{23}\)). This model also allows us a prediction of Dirac CP-phase and Jarlskog parameter \(J\). The octant of the atmospheric mixing angle \(\theta_{23}\) occupies the lower octant. Our model prefers normal hierarchy (NH) than inverted hierarchy (IH). We use the parameter space of our model of neutrino masses to study the neutrinoless double beta decay parameter \(m_{ee}\).
pacs: 12.60.-i, 14.60.Pq, 14.60.St
## I Introduction
The discovery of neutrino oscillations has triggered a lot of theoretical and experimental effort to understand the physics of lepton masses and mixing. Since flavor mixing arises from the mismatch between the mass and flavor eigenstates, neutrinos need to have small, non-degenerate masses [1; 2; 3]. Over the last twenty-five years, numerous experiments on neutrino oscillation have taken place, resulting in the precise determination of oscillation parameters [4; 5; 6]. The discovery of neutrino oscillations in 1998 by the Japanese Super-Kamiokande (SK) collaboration and the Canadian Sudbury Neutrino Observatory collaboration was the first evidence of physics beyond the Standard Model. A few recent reviews on neutrino physics can be found in references [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25].
The main factors influencing neutrino oscillation probabilities are the mass-squared differences and the mixing angles, and these parameters are therefore determined in neutrino oscillation experiments. The experimental data have shown two large mixing angles, the atmospheric mixing angle \(\theta_{23}\) and the solar mixing angle \(\theta_{12}\), and one small mixing angle, the reactor mixing angle \(\theta_{13}\). This pattern differs from quark mixing, where all angles are small and the mixing matrix is close to the identity.
The tribimaximal (TBM) mixing pattern is one of the most extensively used lepton mixing patterns obtained utilising discrete non-Abelian symmetries.
\[U_{TBM}=\begin{pmatrix}-\frac{\sqrt{2}}{\sqrt{3}}&\frac{1}{\sqrt{3}}&0\\ \frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&-\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&\frac{1}{\sqrt{2}}\end{pmatrix} \tag{1}\]
However, TBM has been ruled out by the measurement of a non-zero reactor mixing angle [26; 27]. A popular way to achieve realistic mixing is through extensions or modifications of TBM. In exact TBM, the reactor mixing angle \(\theta_{13}\) is equal to zero and the Dirac CP phase \(\delta_{CP}\), which characterizes the violation of the symmetry between matter and antimatter, is undetermined. In 2012, the Daya Bay Reactor Neutrino Experiment (\(\sin^{2}2\theta_{13}=0.089\pm 0.010\pm 0.005\)) [28] and the RENO Experiment (\(\sin^{2}2\theta_{13}=0.113\pm 0.013\pm 0.019\)) [26] showed that \(\theta_{13}\simeq 9^{\circ}\). Moreover, several neutrino oscillation experiments such as MINOS [29], Double Chooz [27] and T2K [30] measured consistent non-zero values for \(\theta_{13}\). Other mixing
angle values also show small deviations from the TBM values.
The experiments investigating neutrino oscillation have discovered two mass-squared differences that vary significantly in their scales. The smaller mass-squared difference, denoted \(\Delta m^{2}_{21}=m^{2}_{2}-m^{2}_{1}\), is positive and of the order of \(10^{-5}\,eV^{2}\), while the larger mass-squared difference, \(\Delta m^{2}_{31}=m^{2}_{3}-m^{2}_{1}\), is of order \(10^{-3}\,eV^{2}\) but its sign is unknown. This leads to two possible mass hierarchies for neutrinos: normal hierarchy (NH), in which \(\Delta m^{2}_{31}\) is positive and \(m_{1}<m_{2}<m_{3}\), and inverted hierarchy (IH), where \(m_{3}<m_{1}<m_{2}\). Many experiments like INO [31; 32; 33], ICECube-PINGU [34; 35; 36; 37] and long baseline experiments [38; 39] have the primary objective of determining the sign of \(\Delta m^{2}_{31}\). The values of the mixing angles, mass-squared differences and \(\delta_{CP}\) from the global analysis of data are summarized in Table 1.
To elucidate the smallness of neutrino masses in comparison to those of the charged leptons and quarks, a mechanism exploiting the Majorana nature of neutrinos, called the seesaw mechanism, was introduced in [41; 42; 43; 44; 45]. In this mechanism, right-handed partners of the neutrinos are introduced with Majorana masses at a high scale, while the neutrinos have Dirac masses of the order of the charged lepton masses. There are also other frameworks beyond the standard model (BSM) that can explain the origin of neutrino masses, for example, Supersymmetry [46], the Minimal Supersymmetric Standard Model (MSSM) [47], the Minimal seesaw model [48], the Inverse seesaw model [49], the Next-to-Minimal Supersymmetric Standard Model (NMSSM) [50], String theory [51], models based on extra dimensions [52], the Radiative Seesaw Mechanism [53; 54] and others. In addition, various models based on non-abelian discrete
flavor symmetries [55] like \(A_{4}\)[56; 57; 58; 59; 60; 61; 62], \(S_{3}\)[63], \(S_{4}\)[64; 65; 66; 67; 68; 69], \(\Delta_{27}\)[70; 71; 72; 73], \(\Delta_{54}\)[18; 74; 75] etc. have been proposed to obtain tribimaximal mixing (TBM) and deviation from TBM.
The mixing between the neutrino flavour eigenstates and their mass eigenstates is encoded by the commonly used PMNS matrix. This PMNS matrix is parameterized in a three-flavoured paradigm using three mixing angles and three CP phases as given below:
\[U_{PMNS}=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{ i\delta}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{ i\delta}&c_{23}c_{13}\end{pmatrix}\cdot U_{Maj} \tag{2}\]
where, \(c_{ij}=\cos\theta_{ij}\), \(s_{ij}=\sin\theta_{ij}\). The diagonal matrix \(U_{Maj}=diag(1,e^{i\alpha},e^{i(\beta+\gamma)})\) contains the Majorana CP phases, \(\alpha\), \(\beta\) which become observable in case the neutrinos behave as Majorana particles. Identifying neutrinoless double beta decay will probably be necessary to prove that neutrinos are Majorana particles. Such decays have not yet been seen. Here, symmetry will play an important role in explaining these problems. In order to account for the fact that neutrino mass is zero within the standard model (SM) [76], it becomes necessary to develop a new framework that goes beyond the standard model. This entails incorporating a new symmetry and creating a mechanism that generates non-zero masses for neutrinos.
In this study, we put forward a model for neutrino masses that explains the observed non-zero value of \(\theta_{13}\), as well as the existing data on neutrino masses and mixings. To obtain the deviation from the exact TBM neutrino mixing pattern, we extend the flavon sector of the Altarelli-Feruglio (A-F) model [77; 78] by introducing extra flavons \(\xi^{\prime}\), \(\xi^{\prime\prime}\) and \(\rho\), which transform as \(1^{\prime}\), \(1^{\prime\prime}\) and \(1\) respectively under \(A_{4}\). The type-I seesaw framework [79; 80] is utilized to construct the model. We also incorporate a \(Z_{2}\times Z_{3}\) symmetry, which serves to forbid unwanted terms.
The content of our paper is organised as follows: In Section II, we give an overview of the framework of our model by specifying the fields involved and their transformation properties under the symmetries imposed. In Section III, we present the numerical analysis and study the results for the neutrino phenomenology. We finally conclude our work in Section IV.
## II Framework of the model
Here we provide a concise overview of the representations of the non-Abelian discrete symmetry group \(A_{4}\)[78; 81]. \(A_{4}\) is the group of even permutations of four objects and has 12 elements (\(12=\frac{4!}{2}\)). It is also known as the tetrahedral group, the group of orientation-preserving symmetries of a regular tetrahedron. It can be generated by two basic permutations S and T satisfying \(S^{2}=T^{3}=(ST)^{3}=1\). The irreducible representations of \(A_{4}\) include three one-dimensional unitary representations 1, 1\({}^{\prime}\), 1\({}^{\prime\prime}\), with the generators S and T given, respectively, as follows:
\[1:S=1,T=1\]
\[1^{\prime}:S=1,T=\omega^{2}\]
\[1^{\prime\prime}:S=1,T=\omega\]
and a three dimensional unitary representation with the generators1
Footnote 1: Here the generator T has been chosen to be diagonal
\[T=\begin{pmatrix}1&0&0\\ 0&\omega^{2}&0\\ 0&0&\omega\end{pmatrix} \tag{3}\]
\[S=\frac{1}{3}\begin{pmatrix}-1&2&2\\ 2&-1&2\\ 2&2&-1\end{pmatrix} \tag{4}\]
Here \(\omega\) is the cube root of unity, \(\omega=\exp(2i\pi/3)\), so that \(1+\omega+\omega^{2}=0\).
The multiplication rules corresponding to the specific basis of two generators S and T are as follows:
\[1\times 1=1\]
\[1^{\prime\prime}\times 1^{\prime}=1\]
\[1^{\prime}\times\ 1^{\prime\prime}=1\]
\[3\times 3=3+3_{A}+1+1^{\prime}+1^{\prime\prime}\]
For two triplets
\[a=(a_{1},a_{2},a_{3})\]
\[b=(b_{1},b_{2},b_{3})\]
we can write
\[1\equiv(ab)=a_{1}b_{1}+a_{2}b_{3}+a_{3}b_{2}\]
\[1^{\prime}\equiv(ab)^{\prime}=a_{3}b_{3}+a_{1}b_{2}+a_{2}b_{1}\]
\[1^{\prime\prime}\equiv(ab)^{\prime\prime}=a_{2}b_{2}+a_{1}b_{3}+a_{3}b_{1}\]
Here 1 is symmetric under the exchange of second and third elements of a and b, \(1^{\prime}\) is symmetric under the exchange of the first and second elements while \(1^{\prime\prime}\) is symmetric under the exchange of first and third elements.
\[3\equiv(ab)_{S}=\frac{1}{3}(2a_{1}b_{1}-a_{2}b_{3}-a_{3}b_{2},2a_{3}b_{3}-a_{1 }b_{2}-a_{2}b_{1},2a_{2}b_{2}-a_{1}b_{3}-a_{3}b_{1})\]
\[3_{A}\equiv(ab)_{A}=\frac{1}{3}(a_{2}b_{3}-a_{3}b_{2},a_{1}b_{2}-a_{2}b_{1},a _{1}b_{3}-a_{3}b_{1})\]
Here 3 is symmetric and \(3_{A}\) is anti-symmetric. For the symmetric case, we notice that the first element has 2-3 exchange symmetry, the second element has 1-2 exchange symmetry and the third element has 1-3 exchange symmetry.
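The decomposition above is purely algebraic and can be transcribed directly into a short helper function (the sketch below is an illustration only; it introduces no physics beyond the product rules listed above):

```python
def a4_triplet_product(a, b):
    """A4 product of two triplets a=(a1,a2,a3), b=(b1,b2,b3): returns (1, 1', 1'', 3_S, 3_A)."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    s1 = a1*b1 + a2*b3 + a3*b2
    s1p = a3*b3 + a1*b2 + a2*b1
    s1pp = a2*b2 + a1*b3 + a3*b1
    t_sym = ((2*a1*b1 - a2*b3 - a3*b2) / 3,
             (2*a3*b3 - a1*b2 - a2*b1) / 3,
             (2*a2*b2 - a1*b3 - a3*b1) / 3)
    t_asym = ((a2*b3 - a3*b2) / 3,
              (a1*b2 - a2*b1) / 3,
              (a1*b3 - a3*b1) / 3)
    return s1, s1p, s1pp, t_sym, t_asym
```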
The particle content and the charge assignments under the symmetry group are given in Table 2. The left-handed lepton doublets \(l\) are assigned to the triplet representation and the right-handed charged leptons (\(e^{c},\mu^{c},\tau^{c}\)) to the singlet representations (\(1,1^{\prime\prime},1^{\prime}\)) under A\({}_{4}\), respectively, while the other particles transform as shown in Table 2. Here, \(h_{u}\) and \(h_{d}\) are the standard Higgs doublets
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline Field & \(l\) & \(e^{c}\) & \(\mu^{c}\) & \(\tau^{c}\) & \(h_{u}\) & \(h_{d}\) & \(\nu^{c}\) & \(\Phi_{S}\) & \(\Phi_{T}\) & \(\xi\) & \(\xi^{\prime}\) & \(\xi^{\prime\prime}\) & \(\rho\) \\ \hline SU(2) & 2 & 1 & 1 & 1 & 2 & 2 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ A\({}_{4}\) & 3 & 1 & 1\({}^{\prime\prime}\) & 1\({}^{\prime}\) & 1 & 1 & 3 & 3 & 3 & 1 & 1\({}^{\prime\prime}\) & 1\({}^{\prime}\) & 1 \\ Z\({}_{2}\) & 1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 & -1 & 1 & 1 & 1 & 1 \\ Z\({}_{3}\) & \(\omega^{2}\) & \(\omega\) & \(\omega\) & \(\omega\) & 1 & 1 & 1 & \(\omega\) & 1 & \(\omega\) & \(\omega\) & \(\omega\) & \(\omega\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Full particle content of our model
which remain invariant under \(A_{4}\). The right-handed neutrino field \(\nu^{c}\) is assigned to the triplet representation under \(A_{4}\) flavor symmetry. There are six \(SU(2)\otimes U_{Y}(1)\) Higgs singlets, four of which (\(\xi\), \(\xi^{\prime}\), \(\xi^{\prime\prime}\) and \(\rho\)) are singlets under \(A_{4}\) and two of which (\(\Phi_{T}\) and \(\Phi_{S}\)) transform as triplets.
Consequently, the invariant Yukawa Lagrangian is as follows:
\[-\mathcal{L}=\frac{y_{e}}{\Lambda}(l\Phi_{T})_{1}h_{d}e^{c}+\frac{y_{\mu}}{\Lambda}(l\Phi_{T})_{1^{\prime}}h_{d}\mu^{c}+\frac{y_{\tau}}{\Lambda}(l\Phi_{T})_{1^{\prime\prime}}h_{d}\tau^{c}+\frac{y_{1}}{\Lambda}\xi_{1}(lh_{u}\nu^{c})_{1}+\frac{y_{2}}{\Lambda}\xi_{2}(lh_{u}\nu^{c})_{1^{\prime}}+\] \[\frac{y_{3}}{\Lambda}\xi_{3}(lh_{u}\nu^{c})_{1^{\prime\prime}}+\frac{y_{4}}{\Lambda}\rho(lh_{u}\nu^{c})_{1}+\frac{y_{a}}{\Lambda}\Phi_{S}(lh_{u}\nu^{c})_{A}+\frac{y_{b}}{\Lambda}\Phi_{S}(lh_{u}\nu^{c})_{S}+\frac{1}{2}M_{N}(\nu^{c}\nu^{c})+h.c. \tag{5}\]
The terms \(y_{e}\), \(y_{\mu}\), \(y_{\tau}\), \(y_{1}\), \(y_{2}\), \(y_{3}\), \(y_{4}\), \(y_{a}\) and \(y_{b}\) are coupling constants and \(\Lambda\) is the cut-off scale of the theory. We assume \(\Phi_{T}\) does not couple to the Majorana mass matrix and \(\Phi_{S}\) does not couple to the charged leptons. After spontaneous breaking of the flavor and electroweak symmetries we obtain the mass matrices for the charged leptons and neutrinos. We assume the vacuum alignment \(\langle\Phi_{T}\rangle=(v_{T},0,0)\) and \(\langle\Phi_{S}\rangle=(v_{s},v_{s},v_{s})\). Also, \(v_{u},v_{d}\) are the VEVs of \(h_{u}\), \(h_{d}\), and \(u_{1},u_{2},u_{3},u_{4}\) are the VEVs of \(\xi_{1}\), \(\xi_{2}\), \(\xi_{3}\), \(\rho\), respectively.
The VEV pattern of the \(A_{4}\) triplets, which is considered in our model, has been thoroughly examined in numerous \(A_{4}\) models like [78; 82].
The charged lepton mass matrix is given as
\[M_{l}=\frac{v_{d}v_{T}}{\Lambda}\begin{pmatrix}y_{e}&0&0\\ 0&y_{\mu}&0\\ 0&0&y_{\tau}\end{pmatrix} \tag{6}\]
where, \(v_{d}\) and \(v_{T}\) are the VEVs of \(h_{d}\) and \(\Phi_{T}\) respectively.
The structure of the Majorana neutrino mass matrix:
\[M_{R}=\begin{pmatrix}M_{N}&0&0\\ 0&0&M_{N}\\ 0&M_{N}&0\end{pmatrix} \tag{7}\]
The form of the Dirac mass matrix:
\[M_{D}=\begin{pmatrix}\frac{2b}{3}+c+f&-\frac{a}{3}-\frac{b}{3}+d&-\frac{a}{3 }-\frac{b}{3}+e\\ \frac{a}{3}-\frac{b}{3}+d&\frac{2b}{3}+e&-\frac{a}{3}-\frac{b}{3}+c+f\\ \frac{a}{3}-\frac{b}{3}+e&\frac{a}{3}-\frac{b}{3}+c+f&\frac{2b}{3}+d\end{pmatrix} \tag{8}\]
Where, \(a=\frac{y_{a}v_{u}v_{s}}{\Lambda}\), \(b=\frac{y_{b}v_{u}v_{s}}{\Lambda}\), \(c=\frac{y_{1}v_{u}u_{1}}{\Lambda}\), \(d=\frac{y_{2}v_{u}u_{2}}{\Lambda}\), \(e=\frac{y_{3}v_{u}u_{3}}{\Lambda}\) and \(f=\frac{y_{4}v_{u}u_{4}}{\Lambda}\).
The Type-I seesaw method is used to determine the effective neutrino mass matrix \(m_{\nu}=M_{D}^{T}M_{R}^{-1}M_{D}\)
\[m_{\nu}=\begin{pmatrix}m_{11}&m_{12}&m_{13}\\ m_{12}&m_{22}&m_{23}\\ m_{13}&m_{23}&m_{33}\end{pmatrix} \tag{9}\]
Where,
\(m_{11}=\frac{1}{M_{N}}[2(\frac{a}{3}-\frac{b}{3}+d)(\frac{a}{3}-\frac{b}{3}+e) +(\frac{2b}{3}+c+f)^{2}]\)
\(m_{12}=m_{21}=\frac{1}{M_{N}}[(\frac{a}{3}-\frac{b}{3}+e)(\frac{2b}{3}+e)+( \frac{a}{3}-\frac{b}{3}+d)(\frac{a}{3}-\frac{b}{3}+c+f)+(-\frac{a}{3}-\frac{b} {3}+d)(\frac{2b}{3}+c+f)]\)
\(m_{13}=m_{31}=\frac{1}{M_{N}}[(\frac{a}{3}-\frac{b}{3}+d)(\frac{2b}{3}+d)+( \frac{a}{3}-\frac{b}{3}+e)(-\frac{a}{3}-\frac{b}{3}+c+f)+(-\frac{a}{3}-\frac{ b}{3}+e)(\frac{2b}{3}+c+f)]\)
\(m_{22}=\frac{1}{M_{N}}[(-\frac{a}{3}-\frac{b}{3}+d)^{2}+2(\frac{2b}{3}+e)( \frac{a}{3}-\frac{b}{3}+c+f)]\)
\(m_{23}=m_{32}=\frac{1}{M_{N}}[(-\frac{a}{3}-\frac{b}{3}+d)(-\frac{a}{3}-\frac{ b}{3}+e)+(\frac{2b}{3}+d)(\frac{2b}{3}+e)+(-\frac{a}{3}-\frac{b}{3}+c+f)(\frac{a}{3}- \frac{b}{3}+c+f)]\)
\(m_{33}=\frac{1}{M_{N}}[(-\frac{a}{3}-\frac{b}{3}+e)^{2}+2(\frac{2b}{3}+d)(- \frac{a}{3}-\frac{b}{3}+c+f)]\)
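The seesaw contraction above is straightforward to reproduce numerically. The sketch below builds \(M_{D}\) and \(M_{R}\) from Eqs. (7)-(8) and returns \(m_{\nu}=M_{D}^{T}M_{R}^{-1}M_{D}\); the parameter values in the usage line are placeholders, not the fitted values of the model.

```python
import numpy as np

def seesaw_mass_matrix(a, b, c, d, e, f, M_N):
    """Effective light-neutrino mass matrix m_nu = M_D^T M_R^{-1} M_D (Eqs. (7)-(9))."""
    M_D = np.array([
        [2*b/3 + c + f,     -a/3 - b/3 + d,     -a/3 - b/3 + e],
        [a/3 - b/3 + d,      2*b/3 + e,         -a/3 - b/3 + c + f],
        [a/3 - b/3 + e,      a/3 - b/3 + c + f,  2*b/3 + d],
    ])
    M_R = M_N * np.array([[1.0, 0.0, 0.0],
                          [0.0, 0.0, 1.0],
                          [0.0, 1.0, 0.0]])
    return M_D.T @ np.linalg.inv(M_R) @ M_D

# placeholder inputs, for illustration only
m_nu = seesaw_mass_matrix(a=0.1, b=0.05, c=0.02, d=0.02, e=0.02, f=0.02, M_N=1.0)
```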
To explain the smallness of the active neutrino masses, we consider heavy neutrinos with masses \(M_{N}\approx 10^{14}\) GeV. We can assume \(c\simeq d\simeq e\simeq f\). This is a reasonable assumption to make, since the phenomenology does not change drastically unless the VEVs of the singlet Higgs fields vary by a huge amount [82; 83]. Thus, the neutrino mass matrix elements become:
\(m^{\prime}_{11}=\frac{1}{9M_{N}}[2(a^{2}-2ab+3b^{2}+6(a+b)c+27c^{2})]\)
\(m^{\prime}_{12}=m^{\prime}_{21}=\frac{1}{9M_{N}}[a^{2}-2a(b-3c)-3(b-3c)(b+5c)]\)
\(m^{\prime}_{13}=m^{\prime}_{31}=\frac{1}{9M_{N}}[-a^{2}-3(b-3c)(b+5c)]\)
\(m^{\prime}_{22}=\frac{1}{9M_{N}}[a^{2}+6ab-3b^{2}+12bc+45c^{2}]\)
\(m^{\prime}_{23}=m^{\prime}_{32}=\frac{1}{9M_{N}}[2b(a+3b)-6(a+b)c+54c^{2}]\)
\(m^{\prime}_{33}=\frac{1}{9M_{N}}[a^{2}-3b^{2}+12bc+45c^{2}-2a(b+6c)]\)
In Section III, we give a detailed phenomenological analysis of various neutrino oscillation parameters. Further, we present a numerical study of neutrinoless double-beta decay considering the allowed parameter space of the model.
## III Numerical analysis and results
The neutrino mass matrix \(m_{\nu}\) can be diagonalized by the PMNS matrix \(U\) as
\[U^{\dagger}m_{\nu}U^{*}=\text{diag}(m_{1},m_{2},m_{3}) \tag{10}\]
We can numerically calculate \(U\) using the relation \(U^{\dagger}hU={\rm diag}(m_{1}^{2},m_{2}^{2},m_{3}^{2})\), where, \(h=m_{\nu}m_{\nu}^{\dagger}\). The neutrino oscillation parameters \(\theta_{12}\), \(\theta_{13}\), \(\theta_{23}\) and \(\delta_{CP}\) can be obtained from \(U\) as
\[s_{12}^{2}=\frac{|U_{12}|^{2}}{1-|U_{13}|^{2}},\qquad s_{13}^{2}=|U_{13}|^{2}, \qquad s_{23}^{2}=\frac{|U_{23}|^{2}}{1-|U_{13}|^{2}}, \tag{11}\]
and \(\delta\) may be given by
\[\delta=\sin^{-1}\left(\frac{8\,{\rm Im}(h_{12}h_{23}h_{31})}{P}\right) \tag{12}\]
with
\[P=(m_{2}^{2}-m_{1}^{2})(m_{3}^{2}-m_{2}^{2})(m_{3}^{2}-m_{1}^{2})\sin 2\theta _{12}\sin 2\theta_{23}\sin 2\theta_{13}\cos\theta_{13} \tag{13}\]
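Numerically, Eqs. (10)-(13) can be evaluated as sketched below. The sketch assumes an ascending (normal-ordering-like) labelling of the eigenvalues and ignores the rephasing needed to bring \(U\) to the PDG convention, so it is illustrative rather than a drop-in replacement for the fitting code.

```python
import numpy as np

def oscillation_parameters(m_nu):
    """Mixing angles and Dirac phase from a complex neutrino mass matrix m_nu."""
    h = m_nu @ m_nu.conj().T                 # h = m_nu m_nu^dagger, Hermitian
    m2, U = np.linalg.eigh(h)                # eigenvalues m_i^2 in ascending order
    s13_sq = abs(U[0, 2])**2                 # Eq. (11)
    s12_sq = abs(U[0, 1])**2 / (1 - s13_sq)
    s23_sq = abs(U[1, 2])**2 / (1 - s13_sq)
    t12, t13, t23 = (np.arcsin(np.sqrt(x)) for x in (s12_sq, s13_sq, s23_sq))
    P = ((m2[1] - m2[0]) * (m2[2] - m2[1]) * (m2[2] - m2[0])
         * np.sin(2*t12) * np.sin(2*t23) * np.sin(2*t13) * np.cos(t13))  # Eq. (13)
    delta = np.arcsin(8 * np.imag(h[0, 1] * h[1, 2] * h[2, 0]) / P)      # Eq. (12)
    return m2, s12_sq, s13_sq, s23_sq, delta
```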
For the comparison of theoretical neutrino mixing parameters with the latest experimental data [40], the \(A_{4}\) model is fitted to the experimental data by minimizing the following \(\chi^{2}\) function:
\[\chi^{2}=\sum_{i}\left(\frac{\lambda_{i}^{model}-\lambda_{i}^{expt}}{\Delta \lambda_{i}}\right)^{2}. \tag{14}\]
where \(\lambda_{i}^{model}\) is the \(i^{th}\) observable predicted by the model, \(\lambda_{i}^{expt}\) stands for the \(i^{th}\) experimental best-fit value and \(\Delta\lambda_{i}\) is the \(1\sigma\) range of the \(i^{th}\) observable.
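The \(\chi^{2}\) of Eq. (14) is a direct sum over the fitted observables; a minimal sketch (the array layout is our choice) is:

```python
import numpy as np

def chi_square(model, best_fit, sigma):
    """Eq. (14): model predictions, experimental best fits and 1-sigma ranges as arrays."""
    model, best_fit, sigma = map(np.asarray, (model, best_fit, sigma))
    return float(np.sum(((model - best_fit) / sigma) ** 2))
```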
In Fig. 1, the correlation among the neutrino oscillation parameters \(\sin^{2}\theta_{12}\), \(\sin^{2}\theta_{23}\), \(\frac{\Delta m_{21}^{2}}{\Delta m_{31}^{2}}\) and \(\sin^{2}\theta_{13}\) for NH is shown, where the parameter space is constrained using the \(3\sigma\) bound on neutrino oscillation data. We can see that there is a high correlation among different parameters of the model. Fig. 2 shows the correlation among the mixing angles, the ratio of mass-squared differences, the Jarlskog parameter \(J\) and the Dirac CP phase for NH.
The calculated best-fit values of \(\sin^{2}\theta_{12}\), \(\sin^{2}\theta_{13}\) and \(\sin^{2}\theta_{23}\) are (0.342, 0.0238, 0.556), which are within the \(3\sigma\) range of the experimental values. Other parameters such as \(\Delta m_{21}^{2}\), \(\Delta m_{31}^{2}\) and \(\delta_{CP}\) have their best-fit values, corresponding to the \(\chi^{2}\)-minimum, at (\(7.425\times 10^{-5}eV^{2}\), \(2.56\times 10^{-3}eV^{2}\), \(-0.358\pi\)) respectively, which agree well with the latest observed neutrino oscillation experimental data. Thus, the model defined here clearly shows the deviation from exact tri-bimaximal mixing.
**Neutrinoless double beta decay (NDBD):**
Up until now, the question of whether neutrinos belong to the Dirac or Majorana category remains unanswered. If they are of the Majorana type, the investigation of Neutrinoless
Double Beta Decay (NDBD) becomes highly significant. Several ongoing experiments are being conducted to ascertain the Majorana nature of neutrinos. The effective mass that controls this process is furnished by
\[m_{\beta\beta}=\sum_{i}U_{Li}^{2}m_{i} \tag{15}\]
where \(U_{Li}\) are the elements of the first row of the neutrino mixing matrix \(U_{PMNS}\) (Eq.2) which is dependent on known parameters \(\theta_{12}\), \(\theta_{13}\) and the unknown Majorana phases \(\alpha\) and \(\beta\). \(U_{PMNS}\) is the diagonalizing matrix of the light neutrino mass matrix \(m_{\nu}\) so that,
\[m_{\nu}=U_{PMNS}M_{\nu}^{(diag)}U_{PMNS}^{T} \tag{16}\]
where, \(m_{\nu}^{(diag)}=\)diag(\(m_{1}\), \(m_{2}\), \(m_{3}\)). The effective Majorana mass can be parameterized using the diagonalizing matrix elements and the mass eigen values as follows:
\[m_{\beta\beta}=m_{1}c_{12}^{2}c_{13}^{2}+m_{2}s_{12}^{2}c_{13}^{2}e^{2i\alpha} +m_{3}s_{13}^{2}e^{2i\beta} \tag{17}\]
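Eq. (17) translates into a one-line computation once the masses, angles and Majorana phases are specified. The sketch below returns \(|m_{\beta\beta}|\) and makes no assumption about the model's fitted values (angles and phases in radians, masses in eV):

```python
import numpy as np

def m_betabeta(m1, m2, m3, theta12, theta13, alpha, beta):
    """Effective Majorana mass |m_bb| from Eq. (17)."""
    c12, s12 = np.cos(theta12), np.sin(theta12)
    c13, s13 = np.cos(theta13), np.sin(theta13)
    m_bb = (m1 * c12**2 * c13**2
            + m2 * s12**2 * c13**2 * np.exp(2j * alpha)
            + m3 * s13**2 * np.exp(2j * beta))
    return abs(m_bb)
```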
Using the constrained parameter space, we have evaluated the value of \(m_{\beta\beta}\) for our model. The variation of \(m_{\beta\beta}\) with the lightest neutrino mass is shown in figure 3. The sensitivity reach of NDBD experiments like KamLAND-Zen [84; 85], GERDA [86; 87; 88] and LEGEND-1k [89] is also shown in figure 3. \(m_{\beta\beta}\) is found to be well within the sensitivity reach of these NDBD experiments for our model.
Figure 3: Variation of the effective Majorana neutrino mass with the lightest neutrino mass in NH, with the KamLAND-Zen-GERDA bound on the effective mass.
Figure 2: Correlation among the mixing angles, ratio of mass-squared differences \(\frac{\Delta m^{2}_{21}}{\Delta m^{2}_{31}}\), Dirac CP phase and Jarlskog parameter J.
## IV Conclusions
We have developed a flavon-symmetric \(A_{4}\times Z_{2}\times Z_{3}\) model using the type-I seesaw mechanism. This model aims to explain the recent experimental data on neutrino oscillations, which deviate from the tri-bimaximal neutrino mixing pattern. The cyclic \(Z_{2}\times Z_{3}\) symmetric component is included to remove undesired terms from the computations. The computed values clearly demonstrate that the neutrino mixing parameters deviate from the exact tri-bimaximal mixing matrix. The resulting mass matrices give predictions for the neutrino oscillation parameters, whose best-fit values, obtained from the \(\chi^{2}\)-analysis, are consistent with the latest global neutrino oscillation data. In our model, we have also explored NDBD. The value of the effective Majorana neutrino mass \(|m_{\beta\beta}|\) lies well within the sensitivity reach of recent NDBD experiments such as KamLAND-Zen, GERDA and LEGEND-1k. Future measurements of NDBD, the cosmological mass, and the leptonic CP-violation phase \(\delta_{CP}\), compared against the most recent experimental information, will differentiate between various models of neutrino mass.
###### Acknowledgements.
Animesh Barman acknowledges the financial support provided by the CSIR, New Delhi, India for Senior Research Fellowship (file no 09/796(0072)/2017-EMR-1). The research of Ng. K. Francis is funded by DST-SERB, India under Grant no. EMR/2015/001683.
|
2308.07977 | Dynamic Attention-Guided Diffusion for Image Super-Resolution | Diffusion models in image Super-Resolution (SR) treat all image regions with
uniform intensity, which risks compromising the overall image quality. To
address this, we introduce "You Only Diffuse Areas" (YODA), a dynamic
attention-guided diffusion method for image SR. YODA selectively focuses on
spatial regions using attention maps derived from the low-resolution image and
the current time step in the diffusion process. This time-dependent targeting
enables a more efficient conversion to high-resolution outputs by focusing on
areas that benefit the most from the iterative refinement process, i.e.,
detail-rich objects. We empirically validate YODA by extending leading
diffusion-based methods SR3 and SRDiff. Our experiments demonstrate new
state-of-the-art performance in face and general SR across PSNR, SSIM, and
LPIPS metrics. A notable finding is YODA's stabilization effect by reducing
color shifts, especially when training with small batch sizes. | Brian B. Moser, Stanislav Frolov, Federico Raue, Sebastian Palacio, Andreas Dengel | 2023-08-15T18:27:03Z | http://arxiv.org/abs/2308.07977v3 | # YODA: You Only Diffuse Areas.
###### Abstract
This work introduces "You Only Diffuse Areas" (YODA), a novel method for partial diffusion in Single-Image Super-Resolution (SISR). The core idea is to utilize diffusion selectively on spatial regions based on attention maps derived from the low-resolution image and the current time step in the diffusion process. This time-dependent targeting enables a more effective conversion to high-resolution outputs by focusing on areas that benefit the most from the iterative refinement process, i.e., detail-rich objects. We empirically validate YODA by extending leading diffusion-based SISR methods SR3 and SRDiff. Our experiments demonstrate new state-of-the-art performance gains in face and general SR across PSNR, SSIM, and LPIPS metrics. A notable finding is YODA's stabilization effect on training by reducing color shifts, especially when induced by small batch sizes, potentially contributing to resource-constrained scenarios. The proposed spatial and temporal adaptive diffusion mechanism opens promising research directions, including developing enhanced attention map extraction techniques and optimizing inference latency based on sparser diffusion.
## 1 Introduction
Image Super-Resolution (SR) describes the process of enhancing Low-Resolution (LR) images into High-Resolution (HR) images. Although it has a long history of research, it remains a fascinating but challenging domain within computer vision [20]. The primary challenge arises from the inherently ill-posed nature of SR: any given LR image can lead to several valid HR images and vice versa [2, 23].
Recently, the field of SR has made significant progress thanks to deep learning [7]. Initial regression-based methods, such as early convolutional neural networks, perform well at low magnification ratios. However, they often fail to produce high-frequency details at high magnification ratios and generate over-smoothed images. To bridge this gap, generative models and, more recently, Denoising Diffusion Probabilistic Models (DDPMs) have emerged with better quality compared to regression-based methods when human raters are asked [21, 26, 6]. Traditionally, diffusion models apply diffusion across the entire image for all time steps. However, this is inefficient with respect to image quality and inference speed because not all regions of an image require equally in-depth feature extraction and refinement. Consider, for example, an image with a face in the foreground and a monochromatic background devoid of complex features, e.g., a blue sky without any detail-rich elements like clouds.
To address this, we introduce the "You Only Diffuse Areas" (YODA) approach, which targets only essential areas based on time-dependent and attention-guided masking. We employ the self-supervised method DINO [4] to identify important regions within the image and utilize a strategy similar to RePaint [18], an approach for inpainting tasks, to merge SR predictions of these vital areas with the LR image for subsequent processing steps. Consequently, YODA initiates the process with a noisy LR input alongside an attention map specifying the spatial regions to be diffused at various time steps. Thus, high attention values trigger more refinement iterations and throughout this process, YODA substitutes crucial regions with SR predictions. The predicted SR regions expand progressively with each step, simultaneously contributing to gradual denoising and enhancing the overall image quality. A significant benefit of YODA is that it can be applied agnostically to the diffusion model, which means that our method can be used plug&play for any existing diffusion approach.
We evaluate YODA in conjunction with SR3 [21] for face SR and SRDiff [14] for general SR and demonstrate clear performance improvements. However, the influence of YODA appears to extend beyond mere image quality enhancement. Our face SR experiments necessitated a reduced batch size (4 instead of 256 for \(64\times 64\to 512\times 512\) scaling) due to training on a single GPU, and YODA exhibited stabilizing effects on the training process. SR3 generated predictions marred by color shifts, an issue that was absent when YODA was integrated under identical conditions. These color shifts resulted in a significant performance drop for SR3 compared to SR3 with YODA.
Thus, our experiments suggest that an SR3 diffusion model can be trained with YODA under limited hardware and smaller batch sizes without the significant performance drop observed for vanilla SR3. Furthermore, YODA opens promising research avenues and would directly benefit from improved techniques for extracting attention maps and from optimized inference speed through sparser diffusion. Our work has the following key contributions:
* We introduce YODA, an attention-guided and time-dependent diffusion approach for image super-resolution.
* We analyze different ways to derive attention maps and find that DINO [4] yields the best results.
* Our approach outperforms state-of-the-art diffusion models SR3 [21] for face-only SR and SRDiff [14] for general SR across several metrics.
* We show that YODA has stabilizing effects on SR3 in the training process when reduced batch sizes are used.
## 2 Background
Our work uses attention maps extracted with DINO to guide a time-dependent and area-masked denoising process in diffusion models. This section lays out the basics of DDPMs and DINO, which are the underlying techniques of our work [21, 4].
### DDPMs
Denoising Diffusion Probabilistic Models (DDPMs) employ two distinct Markov chains: the first models a diffusion process \(q\) transitioning from an input \(\mathbf{x}\) to a pre-defined prior distribution with intermediate states \(\mathbf{z}_{t}\), \(0<t\leq T\), while the second models the reverse process \(p\), reverting from the prior distribution back to the intended target \(\mathbf{y}\) [11]. In the image SR context, we designate \(\mathbf{x}\) as the LR image and the target \(\mathbf{y}\) as the desired HR image. The prior distribution is generally set manually.
#### 2.1.1 Diffusion Process
Adopting the conventional approach, we introduce Gaussian noise to the diffusion process, which manifests as follows:
\[q(\mathbf{z}_{t}\mid\mathbf{z}_{t-1})=\mathcal{N}(\mathbf{z}_{t}\mid\sqrt{1- \alpha_{t}}\,\mathbf{z}_{t-1},\alpha_{t}\mathbf{I}). \tag{1}\]
The hyperparameters \(0<\alpha_{1:T}<1\) represent the noise variance injected at each time step. This formulation can be further simplified to:
\[q(\mathbf{z}_{t}\mid\mathbf{z}_{0})=\mathcal{N}(\mathbf{z}_{t}\mid\sqrt{ \gamma_{t}}\,\mathbf{z}_{0},(1-\gamma_{t})\mathbf{I}), \tag{2}\]
where \(\gamma_{t}=\prod_{i=1}^{t}(1-\alpha_{i})\)[22]. This reduction allows direct sampling of the intermediate step \(\mathbf{z}_{t}\), independent of the previous time steps, without the requirement of computing the previous time step \(\mathbf{z}_{t-1}\), via:
\[\mathbf{z}_{t}=\sqrt{\gamma_{t}}\cdot\mathbf{z}_{0}+\sqrt{1-\gamma_{t}}\cdot \varepsilon_{t},\quad\varepsilon_{t}\sim\mathcal{N}\left(\mathbf{0},\mathbf{ I}\right). \tag{3}\]
Here, either \(\mathbf{z}_{0}\) or \(\varepsilon\) can be derived from \(\gamma_{t}\) and \(\mathbf{z}_{t}\) through the reorganization of Equation 3. Ho et al. [11] recommend predicting the noise, which has been widely accepted in the literature.
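For illustration, the closed-form sampling of Eq. (3) can be written in a few lines of NumPy. The linear \(\alpha_{t}\) schedule and tensor shapes below are assumptions made only for this sketch and do not correspond to the SR3 or SRDiff configurations.

```python
# Minimal sketch of the closed-form forward sampling in Eq. (3).
# The linear alpha_t schedule is an illustrative assumption, not the schedule
# used by SR3 or SRDiff.
import numpy as np

T = 500
alphas = np.linspace(1e-4, 2e-2, T)          # noise variances alpha_1..alpha_T
gammas = np.cumprod(1.0 - alphas)            # gamma_t = prod_i (1 - alpha_i)

def q_sample(z0, t, rng):
    """Draw z_t ~ q(z_t | z_0) = N(sqrt(gamma_t) z_0, (1 - gamma_t) I)."""
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(gammas[t]) * z0 + np.sqrt(1.0 - gammas[t]) * eps, eps

rng = np.random.default_rng(0)
z0 = rng.standard_normal((3, 64, 64))        # a toy stand-in for an HR image
z_t, eps = q_sample(z0, t=250, rng=rng)
print(z_t.shape, float(gammas[250]))
```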
#### 2.1.2 Reverse Process
The reverse process aims to inverse diffusion by employing a parameterized model:
\[p_{\theta}\left(\mathbf{z}_{t-1}\mid\mathbf{z}_{t}\right)=\mathcal{N}\left( \mathbf{z}_{t-1}\mid\mu_{\theta}(\mathbf{z}_{t},\gamma_{t}),\Sigma_{\theta}( \mathbf{z}_{t},\gamma_{t})\right). \tag{4}\]
In image SR, we aim to incorporate conditional information, specifically the LR image, to guide the reverse process toward generating the corresponding HR image:
\[p_{\theta}\left(\mathbf{z}_{t-1}\mid\mathbf{z}_{t},\mathbf{x}\right)=\mathcal{ N}\left(\mathbf{z}_{t-1}\mid\mu_{\theta}(\mathbf{z}_{t},\mathbf{x},\gamma_{t}), \Sigma_{\theta}(\mathbf{z}_{t},\mathbf{x},\gamma_{t})\right). \tag{5}\]
As shown in the diffusion process, the mean \(\mu_{\theta}\) depends on a parameterized denoising function \(f_{\theta}\), which can either predict the added noise \(\varepsilon\) or the underlying image \(\mathbf{z}_{0}\). Following the standard approach of Ho et al. [11], we focus on predicting the noise in this work. Hence, the mean is
\[\mu_{\theta}(\mathbf{x},\mathbf{z}_{t},\gamma_{t})=\frac{1}{\sqrt{\alpha_{t}}} \left(\mathbf{z}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\gamma_{t}}}f_{\theta}\left( \mathbf{x},\mathbf{z}_{t},\gamma_{t}\right)\right). \tag{6}\]
Following Saharia et al. [21], setting the variance of \(p_{\theta}(\mathbf{z}_{t-1}|\mathbf{z}_{t},\mathbf{x})\) to \((1-\alpha_{t})\) yields the subsequent refining step:
\[\mathbf{z}_{t-1}\leftarrow\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{z}_{t}- \frac{1-\alpha_{t}}{\sqrt{1-\gamma_{t}}}f_{\theta}\left(\mathbf{x},\mathbf{z}_ {t},\gamma_{t}\right)\right)+\sqrt{1-\alpha_{t}}\varepsilon_{t}, \tag{7}\]
where \(\varepsilon_{t}\sim\mathcal{N}(\mathbf{0},\,\mathbf{I})\).
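A minimal sketch of the refinement step in Eq. (7) is shown below, assuming a placeholder noise predictor `f_theta` (in SR3 this is a conditional U-Net) and the same illustrative noise schedule as in the previous sketch.

```python
# Sketch of one reverse refinement step following Eq. (7). The noise predictor
# f_theta is a placeholder; in SR3 it is a conditional U-Net that also sees x.
import numpy as np

T = 500
alphas = np.linspace(1e-4, 2e-2, T)                   # illustrative schedule
gammas = np.cumprod(1.0 - alphas)

def f_theta(x_lr, z_t, gamma_t):
    """Placeholder for the learned noise predictor."""
    return np.zeros_like(z_t)

def reverse_step(x_lr, z_t, t, rng):
    """One refinement step z_t -> z_{t-1}, implementing Eq. (7) literally."""
    a_t, g_t = alphas[t], gammas[t]
    eps_t = rng.standard_normal(z_t.shape)
    mean = (z_t - (1.0 - a_t) / np.sqrt(1.0 - g_t) * f_theta(x_lr, z_t, g_t)) / np.sqrt(a_t)
    return mean + np.sqrt(1.0 - a_t) * eps_t

rng = np.random.default_rng(0)
x_lr = rng.standard_normal((3, 64, 64))
z_t = rng.standard_normal((3, 64, 64))
print(reverse_step(x_lr, z_t, t=250, rng=rng).shape)
```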
#### 2.1.3 Optimization
We want our parameterized model to predict the noise added to the input in the intermediate time steps \(t\) as is standard in the literature, which results in the following loss function:
\[\mathcal{L}\left(\theta\right)=\underset{(\mathbf{x},\mathbf{y})}{\mathbb{E}} \underset{t}{\mathbb{E}}\left\|\varepsilon_{t}-f_{\theta}\left(\mathbf{x}, \mathbf{z}_{t},\gamma_{t}\right)\right\|_{1} \tag{8}\]
### Dino
DINO, an acronym for DIstillation with NO labels, is a self-supervised learning approach [4]. It involves a teacher and a student network that is optimized to match the teacher's output via a cross-entropy loss. While both networks share the same architecture, namely Vision Transformers (ViTs) [8], they differ in their parameters.
During the training phase, each network receives two random transformations of the same input: the teacher receives global views, two \(224\times 224\) crops of the original image, and the student gets local views, i.e., crops smaller than \(224\times 224\). This setup encourages the student to learn "local-to-global" correspondences. In other words, the student learns to predict global features from local patches. Moreover, the student is supervised by a cross-entropy loss from the momentum teacher's embedding [10], which means that the teacher's weights are an exponential moving average of the student's weights. This strategy effectively circumvents mode collapse when the teacher and student have identical architectures and produce congruent embeddings.
By creating an artificial classification problem, this framework enables the student to draw meaningful global representations from local perspectives. Given the underlying architecture of a ViT, we can examine self-attention maps from various attention heads. The authors of DINO demonstrated in their work that these self-attention maps contain information about scene layout, object semantics, and areas of interest due to its comprehensive feature extraction, a characteristic we exploit in our research [4]. Similarly, semantic information from DINO can be used to control bit rate allocation in image compression [3].
## 3 Methodology
We propose YODA, a technique that optimizes the diffusion and reverse processes by focusing on key regions of the image in each time step. Thus, YODA enhances overall image quality by pinpointing essential and detail-rich areas for more frequent refinement. We begin by introducing the concept of time-dependent masking. Following this, we explore how this masking can be seamlessly integrated into the training and sampling pipeline of DDPMs.
### Time-Dependent Masking
Let \(\mathbf{x}\) be the input LR image, which is to be enhanced to a SR prediction in \(T\) steps. We assume an attention mask \(\mathbf{M}\) of the same spatial size as \(\mathbf{x}\). Each entry of the mask, \(0\leq\mathbf{M}_{i,j}\leq 1\), reflects the importance of the corresponding spatial position in \(\mathbf{x}\). For two coordinates \((i,j)\) and \((i^{\prime},j^{\prime})\) with \(\mathbf{M}_{i,j}>\mathbf{M}_{i^{\prime},j^{\prime}}\), our diffusion method applies more refinement steps to the location \((i,j)\) than to \((i^{\prime},j^{\prime})\).
Further, we advance the masking process by introducing a lower bound hyperparameter of \(0<l<1\), eliminating areas that would never undergo diffusion (in case of \(\mathbf{M}_{i,j}=0\)) and ensuring a minimum number of diffusion steps in each spatial position. As a result, we can formulate a time-dependent mask with the following equation:
\[\mathbf{M}(t)_{i,j}=\begin{cases}1,\text{if }T\cdot(\mathbf{M}_{i,j}+l)\geq t \\ 0,\text{otherwise}\end{cases} \tag{9}\]
Figure 1 shows an example of our time-dependent masking. For each time step within the range \(0\leq t\leq T\), we can
Figure 1: Time-dependent masking using \(T=1000\).
Figure 2: Sampling process. It starts with masking areas that need refinement (derived from \(\mathbf{z}_{t}\) and \(\mathbf{M}(t)\)) and LR regions, which retain the noise level needed for the next time step. Finally, the SR and LR areas are combined to form a whole image with no masked-out regions for the next iteration.
logically determine whether a given spatial position \((i,j)\) should be refined. This formulation requires a modified training and sampling procedure, which is different from standard DDPMs, discussed in the following sections.
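The time-dependent mask of Eq. (9) reduces to a simple thresholding operation; a minimal sketch follows, where the attention map is a random stand-in rather than a DINO-derived map and the lower bound \(l\) is chosen arbitrarily.

```python
# Minimal sketch of the time-dependent mask M(t) from Eq. (9): a spatial
# position (i, j) is diffused at step t iff T * (M[i, j] + l) >= t.
# The attention map here is random; in YODA it comes from DINO.
import numpy as np

def time_dependent_mask(attn, t, T, lower_bound=0.1):
    """Binary mask selecting the positions refined at time step t."""
    return (T * (attn + lower_bound) >= t).astype(np.float32)

rng = np.random.default_rng(0)
attn = rng.uniform(0.0, 1.0, size=(128, 128))          # stand-in attention map
T = 1000
for t in (1000, 600, 200):
    m = time_dependent_mask(attn, t, T)
    print(f"t={t}: {100 * m.mean():.1f}% of pixels active")
```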
### Training
We aim to confine the diffusion and reverse process to specific areas determined by the current time step \(0\leq t\leq T\) and the corresponding time-dependent mask \(\mathbf{M}(t)\). This leads to a modified version of Equation 8 as a training objective:
\[\mathcal{L}\left(\theta\right)=\mathop{\mathbb{E}}_{\left(\mathbf{x},\mathbf{ y}\right)}\mathop{\mathbb{E}}_{t}\left\|\mathbf{M}(t)\odot\left[\varepsilon_{t}-f_{ \theta}\left(\mathbf{x},\mathbf{z}_{t},\gamma_{t}\right)\right]\right\|_{1} \tag{10}\]
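The masked objective of Eq. (10) can be sketched as follows; the noise tensors and the mask used here are synthetic placeholders rather than outputs of a trained model.

```python
# Sketch of the masked training objective in Eq. (10): only positions active
# under M(t) contribute to the L1 loss.
import numpy as np

def masked_l1(eps_true, eps_pred, mask):
    """||M(t) * (eps_true - eps_pred)||_1 as in Eq. (10).

    In practice one may additionally normalize by mask.sum(); that would be a
    design choice of this sketch, not part of Eq. (10)."""
    return np.abs(mask * (eps_true - eps_pred)).sum()

rng = np.random.default_rng(0)
eps_true = rng.standard_normal((3, 64, 64))
eps_pred = np.zeros_like(eps_true)                     # placeholder prediction
mask = (rng.uniform(size=(1, 64, 64)) > 0.3).astype(np.float32)
print(masked_l1(eps_true, eps_pred, mask))
```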
### Sampling
The sampling process iteratively reverses the diffusion process, transitioning from the noisy state \(\mathbf{z}_{T}\) to the clean state \(\mathbf{z}_{0}\). To ensure that masked and non-masked image regions are correctly transitioned in-between time steps, we formulate the sampling process visualized in Figure 2 similar to related inpainting tasks such as RePaint [18].
First, we identify areas that require refinement in the next time step \((t-1)\), which are derived from the output of the current iteration \(\mathbf{z}_{t}\), and the current mask \(\mathbf{M}(t)\):
\[\widetilde{\mathbf{z}}_{t}\leftarrow\mathbf{M}(t)\odot\mathbf{z}_{t} \tag{11}\]
Next, we divide the image into two complementary components: \(\mathbf{z}_{t-1}^{SR}\), the refined image at the next step, and \(\mathbf{z}_{t-1}^{LR}\), the complementary LR part that remains unrefined and is sampled using \(\mathbf{x}\) as the mean. Both components acquire the same noise level \(\Sigma_{\theta}(\widetilde{\mathbf{z}}_{t},\mathbf{x},\gamma_{t})\) and can be described by:
\[\mathbf{z}_{t-1}^{SR} \sim\mathcal{N}\left(\mu_{\theta}(\widetilde{\mathbf{z}}_{t}, \mathbf{x},\gamma_{t}),\Sigma_{\theta}(\widetilde{\mathbf{z}}_{t},\mathbf{x}, \gamma_{t})\right) \tag{12}\] \[\mathbf{z}_{t-1}^{LR} \sim\mathcal{N}\left(\mathbf{x},\Sigma_{\theta}(\widetilde{ \mathbf{z}}_{t},\mathbf{x},\gamma_{t})\right) \tag{13}\]
The last step combines the complementing, non-overlapping areas to reconstruct a complete image:
\[\mathbf{z}_{t-1}\leftarrow\mathbf{M}(t)\odot\mathbf{z}_{t-1}^{SR}+(1-\mathbf{ M}(t))\odot\mathbf{z}_{t-1}^{LR} \tag{14}\]
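Putting Eqs. (11)-(14) together, one YODA sampling step can be sketched as below. The denoiser, noise schedule, and tensor shapes are placeholders; the variance is set to \(1-\alpha_{t}\) as in Section 2.1.2.

```python
# Sketch of the YODA sampling update (Eqs. (11)-(14)): the masked region is
# refined by the reverse step, the complement is re-noised around the LR image
# x, and the two parts are merged. f_theta and the schedule are placeholders.
import numpy as np

T = 500
alphas = np.linspace(1e-4, 2e-2, T)
gammas = np.cumprod(1.0 - alphas)

def f_theta(x_lr, z, g):
    return np.zeros_like(z)                            # placeholder denoiser

def yoda_step(x_lr, z_t, mask_t, t, rng):
    a_t, g_t = alphas[t], gammas[t]
    sigma = np.sqrt(1.0 - a_t)                         # shared noise level
    z_tilde = mask_t * z_t                                        # Eq. (11)
    mu = (z_tilde - (1.0 - a_t) / np.sqrt(1.0 - g_t)
          * f_theta(x_lr, z_tilde, g_t)) / np.sqrt(a_t)
    z_sr = mu + sigma * rng.standard_normal(z_t.shape)            # Eq. (12)
    z_lr = x_lr + sigma * rng.standard_normal(z_t.shape)          # Eq. (13)
    return mask_t * z_sr + (1.0 - mask_t) * z_lr                  # Eq. (14)

rng = np.random.default_rng(0)
x_lr = rng.standard_normal((3, 128, 128))
z_t = rng.standard_normal((3, 128, 128))
mask_t = (rng.uniform(size=(1, 128, 128)) > 0.5).astype(np.float32)
print(yoda_step(x_lr, z_t, mask_t, t=250, rng=rng).shape)
```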
## 4 Experiments
We evaluate our proposed YODA method and compare its performance in tandem with SR3 for face-only SR and SRDiff for general SR. We present quantitative and qualitative results for both tasks. Overall, our method achieves high-quality results for both face and general SR tasks and outperforms the respective baselines on standard metrics such as PSNR, SSIM, and LPIPS.
### Training Details
The implementation of our models is made publicly available on GitHub1, which complements the official implementation of SRDiff2 and the unofficial implementation of SR33. All experiments were run on a single NVIDIA A100-80GB GPU.
Footnote 1: [https://github.com/WILL-BE-IN-FINAL](https://github.com/WILL-BE-IN-FINAL)
Footnote 2: [https://github.com/LeiaLi/SRDiff](https://github.com/LeiaLi/SRDiff)
Footnote 3: [https://github.com/Janspiry/Image-Super-Resolution-via-Iterative-Refinement](https://github.com/Janspiry/Image-Super-Resolution-via-Iterative-Refinement)
#### 4.1.1 Face Super-Resolution
We use the Flickr-Faces-HQ (FFHQ) dataset [13] for training, which comprises 50,000 high-quality facial images sourced from Flickr. We adopted the AdamW [17] optimizer, using a weight decay of 0.0001 and a learning rate of 5e-5. The number of sampling steps is set to \(T_{train}=500\). For evaluation, we use the CelebA-HQ dataset [12], which contains 30,000 facial images. The number of sampling steps is set to \(T_{eval}=200\). Furthermore, we employed 1M training iterations as in SR3 [21]. We evaluated three scenarios: \(16\times 16\to 128\times 128\), \(64\times 64\to 256\times 256\), and \(64\times 64\to 512\times 512\).
#### 4.1.2 General Super-Resolution
We follow the setup and use the same hyperparameters of SRDiff [14], which itself follows the experimental design of SRFlow [19]: For training, we employed 800 2K resolution high-quality images from DIV2K [1] and 2,650 2K images from Flickr2K [24]. For testing, we used the DIV2K validation set, which consists of 100 images. Mathematically, \(\mathbf{z}_{t-1}^{LR}\) in Equation 14 has a mean value of 0 instead of \(\mathbf{x}\) due to SRDiff's prediction of the residual information between the LR and HR images.
### Results
This section presents the results of our work. First, we show the ablation study of different attention maps extracted with DINO alongside deterministic (non-DL) methods to derive attention maps. We also examined aggregations of DINO attention maps. Next, we present quantitative and qualitative results for facial and general SR.
#### 4.2.1 Attention Maps
Table 1 presents our study with several baselines and masking variants for \(16\times 16\to 128\times 128\) scaling on the CelebA-HQ dataset. The models are compared across three key metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and perceptual-based distance metric LPIPS [20].
We begin by evaluating non-deep learning-based methods employed to derive attention maps:
* **Gaussian:** Placing a simple 2D Gaussian pattern at the center of the image provides a straightforward approach which relies on the assumption that the essential parts of an image are centered.
* **Edge-based Segmentation:** Using the Canny edge detector, the attention maps are defined by the edges, where adjacent and near edges are connected to create defined regions.
* **Scale-Invariant Feature Transform (SIFT):** Through Gaussian differences, SIFT provides an attention map characterized by scale invariance. It produces an attention map by applying 2D Gaussian patterns around the points of interest.
The evaluation of these deterministic strategies revealed mixed results. Specifically, the edge-based segmentation and SIFT methods displayed superior efficacy to our reproduced SR3 under identical hyperparameter conditions; except for SIFT, however, they underperformed relative to the reported SR3 [21] results across all metrics. As expected, the straightforward Gaussian approach yielded the worst performance, as it does not adapt to image features.
In contrast, using DINO to extract attention maps shows improved performance for both ResNet-50 and ViT-S/8 backbones. Individual attention heads (0 to 5) were tested independently along with combination strategies that include averaging (AVG) and selecting the maximum value (MAX). The MAX combination achieved the best results compared to individual heads or the AVG combination. ResNet-50 yields the better performance of the two backbone architectures, indicating its suitability to guide our time-dependent diffusion approach. Thus, in the remainder of the paper, we use the MAX aggregation of DINO attention heads and utilize the ResNet-50 backbone.
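For clarity, the head-aggregation step can be sketched as follows. The per-head maps are random stand-ins here, whereas in our experiments they are self-attention maps extracted from a pre-trained DINO backbone; the rescaling to \([0,1]\) is an assumption of this sketch.

```python
# Illustrative aggregation of per-head attention maps into a single map via the
# MAX (or AVG) rule. The per-head maps here are random stand-ins; in YODA they
# are extracted from a pre-trained DINO backbone.
import numpy as np

def aggregate_heads(head_maps, mode="max"):
    """head_maps: array of shape (num_heads, H, W) with non-negative attention."""
    agg = head_maps.max(axis=0) if mode == "max" else head_maps.mean(axis=0)
    lo, hi = agg.min(), agg.max()
    return (agg - lo) / (hi - lo + 1e-8)               # rescale to [0, 1]

rng = np.random.default_rng(0)
heads = rng.random((6, 128, 128))                      # 6 attention heads
attention_map = aggregate_heads(heads, mode="max")
print(attention_map.shape, attention_map.min(), attention_map.max())
```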
Figure 4 provides additional information on the ratio between the number of pixels diffused by our time-dependent masking approach and the total number of pixel updates if diffusion
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & **PSNR**\(\uparrow\) & **SSIM**\(\uparrow\) & **LPIPS**\(\downarrow\) \\ \hline SR3 (reported) & 23.04 & 0.650 & n.a. \\ SR3 (reproduced) & 22.35 & 0.646 & 0.082 \\ \hline Gaussian & 22.13 & 0.602 & 0.260 \\ Edge-based Seg. & 22.93 & 0.648 & 0.151 \\ SIFT & 22.84 & 0.678 & 0.095 \\ \hline ViT-S/8 Att.-Head 0 & 22.91 & 0.650 & 0.105 \\ ViT-S/8 Att.-Head 1 & 22.43 & 0.616 & 0.130 \\ ViT-S/8 Att.-Head 2 & 22.55 & 0.633 & 0.111 \\ ViT-S/8 Att.-Head 3 & 22.73 & 0.641 & 0.110 \\ ViT-S/8 Att.-Head 4 & 22.85 & 0.645 & 0.097 \\ ViT-S/8 Att.-Head 5 & 22.86 & 0.648 & 0.101 \\ ViT-S/8 AVG & 23.25 & 0.663 & 0.122 \\ ViT-S/8 MAX & 23.46 & 0.683 & 0.103 \\ \hline ResNet-50 Att.-Head 0 & 22.82 & 0.649 & 0.115 \\ ResNet-50 Att.-Head 1 & 22.54 & 0.627 & 0.117 \\ ResNet-50 Att.-Head 2 & 22.84 & 0.650 & 0.107 \\ ResNet-50 Att.-Head 3 & 22.78 & 0.645 & 0.105 \\ ResNet-50 Att.-Head 4 & 22.38 & 0.620 & 0.127 \\ ResNet-50 Att.-Head 5 & 22.50 & 0.630 & 0.119 \\ ResNet-50 AVG & 23.55 & 0.682 & 0.093 \\
**ResNet-50 MAX** & **23.84** & **0.695** & **0.072** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation study of SR3+YODA using different attention maps for \(16\times 16\to 128\times 128\) on CelebA-HQ. The second block shows the results using non-DL importance masks, while the subsequent blocks show the performance with attention maps derived by DINO (with ViT-S/8 and ResNet-50 backbone).
\begin{table}
\begin{tabular}{l l c c c} \hline \hline
**Scale** & **Model** & **PSNR**\(\uparrow\) & **SSIM**\(\uparrow\) & **LPIPS**\(\downarrow\) \\ \hline
4\(\times\) & SR3 & 17.98 & 0.607 & 0.138 \\
**4\(\times\)** & **SR3 + YODA** & **26.33** & **0.838** & **0.090** \\ \hline
8\(\times\) & SR3 & 17.44 & 0.631 & 0.147 \\
**8\(\times\)** & **SR3 + YODA** & **25.04** & **0.800** & **0.126** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on face SR with 4\(\times\) scaling (\(64\times 64\to 256\times 256\)) and 8\(\times\) scaling (\(64\times 64\to 512\times 512\)) on CelebA-HQ. The models were trained for 1M iterations on FFHQ and a reduced batch size of 4 and 8 in order to fit on a single GPU, respectively.
Figure 3: Comparison of attention maps derived from different DINO (ResNet-50 backbone) attention heads 0 to 5 and the combination of all attention maps (MAX). Blue indicates low attention values, while yellow indicates high attention values.
was uniformly applied across all pixel locations throughout every time step (as in standard diffusion). Therefore, any result under 100% shows that not all pixels are diffused during all time steps in the sampling process. As can be seen, DINO coupled with the ResNet-50 architecture requires more total pixel updates than its implementation with ViT-S/8 or deterministic methods. Note that the ResNet-50 implementation can employ 100% of the updates for particularly exceptional scenarios, a characteristic not observed with the combination of DINO and ViT-S/8. Moreover, integrating DINO with ResNet-50 and the MAX combination demands, on average, approximately 70% of the updates compared to SR3.
The coverage can also be inspected in Figure 3, where roughly 70% of the image is diffused according to the MAX-combined attention map. Interestingly, areas with better illumination are often weighted higher for the MAX-combined attention map. Figure 5 illustrates areas refined over time using the MAX-combined attention maps. Here, both ResNet-50 and ViT-S/8 show similar trends in terms of refined area amounts. However, ResNet-50 initiates the refinement process much earlier, advances more rapidly toward refining the entire image, and has a higher standard deviation.
Figure 6 offers comparative qualitative examples between SR3 and SR3+YODA, highlighting subtle yet potentially impactful differences, especially for the pixel-based metrics PSNR and SSIM. The most notable differences can be observed around the eyes, mouth, and hair.
#### 4.2.2 Face Super-Resolution
Beyond the study of attention maps, our work extended into further face SR experiments with varied scaling scenarios, specifically from \(64\times 64\to 256\times 256\) and \(64\times 64\to 512\times 512\). Due to the hardware requirements of SR3 and our available hardware, coupled with the absence of reported quantitative results in the original publication, our experiments with SR3 required a decrease from the originally used batch size of 256: we used a batch size of 8 for the \(64\times 64\to 256\times 256\) and 4 for the \(64\times 64\to 512\times 512\) scenario.
The results are shown in Table 2, where an evident enhancement in quality is observed when SR3 is coupled with YODA across all examined metrics. We explain the large margin by a phenomenon observed consistently across most vanilla SR3 predictions: a color shift, which we attribute to the reduced batch size necessitated by hardware limitations. An example is shown in Figure 7. This color shift manifested in a pronounced deviation in the pixel-based metrics PSNR and SSIM but did not affect the perceptual metric LPIPS to a similar degree.
YODA's role seems to extend beyond mere performance enhancement. It actively mitigates the color shift phenomenon observed at smaller batch sizes. This observation underscores the potential of YODA as not merely a performance enhancer but also as a stabilizing factor, particularly when faced with constraints due to hardware requirements.
Figure 4: Ratio comparison between diffused pixels using our time-dependent masking approach and the total number of pixel updates in standard diffusion. On average, DINO with a ResNet-50 backbone leads to more pixel updates than the VIT-S/8 backbone. The lower bound indicates a threshold to eliminate areas that would never undergo diffusion.
Figure 5: Refined image area in percentage across time steps for the MAX combination. Note that the sampling process goes from \(T=500\) to 0. ResNet-50 initiates the refinement process much earlier, advances more rapidly toward refining the entire image, and has a higher standard deviation.
With YODA, SR3 can be trained to achieve strong performance using a much smaller batch size.
#### 4.2.3 General Super-Resolution
Table 3 shows the 4x scaling general image SR results on the DIV2K validation set. Note that the reported values include regression-based methods, which typically yield higher pixel-based scores (PSNR and SSIM) than generative approaches [21]. When implemented with YODA, SRDiff demonstrates improved performance in PSNR by +0.21 dB and in SSIM by +0.01. However, there is a minor increase in LPIPS by +0.01. Thus, our approach excels in pixel-centric metrics but sees a marginal decline in the perceptual metric.
YODA's strengths appear more significant in face-only SR than in general SR. This may stem from SRDiff's design, which focuses on diffusing and denoising Gaussian noise within the residual image, i.e., the difference between LR and HR. Unlike SR3's approach of using the full LR image, SRDiff's residual image input is relatively sparse. As a result, we surmise that DINO's attention maps might not accurately capture the essential regions of the input, possibly overvaluing areas absent in the residual image.
A critical distinction between SR3 and SRDiff lies in their incorporation of conditional information, i.e., the LR image, which we also identify as a potential contributor to the reduced perceptual score. SRDiff employs an LR encoder that generates an embedding during the denoising phase. Meanwhile, SR3 directly appends the LR image to the input. Another possible reason for the lower perceptual score could be the image size during DINO training, i.e., \(224\times 224\) for the teacher network. As such, fine-tuning DINO on larger-scale images, such as DIV2K, might be essential to capture more meaningful semantic features.
Nevertheless, YODA's benefits can still be instrumental as it improves pixel-based scores (PSNR and SSIM) and might unlock optimized inference latency based on sparser diffusion steps in the future.
## 5 Conclusion
In this work, we presented a novel "You Only Diffuse Areas" (YODA) approach for attention-guided diffusion-based image SR that emphasizes specific areas through time-dependent masking. This targeting allows for a more efficient transition to high-resolution outputs, prioritizing
Figure 6: A comparison of LR, HR, SR3, and SR3+YODA images illustrates the improved quality of our proposed method for the \(16\times 16\to 128\times 128\) CelebA-HQ setting.
Figure 7: A comparison of LR, HR, SR3, and SR3+YODA images in the \(64\times 64\to 256\times 256\) setting. The color shift evident in vanilla SR3 is absent in our approach.
areas that gain the most from iterative refinements, such as detail-intensive objects.
First, we examined different techniques to derive attention maps, including deterministic methods as well as the self-supervised method DINO. Our investigation on the \(16\times 16\to 128\times 128\) face SR case led to selecting DINO with a MAX combination of attention maps as the optimal strategy, which we adopted for the following experiments.
Our subsequent evaluations compared YODA against vanilla SR3 in face SR tasks (\(64\times 64\to 256\times 256\) and \(64\times 64\to 512\times 512\)) and against vanilla SRDiff in general SR with 4x scaling. In both tasks, extending these state-of-the-art diffusion-based techniques with YODA (plug&play) outperformed the vanilla models, showcasing superiority in core metrics, including PSNR, SSIM, and LPIPS.
Beyond performance enhancement, YODA has the effect of stabilizing training. It mitigates the color shift phenomenon that emerges when vanilla SR3 is constrained by a reduced batch size due to hardware limitations. As a result, YODA consistently delivers impressive quality using smaller batch sizes than standard SR3. Therefore, SR3 combined with YODA can be used on more commonly available and possibly less expensive hardware, enhancing accessibility.
## 6 Limitations & Future Work
A notable constraint of this study is its dependence on DINO, thereby inheriting its limitations. For instance, to effectively handle medical image SR, DINO would require fine-tuning tailored to the characteristics of medical imagery. Additionally, DINO is explicitly trained for resolutions such as \(224\times 224\), which may not suffice for high-resolution image SR applications. An ideal solution would be a scale-invariant extraction of attention maps. Another limitation is that YODA introduces a new hyperparameter: the lower bound, which represents the minimum number of diffusion steps that must be defined before training.
For further research, explorations of unconditional image generation with YODA, such as text-to-image translation, and developing other innovative techniques to extract attention maps could be exciting avenues. Moreover, it would be interesting to see YODA perform related image restoration tasks, such as deblurring or unsupervised image SR.
## Acknowledgment
This work was supported by the EU project SustainML (Grant 101070408) and by Carl Zeiss Foundation through the Sustainable Embedded AI project (P2021-02-009).
|
2304.08749 | On pairs of $r$-primitive and $k$-normal elements with prescribed traces
over finite fields | Given $\mathbb{F}_{q^{n}}$, a field with $q^n$ elements, where $q $ is a
prime power and $n$ is positive integer. For $r_1,r_2,m_1,m_2 \in \mathbb{N}$,
$k_1,k_2 \in \mathbb{N}\cup \{0\}$, a rational function $F = \frac{F_1}{F_2}$
in $\mathbb{F}_{q}[x]$ with deg($F_i$) $\leq m_i$; $i=1,2,$ satisfying some
conditions, and $a,b \in \mathbb{F}_{q}$, we construct a sufficient condition
on $(q,n)$ which guarantees the existence of an $r_1$-primitive, $k_1$-normal
element $\epsilon \in \mathbb{F}_{q^n}$ such that $F(\epsilon)$ is
$r_2$-primitive, $k_2$-normal with
$\operatorname{Tr}_{\mathbb{F}_{q^n}/\mathbb{F}_q}(\epsilon) = a$ and
$\operatorname{Tr}_{\mathbb{F}_{q^n}/\mathbb{F}_q}(\epsilon^{-1}) = b$. For
$m_1=10, \; m_2=11,\; r_1 = 3, \; r_2 = 2, \; k_1=2,\;k_2 = 1$, we establish
bounds on $q$, for various $n$, to determine the existence of such elements in
$\mathbb{F}_{q^{n}}$. Furthermore, we identify all such pairs $(q,n)$ excluding
10 possible values of $(q,n)$, in fields of characteristics 13. | Aakash Choudhary, R. K. Sharma | 2023-04-18T06:09:40Z | http://arxiv.org/abs/2304.08749v2 | # On pairs of \(r\)-primitive and \(k\)-normal elements with prescribed traces over finite fields
###### Abstract.
Given \(\mathbb{F}_{q^{n}}\), a field with \(q^{n}\) elements, where \(q\) is a prime power, \(n\) is positive integer. For \(r\in\mathbb{N}\), \(k\in\mathbb{N}\cup\{0\}\), an element \(\epsilon\in\mathbb{F}_{q^{n}}\) is said to be \(r\)-primitive if its multiplicative order is \(\frac{q^{n}-1}{r}\) and it is referred to as \(k\)-normal if the greatest common divisor of the polynomial \(\sum_{i=0}^{n-1}\epsilon^{q^{i}}x^{n-1-i}\) with \(x^{n}-1\) has degree \(k\) in \(\mathbb{F}_{q^{n}}[x]\). In this article, for \(r_{1},r_{2},m_{1},m_{2}\in\mathbb{N}\), \(k_{1},k_{2}\in\mathbb{N}\cup\{0\}\), a rational function \(F=\frac{F_{1}}{F_{2}}\) in \(\mathbb{F}_{q}[x]\) with \(\deg(F_{i})\leq m_{i}\); \(i=1,2\), satisfying some conditions, and \(a,b\in\mathbb{F}_{q}\), we construct a sufficient condition on \((q,n)\) which guarantees the existence of an \(r_{1}\)-primitive, \(k_{1}\)-normal element \(\epsilon\in\mathbb{F}_{q^{n}}\) such that \(F(\epsilon)\) is \(r_{2}\)-primitive, \(k_{2}\)-normal with \(\operatorname{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon)=a\) and \(\operatorname{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon^{-1})=b\). Further, for \(m_{1}=10,m_{2}=11\), we demonstrate an example showing the existence of \(3\)-primitive, \(2\)-normal element \(\epsilon\) in \(\mathbb{F}_{q^{n}}\) such that \(F(\epsilon)\) is \(2\)-primitive, \(1\)-normal with \(\operatorname{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon)=a\) and \(\operatorname{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon^{-1})\)\(=b\) for any prescribed \(a,b\in\mathbb{F}_{q}\) except from possible \(10\) values of \((q,n)\) in field of characteristics \(13\).
Key words and phrases:Character, Finite field, \(r\)-Primitive element, \(k\)-Normal element
_email: [email protected], [email protected]_ 2020 _Mathematics Subject Classification._ 12E20, 11T23 _Key words and phrases._ Character, Finite field, \(r\)-Primitive element, \(k\)-Normal element
_email: [email protected], [email protected]_
for primitive elements in several applications. Researchers have studied the effective construction of such highly ordered elements in papers like [9, 14, 19]. \(k\)-normal elements have the potential to decrease the computational complexity of multiplication operations in finite fields, as explained in [16, 17, 18], and many researchers have studied their existence, including in papers like [21, 24, 26]. In [20, 1], the authors presented a condition that is sufficient for the existence of \(r\)-primitive, \(k\)-normal elements in \(\mathbb{F}_{q^{n}}\) over \(\mathbb{F}_{q}\).
Numerous articles (see [5, 3, 22, 23, 7, 10]) have been published in which authors established sufficient criteria for the existence of primitive elements and normal elements in finite fields with prescribed traces. Recently, [2] proposed a sufficient condition for the existence of an \(r_{1}\)-primitive, \(k_{1}\)-normal element \(\epsilon\in\mathbb{F}_{q^{n}}\) such that \(F(\epsilon)\) is \(r_{2}\)-primitive, \(k_{2}\)-normal in \(\mathbb{F}_{q^{n}}\) over \(\mathbb{F}_{q}\), where \(F\in\Lambda_{q}(m_{1},m_{2})\) (see Definition 1.1). Prior to this article, \(r\)-primitive, \(k\)-normal elements with prescribed traces were not examined. In this article, we present a condition that ensures the existence of an element \(\epsilon\in\mathbb{F}_{q^{n}}\) such that \(\epsilon\) is \(r_{1}\)-primitive, \(k_{1}\)-normal, \(F(\epsilon)\) is \(r_{2}\)-primitive, \(k_{2}\)-normal in \(\mathbb{F}_{q^{n}}\) over \(\mathbb{F}_{q}\) with \(\operatorname{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon)=a\) and \(\operatorname{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon^{-1})=b\) for any \(a,b\in\mathbb{F}_{q}\), where \(F\in\Lambda_{q^{n}}(m_{1},m_{2})\), \(r_{1}\) and \(r_{2}\) are positive divisors of \(q^{n}-1\), and \(k_{1}\) and \(k_{2}\) are degrees of some polynomials over \(\mathbb{F}_{q}\) that divide \(x^{n}-1\).
**Definition 1.1**.: _For positive integers \(m_{1},m_{2}\), Define \(\Lambda_{q}(m_{1},m_{2})\) as the set of rational functions \(F=\frac{F_{1}}{F_{2}}\in\mathbb{F}_{q}(x)\), such that \(F_{1}\) and \(F_{2}\) are relatively prime, deg\((F_{i})\leq m_{i}\); \(i=1,2\), and there exist \(m\in\mathbb{N}\) and an irreducible monic polynomial \(g\in\mathbb{F}_{q}[x]\setminus\{x\}\) such that gcd\((m,q-1)=1\), \(g^{m}|F_{1}F_{2}\) but \(g^{m+1}\nmid F_{1}F_{2}\)._
Let \(A_{m_{1},m_{2}}(r_{1},r_{2},k_{1},k_{2})\) denote the set consisting of pairs \((q,n)\in\mathbb{N}\times\mathbb{N}\) such that for any \(F\in\Lambda_{m_{1},m_{2}}\), \(r_{1}\) and \(r_{2}\), divisors of \(q^{n}-1\), \(k_{1}\) and \(k_{2}\), non-negative integers, and \(a,b\in\mathbb{F}_{q}\), the set \(\mathbb{F}_{q^{n}}\) contains an element \(\epsilon\) such that \(\epsilon\) is \(r_{1}\)-primitive, \(k_{1}\)-normal, and \(F(\epsilon)\) is \(r_{2}\)-primitive, \(k_{2}\)-normal element in \(\mathbb{F}_{q^{n}}\) over \(\mathbb{F}_{q}\) with \(\operatorname{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon)=a\) and \(\operatorname{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon^{-1})=b\).
In this article, we begin by considering \(F\in\Lambda_{q^{n}}(m_{1},m_{2})\), \(r_{1}\) and \(r_{2}\) as divisors of \(q^{n}-1\), \(k_{1}\) and \(k_{2}\) as degrees of some polynomials \(f_{1}\) and \(f_{2}\) over \(\mathbb{F}_{q}\) that divide \(x^{n}-1\), and \(a,b\in\mathbb{F}_{q}\). We then establish a sufficient condition on \((q,n)\) such that \((q,n)\in A_{m_{1},m_{2}}(r_{1},r_{2},k_{1},k_{2})\). Furthermore, using some results and a sieve variation of this sufficient condition, we prove the following result:
**Theorem 1.2**.: _Let \(n\geq 13\) be a positive integer and let \(q=13^{y}\) be a power of \(13\). If \(x^{n}-1\) has a factor of degree 2 in \(\mathbb{F}_{q}[x]\), and \(q\) and \(n\) assume none of the following values:_
1. \(q=13\) _and_ \(n=13,14,15,16,18,20,24\)_;_
2. \(q=13^{2}\) _and_ \(n=13,14\)_;_
3. \(q=13^{3}\) _and_ \(n=13\)_._
_Then, \((q,n)\in A_{10,11}(3,2,2,1)\)._
_Note:-_ The exceptions listed in the preceding theorem are not necessarily genuine exceptions; they are only possible exceptions.
SageMath [25] is used to perform all non-trivial and complex calculations required throughout this article.
## 2. Preliminaries
This section serves as an introduction to fundamental concepts, notations, and findings that will be employed in subsequent sections of this article. Throughout this article, \(n\) represents a positive integer, \(q\) stands for an arbitrary prime power, and \(\mathbb{F}_{q}\) denotes a finite field with \(q\) elements.
### Definitions
1. A character of a finite abelian group \(G\) is a homomorphism \(\chi\) from the set \(G\) into \(Z^{1}\), where \(Z^{1}\) is the set of all elements of complex field \(\mathbb{C}\) with absolute value \(1\). The trivial character of \(G\) denoted by \(\chi_{0}\), is defined as \(\chi_{0}(g)=1\) for all \(g\in G\). In addition, the set of all characters of \(G\), denoted by \(\widehat{G}\), forms a group under multiplication, which is isomorphic to \(G\). The order of a character \(\chi\) is the least positive integer \(d\) such that \(\chi^{d}=\chi_{0}\). For a finite field \(\mathbb{F}_{q^{n}}\), a character of the additive group \(\mathbb{F}_{q^{n}}\) is called an additive character and that of the multiplicative group \(\mathbb{F}_{q^{n}}^{*}\) is called a multiplicative character. For more information on characters, primitive elements and finite fields, we refer the reader to [13].
2. The Euler's totient function for polynomials \(f(x)\in\mathbb{F}_{q}[x]\) is defined as follows: \[\Phi_{q}(f)=\left|\left(\frac{\mathbb{F}_{q}[x]}{\langle f\rangle}\right)^{* }\right|=|f|\prod_{\begin{subarray}{c}p|f,\\ p\text{ irreducible}\\ \text{over}\mathbb{F}_{q}\end{subarray}}\left(1-\frac{1}{|p|}\right),\] where \(|f|=q^{deg(f)}\), and \(\langle f\rangle\) is the ideal generated by \(f\) in \(\mathbb{F}_{q}[x]\).
3. For \(a\in\mathbb{F}_{q}\), the characteristic function for the subset of \(\mathbb{F}_{q^{n}}\) whose elements satisfy \(\mathrm{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon)=a\) is defined as \[\tau_{a}:\epsilon\mapsto\frac{1}{q}\sum_{\eta\in\widehat{\mathbb{F}}_{q}}\eta( \mathrm{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon)-a).\] According to [13, Theorem 5.7], every additive character \(\eta\) of \(\mathbb{F}_{q}\) can be obtained as \(\eta(a)=\eta_{0}(u^{\prime}a)\), where \(\eta_{0}\) is the canonical additive character of \(\mathbb{F}_{q}\) and \(u^{\prime}\) is an element of \(\mathbb{F}_{q}\) corresponding to \(\eta\). Thus \[\tau_{a} =\frac{1}{q}\sum_{u^{\prime}\in\mathbb{F}_{q}}\eta_{0}(\mathrm{Tr }_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(u^{\prime}\epsilon)-u^{\prime}a)\] \[=\frac{1}{q}\sum_{u^{\prime}\in\mathbb{F}_{q}}\widehat{\eta_{0}}(u^{ \prime}\epsilon)\eta_{0}(-u^{\prime}a),\] (2.1) where \(\widehat{\eta_{0}}\) is the additive character of \(\mathbb{F}_{q^{n}}\) defined by \(\widehat{\eta_{0}}(\epsilon)=\eta_{0}(\mathrm{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{ F}_{q}}(\epsilon))\).
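As a sanity check of the polynomial Euler totient defined in item 2 above, \(\Phi_{q}(f)\) can be computed by brute force for small prime fields. The sketch below encodes polynomials as coefficient lists and is purely illustrative; the computations in this article are done in SageMath.

```python
# Brute-force check of the polynomial Euler totient Phi_q(f) over a small prime
# field F_p: it counts the residues of degree < deg(f) that are coprime to f.
# Polynomials are lists of coefficients, lowest degree first.
from itertools import product

def strip(a):
    while a and a[-1] == 0:
        a = a[:-1]
    return a

def poly_mod(a, b, p):
    a, b = strip(list(a)), strip(list(b))
    while a and len(a) >= len(b):
        factor = (a[-1] * pow(b[-1], -1, p)) % p
        shift = len(a) - len(b)
        a = [(c - factor * b[i - shift]) % p if i >= shift else c
             for i, c in enumerate(a)]
        a = strip(a)
    return a

def coprime(a, b, p):
    a, b = strip(list(a)), strip(list(b))
    while b:
        a, b = b, poly_mod(a, b, p)
    return len(a) == 1

def phi_q(f, p):
    d = len(f) - 1
    return sum(1 for c in product(range(p), repeat=d)
               if strip(list(c)) and coprime(list(c), f, p))

# Example: f(x) = x^2 - 1 = (x-1)(x+1) over F_3, so Phi_3(f) = 2 * 2 = 4.
print(phi_q([2, 0, 1], p=3))
```

Similarly, for item 3, in the simplest case \(q=p\) prime and \(n=1\), where the trace is the identity and \(\eta_{0}(c)=e^{2\pi ic/p}\), the orthogonality underlying (2.1) can be verified numerically; the following sketch only illustrates this special case.

```python
# Numerical check of the character orthogonality behind Eq. (2.1) for q = p
# prime and n = 1: tau_a(eps) equals 1 iff eps = a, and 0 otherwise.
import cmath

def tau(a, eps, p):
    return sum(cmath.exp(2j * cmath.pi * u * (eps - a) / p) for u in range(p)) / p

p, a = 13, 5
for eps in range(p):
    val = tau(a, eps, p).real
    assert abs(val - (1.0 if eps == a else 0.0)) < 1e-9
print("tau_a is the indicator of Tr(eps) = a for p =", p)
```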
The additive group of \(\mathbb{F}_{q^{n}}\) is an \(\mathbb{F}_{q}[x]\)-module under the operation \(f\circ\epsilon=\sum_{i=0}^{k}a_{i}\epsilon^{q^{i}}\), where \(\epsilon\in\mathbb{F}_{q^{n}}\) and \(f(x)=\sum_{i=0}^{k}a_{i}x^{i}\in\mathbb{F}_{q}[x]\). The \(\mathbb{F}_{q}\)-order of \(\epsilon\in\mathbb{F}_{q^{n}}\), denoted by \(\mathrm{Ord}(\epsilon)\), is the unique least degree polynomial \(g\) such that \(g\circ\epsilon=0.\) It is an easy observation that \(g\) is a factor of \(x^{n}-1\). By defining the action of \(\mathbb{F}_{q^{n}}[x]\) over \(\widehat{\mathbb{F}}_{q^{n}}\), by the operation \(\eta\circ f(\epsilon)=\eta(f\circ\epsilon)\), where \(\eta\in\widehat{\mathbb{F}}_{q^{n}},\epsilon\in\mathbb{F}_{q^{n}}\) and \(f\in\mathbb{F}_{q}[x]\), \(\widehat{\mathbb{F}}_{q^{n}}\) becomes an \(\mathbb{F}_{q}\)-module. From [11, Theorem 13.4.1], \(\widehat{\mathbb{F}}_{q^{n}}^{*}\) and \(\mathbb{F}_{q^{n}}^{*}\) are \(\mathbb{Z}\)-isomorphic modules,
and \(\widehat{\mathbb{F}}_{q^{n}}\) and \(\mathbb{F}_{q^{n}}\) are \(\mathbb{F}_{q}[x]\)-isomorphic modules. The following character sum holds true for every \(\epsilon\in\mathbb{F}_{q^{n}}.\)
\[\mathcal{I}_{0}(\epsilon)=\frac{1}{q^{n}}\sum_{\gamma\in\widehat{\mathbb{F}}_{q ^{n}}}\gamma(\epsilon)=\begin{cases}1&\text{if }\epsilon=0;\\ 0&\text{otherwise.}\end{cases} \tag{2.2}\]
The unique least degree monic polynomial \(g\) such that \(\eta\circ g=\chi_{0}\) is called the \(\mathbb{F}_{q}\)-order of \(\eta,\) denoted by \(\text{Ord}(\eta)\,.\) Moreover, there are \(\Phi_{q}(g)\) characters of \(\mathbb{F}_{q}\)-order \(g.\)
Let \(g\in\mathbb{F}_{q}[x]\) be a divisor of \(x^{n}-1\). An element \(\epsilon\in\mathbb{F}_{q^{n}}\) is \(g\)-free if \(\epsilon=h\circ\sigma,\) where \(\sigma\in\mathbb{F}_{q^{n}}\) and \(h|g,\) implies \(h=1\). It can easily be seen that an element in \(\mathbb{F}_{q^{n}}\) is \((x^{n}-1)\)-free if and only if it is normal. As in the multiplicative case, from [11, Theorem 13.4.4], for \(g|x^{n}-1,\) the characteristic function for \(g\)-free elements is given by
\[\Omega_{g}(\epsilon)=\frac{\Phi_{q}(g)}{q^{deg(g)}}\int\limits_{h|g}\psi_{h}( \epsilon)=\frac{\Phi_{q}(g)}{q^{deg(g)}}\sum_{h|g}\frac{\mu_{q}(h)}{\Phi_{q}(h )}\sum_{\psi_{h}}\psi_{h}(\epsilon), \tag{2.3}\]
where \(\sum_{h|g}\) runs over all the monic divisors \(h\in\mathbb{F}_{q}[x]\) of \(g\), \(\psi_{h}\) is an additive character of \(\mathbb{F}_{q^{n}}\), the sum \(\sum_{\psi_{h}}\psi_{h}\) runs over all \(\Phi_{q}(h)\) additive characters of \(\mathbb{F}_{q}\)-order \(h\), and \(\mu_{q}\) is the polynomial Mobius function defined as
\[\mu_{q}(h)=\left\{\begin{array}{ll}(-1)^{r}&\text{if $h$ is product of $r$ distinct monic irreducible polynomial over $\mathbb{F}_{q}$,}\\ 0&\text{otherwise.}\end{array}\right.\]
**Definition 2.1**.: [6, Definition 3.1] _For a divisor \(r\) of \(q^{n}-1\) and a divisor \(R\) of \(\frac{q^{n}-1}{r}\), let \(H_{r}\) be the multiplicative cyclic subgroup of \(\mathbb{F}_{q^{n}}^{*}\) of order \(\frac{q^{n}-1}{r}\). An element \(\epsilon\in\mathbb{F}_{q^{n}}\) is referred to as \((R,r)\)-free if \(\epsilon\in H_{r}\) and \(\epsilon\) is \(R\)-free in \(H_{r}\), i.e., if \(\epsilon=\sigma^{e}\) with \(\sigma\in H_{r}\) and \(e|R,\) then \(e=1\)._
Based on the definition above, it is clear that an element \(\epsilon\in\mathbb{F}_{q^{n}}^{*}\) is \(r\)-primitive if and only if it is \((\frac{q^{n}-1}{r},r)\)-free. From [6, Theorem 3.8], the characteristic function for the set of \((R,r)\)-free elements is given by
\[\mathbb{I}_{R,r}(\epsilon)=\frac{\theta(R)}{r}\int\limits_{d|Rr}\chi_{d}( \epsilon)=\frac{\theta(R)}{r}\sum_{d|Rr}\frac{\mu(d_{(r)})}{\phi(d_{(r)})}\sum _{\chi_{d}}\chi_{d}(\epsilon), \tag{2.4}\]
where \(\theta(R)=\frac{\phi(R)}{R}\), \(\mu\) is the mobius function, \(d_{(r)}=\frac{d}{gcd(d,r)}\), and the sum \(\sum_{\chi_{d}}\chi_{d}\) runs over all the multiplicative characters of \(\mathbb{F}_{q^{n}}^{*}\) of order \(d\).
For \(\kappa\), a positive integer (or a monic polynomial over \(\mathbb{F}_{q}\)), we use \(\omega(\kappa)\) to represent the number of distinct prime divisors (irreducible factors) of \(\kappa\), and \(W(\kappa)\) to represent the number of square-free divisors (square-free factors) of \(\kappa\). Clearly, \(W(\kappa)=2^{\omega(\kappa)}.\) We have the following result to bound the sum (2.4).
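For integer arguments, \(\omega(\kappa)\) and \(W(\kappa)=2^{\omega(\kappa)}\) can be computed by trial division; the short sketch below (e.g., for \(\kappa=q^{n}-1\)) is only an illustration, not the method used in our computations.

```python
# Small helper illustrating omega(kappa) and W(kappa) = 2^omega(kappa) for
# integers, by trial division.
def omega(k):
    count, d = 0, 2
    while d * d <= k:
        if k % d == 0:
            count += 1
            while k % d == 0:
                k //= d
        d += 1
    return count + (1 if k > 1 else 0)

def W(k):
    return 2 ** omega(k)

q, n = 13, 4
print(omega(q**n - 1), W(q**n - 1))   # distinct prime factors of q^4 - 1
```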
**Lemma 2.2**.: [6, Lemma 2.5] _For any positive integer \(R,r\), we have that_
\[\sum_{d|Rr}\frac{\mu(d_{(r)})}{\phi(d_{(r)})}\phi(d)=gcd(R,r)W(gcd(R,R_{(r)})).\]
The results provided by Wang and Fu [8] will play a crucial role in the proof of Theorem 3.1.
**Lemma 2.3**.: _[_8_, Theorem 5.5]_ _Let \(f(x)\in\mathbb{F}_{q^{n}}(x)\) be a rational function. Write \(f(x)=\prod_{j=1}^{k}f_{j}(x)^{r_{j}}\), where \(f_{j}(x)\in\mathbb{F}_{q^{n}}[x]\) are irreducible polynomials and the \(r_{j}\)'s are non-zero integers. Let \(\chi\) be a multiplicative character of \(\mathbb{F}_{q^{n}}\). Suppose that \(f(x)\) is not of the form \(r(x)^{\mathrm{Ord}(\chi)}\) for any rational function \(r(x)\in\mathbb{F}(x)\), where \(\mathbb{F}(x)\) is the algebraic closure of \(\mathbb{F}_{q^{n}}(x)\). Then we have_
\[\bigg{|}\sum_{\epsilon\in\mathbb{F}_{q^{n}},f(\epsilon)\neq 0,\infty}\chi(f( \epsilon))\bigg{|}\leq\bigg{(}\sum_{j=1}^{k}deg(f_{j})-1\bigg{)}q^{\frac{n}{2}}.\]
**Lemma 2.4**.: _[_8_, Theorem 5.6]_ _Let \(f(x),g(x)\in\mathbb{F}_{q^{n}}(x)\) be rational functions. Write \(f(x)=\prod_{j=1}^{k}f_{j}(x)^{r_{j}}\), where \(f_{j}(x)\in\mathbb{F}_{q^{n}}[x]\) are irreducible polynomials and \(r_{j}\) are non-zero integers. Let \(D_{1}=\sum_{j=1}^{k}deg(f_{j})\), \(D_{2}=max\{deg(g),0\}\), \(D_{3}\) be the degree of the denominator of \(g(x)\), and \(D_{4}\) be the sum of degrees of those irreducible polynomials dividing the denominator of \(g\) but distinct from \(f_{j}(x)\) (\(j=1,2,\ldots,k\)). Let \(\chi\) be a multiplicative character of \(\mathbb{F}_{q^{n}}\), and let \(\psi\) be a nontrivial additive character of \(\mathbb{F}_{q^{n}}\). Suppose \(g(x)\) is not of the form \(v(x)^{q^{n}}-v(x)\) in \(\mathbb{F}(x)\), where \(\mathbb{F}(x)\) is the algebraic closure of \(\mathbb{F}_{q^{n}}(x)\). Then we have_
\[\bigg{|}\sum_{\epsilon\in\mathbb{F}_{q^{n}},f(\epsilon)\neq 0,\infty,g( \epsilon)\neq\infty}\chi(f(\epsilon))\psi(g(\epsilon))\bigg{|}\leq(D_{1}+D_{2 }+D_{3}+D_{4}-1)q^{\frac{n}{2}}.\]
**Remark 2.5**.: _In [21, Lemma 3.1], L. Reis gave a technique for constructing \(k\)-normal elements: given a normal element \(\sigma\) in \(\mathbb{F}_{q^{n}}\) and a divisor \(f\) of \(x^{n}-1\) of degree \(k\) in \(\mathbb{F}_{q}[x]\), the composition \(\epsilon=f\circ\sigma\) is a \(k\)-normal element._
## 3. Sufficient Condition
Let \(r_{1}\) and \(r_{2}\) be positive divisors of \(q^{n}-1\), and let \(f_{1}\) and \(f_{2}\in\mathbb{F}_{q}[x]\) be monic factors of \(x^{n}-1\) of degrees \(k_{1}\) and \(k_{2}\), respectively. Let \(m_{1}\) and \(m_{2}\) be non-negative integers such that \(1\leq m_{1}+m_{2}<q^{n/2}\), and \(F=\frac{F_{1}}{F_{2}}\in\Lambda_{q^{n}}(m_{1},m_{2})\). Also, let \(R_{1}\) and \(R_{2}\) be divisors of \(\frac{q^{n}-1}{r_{1}}\) and \(\frac{q^{n}-1}{r_{2}}\), respectively, and \(g_{1}\) and \(g_{2}\in\mathbb{F}_{q}[x]\) be monic factors of \(x^{n}-1\). Let \(C_{F,a,b}(R_{1},R_{2},g_{1},g_{2})\) (written \(C_{F,a,b}(R_{1},R_{2},g)\) when \(g_{1}=g_{2}=g\)) denote the cardinality of the set of elements \((\epsilon,\sigma_{1},\sigma_{2})\in\mathbb{F}_{q^{n}}^{*}\times\mathbb{F}_{q^{n}}\times\mathbb{F}_{q^{n}}\) that satisfy the following conditions:
* \(\epsilon\) is \((R_{1},r_{1})\)-free, \(F(\epsilon)\) is \((R_{2},r_{2})\)-free,
* \(\sigma_{1}\) is \(g_{1}\)-free, \(\sigma_{2}\) is \(g_{2}\)-free,
* \(\epsilon=f_{1}\circ\sigma_{1}\), \(F(\epsilon)=f_{2}\circ\sigma_{2}\),
* \(\mathrm{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon)=a\) and \(\mathrm{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon^{-1})=b\), for \(a,b\in\mathbb{F}_{q}\).
In particular, \(C_{F,a,b}(\frac{q^{n}-1}{r_{1}},\frac{q^{n}-1}{r_{2}},x^{n}-1)\) denotes the number of elements \((\epsilon,\sigma_{1},\sigma_{2})\)\(\in\mathbb{F}_{q^{n}}^{*}\times\mathbb{F}_{q^{n}}\times\mathbb{F}_{q^{n}}\) such that \(\epsilon=f_{1}\circ\sigma_{1}\) is \(r_{1}\)-primitive, \(k_{1}\)-normal, \(F(\epsilon)=f_{2}\circ\sigma_{2}\) is \(r_{2}\)-primitive, \(k_{2}\)-normal, \(\mathrm{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon)=a\) and \(\mathrm{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon^{-1})=b\), for any \(a,b\in\mathbb{F}_{q}\).
We now present a sufficient condition as follows:
**Theorem 3.1**.: _Let \(\Omega=r_{1}r_{2}W(R_{1})W(R_{2})W(gcd(g_{1},\frac{x^{n}-1}{f_{1}}))W(gcd(g_{2},\frac{x^{n}-1}{f_{2}}))\). If_
\[q^{\frac{n}{2}-k_{1}-k_{2}-2}>\begin{cases}\Omega\cdot(2m_{1}+2m_{2}+1)&\text{ if }\,m_{1}\geq m_{2},\\ \Omega\cdot(m_{1}+3m_{2}+1)&\text{ if }\,m_{1}<m_{2},\end{cases}\]
_then \(C_{F,a,b}(R_{1},R_{2},g_{1},g_{2})>0\)._
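Before turning to the proof, note that the inequality in Theorem 3.1 is straightforward to check numerically once the quantities entering \(\Omega\) are known. In the sketch below, the square-free-divisor counts are supplied by the user as placeholder inputs rather than computed, and the parameter values are only illustrative.

```python
# Sketch of the numerical check behind Theorem 3.1: given q, n, the degrees
# k1, k2, m1, m2, the divisors r1, r2, and precomputed square-free-divisor
# counts (the W(.) factors entering Omega), test the sufficient inequality.
def sufficient(q, n, k1, k2, m1, m2, r1, r2, W_R1, W_R2, W_g1, W_g2):
    omega_val = r1 * r2 * W_R1 * W_R2 * W_g1 * W_g2
    rhs_factor = (2 * m1 + 2 * m2 + 1) if m1 >= m2 else (m1 + 3 * m2 + 1)
    return q ** (n / 2 - k1 - k2 - 2) > omega_val * rhs_factor

# Illustrative call (the W values below are placeholders, not actual counts):
print(sufficient(q=13, n=24, k1=2, k2=1, m1=10, m2=11,
                 r1=3, r2=2, W_R1=32, W_R2=32, W_g1=4, W_g2=4))
```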
Proof.: Let \(Z_{F}\) be the set of all zeros and poles of \(F\in\Lambda_{q^{n}}(m_{1},m_{2})\). By definition, \(C_{F,a,b}(R_{1},R_{2},g_{1},g_{2})\) is given by
\[\sum_{\begin{subarray}{c}\epsilon\in\mathbb{F}_{q^{n}}^{*}\setminus Z_{F}\\ \sigma_{1},\sigma_{2}\in\mathbb{F}_{q^{n}}\end{subarray}}\mathbb{I}_{R_{1},r_ {1}}(\epsilon)\mathbb{I}_{R_{2},r_{2}}(F(\epsilon))\Omega_{g_{1}}(\sigma_{1}) \Omega_{g_{2}}(\sigma_{2})\mathcal{I}_{0}(\epsilon-f_{1}\circ\sigma_{1}) \mathcal{I}_{0}(F(\epsilon)-f_{2}\circ\sigma_{2})\tau_{a}(\epsilon)\tau_{b}( \epsilon^{-1}).\]
Using (2.1),(2.2), (2.3) and (2.4), \(C_{F,a,b}(R_{1},R_{2},g_{1},g_{2})\) is equal to
\[\frac{\theta(R_{1})\theta(R_{2})\Theta(g_{1})\Theta(g_{2})}{r_{1}r_{2}q^{2}} \int\limits_{\begin{subarray}{c}d_{1}|R_{1}r_{1}\\ d_{2}|R_{2}r_{2}\end{subarray}}\int\limits_{\begin{subarray}{c}h_{1}|g_{1}\\ h_{2}|g_{2}\end{subarray}}\sum_{\gamma_{1},\gamma_{2}\in\mathbb{F}_{q^{n}}} \chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\psi_{h_{1}},\psi_{h_{2}},\gamma_{1}, \gamma_{2},\eta), \tag{3.1}\]
where
\[\chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\psi_{h_{1}},\psi_{h_{2}}, \gamma_{1},\gamma_{2},\eta)=\frac{1}{q^{2n}}\sum_{u^{\prime},v^{\prime}\in \mathbb{F}_{q}}\eta_{0}(-au^{\prime}-bv^{\prime})\sum_{\epsilon\in\mathbb{F}_{ q^{n}}^{*}\setminus Z_{F}}\chi_{d_{1}}(\epsilon)\chi_{d_{2}}(F( \epsilon))\times\] \[\times\gamma_{1}(\epsilon)\gamma_{2}(F(\epsilon))\widehat{\eta_{ 0}}(u^{\prime}\epsilon+v^{\prime}\epsilon^{-1})\sum_{\sigma_{1}\in\mathbb{F}_ {q^{n}}}\psi_{h_{1}}(\sigma_{1})\gamma_{1}^{-1}(f_{1}\circ\sigma_{1})\sum_{ \sigma_{2}\in\mathbb{F}_{q^{n}}}\psi_{h_{2}}(\sigma_{2})\gamma_{2}^{-1}(f_{2} \circ\sigma_{2}).\]
It follows from [1, Lemma 2.5] that if \(\gamma_{i}\in\widehat{f}_{i}^{-1}(\psi_{h_{i}})\), \(i\in\{1,2\}\), then
\[\sum_{\sigma_{i}\in\mathbb{F}_{q^{n}}}\psi_{h_{i}}(\sigma_{i})\gamma_{i}^{-1}( f_{i}\circ\sigma_{i})=q^{n}. \tag{3.2}\]
The left-hand side of (3.2) is 0 if \(\gamma_{i}\notin\widehat{f}_{i}^{-1}(\psi_{h_{i}})\), and the set \(\widehat{f}_{i}^{-1}(\psi_{h_{i}})\) is empty if the \(\mathbb{F}_{q}\)-order of \(\psi_{h_{i}}=h_{i}\) does not divide \(\frac{x^{n}-1}{f_{i}}\); \(i\in\{1,2\}\). Define \(\tilde{g_{i}}=gcd(g_{i},\frac{x^{n}-1}{f_{i}})\); then \(C_{F,a,b}(R_{1},R_{2},g_{1},g_{2})\) becomes
\[\frac{\theta(R_{1})\theta(R_{2})\Theta(g_{1})\Theta(g_{2})}{r_{1}r_{2}q^{2}} \int\limits_{\begin{subarray}{c}d_{1}|R_{1}r_{1}\\ d_{2}|R_{2}r_{2}\end{subarray}}\int\limits_{\begin{subarray}{c}h_{1}|g_{1}\\ h_{2}|g_{2}\end{subarray}}\sum_{\begin{subarray}{c}\gamma_{1}\in\widehat{f}_{1 }^{-1}(\psi_{h_{1}})\\ \gamma_{2}\in\widehat{f}_{1}^{-1}(\psi_{h_{2}})\end{subarray}}\chi_{F,a,b}( \chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta),\]
where
\[\chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta)=\sum_{u^{ \prime},v^{\prime}\in\mathbb{F}_{q}}\eta_{0}(-au^{\prime}-bv^{\prime})\sum_{ \epsilon\in\mathbb{F}_{q^{n}}^{*}\setminus Z_{F}}\chi_{d_{1}}(\epsilon)\chi_{ d_{2}}(F(\epsilon))\gamma_{1}(\epsilon)\gamma_{2}(F(\epsilon))\widehat{\eta_{0}}( \epsilon^{\prime}),\]
where \(\epsilon^{\prime}=u^{\prime}\epsilon+v^{\prime}\epsilon^{-1}\). Since \(\gamma_{1}\) and \(\gamma_{2}\) are additive characters of \(\mathbb{F}_{q^{n}}\), there exist \(w_{1},w_{2}\in\mathbb{F}_{q^{n}}\) such that \(\gamma_{1}(\epsilon)=\widehat{\eta}_{0}(w_{1}\epsilon)\) and \(\gamma_{2}(F(\epsilon))=\widehat{\eta}_{0}(w_{2}F(\epsilon))\), where \(\widehat{\eta}_{0}\) is the
canonical additive character of \(\mathbb{F}_{q^{n}}\). Setting \(u_{0}=w_{1}+u^{\prime}\), we have
\[\chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta)= \sum_{u^{\prime},v^{\prime}\in\mathbb{F}_{q}}\eta_{0}(-au^{\prime}-bv^{\prime}) \sum_{\epsilon\in\mathbb{F}_{q^{n}}^{*}\setminus Z_{F}}\chi_{d_{1}}(\epsilon) \chi_{d_{2}}(F(\epsilon))\times\] \[\times\widehat{\eta}_{0}(u_{0}\epsilon+w_{2}F(\epsilon)+v^{\prime }\epsilon^{-1}).\]
Let \(\deg(F_{i})=n_{i};\;i=1,2\). First, we consider the case \(d_{2}=1\); we have
\[\chi_{F,a,b}(\chi_{d_{1}},\chi_{1},\gamma_{1},\gamma_{2},\eta)= \sum_{u^{\prime},v^{\prime}\in\mathbb{F}_{q}}\eta_{0}(-au^{\prime}-bv^{\prime })\sum_{\epsilon\in\mathbb{F}_{q^{n}}^{*}\setminus Z_{F}}\chi_{d_{1}}( \epsilon)\widehat{\eta}_{0}(G(\epsilon)),\]
where \(G(x)=\dfrac{u_{0}x^{2}F_{2}(x)+w_{2}xF_{1}(x)+v^{\prime}F_{2}(x)}{xF_{2}(x)}\). If \(G(x)\neq r(x)^{q^{n}}-r(x)\) for any \(r(x)\in\mathbb{F}(x)\), then we have two cases:
_Case 1:-_ If \(n_{1}>n_{2}+1\), then in accordance with Lemma 2.4, we have \(D_{2}=n_{1}-n_{2}\), and
\[|\chi_{F,a,b}(\chi_{d_{1}},\chi_{1},\gamma_{1},\gamma_{2},\eta)| \leq(2(m_{1}+m_{2})+1)q^{\frac{n}{2}+2}. \tag{3.3}\]
_Case 2:-_ If \(n_{1}\leq n_{2}+1\), then \(D_{2}=1\), and
\[|\chi_{F,a,b}(\chi_{d_{1}},\chi_{1},\gamma_{1},\gamma_{2},\eta)| \leq(m_{1}+3m_{2}+2)q^{\frac{n}{2}+2}. \tag{3.4}\]
If \(G(x)=r(x)^{q^{n}}-r(x)\) for some \(r(x)\in\mathbb{F}(x)\), write \(r(x)=\frac{r_{1}(x)}{r_{2}(x)}\) with \((r_{1},r_{2})=1\). We have \(\dfrac{u_{0}x^{2}F_{2}(x)+w_{2}xF_{1}(x)+v^{\prime}F_{2}(x)}{xF_{2}(x)}=\dfrac {r_{1}(x)^{q^{n}}}{r_{2}(x)^{q^{n}}}-\dfrac{r_{1}(x)}{r_{2}(x)}\), i.e.,
\[(r_{1}(x)^{q^{n}}-r_{1}(x)r_{2}(x)^{q^{n}-1})xF_{2}(x)=r_{2}(x)^{q^ {n}}(u_{0}x^{2}F_{2}(x)+w_{2}xF_{1}(x)+v^{\prime}F_{2}(x)).\]
Since \((r_{1}(x)^{q^{n}}-r_{1}(x)r_{2}(x)^{q^{n}-1},r_{2}(x)^{q^{n}})=1\), \(r_{2}(x)^{q^{n}}|xF_{2}(x)\), which is possible only if \(r_{2}\) is constant. Let \(r_{2}(x)=c\), then we have
\[(r_{1}(x)^{q^{n}}-r_{1}(x)c^{q^{n}-1})xF_{2}(x)=c^{q^{n}}(u_{0}x^{2}F_{2}(x)+w_{2}xF_{1}(x)+v^{\prime}F_{2}(x)). \tag{3.5}\]
From (3.5), \(xF_{2}(x)\) divides \((u_{0}x^{2}F_{2}(x)+w_{2}xF_{1}(x)+v^{\prime}F_{2}(x))\), which happens only when \(w_{2}=0\). Then \((r_{1}(x)^{q^{n}}-r_{1}(x)c^{q^{n}-1})x=c^{q^{n}}(u_{0}x^{2}+v^{\prime})\) implies \(v^{\prime}=0\), leaving \((r_{1}(x)^{q^{n}}-r_{1}(x)c^{q^{n}-1})=c^{q^{n}}u_{0}x\), which holds only if \(u_{0}=0\) and \(r_{1}(x)\) is constant. This in turn gives
\[|\chi_{F,a,b}(\chi_{d_{1}},\chi_{1},\gamma_{1},\gamma_{2},\eta)| \leq(m_{1}+m_{2})q^{\frac{n}{2}+2}. \tag{3.6}\]
Now assume \(d_{2}>1\). There exist \(t_{1},t_{2}\) with \(0\leq t_{1},t_{2}<q^{n}-1\) such that \(\chi_{d_{i}}(x)=\chi_{q^{n}-1}(x^{t_{i}})\); \(i\in\{1,2\}\). We have
\[\chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta) =\sum_{u^{\prime},v^{\prime}\in\mathbb{F}_{q}}\eta_{0}(-au^{ \prime}-bv^{\prime})\sum_{\epsilon\in\mathbb{F}_{q^{n}}^{*}\setminus Z_{F}} \chi_{q^{n}-1}(G_{1}(\epsilon))\widehat{\eta}_{0}(G_{2}(\epsilon)),\]
where \(G_{1}(x)=x^{t_{1}}F(x)^{t_{2}}\in\mathbb{F}(x)\) and \(G_{2}(x)=u_{0}x+w_{2}F(x)+v^{\prime}x^{-1}\in\mathbb{F}(x)\). If \(G_{2}(x)\neq h(x)^{q^{n}}-h(x)\) for any \(h(x)\in\mathbb{F}(x)\), again we have two cases:
_Case 1:-_ If \(n_{1}>n_{2}+1\), then in accordance with Lemma 2.4, we have \(D_{2}=n_{1}-n_{2}\), and
\[|\chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2}, \eta)|\leq(2m_{1}+m_{2}+1)q^{\frac{n}{2}+2}. \tag{3.7}\]
_Case 2:-_ If \(n_{1}\leq n_{2}+1\), then \(D_{2}=1\), and
\[|\chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta)|\leq(m_{1}+2 m_{2}+2)q^{\frac{n}{2}+2}. \tag{3.8}\]
When \(G_{2}(x)=h(x)^{q^{n}}-h(x)\) for some \(h(x)\in\mathbb{F}(x)\), it leads to \(u_{0}=v^{\prime}=w_{2}=0\), and gives
\[\chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta)=\sum_{u^{ \prime},v^{\prime}\in\mathbb{F}_{q}}\eta_{0}(-au^{\prime}-bv^{\prime})\sum_{ \epsilon\in\mathbb{F}_{q^{n}}^{*}\setminus\mathbb{Z}_{F}}\chi_{q^{n}-1}(G_{1 }(\epsilon)).\]
By [4, Theorem 3.2], \(G_{1}(x)\) is not of the form \(h(x)^{q^{n}-1}\) for any \(h(x)\in\mathbb{F}(x)\); hence, it follows from Lemma 2.3 that
\[|\chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta)|\leq(m_{1 }+m_{2})q^{\frac{n}{2}+2}. \tag{3.9}\]
Suppose \(M=\max\{2(m_{1}+m_{2})+1,m_{1}+3m_{2}+1\}\). We observe that
\[|\chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta)|\leq Mq^{ \frac{n}{2}+2}, \tag{3.10}\]
in each of the cases covered by inequalities (3.3), (3.4), (3.6), (3.7), (3.8) and (3.9). Let \(\psi_{1}\) be the trivial additive character, and suppose
\[U_{1}=\int\limits_{\begin{subarray}{c}d_{1}|R_{1}r_{1}\\ d_{2}|R_{2}r_{2}\end{subarray}}\sum_{\begin{subarray}{c}\gamma_{1}\in\tilde{ f}_{1}^{-1}(\psi_{1})\\ \gamma_{2}\in\tilde{f}_{2}^{-1}(\psi_{1})\\ (\gamma_{1},\gamma_{2})\neq(\psi_{1},\psi_{1})\end{subarray}}\chi_{F,a,b}( \chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta),\]
and
\[U_{2}=\int\limits_{\begin{subarray}{c}d_{1}|R_{1}r_{1}\\ d_{2}|R_{2}r_{2}\end{subarray}}\int\limits_{\begin{subarray}{c}h_{1}|\tilde{ g_{1}}\\ h_{2}|\tilde{g_{2}}\\ (h_{1},h_{2})\neq(1,1)\end{subarray}}\sum_{\begin{subarray}{c}\gamma_{1}\in \tilde{f}_{1}^{-1}(\psi_{h_{1}})\\ \gamma_{2}\in\tilde{f}_{2}^{-1}(\psi_{h_{2}})\end{subarray}}\chi_{F,a,b}(\chi_ {d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta).\]
It follows from [1, Lemma 2.5], Lemma 2.2 and (3.10) that
\[|U_{1}|\leq Mq^{\frac{n}{2}+2}(q^{k_{1}+k_{2}}-1)r_{1}r_{2}W(R_{1})W(R_{2}),\]
and
\[|U_{2}|\leq Mq^{\frac{n}{2}+2}q^{k_{1}+k_{2}}r_{1}r_{2}W(R_{1})W(R_{2})(W( \tilde{g_{1}})W(\tilde{g_{2}})-1).\]
Hence, from the above discussion, along with (3.1), we get that
\[C_{F,a,b}(R_{1},R_{2},g_{1},g_{2}) \geq\frac{\theta(R_{1})\theta(R_{2})\Theta(g_{1})\Theta(g_{2})}{ r_{1}r_{2}q^{2}}((q^{n}-U\cdot q^{2})-(|U_{1}|+|U_{2}|))\] \[\geq\frac{\theta(R_{1})\theta(R_{2})\Theta(g_{1})\Theta(g_{2})}{ r_{1}r_{2}q^{2}}((q^{n}-(m_{1}+m_{2}+1)q^{2})\] \[\quad-(|U_{1}|+|U_{2}|))\] \[> \frac{\theta(R_{1})\theta(R_{2})\Theta(g_{1})\Theta(g_{2})}{r_{1}r _{2}q^{2}}(q^{n}-Mq^{\frac{n}{2}+2}q^{k_{1}+k_{2}}r_{1}r_{2}W(R_{1})\times\] \[\quad\times W(R_{2})W(\tilde{g_{1}})W(\tilde{g_{2}}))\]
Thus, if \(q^{\frac{n}{2}-k_{1}-k_{2}-2}>M\Omega\), then \(C_{F,a,b}(R_{1},R_{2},g_{1},g_{2})>0\).
**Corollary 3.2**.: _Suppose \(M=\max\{2(m_{1}+m_{2})+1,m_{1}+3m_{2}+1\}\). If_
\[q^{\frac{n}{2}-k_{1}-k_{2}-2}>Mr_{1}r_{2}W\bigg{(}\frac{q^{n}-1}{r_{1}}\bigg{)}W \bigg{(}\frac{q^{n}-1}{r_{2}}\bigg{)}W\bigg{(}\frac{x^{n}-1}{f_{1}}\bigg{)}W \bigg{(}\frac{x^{n}-1}{f_{2}}\bigg{)},\]
_then \((q,n)\in A_{m_{1},m_{2}}(r_{1},r_{2},k_{1},k_{2})\)._
Proof.: It follows by taking \(R_{i}=\frac{q^{n}-1}{r_{i}}\) and \(g_{i}=\frac{x^{n}-1}{f_{i}}\); \(i=1,2\) in Theorem 3.1.
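The condition in Corollary 3.2 is completely explicit and can be checked numerically. Below is a minimal Python sketch (assuming `sympy` is available) of such a check, where \(W(m)=2^{\omega(m)}\) is taken to be the number of squarefree divisors of an integer \(m\), and the polynomial terms are bounded via \(W(\frac{x^{n}-1}{f_{i}})\leq 2^{n-k_{i}}\) (the same bound used in Section 5), so that a positive answer is sufficient but not necessary; the function names and the sample pair \((q,n)\) are ours.

```python
from sympy import factorint

def W(m):
    # Number of squarefree divisors of the integer m, i.e. 2^(number of distinct prime factors).
    return 2 ** len(factorint(m))

def corollary_3_2_holds(q, n, r1, r2, k1, k2, m1, m2):
    # Sufficient condition of Corollary 3.2, with the polynomial terms bounded
    # by W((x^n - 1)/f_i) <= 2^(n - k_i); a True answer is sufficient, not necessary.
    M = max(2 * (m1 + m2) + 1, m1 + 3 * m2 + 1)
    rhs = (M * r1 * r2
           * W((q ** n - 1) // r1) * W((q ** n - 1) // r2)
           * 2 ** (n - k1) * 2 ** (n - k2))
    # q^(n/2 - k1 - k2 - 2) > rhs  is equivalent to  q^(n - 2(k1 + k2 + 2)) > rhs^2.
    return q ** (n - 2 * (k1 + k2 + 2)) > rhs ** 2

# Illustrative check with the parameters of Section 4 (r1 = 3, r2 = 2, k1 = 2, k2 = 1, m1 = 10, m2 = 11).
print(corollary_3_2_holds(13, 20, 3, 2, 2, 1, 10, 11))
```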
From now on, \(M=\max\{2(m_{1}+m_{2})+1,m_{1}+3m_{2}+1\}\). To prove the sieve variation of Theorem 3.1, we need Lemmas 3.3 and 3.4.
**Lemma 3.3**.: _Suppose \(l_{i}\) is a divisor of \(\frac{q^{n}-1}{r_{i}};\ i=1,2\). Let \(\{p_{1},p_{2},\ldots,p_{u}\}\) be the set of all primes dividing \(\frac{q^{n}-1}{r_{1}}\) but not \(l_{1}\), and \(\{q_{1},q_{2},\ldots,q_{v}\}\) be the set of all primes dividing \(\frac{q^{n}-1}{r_{2}}\) but not \(l_{2}\). Also, let \(\{P_{1},P_{2},\ldots,P_{s}\}\) be the set of all monic irreducible polynomials which divide \(x^{n}-1\) but not \(g_{1}\), and \(\{Q_{1},Q_{2},\ldots,Q_{t}\}\) be the set of all monic irreducible polynomials which divide \(x^{n}-1\) but not \(g_{2}\). Then_
\[C_{F,a,b}(\frac{q^{n}-1}{r_{1}},\frac{q^{n}-1}{r_{2}},x^{n}-1) \geq\sum_{i=1}^{u}C_{F,a,b}(l_{1}p_{i},l_{2},g_{1},g_{2})+\sum_{i= 1}^{v}C_{F,a,b}(l_{1},l_{2}q_{i},g_{1},g_{2})\] \[+\sum_{i=1}^{s}C_{F,a,b}(l_{1},l_{2},g_{1}P_{i},g_{2})+\sum_{i=1} ^{t}C_{F,a,b}(l_{1},l_{2},g_{1},g_{2}Q_{i})\] \[-(u+v+s+t-1)C_{F,a,b}(l_{1},l_{2},g_{1},g_{2}).\]
Proof.: It follows from the idea of [2].
**Lemma 3.4**.: _Let \(l_{i}\) be a divisor of \(\frac{q^{n}-1}{r_{i}};\ i=1,2\). Let \(\{p_{1},p_{2},\ldots,p_{u}\}\) be the set of all primes dividing \(\frac{q^{n}-1}{r_{1}}\) but not \(l_{1}\), and \(\{q_{1},q_{2},\ldots,q_{v}\}\) be the set of all primes dividing \(\frac{q^{n}-1}{r_{2}}\) but not \(l_{2}\). Also, let \(\{P_{1},P_{2},\ldots,P_{s}\}\) be the set of monic irreducible polynomials which divide \(\frac{x^{n}-1}{f_{1}}\) but not \(g_{1}\), and \(\{Q_{1},Q_{2},\ldots,Q_{t}\}\) be the set of monic irreducible polynomials which divide \(\frac{x^{n}-1}{f_{2}}\) but not \(g_{2}\). Suppose \(\tilde{g}_{i}=\gcd(g_{i},\frac{x^{n}-1}{f_{i}})\); \(i=1,2,\) and \(\Gamma=Mq^{\frac{n}{2}+k_{1}+k_{2}+2}r_{1}r_{2}W(l_{1})W(l_{2})W(\tilde{g}_{1})W(\tilde{g}_{2})\). Then_
1. \(|C_{F,a,b}(l_{1}p_{i},l_{2},g_{1},g_{2})-\theta(p_{i})C_{F,a,b}(l_{1},l_{2},g _{1},g_{2})|\ \leq\ \frac{\theta(l_{1})\theta(l_{2})\theta(p_{i})\Theta(g_{1})\Theta(g_{2})}{r_{1} r_{2}q^{2}}\Gamma\) _for_ \(i=1,2,\ldots,u,\)__
2. \(|C_{F,a,b}(l_{1},l_{2}q_{i},g_{1},g_{2})-\theta(q_{i})C_{F,a,b}(l_{1},l_{2},g _{1},g_{2})|\ \leq\ \frac{\theta(l_{1})\theta(l_{2})\theta(q_{i})\Theta(g_{1})\Theta(g_{2})}{r_{1} r_{2}q^{2}}\Gamma\) _for_ \(i=1,2,\ldots,v,\)__
3. \(|C_{F,a,b}(l_{1},l_{2},g_{1}P_{i},g_{2})-\Theta(P_{i})C_{F,a,b}(l_{1},l_{2},g _{1},g_{2})|\leq\frac{\theta(l_{1})\theta(l_{2})\Theta(P_{i})\Theta(g_{1}) \Theta(g_{2})}{r_{1}r_{2}q^{2}}\Gamma\) _for_ \(i=1,2,\ldots,s,\) _and_
4. \(|C_{F,a,b}(l_{1},l_{2},g_{1},g_{2}Q_{i})-\Theta(Q_{i})C_{F,a,b}(l_{1},l_{2},g _{1},g_{2})|\leq\frac{\theta(l_{1})\theta(l_{2})\Theta(Q_{i})\Theta(g_{1}) \Theta(g_{2})}{r_{1}r_{2}q^{2}}\Gamma\) _for_ \(i=1,2,\ldots,t\)
Proof.: We prove (1); the other parts follow in the same manner. By definition, we have
\[C_{F,a,b}(l_{1}p_{i},l_{2},g_{1},g_{2})-\theta(p_{i})C_{F,a,b}(l_{1},l_{2},g_{1},g_{2})=\lambda\int\limits_{\begin{subarray}{c}p_{i}|d_{1}|r_{1}l_{1}p_{i}\\ d_{2}|r_{2}l_{2}\end{subarray}}\int\limits_{\begin{subarray}{c}h_{1}|g_{1}\\ h_{2}|g_{2}\end{subarray}}\sum\limits_{\begin{subarray}{c}\gamma_{1}\in\widehat{f}_{1}^{-1}(\psi_{h_{1}})\\ \gamma_{2}\in\widehat{f}_{2}^{-1}(\psi_{h_{2}})\end{subarray}}\chi_{F,a,b}\]
for \(i=1,2,\ldots,u\), where \(\chi_{F,a,b}=\chi_{F,a,b}(\chi_{d_{1}},\chi_{d_{2}},\gamma_{1},\gamma_{2},\eta)\) and \(\lambda=\frac{\theta(l_{1})\theta(l_{2})\theta(p_{i})\Theta(g_{1})\Theta(g_{2})}{r_{1}r_{2}q^{2}}\). It follows from [1, Lemma 2.5], Lemma 2.2 and (3.10) that
\[|C_{F,a,b}(l_{1}p_{i},l_{2},g_{1},g_{2})-\theta(p_{i})C_{F,a,b}(l _{1},l_{2},g_{1},g_{2})|\leq\lambda Mq^{\frac{n}{2}+k_{1}+k_{2}+2}(W(l_{1}p_{i })-W(l_{1}))\times\] \[\times r_{1}r_{2}W(l_{2})W(\tilde{g}_{1})W(\tilde{g}_{2}).\]
Since \(W(l_{1}p_{i})=2W(l_{1})\), we get
\[|C_{F,a,b}(l_{1}p_{i},l_{2},g_{1},g_{2})-\theta(p_{i})C_{F,a,b}(l_{1},l_{2},g_ {1},g_{2})|\leq\frac{\theta(l_{1})\theta(l_{2})\theta(p_{i})\Theta(g_{1}) \Theta(g_{2})}{r_{1}r_{2}q^{2}}\Gamma,\]
for \(i=1,2,\ldots,u\).
We shall now prove the sieve variation of Theorem 3.1.
**Theorem 3.5**.: _Assume the notations and conditions in Lemma 3.4. Let \(\delta=1-\sum_{i=1}^{u}\frac{1}{p_{i}}-\sum_{i=1}^{v}\frac{1}{q_{i}}-\sum_{i=1}^{s}\frac{1}{q^{\deg(P_{i})}}-\sum_{i=1}^{t}\frac{1}{q^{\deg(Q_{i})}}>0\) and \(\Delta=\frac{u+v+s+t-1}{\delta}+2\). If_
\[q^{\frac{n}{2}-k_{1}-k_{2}-2}>Mr_{1}r_{2}\Delta W(l_{1})W(l_{2})W(\tilde{g}_{1 })W(\tilde{g}_{2}),\]
_then \((q,n)\in A_{m_{1},m_{2}}(r_{1},r_{2},k_{1},k_{2})\)._
Proof.: From Lemma 3.3, we have
\[C_{F,a,b}(\frac{q^{n}-1}{r_{1}},\frac{q^{n}-1}{r_{2}},x^{n}-1) \geq\sum\limits_{i=1}^{u}(C_{F,a,b}(l_{1}p_{i},l_{2},g_{1},g_{2})- \theta(p_{i})C_{F,a,b}(l_{1},l_{2},g_{1},g_{2}))\] \[+\sum\limits_{i=1}^{v}(C_{F,a,b}(l_{1},l_{2}q_{i},g_{1},g_{2})- \theta(q_{i})C_{F,a,b}(l_{1},l_{2},g_{1},g_{2}))\] \[+\sum\limits_{i=1}^{s}(C_{F,a,b}(l_{1},l_{2},g_{1}P_{i},g_{2})- \Theta(P_{i})C_{F,a,b}(l_{1},l_{2},g_{1},g_{2}))\] \[+\sum\limits_{i=1}^{t}(C_{F,a,b}(l_{1},l_{2},g_{1},g_{2}Q_{i})- \Theta(Q_{i})C_{F,a,b}(l_{1},l_{2},g_{1},g_{2}))\] \[+\delta C_{F,a,b}(l_{1},l_{2},g_{1},g_{2}).\]
Let \(\lambda=\frac{\theta(l_{1})\theta(l_{2})\Theta(g_{1})\Theta(g_{2})}{r_{1}r_{2}q^{2}}\) and \(\mathcal{Z}=\sum\limits_{i=1}^{u}\theta(p_{i})+\sum\limits_{i=1}^{v}\theta(q_{i})+\sum\limits_{i=1}^{s}\Theta(P_{i})+\sum\limits_{i=1}^{t}\Theta(Q_{i})\). It follows from Lemma 3.4 and Theorem 3.1 that
\[C_{F,a,b}(\frac{q^{n}-1}{r_{1}},\frac{q^{n}-1}{r_{2}},x^{n}-1)\geq \delta\lambda(q^{n}-Mq^{\frac{n}{2}+k_{1}+k_{2}+2}r_{1}r_{2}W(l_{ 1})W(l_{2})W(\tilde{g_{1}})W(\tilde{g_{2}}))\] \[-Mq^{\frac{n}{2}+k_{1}+k_{2}+2}r_{1}r_{2}W(l_{1})W(l_{2})W(\tilde{ g}_{1})W(\tilde{g}_{2})\lambda\mathcal{Z}\]
Since \(\mathcal{Z}=\delta+u+v+s+t-1\), we have
\[C_{F,a,b}(\frac{q^{n}-1}{r_{1}},\frac{q^{n}-1}{r_{2}},x^{n}-1)>\delta\lambda(q^{ n}-M\Delta q^{\frac{n}{2}+k_{1}+k_{2}+2}r_{1}r_{2}W(l_{1})W(l_{2})W(\tilde{g_{1}})W( \tilde{g_{2}})),\]
and the proof follows.
Let \(\alpha\) and \(\beta\) be positive real numbers. We define \(A_{\alpha}\) and \(A_{\alpha,\beta}\) as follows:
\[A_{\alpha}=\prod_{\begin{subarray}{c}p<2^{\alpha}\\ p\text{ is prime}\end{subarray}}\frac{2}{\sqrt[\alpha]{p}}\text{ \ \ and\ \ }A_{\alpha,\beta}=\prod_{\begin{subarray}{c}p<2^{\alpha}\\ p\text{ is prime}\end{subarray}}\frac{2}{\sqrt[\alpha+\beta]{p}}.\]
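These constants are straightforward to evaluate numerically. The following Python sketch (assuming `sympy` for prime enumeration) computes \(A_{\alpha}\) and \(A_{\alpha,\beta}\) as defined above; the function names and the sample values of \(\alpha\) and \(\beta\) in the final call are illustrative choices of ours.

```python
from math import prod
from sympy import primerange

def A_alpha(alpha):
    # A_alpha: product over primes p < 2^alpha of 2 / p^(1/alpha).
    return prod(2 / p ** (1.0 / alpha) for p in primerange(2, int(2 ** alpha) + 1))

def A_alpha_beta(alpha, beta):
    # A_{alpha,beta}: same product over primes p < 2^alpha, with exponent 1/(alpha+beta).
    return prod(2 / p ** (1.0 / (alpha + beta)) for p in primerange(2, int(2 ** alpha) + 1))

# Illustrative evaluations; the arguments are arbitrary sample values.
print(A_alpha(4.3), A_alpha(10.0), A_alpha_beta(5.7, 3.6))
```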
The proofs of Lemmas 3.6 and 3.7 are not included, as they follow from the ideas of [1].
**Lemma 3.6**.: _Suppose \(r_{1}\) and \(r_{2}\) are positive divisors of \(q^{n}-1\), \(k_{1},k_{2}<n/2\), and there exist factors \(f_{1}\) and \(f_{2}\) of \(x^{n}-1\) of degrees \(k_{1}\) and \(k_{2}\), respectively, in \(\mathbb{F}_{q}[x]\). Let \((2n-k_{1}-k_{2})^{2}<q\). If \(q^{\frac{n}{2}-k_{1}-k_{2}-2}>Mr_{1}r_{2}(2n-k_{1}-k_{2}+2)W(\frac{q^{n}-1}{r_{1}})W(\frac{q^{n}-1}{r_{2}})\), then \((q,n)\in A_{m_{1},m_{2}}(r_{1},r_{2},k_{1},k_{2})\)._
**Lemma 3.7**.: _Let \(r_{1}\) and \(r_{2}\) be positive divisors of \(q^{n}-1\), \(k_{1},k_{2}<n/2\), and suppose there exist factors \(f_{1}\) and \(f_{2}\) of \(x^{n}-1\) of degrees \(k_{1}\) and \(k_{2}\), respectively, in \(\mathbb{F}_{q}[x]\). Suppose \(\alpha>0\) is such that \(\alpha>\frac{4n}{n-2(k_{1}+k_{2}+2)}\), and let \(d=\frac{2\alpha}{\alpha(n-2(k_{1}+k_{2}+2))-4n}\). If_
\[q\geq\min\{\mathcal{U},max\{\mathcal{V},(2n-k_{1}-k_{2})^{2}\}\},\]
_then, \((q,n)\in A_{m_{1},m_{2}}(r_{1},r_{2},k_{1},k_{2})\), where \(\mathcal{U}=(M(r_{1}r_{2})^{1-\frac{1}{\alpha}}2^{2n-k_{1}-k_{2}}A_{\alpha}^{2 })^{d}\) and \(\mathcal{V}=(M(r_{1}r_{2})^{1-\frac{1}{\alpha}}(2n-k_{1}-k_{2}+2)A_{\alpha}^{2 })^{d}\)._
The next lemma will play an important role in the numerical examples.
**Lemma 3.8**.: _Let \(r_{1}\) and \(r_{2}\) be positive divisors of \(q^{n}-1\), \(k_{1},k_{2}<n/2\), and suppose there exist factors \(f_{1}\) and \(f_{2}\) of \(x^{n}-1\) of degrees \(k_{1}\) and \(k_{2}\), respectively, in \(\mathbb{F}_{q}[x]\), and \((2n-k_{1}-k_{2})^{2}<q\). Suppose \(\alpha\) and \(\beta\) are positive reals such that \(\alpha+\beta>\frac{4n}{n-2(k_{1}+k_{2}+2)}\), \(\delta_{\alpha,\beta}=1-2S_{\alpha,\beta}-\frac{1}{2n-k_{1}-k_{2}}>0\), where \(S_{\alpha,\beta}=\sum\limits_{\begin{subarray}{c}2^{\alpha}<p<2^{\alpha+\beta}\\ p\text{ prime}\end{subarray}}\frac{1}{p}\), and \(\Delta_{\alpha,\beta}=\frac{2v_{\alpha,\beta}+2n-k_{1}-k_{2}-1}{\delta_{\alpha,\beta}}\), where \(v_{\alpha,\beta}\) is the number of primes between \(2^{\alpha}\) and \(2^{\alpha+\beta}\). Let \(d=\frac{2(\alpha+\beta)}{(\alpha+\beta)(n-2(k_{1}+k_{2}+1))-4n}\). If_
\[q\geq(M(r_{1}r_{2})^{1-\frac{1}{\alpha+\beta}}A_{\alpha,\beta}^{2}\Delta_{ \alpha,\beta})^{d},\]
_then \((q,n)\in A_{m_{1},m_{2}}(r_{1},r_{2},k_{1},k_{2})\)._
Proof.: Suppose \(\alpha\) and \(\beta\) are positive reals such that \(\alpha+\beta>\frac{4n}{n-2(k_{1}+k_{2}+2)}\). Let \(\frac{q^{n}-1}{r_{1}}=p_{1}^{a_{1}}\cdot p_{2}^{a_{2}}\cdots p_{x}^{a_{x}} \cdot q_{1}^{b_{1}}\cdot q_{2}^{b_{2}}\cdots q_{u}^{b_{u}}\), where \(2\leq p_{j}\leq 2^{\alpha}\) or \(p_{j}\geq 2^{\alpha+\beta}\) for \(1\leq j\leq x\), and \(2^{\alpha}<q_{k}<2^{\alpha+\beta}\) for \(1\leq k\leq u\) and let \(\frac{q^{n}-1}{r_{2}}=r_{1}^{c_{1}}\cdot r_{2}^{c_{2}}\cdots r_{y}^{c_{y}} \cdot s_{1}^{d_{1}}\cdot s_{2}^{d_{2}}\cdots s_{v}^{d_{v}}\), where \(2\leq r_{j}\leq 2^{\alpha}\) or \(r_{j}\geq 2^{\alpha+\beta}\) for \(1\leq j\leq y\), and \(2^{\alpha}<s_{k}<2^{\alpha+\beta}\) for \(1\leq k\leq v\). Assume \(l_{1}=p_{1}^{a_{1}}p_{2}^{a_{2}}\cdots p_{x}^{a_{x}}\) and \(l_{2}=r_{1}^{c_{1}}\cdot r_{2}^{c_{2}}\cdots r_{y}^{c_{y}}\), \(g_{1}\) and \(g_{2}\) are divisors of \(x^{n}-1\) such that \(\gcd(g_{i},\frac{x^{n}-1}{f_{i}})=1\), and any irreducible factor of \(x^{n}-1\) divides \(g_{i}\) or \(\frac{x^{n}-1}{f_{i}}\); \(i=1,2\). Let \(P_{1},P_{2},\ldots,P_{s}\) and \(Q_{1},Q_{2},\ldots,Q_{t}\) be all irreducible polynomials such that \(rad(\frac{x^{n}-1}{f_{1}})\)\(=P_{1}\cdot P_{2}\cdots P_{s}\) and \(rad(\frac{x^{n}-1}{f_{2}})=Q_{1}\cdot Q_{2}\cdots Q_{t}\), respectively. By definition of \(l_{1}\) and \(l_{2}\), we have
\(\delta\geq 1-\sum_{k=1}^{u}\frac{1}{q_{k}}-\sum_{k=1}^{v}\frac{1}{s_{k}}-\frac{1}{2n-k_{1}-k_{2}}\geq 1-2S_{\alpha,\beta}-\frac{1}{2n-k_{1}-k_{2}}=\delta_{\alpha,\beta}.\) If \(\delta_{\alpha,\beta}>0\), then \(\Delta=\frac{u+v+s+t-1}{\delta}+2\leq\frac{2v_{\alpha,\beta}+2n-k_{1}-k_{2}-1}{\delta_{\alpha,\beta}}+2=\Delta_{\alpha,\beta}\). From [4, Lemma 3.7], we have \(W(m)\leq A_{\alpha,\beta}m^{\frac{1}{\alpha+\beta}}\), i.e., \(W(l_{1})W(l_{2})\leq A_{\alpha,\beta}^{2}q^{\frac{2n}{\alpha+\beta}}(r_{1}r_{2})^{\frac{-1}{\alpha+\beta}}\). From the sieving variation (Theorem 3.5), we conclude that, if \(q^{\frac{n}{2}-k_{1}-k_{2}-2}>M(r_{1}r_{2})^{1-\frac{1}{\alpha+\beta}}\Delta_{\alpha,\beta}A_{\alpha,\beta}^{2}q^{\frac{2n}{\alpha+\beta}}\) or, equivalently, if \(q\geq(M(r_{1}r_{2})^{1-\frac{1}{\alpha+\beta}}A_{\alpha,\beta}^{2}\Delta_{\alpha,\beta})^{d}\), then \((q,n)\in A_{m_{1},m_{2}}(r_{1},r_{2},k_{1},k_{2})\).
## 4. Numerical Examples
In this section, we present some numerical examples. To account for limited computing resources, we have restricted our focus to fields of characteristic 13.
Table 1.
\begin{tabular}{l l} \hline Value of \(n\) & bound on \(q\) corresponding to \(\alpha\) \\ \hline \(n=12\) & \(q\geq 1.65\times 10^{1769600}\) for \(\alpha=25.5\) \\ \(n=13\) & \(q\geq 6.2\times 10^{16610}\) for \(\alpha=18.9\) \\ \(n=14\) & \(q\geq 1.52\times 10^{1585}\) for \(\alpha=15.6\) \\ \(n=15\) & \(q\geq 5.51\times 10^{384}\) for \(\alpha=13.7\) \\ \(n=16\) & \(q\geq 3.14\times 10^{149}\) for \(\alpha=12.5\) \\ \(n=17\) & \(q\geq 8.65\times 10^{75}\) for \(\alpha=11.6\) \\ \(n=18\) & \(q\geq 3.59\times 10^{45}\) for \(\alpha=10.9\) \\ \(n=19\) & \(q\geq 3.16\times 10^{30}\) for \(\alpha=10.4\) \\ \(n=20\) & \(q\geq 1.07\times 10^{22}\) for \(\alpha=10.0\) \\ \(n=21\) & \(q\geq 6.53\times 10^{16}\) for \(\alpha=9.7\) \\ \(n=22\) & \(q\geq 2.37\times 10^{13}\) for \(\alpha=9.5\) \\ \(n\geq 25\) & \(q\geq 8.42\times 10^{7}\) for \(\alpha=8.9\) \\ \(n\geq 31\) & \(q\geq 13698\) for \(\alpha=8.3\) \\ \(n\geq 62\) & \(q\geq 12334\) for \(\alpha=9.3\) \\ \(n\geq 72\) & \(q\geq 5344\) for \(\alpha=9.3\) \\ \(n\geq 100\) & \(q\geq 1485\) for \(\alpha=9.6\) \\ \(n\geq 502\) & \(q\geq 141\) for \(\alpha=11.3\) \\ \hline \end{tabular}
Specifically, we aim to identify pairs \((q,n)\) for which \(\mathbb{F}_{q^{n}}\) contains a 3-primitive, 2-normal element \(\epsilon\) such that \(F(\epsilon)\) is a 2-primitive, 1-normal element of \(\mathbb{F}_{q^{n}}\) satisfying \(\mathrm{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon)=a\) and \(\mathrm{Tr}_{\mathbb{F}_{q^{n}}/\mathbb{F}_{q}}(\epsilon^{-1})=b\), for \(a,b\in\mathbb{F}_{q}\), where \(F\in\Lambda_{q^{n}}(10,11)\). We limit ourselves to characteristic 13 because the calculations in this case are not overly time-consuming or tedious, whereas similar work in fields of other characteristics would require more advanced and efficient computing resources.
First, suppose \(q\) is any prime power, with \(r_{1}=3,r_{2}=2,k_{1}=2,k_{2}=1,m_{1}=10\), and \(m_{2}=11\), and let \(F\in\Lambda_{q^{n}}(10,11)\). Utilizing Lemma 3.7, we have identified pairs \((q,n)\), presented in Table 1 together with an appropriate value of \(\alpha\), for which \((q,n)\in A_{10,11}(3,2,2,1)\), subject to \(x^{n}-1\) having a degree 2 factor in \(\mathbb{F}_{q}[x]\) and 6 dividing \(q^{n}-1\).
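For reference, the threshold on \(q\) given by Lemma 3.7 can be evaluated as in the following Python sketch. The fixed parameters reflect this section (\(M=\max\{2(m_{1}+m_{2})+1,m_{1}+3m_{2}+1\}=44\), \(r_{1}=3\), \(r_{2}=2\), \(k_{1}=2\), \(k_{2}=1\)), while the values of \(n\) and \(\alpha\) in the final call are illustrative choices of ours and are not meant to reproduce the entries of Table 1 exactly.

```python
from math import prod
from sympy import primerange

def A_alpha(alpha):
    # A_alpha: product over primes p < 2^alpha of 2 / p^(1/alpha).
    return prod(2 / p ** (1.0 / alpha) for p in primerange(2, int(2 ** alpha) + 1))

def lemma_3_7_threshold(n, alpha, M=44, r1=3, r2=2, k1=2, k2=1):
    # Threshold on q from Lemma 3.7: q >= min{U, max{V, (2n - k1 - k2)^2}},
    # valid only when alpha > 4n / (n - 2*(k1 + k2 + 2)).
    d = 2 * alpha / (alpha * (n - 2 * (k1 + k2 + 2)) - 4 * n)
    base = M * (r1 * r2) ** (1 - 1 / alpha) * A_alpha(alpha) ** 2
    U = (base * 2 ** (2 * n - k1 - k2)) ** d
    V = (base * (2 * n - k1 - k2 + 2)) ** d
    return min(U, max(V, float((2 * n - k1 - k2) ** 2)))

# Illustrative evaluation; n and alpha are sample values, not prescribed ones.
print(lemma_3_7_threshold(n=20, alpha=10.0))
```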
Table 2.
\begin{tabular}{l l l} \hline \(n\) & \((\alpha,\beta)\) & bound on \(q\) corresponding \\ & & & to \((\alpha,\beta)\) \\ \hline \(n\geq 53\) & \((\alpha,\beta)=(5.7,3.6)\) & \(q\geq 13\) \\ \(n\geq 36\) & \((\alpha,\beta)=(5.7,3.5)\) & \(q\geq 137\) \\ \(n\geq 33\) & \((\alpha,\beta)=(5.7,3.5)\) & \(q\geq 358\) \\ \(n\geq 30\) & \((\alpha,\beta)=(5.9,3.6)\) & \(q\geq 1464\) \\ \(n\geq 27\) & \((\alpha,\beta)=(6.1,3.8)\) & \(q\geq 13177\) \\ \(n\geq 24\) & \((\alpha,\beta)=(6.4,4.1)\) & \(q\geq 608734\) \\ \(n\geq 21\) & \((\alpha,\beta)=(6.7,4.3)\) & \(q\geq 1.86\times 10^{9}\) \\ \(n\geq 18\) & \((\alpha,\beta)=(7.6,4.8)\) & \(q\geq 2.16\times 10^{19}\) \\ \(n=16\) & \((\alpha,\beta)=(8.5,5.3)\) & \(q\geq 1.5\times 10^{44}\) \\ \(n=15\) & \((\alpha,\beta)=(9.2,5.7)\) & \(q\geq 4.11\times 10^{83}\) \\ \(n=14\) & \((\alpha,\beta)=(10.3,6.4)\) & \(q\geq 1.84\times 10^{216}\) \\ \(n=13\) & \((\alpha,\beta)=(12.2,7.5)\) & \(q\geq 4.75\times 10^{1047}\) \\ \(n=12\) & \((\alpha,\beta)=(16.4,10)\) & \(q\geq 8.56\times 10^{24648}\) \\ \hline \end{tabular}
However, due to the computationally intensive nature of the calculations involved, we were unable to verify the required criteria when \(n=11\), and thus excluded this case from our analysis. Table 1 shows that \((q,n)\in A_{10,11}(3,2,2,1)\) already for small values of \(q\) when \(n\) is large, whereas for smaller values of \(n\), \(q\) must be significantly larger. To mitigate this issue, we applied Lemma 3.8 to the values of \((q,n)\) in Table 1 to reduce the bound on \(q\) to some extent. Accordingly, for suitable values of \(\alpha\) and \(\beta\), Table 2 consists of pairs \((q,n)\) such that \((q,n)\in A_{10,11}(3,2,2,1)\), subject to \(x^{n}-1\) having a degree 2 factor in \(\mathbb{F}_{q}[x]\) and 6 dividing \(q^{n}-1\).
## 5. Proof of Theorem 1.2
First, we assume \(n\geq 14\). Using the fact that, for a divisor \(f\) of \(x^{n}-1\), \(W(\frac{x^{n}-1}{f})\leq 2^{n-\deg(f)}\), in Corollary 3.2, we first verify the condition \(q^{\frac{n}{2}-5}>44\cdot 6\cdot W(\frac{q^{n}-1}{3})W(\frac{q^{n}-1}{2})\cdot 2^{2n-3}\) for the pairs \((q,n)\) not listed in Table 2 such that \(x^{n}-1\) has a factor of degree 2 in \(\mathbb{F}_{q}[x]\). Suppose \(f_{i}(x)\) is a degree \(i\) factor of \(x^{n}-1\) in \(\mathbb{F}_{q}[x]\); \(i=1,2\). Using SageMath, we determine that the condition is satisfied for all pairs except those enumerated below:
Table 3.
\begin{tabular}{l c} \hline powers of \(q=13\) & \(n\) \\ \hline
13 & 36, 38, 39, 40, 42, 44, 45, 46, \\ & 48, 49, 50, 51, 52 \\
13, \(13^{2}\) & 23, 25, 27, 29, 30, 31, 32, 33, \\ & 34, 35 \\
13, \(13^{2},\ 13^{3}\) & 22, 24, 26, 28 \\
13, \(13^{2},\ 13^{3},\ 13^{4}\) & 20, 21 \\
13, \(13^{2},\ldots,13^{6}\) & 16 \\
13, \(13^{2},\ldots,13^{8}\) & 15 \\
13, \(13^{2},\ldots,13^{10},\ 13^{12}\) & 14 \\ \hline \end{tabular}
We evaluate Theorem 3.1 for the values of \((q,n)\) presented in Table 3, utilizing the precise factorization of \(\frac{x^{n}-1}{f_{i}}\); \(i=1,2\), in \(\mathbb{F}_{q}[x]\). We confirm that it is valid for all the pairs except the ones specified below.
Table 4.
\begin{tabular}{l c} \hline prime powers of \(q=13\) & \(n\) \\ \hline
13 & 22,27, 28, 30, 32, 36, 40, 42, 48 \\
13, \(13^{2},\ 13^{3}\) & 20, 21, 24 \\
13, \(13^{2},\ 13^{3},\ 13^{4}\) & 18 \\
13, \(13^{2},\ldots,13^{5}\) & 16 \\
13, \(13^{2},\ 13^{3},\ 13^{4},\ 13^{6},\ 13^{8}\) & 15 \\
13, \(13^{2},\ldots,13^{6},\ 13^{9}\) & 14 \\ \hline \end{tabular}
For the case \(n=13\), we shall first examine prime powers \(q\leq 4.75\times 10^{1047}\) and establish the following lemma:
**Lemma 5.1**.: _Let \(q\) be any prime power such that \(2.29\times 10^{7}\leq q\leq 4.75\times 10^{1047}\). If \(6|(q^{13}-1)\) and \(x^{13}-1\) has a degree 2 factor in \(\mathbb{F}_{q}[x]\), then \((q,13)\in A_{10,11}(3,2,2,1)\)._
Proof.: Assume that \(q\) is a prime power such that \(q\leq 4.75\times 10^{1047}\) and \(6\mid(q^{13}-1)\). Also, let \(f_{i}(x)\) be a degree \(i\) factor of \(x^{13}-1\); \(i=1,2\). We use Theorem 3.5 with \(l_{1}=\frac{q-1}{3}\) and \(l_{2}=\frac{q-1}{2}\), and suppose \(g_{i}=f_{i}\) if \(13\nmid q\) and \(g_{i}=1\) if \(13|q\), that is, \(\tilde{g_{i}}=1\) and \(s+t\leq 23;i=1,2\). Let \(p\) and \(p^{\prime}\) be primes such that \(p\) divides \(\frac{q^{13}-1}{3}\) but not \(\frac{q-1}{3}\), and \(p^{\prime}\) divides \(\frac{q^{13}-1}{2}\) but not \(\frac{q-1}{2}\), respectively, i.e., \(13|2(p-1)\) and \(13|(p^{\prime}-1)\), which means that the set \(U=\{p_{1},p_{2},\ldots,p_{u}\}\) consists of primes of the type \(\frac{13i+2}{2}\) and that of
\(V=\{p^{\prime}_{1},p^{\prime}_{2},\ldots,p^{\prime}_{v}\}\) constitutes primes of the type \(13j+1\). Suppose \(\mathcal{Q}_{m}\) denote the set of first \(m\) primes of type \(\frac{13i+2}{2}\), and \(\mathcal{Q}^{\prime}_{m}\) denote the set of first \(m\) primes of type \(13j+1\). Let
\[\mathcal{S}_{m}=\sum_{r\in\mathcal{Q}_{m}}\frac{1}{r}\text{ and }\mathcal{P}_{m}=\prod_{r\in\mathcal{Q}_{m}}r,\text{ and }\mathcal{S}^{\prime}_{m}=\sum_{r\in\mathcal{Q}^{\prime}_{m}}\frac{1}{r}\text{ and }\mathcal{P}^{\prime}_{m}=\prod_{r\in\mathcal{Q}^{\prime}_{m}}r.\]
Since the elements of \(U\) are primes which divide \(\frac{q^{12}+q^{11}+\cdots+q+1}{3}\) and those of \(V\) are primes which divide \(\frac{q^{12}+q^{11}+\cdots+q+1}{2}\), we have \(\mathcal{P}_{u}\leq 4.39\times 10^{12571}\) and \(\mathcal{P}^{\prime}_{v}\leq 6.597\times 10^{12571}\), which gives \(u\leq 2482\) and \(v\leq 2482\), and \(\mathcal{S}_{u}\leq 0.111533\) and \(\mathcal{S}^{\prime}_{v}\leq 0.111533\). Assuming \(q\geq 10^{4}\), and since \(s+t\leq 23\), we have \(\delta=1-\sum_{i=1}^{u}\frac{1}{p_{i}}-\sum_{i=1}^{v}\frac{1}{p^{\prime}_{i}}-\sum_{i=1}^{s}\frac{1}{q^{\deg(P_{i})}}-\sum_{i=1}^{t}\frac{1}{q^{\deg(Q_{i})}}\geq 1-\mathcal{S}_{u}-\mathcal{S}^{\prime}_{v}-\frac{23}{q}>0.774635\), which implies \(\Delta<6438.5799\). For a real number \(\alpha>\frac{4}{3}\), if \(q\geq(44\cdot 6^{1-\frac{1}{\alpha}}\cdot 6438.5799\cdot A_{\alpha}^{2})^{\frac{2\alpha}{3\alpha-4}}\), then, by [4, Lemma 3.7] and Theorem 3.5, \((q,13)\in A_{10,11}(3,2,2,1)\); taking \(\alpha=4.3\), this holds for \(q\geq 2.289\times 10^{7}\).
For \(q\leq 2.29\times 10^{7}\), i.e., \(q=13,13^{2},13^{3},13^{4},13^{5},13^{6}\), the condition of Theorem 3.1 does not hold. For these prime powers, along with those in Table 4, we find that the sieving variation (Theorem 3.5) holds for the majority of them (Table 5), with the exception of those mentioned in Theorem 1.2. This concludes the proof of Theorem 1.2.
We can observe that for most of the values of \(n\) in Table 5, \(g_{i}=f_{i}\); \(i=1,2\).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \((q,n)\) & \((l_{1},l_{2})\) & \((f_{1}(x),f_{2}(x))\) & \((g_{1}(x),g_{2}(x))\) \\ \hline \((13^{4},13)\) & (6,10) & \((x-1,(x-1)^{2})\) & \((x-1,(x-1)^{2})\) \\ \((13^{5},13)\) & (6,2) & \((x-1,(x-1)^{2})\) & \((x-1,(x-1)^{2})\) \\ \((13^{6},13)\) & (6,2) & \((x-1,(x-1)^{2})\) & \((x-1,(x-1)^{2})\) \\ \hline \((13^{3},14)\) & (6,6) & \((x+1,x^{2}+3x+1)\) & \((x+1,x^{2}+3x+1)\) \\ \((13^{4},14)\) & (6,10) & \((x+1,x^{2}+3x+1)\) & \((x+1,x^{2}+3x+1)\) \\ \((13^{5},14)\) & (6,14) & \((x+1,x^{2}+3x+1)\) & \((x+1,x^{2}+3x+1)\) \\ \((13^{6},14)\) & (6,6) & \((x+1,x^{2}+3x+1)\) & \((x+1,x^{2}+3x+1)\) \\ \((13^{7},14)\) & (6,14) & \((x+1,x^{2}+3x+1)\) & \((x+1,x^{2}+3x+1)\) \\ \((13^{8},14)\) & (6,10) & \((x+1,x^{2}+3x+1)\) & \((x+1,x^{2}+3x+1)\) \\ \((13^{9},14)\) & (6,6) & \((x+1,x^{2}+3x+1)\) & \((x+1,x^{2}+3x+1)\) \\ \((13^{10},14)\) & (6,10) & \((x+1,x^{2}+3x+1)\) & \((x+1,x^{2}+3x+1)\) \\ \((13^{12},14)\) & (30,30) & \((x+1,x^{2}+3x+1)\) & \((x+1,x^{2}+3x+1)\) \\ \hline \((13^{2},15)\) & (6,6) & \((x+4,x^{2}+x+1)\) & \((x+4,x^{2}+x+1)\) \\ \((13^{3},15)\) & (6,6) & \((x+4,x^{2}+x+1)\) & \((x+4,x^{2}+x+1)\) \\ \((13^{4},15)\) & (30,30) & \((x+4,x^{2}+x+1)\) & \((x+4,x^{2}+x+1)\) \\ \((13^{5},15)\) & (6,6) & \((x+4,x^{2}+x+1)\) & \((x+4,x^{2}+x+1)\) \\ \((13^{6},15)\) & (6,6) & \((x+4,x^{2}+x+1)\) & \((x+4,x^{2}+x+1)\) \\ \((13^{7},15)\) & (6,6) & \((x+4,x^{2}+x+1)\) & \((x+4,x^{2}+x+1)\) \\ \((13^{8},15)\) & (30,30) & \((x+4,x^{2}+x+1)\) & \((x+4,x^{2}+x+1)\) \\ \hline \((13^{2},16)\) & (6,10) & \((x+1,x^{2}+5)\) & \((x+1,x^{2}+5)\) \\ \((13^{3},16)\) & (6,6) & \((x+1,x^{2}+5)\) & \((x+1,x^{2}+5)\) \\ \((13^{4},16)\) & (6,10) & \((x+1,x^{2}+5)\) & \((x+1,x^{2}+5)\) \\ \((13^{5},16)\) & (6,10) & \((x+1,x^{2}+5)\) & \((x+1,x^{2}+5)\) \\ \((13^{6},16)\) & (6,6) & \((x+1,x^{2}+5)\) & \((x+1,x^{2}+5)\) \\ \hline \((13^{2},18)\) & (6,30) & \((x+1,x^{2}+4x+3)\) & \((x+1,x^{2}+4x+3)\) \\ \((13^{3},18)\) & (6,2) & \((x+1,x^{2}+4x+3)\) & \((x+1,x^{2}+4x+3)\) \\ \((13^{4},18)\) & (210,210) & \((x+1,x^{2}+4x+3)\) & \((x+1,x^{2}+4x+3)\) \\ \hline \end{tabular}
\end{table}
Table 5. List of pairs \((q,n)\) where criterion of Theorem 3.1 fails but Theorem 3.5 holds true for certain selection of \(l_{1}\), \(l_{2}\), \(f_{1}(x)\), \(f_{2}(x)\), \(g_{1}(x)\), and \(g_{2}(x)\).
For \((q,n)=(13,36)\), we took \((l_{1},l_{2})=(30,210)\), \((f_{1}(x),f_{2}(x))=(x+1,x^{2}+2x+3)\) and \(g_{1}(x)=g_{2}(x)=(x+1)(x+2)(x+3)(x+4)(x+5)(x+6)(x+9)(x+10)(x+11)(x^{4}+2)(x^{ 4}+5)(x^{4}+6)(x^{4}+7)(x^{4}+8)(x^{4}+11).\)
### Acknowledgements
Prof. R. K. Sharma is the ConsenSys Blockchain Chair Professor at IIT Delhi. He is grateful to ConsenSys AG for that privilege.
|
2305.16886 | Understanding Sparse Neural Networks from their Topology via
Multipartite Graph Representations | Pruning-at-Initialization (PaI) algorithms provide Sparse Neural Networks
(SNNs) which are computationally more efficient than their dense counterparts,
and try to avoid performance degradation. While much emphasis has been directed
towards \emph{how} to prune, we still do not know \emph{what topological
metrics} of the SNNs characterize \emph{good performance}. From prior work, we
have layer-wise topological metrics by which SNN performance can be predicted:
the Ramanujan-based metrics. To exploit these metrics, proper ways to represent
network layers via Graph Encodings (GEs) are needed, with Bipartite Graph
Encodings (BGEs) being the \emph{de-facto} standard at the current stage.
Nevertheless, existing BGEs neglect the impact of the inputs, and do not
characterize the SNN in an end-to-end manner. Additionally, thanks to a
thorough study of the Ramanujan-based metrics, we discover that they are only
as good as the \emph{layer-wise density} as performance predictors, when paired
with BGEs. To close both gaps, we design a comprehensive topological analysis
for SNNs with both linear and convolutional layers, via (i) a new input-aware
Multipartite Graph Encoding (MGE) for SNNs and (ii) the design of new
end-to-end topological metrics over the MGE. With these novelties, we show the
following: (a) The proposed MGE allows to extract topological metrics that are
much better predictors of the accuracy drop than metrics computed from current
input-agnostic BGEs; (b) Which metrics are important at different sparsity
levels and for different architectures; (c) A mixture of our topological
metrics can rank PaI algorithms more effectively than Ramanujan-based metrics.
The codebase is publicly available at https://github.com/eliacunegatti/mge-snn. | Elia Cunegatti, Matteo Farina, Doina Bucur, Giovanni Iacca | 2023-05-26T12:45:58Z | http://arxiv.org/abs/2305.16886v2 | # Peeking inside Sparse Neural Networks using Multi-Partite Graph Representations
###### Abstract
Modern Deep Neural Networks (DNNs) have achieved very high performance at the expense of computational resources. To decrease the computational burden, several techniques have been proposed to extract, from a given DNN, efficient subnetworks which are able to preserve performance while reducing the number of network parameters. The literature provides a broad set of techniques to discover such subnetworks, but few works have studied the peculiar topologies of such pruned architectures. In this paper, we propose a novel _unrolled input-aware_ bipartite Graph Encoding (GE) that is able to generate, for each layer in either a sparse or a dense neural network, its corresponding graph representation based on its relation with the input data. We also extend it into a multipartite GE, to capture the relation between layers. Then, we leverage topological properties to study the difference between the existing pruning algorithms and algorithm categories, as well as the relation between topologies and performance.
## 1 Introduction
Pruning Dense Neural Networks (DNNs) has recently become one of the most promising research areas in machine learning. Network pruning consists of removing a portion of the network parameters (i.e., weights), with the aim of reducing the computational resources and the inference time while avoiding performance degradation. One of the most important findings in network pruning [26] is that, inside a randomly initialized DNN, there are subnetworks (the so-called _"winning tickets"_) that, once trained in isolation, can reach the performance of the overall dense network.
The algorithms proposed to find such subnetworks can be roughly split into four categories, which differ in _how_ and _when_ they uncover the sparse architecture. The earliest works focus on _Post-Training Pruning_, i.e., methods that, once the dense network has been fully trained, are able to uncover the sparse structures using simple heuristics to remove the lower-magnitude weights from the trained network [33, 50, 3, 19]. To decrease the computational cost of training the whole DNN, [54] and [46] propose a dense-to-sparse pruning technique employing Gradual Magnitude Pruning (GMP) to reduce the number of parameters during the training phase. The Lottery Ticket Hypothesis (LTH) [26] discovers the final sparse structure using an iterative process of training-pruning. A second group of algorithms focuses instead on _Pruning at Initialization (PaI)_, where the subnetwork is retrieved prior to training, based on a scoring criterion calculated on randomly initialized weights [35, 8, 37, 44, 1]. A third family of algorithms, called _Dynamic Sparse Training (DST)_ techniques, aims at modifying the structure of the sparse network during the training process [18, 12, 34, 45, 14, 30]. The last category of pruning algorithms, based on the so-called _Strong Lottery Ticket Hypothesis (SLTH)_, differs from the previous categories since the weights are not trained at all [20, 49, 27].
Applying such pruning algorithms to a DNN \(f(x,\theta)\) provides a binary mask \(m\in\{0,1\}^{|\theta|}\) that generates a Sparse Neural Network (SNN) \(f(x,m\odot\theta)\) characterized by a certain topology. While many papers have investigated _how_ to find sparse architectures, only a few have looked at _why_ (from a topological viewpoint) SNNs perform well, and _how_ different pruning algorithms actually produce different SNN topologies. An SNN has indeed a unique topology, whose graph representation can be constructed and analyzed [31, 36]. The state-of-the-art in graph construction for SNNs with convolutional layers is based on _rolled_ representations [17, 36, 13], where each node represents a layer parameter, i.e., either a filter/channel or a kernel value. However, such _rolled_ representations do not capture the complete structure of the network. In fact, in these representations, the kernel parameters are used as static nodes. On the other hand,
convolutional operations use the kernel _dynamically_ over the input data. The latter can be represented as nodes only in _unrolled_ representations, see [4].
To overcome this limitation, we propose a novel _unrolled input-aware_ Graph Encoding (GE) which fully represents the relation between the layer parameters and the layer input data. This encoding is radically different from all the _rolled_ encodings previously proposed in the literature for the complete graph representation of convolutional layers. In particular, unlike the weighted _unrolled_ GE proposed in [4], which is only designed to associate the weights of a DNN to graph edges, our proposed GE works with both dense and sparse architectures and focuses on the presence/absence of edges, rather than their weights. Furthermore, our GE generates one bipartite graph _for each layer_, rather than one bipartite graph _for each combination of input feature maps and filters_, as in [4]. Our GE works as follows: it takes as input a neural network (dense or sparse), and the input data size, and generates one bipartite graph for each layer. Nodes correspond to that layer's inputs (e.g., in computer vision tasks, these are pixels in feature maps) while edges correspond to the masked/unmasked relation between those inputs and the output features, which in turn is based on the pruned layer parameters (in the case of SNNs; in the case of DNNs, all parameters are considered). We then extend this GE into a multipartite graph, to describe relations between layers.
Finally, we use the proposed GE to thoroughly study different categories of state-of-the-art unstructured pruning algorithms, to "peek inside" the SNNs they generate, and to understand how the _topological features_ of those SNNs are related to their _performance drop_ w.r.t. their corresponding DNNs, especially at extreme sparsity levels. We base our analysis on a large pool of SNNs obtained by combining eleven pruning algorithms, five sparsity ratios, three datasets, and four Convolutional Neural Networks (CNNs) benchmarked in the literature on SNNs, such as Conv-6 [26], as well as on state-of-the-art CNNs, such as ResNet [22], Wide-Resnet [53], and MobileNet [23; 38].
To summarize, the main contributions of this paper can be outlined as follows:
1. a novel _unrolled input-aware_ GE which correctly reflects the convolutional operations, and links consecutive layers into a single multipartite graph representing the whole SNN;
2. an extensive study about how each pruning algorithm (and algorithm category) generates sparse topologies with similar patterns;
3. an analysis of which topological features can predict the performance drop of SNNs.
## 2 Related Work
We summarize the literature on pruning algorithms and graph representations of neural networks.
### Pruning Algorithms
**Pruning at Initialization.** This set of algorithms aims at discovering the best subnetwork (prior to training) and selecting the weights based on some predefined criteria. The earliest approach of this kind is SNIP [35], which aims at selecting the weights based on their influence on the loss function. Moreover, an iterative version of SNIP is presented in [37]. GraSP [8] applies a gradient signal preservation mechanism based on the Hessian-gradient product, while SynFlow [44] maximizes the synaptic strengths. More recently, ProsPR [1] has been devised in order to maximize the trainability through meta-gradients over the first steps of the optimization process. Lastly, NTK-SAP [51] uses neural tangent kernel theory to remove the less informative connections. These approaches are relatively cheap in terms of computational cost, since the mask is found before training. However, as the sparsity ratio increases, performance deteriorates faster than with other categories of pruning algorithms, due to the difficulty of training SNNs from scratch [16; 15] and to the fact that the architecture is statically determined and cannot change during the training phase.
**Dynamic Sparse Training.** This category of algorithms has been proposed to tackle the aforementioned limitations of PaI, thus allowing higher performance at the cost of a slight increase of the computational cost. Using gradient information retrieved during the training, sparse architectures can change their topology based on magnitude pruning followed by a pre-defined growth criteria e.g. based on random growth [12; 34], momentum [45], or absolute gradients [14]. To overcome the limitation of fixed layer-wise sparsity ratio, a few reparametrization techniques have been proposed [34; 45], in order to better allocate parameters over layers.
**Sanity Checks.** Recent works [43; 25] have questioned the ability of PaI algorithms to uncover the most effective architectures, stating that they rather focus on discovering the best _layer-wise sparsity ratios_. Their ability to improve performance over random pruning algorithms has been discussed in [39], which shows how even small perturbations of random pruning (Layer-wise Random Pruning such as ER [12] and ERK [14]) are able to outperform well-engineered PaI algorithms.
### Graph Representation of Sparse and Dense Neural Networks
In order to gain insights from DNN topologies, several works have tried to devise weighted graph representations of the networks. Based on such representations, early-stopping criteria [4], customized initialization techniques [29], and performance predictors [47; 48] have been introduced.
What sets an SNN apart from a DNN is its unique topology, which can provide insight into its ability to solve a given task even when removing a portion of parameters. Grounded in graph theory, the random generation of small sparse structures for both Multi-Layer Perceptrons (MLPs) [6; 42] and CNNs [52] has been studied, showing that performance is associated with the _clustering coefficient_ (which measures the capacity of nodes to cluster together) and the _average path length_ of its graph representation (i.e., the average number of connections across all shortest paths).
To investigate the topological properties of SNNs, different metrics have been proposed. In [31], Graph-Edit-Distance has been used as a similarity measure, computed only on MLPs, to show how the graph topology evolves using a _Dynamic Sparse Training_ algorithm such as SET [12]. By using Singular Vector Canonical Correlation Analysis, in [5; 2] it has been shown that different topologies are able to achieve similar performances. Clusterability has been taken into account in [17], showing that MLPs are more clusterable than CNNs. Finally, the performance of SNNs has been investigated via basic graph properties [41] and by means of Ramanujan graphs for PaI algorithms, indicating that performance correlates with the graph connectivity [36] and can be predicted, e.g., using the Iterative Mean Difference of Bound (IMDB) [13].
In all the works mentioned above, the graph representation of the convolutional layers is modelled either with a relational graph [52], or with a _rolled_ encoding based only on the kernel parameters [36; 13], rather than on the relationship between the latter and the input data. To the best of our knowledge, so far only [4] proposed an _unrolled_ GE but, besides the fact that it considers the network weights, this method generates several bipartite graphs for each convolutional layer, while our approach generates only one bipartite graph per layer. From a topological point of view, the approach from [4] has in fact two main limitations. Firstly, it does not make it possible to directly calculate topological measures for any layer, since each graph contains only partial information about it. Secondly, since in the convolutional operations each \(j^{th}\) filter is convolved with the \(a^{th}\) input feature map one by one, separating these operations does not allow computing the correct final contribution over the input data.
## 3 Methodology
In this section, we first introduce the novel _unrolled input-aware_ graph encoding and its formulation in the _bipartite_ version. We then extend it to the _multipartite_ version, which links consecutive layers. Finally, we propose topological metrics that can be extracted from such GEs.
### Bipartite Graph Encoding (BGE)
The proposed BGE encodes a neural network as a list of unweighted directed acyclic bipartite graphs \(G=(G_{1},\ldots,G_{N})\), with \(N\) the number of layers in the neural network. The individual graphs are not linked into a single graph. Our notation is summarized in Table 1.
Due to its design, the bipartite graph construction differs for linear and convolutional layers. For linear layers, we use the encoding proposed in [4; 17; 47; 13]: denoting with \(L\) and \(R\) respectively the left and right layer of the \(i\)-th bipartite
\begin{table}
\begin{tabular}{l l} \hline \hline
**Symbol** & **Definition** \\ \hline \(G=(L\cup R,E)\) & bipartite graph with left node set \(L\), right node set \(R\) (for a \\ & total of \(|L|+|R|\) nodes), and edge set \(E\) \\ \hline \(N\) & number of layers \\ \(h,w\) & height and width of the input feature map \\ \(M\) & binary mask of pruned/unpruned weights \\ \(W\) & layer parameters \\ \(h_{\text{\tiny{ker}}},w_{\text{\tiny{ker}}}\) & height and width of kernel \\ \(c_{\text{\tiny{in}}},c_{\text{\tiny{out}}}\) & number of input and output channels \\ \(P,S\) & padding, stride \\ \hline \hline \end{tabular}
\end{table}
Table 1: Notation used in the paper. We consider the case of vision tasks.
graph, and given a binary mask \(M_{i}\in\{0,1\}^{|L_{i}|\times|R_{i}|}\), its corresponding GE is \(G_{i}=(L_{i}\cup R_{i},E_{i})\), where \(E_{i}\) is the set of edges present in \(M_{i}\), i.e., \((a,b)\in E_{i}\iff M_{i}^{a,b}\neq 0\).
For convolutional layers, our approach is substantially different from all the previous ones proposed in the literature. Specifically, we devise our encoding based on the _unrolled_ input size: given as input, for each \(i\)-th layer, a set of feature maps \(I_{i}\in\mathbb{R}^{h_{i}\times w_{i}\times c_{in}}\), we construct the corresponding bipartite graph as \(G_{i}=(L_{i}\cup R_{i},E_{i})\), where again \(L_{i}\) and \(R_{i}\) are the two layers of the bipartite graph, and \(L_{i}\) corresponds to the flattened representation of the inputs. The size of the layer \(R_{i}\), i.e., the output feature map, is calculated based on the input size \(I_{i}\) and the layer parameters \(W_{i}\in\mathbb{R}^{c_{out}\times c_{in}\times h_{\textit{ker}}\times w_{\textit{ker}}}\):
\[|L_{i}|=h_{i}\times w_{i}\times c_{in}\hskip 28.452756pt|R_{i}|=\left(\frac{h_{ i}-h_{\textit{ker}}}{S}+1\right)\times\left(\frac{w_{i}-w_{\textit{ker}}}{S}+1 \right)\times c_{out}. \tag{1}\]
Differently from the linear layer case, the set of edges \(E\) cannot be directly computed from the convolutional mask \(M_{i}\in\{0,1\}^{c_{out}\times c_{in}\times h_{\textit{ker}}\times w_{\textit{ker}}}\) since the latter is dynamically computed over the input data:1:
Footnote 1: The formula uses cross-correlation.
\[x_{i,j}^{out}=\sum_{in=0}^{c_{in}-1}\sum_{u=-h_{\textit{ker}}}^{h_{\textit{ker}}}\sum_{v=-w_{\textit{ker}}}^{w_{\textit{ker}}}I_{u,v}^{in}\times M_{i+u,j+v}^{out,in}\hskip 14.226378pt\forall\ out\in[0,c_{\textit{out}}). \tag{2}\]
From Eq. (1), we know that \(I_{u,v}^{in}\) and \(x_{i,j}^{out}\) respectively correspond to a node \(a_{(u+v)\times in}\in L_{i}\) and a node \(b_{(i+j)\times out}\in R_{i}\), so in this case the edges of the bipartite graph are constructed during the convolutional operation such that:
\[E_{i}=\{(a_{(u+v)\times in},b_{(i+j)\times out})\mid M_{i+u,j+v}^{out,in}\neq 0 \hskip 5.690551pt\forall\ out,in,u,v\} \tag{3}\]
where the ranges of \(out,in,u,v\) are defined according to Eq. (1), and \(in\) and \(out\) are respectively the IDs of the input and the output channel taken into consideration for that convolutional step2, and \(i+u,j+v\) correspond to one kernel entry. Intuitively, given a layer \(l^{i}\), each input element (e.g., in computer vision tasks, each pixel) represents a node in the graph, and the connection between an element of the input (denoted as \(a\)) and an element of the output feature map (denoted as \(b\)) is present if and only if during the convolutional operation, the contribution of \(a\) for generating \(b\) is not set to zero by the mask \(M_{i}\) in the kernel cell used to convolute the two pixels. An illustration of such encoding, which highlights the construction of the graph throughout the convolutional steps for both dense and sparse networks, is shown in Figure 1.
Footnote 2: In case of depth-wise separable convolution [23], the steps are only computed if \(in=out\).
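For concreteness, the edge construction of Eq. (3) can be simulated directly: for every application of the (possibly pruned) kernel, one records which pairs of input and output pixels are connected through an unpruned kernel entry. The following Python sketch (our own illustrative code, using only NumPy, with stride \(S\) and no padding as in Figure 1) builds the bipartite edge list of a single convolutional layer; the node labelling is ours and does not follow the exact index convention of Eq. (1).

```python
import numpy as np

def conv_bge_edges(mask, h, w, stride=1):
    # mask: binary array of shape (c_out, c_in, h_ker, w_ker); 1 = kept weight, 0 = pruned.
    # h, w: spatial size of the input feature maps (no padding, as in Figure 1).
    # Left nodes:  ('in', channel, row, col)   -- one per input pixel.
    # Right nodes: ('out', channel, row, col)  -- one per output pixel.
    c_out, c_in, h_ker, w_ker = mask.shape
    h_out = (h - h_ker) // stride + 1
    w_out = (w - w_ker) // stride + 1          # cf. Eq. (1) with P = 0
    edges = []
    for oc in range(c_out):
        for ic in range(c_in):
            for i in range(h_out):
                for j in range(w_out):
                    for u in range(h_ker):
                        for v in range(w_ker):
                            # Edge whenever an input pixel contributes to an output
                            # pixel through an unpruned kernel entry (cf. Eq. (3)).
                            if mask[oc, ic, u, v] != 0:
                                src = ('in', ic, i * stride + u, j * stride + v)
                                dst = ('out', oc, i, j)
                                edges.append((src, dst))
    return edges

# Toy example mirroring Figure 1: a 3x3x3 input, two 2x2 filters, random pruning mask.
rng = np.random.default_rng(0)
mask = (rng.random((2, 3, 2, 2)) > 0.5).astype(int)
print(len(conv_bge_edges(mask, h=3, w=3)), "edges")
```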
### Multipartite Graph Encoding (MGE)
The bipartite GE described above has been devised to encode, independently, each single layer (either convolutional or linear) in a network. However, the limitation of this BGE lies in the lack of connections between consecutive (and, indirectly, non-consecutive) layers. As mentioned earlier, this limitation is however common to all the other GEs proposed in the literature [4; 47; 36; 13], that analyze the layers one by one, without connecting consecutive layers \(l_{i}\) and \(l_{i+1}\). On the other hand, differently from the existing encodings, our bipartite GE can be straightforwardly extended into a multipartite GE, in order to encode the whole network as an _unweighted directed acyclic multipartite graph_\(G=(G_{1},\ldots,G_{N})\), where each pair of consecutive graphs \(G_{i}\) and \(G_{i+1}\) is linked such that \(R_{G_{i}}=L_{G_{i+1}}\)3. The set of edges for each partition \(G_{i}\) is computed as described in Section 3.1. However, an extension of the previous encoding is needed for connecting consecutive layers when a pooling operation is employed between them, as explained in Appendix C.
Footnote 3: The graph representation of residual connections is not taken into consideration since the number of parameters is much smaller compared to classical convolutional layers.
### Topological Metrics
The unrolled GE proposed allows us to study the SNNs from a topological perspective, including a first theoretical analysis of the network connectivity between consecutive layers. We compute a number of _topological metrics_ (in the following, referred to for brevity as _topometrics_) over SNN topologies. These topometrics can be broken down into three categories: two structural (that we call **local** and **regional** graph metrics), and one related to the **stability** of pruning.
The **local** graph metrics are those computable over individual nodes or edges. These metrics (1) are computationally inexpensive, and (2) are able to capture some features of the graph connectivity between consecutive layers. Node-based topometrics include the fraction of _sink_, _source_, and _disconnected nodes_ over the MGE. The sink and source4 nodes are, respectively, those with outdegree and indegree of zero. The disconnected nodes are those with neither incoming nor outgoing connections. Considering the sink and source nodes, it is possible to compute the fraction of _removable connections_, which are edge-based topometrics. The out-connections of the set of source nodes (denoted here \(\alpha\)) are \(\text{r-out}=\frac{1}{|E|}\cdot\sum_{n\in\alpha}\text{outdegree}(n)\). The in-connections of the set of sink nodes (denoted here \(\beta\)) are \(\text{r-in}=\frac{1}{|E|}\cdot\sum_{n\in\beta}\text{indegree}(n)\). In fact, both these types of connections are useless for the final SNN performance, since they are ignored at inference.
Footnote 4: Padding nodes are already removed from the source set, since they have zero in-connections by design.
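As an illustration, these node- and edge-based quantities can be computed directly on a directed graph built from the edge lists of Section 3.1; a minimal sketch using `networkx` is given below. The function and variable names are ours, and the sketch applies the definitions above literally, i.e., it does not exclude the network's actual input/output nodes (nor the padding nodes mentioned in footnote 4), which would be handled separately in practice.

```python
import networkx as nx

def local_topometrics(nodes, edges):
    # Fractions of source, sink and disconnected nodes, plus the removable
    # connection ratios r-out and r-in, over a directed (multipartite) graph.
    G = nx.DiGraph()
    G.add_nodes_from(nodes)          # keep isolated (disconnected) nodes in the graph
    G.add_edges_from(edges)
    n_nodes, n_edges = G.number_of_nodes(), G.number_of_edges()
    source = [v for v in G if G.in_degree(v) == 0]     # no incoming connections
    sink = [v for v in G if G.out_degree(v) == 0]      # no outgoing connections
    disconnected = [v for v in G if G.degree(v) == 0]  # neither incoming nor outgoing
    r_out = sum(G.out_degree(v) for v in source) / n_edges if n_edges else 0.0
    r_in = sum(G.in_degree(v) for v in sink) / n_edges if n_edges else 0.0
    return {'source': len(source) / n_nodes, 'sink': len(sink) / n_nodes,
            'disconnected': len(disconnected) / n_nodes, 'r-out': r_out, 'r-in': r_in}

# Toy example: a path 0 -> 1 -> 2 plus an isolated node 3.
print(local_topometrics(nodes=[0, 1, 2, 3], edges=[(0, 1), (1, 2)]))
```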
The **regional** metrics are calculated over linked subgraphs in the MGE, such as \(G=(G_{i},G_{i+1})\) (hence any pair of consecutive BGEs, i.e., each tripartite slice). They (1) are more expensive computationally, but (2) can better analyze the connectivity of the networks. These topometrics are: the number of _motifs_ (of size \(3\)), the number of _connected components_ (also known as clusters) and the _edge connectivity_ (i.e., the number of "bridges" or edges to cut in order to disconnect the graph). Each topometric has been normalized based on the number of edges present in the graph representation--to prevent the graph size from being a confounding variable for the topological study conducted in Section 4.
The **stability** metrics are calculated in order to gain insights about how relevant (and stable) the graph edges are for a given task. These metrics, which we call \(SJ\) (Stability-Jaccard) and \(SO\) (Stability-Overlap), can be computed between any two graph representations of SNNs in two settings: 1) an _init_ setting, where the pruning algorithm, sparsity ratio, and dataset are fixed, while the initialization seed is changed, and 2) a _data_ setting, where the pruning algorithm, sparsity ratio, and initialization seed are fixed, while the input dataset is changed. These two metrics are computed over the graph edges respectively using the Jaccard index (Intersection over Union) and the overlap coefficient (Szymkiewicz-Simpson):
\[SJ=\frac{\sum_{i=0}^{N}\frac{|E_{i}^{1}\cap E_{i}^{2}|}{|E_{i}^{1}\cup E_{i}^{ 2}|}\times e_{i}}{\sum_{i=0}^{N}e_{i}}\qquad SO=\frac{\sum_{i=0}^{N}\frac{|E_{ i}^{1}\cap E_{i}^{2}|}{\min(|E_{i}^{1}|,|E_{i}^{2}|)}\times e_{i}}{\sum_{i=0}^{N}e_{i}} \tag{4}\]
where \(e_{i}=\frac{|E_{i}^{1}|+|E_{i}^{2}|}{2}\). A value of either \(SO\) or \(SJ\) close to \(1\) has a different meaning per setting. In the _init_ setting, it means that the pruning algorithm finds a topological structure which is not related to the values of the initialized weights. On the other hand, for the _data_ setting a value close to \(1\) means that the algorithm finds the exact same topological structure independently from the input dataset.
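Given the per-layer edge sets of the two graph representations being compared, Eq. (4) can be evaluated directly; the short Python sketch below (the function name and toy edge sets are ours) computes both \(SJ\) and \(SO\).

```python
def stability_scores(edges_a, edges_b):
    # SJ and SO from Eq. (4); edges_a and edges_b are lists of per-layer edge sets
    # of the two SNN graph representations being compared.
    sj_num = so_num = denom = 0.0
    for E1, E2 in zip(edges_a, edges_b):
        e_i = (len(E1) + len(E2)) / 2.0              # layer weight e_i
        inter = len(E1 & E2)
        union = len(E1 | E2)
        smaller = min(len(E1), len(E2))
        sj_num += (inter / union if union else 1.0) * e_i
        so_num += (inter / smaller if smaller else 1.0) * e_i
        denom += e_i
    return sj_num / denom, so_num / denom

# Two-layer toy example.
graph_a = [{(0, 1), (0, 2), (1, 2)}, {(2, 3)}]
graph_b = [{(0, 1), (1, 2)}, {(2, 3), (2, 4)}]
print(stability_scores(graph_a, graph_b))
```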
Figure 1: Illustration of the proposed unrolled input-aware BGE with \(I=3\times 3\times 3\) and convolutional parameters \((c_{\textit{out}}=2,c_{\textit{in}}=3,w_{\textit{ker}}=2,h_{\textit{ker}}=2,P= 0,S=1)\). (a) and (b) show, respectively, the first and second convolutional steps and how the graph edges are generated assuming that all the kernel parameters are unmasked. (c) shows the complete graph representation before pruning the kernel parameters. (d) shows the final graph representation after pruning the kernel parameters.
Experiments
In this section, we show how our proposed _input-aware unrolled GE_ provides meaningful topometrics from graph representations of SNNs. We then use the topometrics for two purposes: 1) to classify both pruning algorithms and their categories, and 2) to develop a regression analysis to capture which topometrics can predict the accuracy drop of SNNs for different sparsity ratios. The first setting allows us to understand what makes SNNs produced by PaIs, DSTs and Layer-wise Random Pruning algorithms topologically different. The second setting allows us to understand how a certain sparse topology affects the architecture performance (from now on, we use "architecture" to refer to each CNN considered in our experimentation) and what may make some pruning algorithms perform better than others. It is worth mentioning that such analyses are based only on the unweighted graph representation of SNNs, and hence do not take into consideration the weight values, which could be highly dependent on the hyperparameters used in the training processes.
**Experimental Setup.** We generated a large pool of SNNs for our purposes. We use eleven different pruning algorithms: four Pruning at Initialization methods (Table 2), four Dynamic Sparse Training algorithms (Table 4), and three instances of Layer-wise Random Pruning (Table 3).
Since the graph size of the proposed GE is based on the size of the input data, we selected three datasets with the same data sizes, namely CIFAR-10, CIFAR-100 [28], and the downscaled Tiny-ImageNet (of size \(32\times 32\) pixels) [11]. We then used four different architectures designed for such input size, namely Conv-6 [26], Resnet-20 [22], Wide-Resnet-28-2 [53], and MobileNet-V2 [38]. We considered five sparsity values to cover a broad spectrum, namely \(s\in[0.6,0.8,0.9,0.95,0.98]\) (as in [13]). We trained each combination of \(\langle\)pruning algorithm, dataset, architecture,sparsity\(\rangle\) for \(3\) runs, obtaining a pool of \(1,980\) sparse architectures. More information on architectures, datasets, and hyperparameters is in Appendix A; the numerical results in terms of training accuracy (which correctly reproduce those reported in the literature) are in Appendix B.
The topometrics taken into consideration in the following experiments are the ones described in Section 3.3, namely: 1) _local_ metrics, which consist of graph properties over nodes such as the fraction of source, sink, and disconnected nodes, plus metrics over edges, such as the fraction of removable connections (both in and out); 2) _regional_ metrics, which consist of the number of motifs of size \(3\) over our directed acyclic multipartite graphs, the edge-connectivity (i.e., the percentage of bridge connections), and the number of clusters; 3) _stability_ metrics, which are the \(SJ\) and \(SO\) metrics both for the _init_ and the _data_ settings; d) _combination_, which considers all these metrics together. For the classification and regression analysis, we use XGBoost [9].
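As a reference for reproducing the analyses below, a minimal sketch of the classification setting on top of `xgboost` and `scikit-learn` is reported here; the feature matrix and labels are random placeholders standing in for the actual topometrics table, and the hyperparameters are illustrative choices of ours rather than those used in the experiments.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder

# Placeholder data: one row per SNN, one column per topometric.
rng = np.random.default_rng(0)
X = rng.random((660, 12))                               # e.g., 12 topometrics
y = rng.choice(['PaI', 'DST', 'Random'], size=660)      # pruning-algorithm category

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
scores = cross_val_score(clf, X, LabelEncoder().fit_transform(y),
                         cv=5, scoring='balanced_accuracy')
print(scores.mean())
```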
### Topological Classification
The first step towards understanding whether different pruning algorithms yield similar or diverse topologies is to check whether the graph representations can be correctly classified based on their topological features. This analysis has been conducted for classifying both pruning algorithms and their categories (PaI, DST, Layer-wise Random Pruning). To do that, we average the topological properties of the SNNs obtained over different runs for the same combination \(\langle\)architecture, sparsity, dataset\(\rangle\), in order to avoid overfitting, and then remove the duplicate entries. For each type of topometrics, we tested the classification accuracy over two different data subsets: 1) **Sparsity-wise**, i.e., we conduct the classification separately for each sparsity ratio, and 2) **Architecture-wise**, i.e., we conduct the classification separately
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Algorithm** & **Drop** & **Growth** & **Redistribution** & **Sparse Init.** \\ \hline DSR [34] & \(|\theta|\) & random & random (zero layers) & Uniform \\ SNFS [45] & \(|\theta|\) & momentum & momentum & Uniform \\ RigL [14] & \(|\theta|\) & gradient & ✗ & ERK \\ SET-ITOP [32] & \(|\theta|\) & random & ✗ & ERK \\ \hline \hline \end{tabular}
\end{table}
Table 4: Dynamic Sparse Training Pruning Algorithms.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Pruning Method** & **Drop** & **Sanity Check** & **Training Data** & **Iterative** \\ \hline SNIP [35] & \(|\nabla_{\theta}L(\theta)|\) & ✗ & ✓ & ✗ \\ GraSP [8] & \(-H\nabla_{\theta}L(\theta)\) & ✗ & ✓ & ✗ \\ Synflow [44] & \(\frac{\partial\mathcal{R}}{\partial\theta}\theta,\ \mathcal{R}=\mathbb{1}^{\top}\big(\prod_{l=1}^{L}|\theta^{l}|\big)\mathbb{1}\) & ✗ & ✗ & ✓ \\ ProsPr [1] & \(|\nabla_{\theta_{e}}L(\theta_{e})|\) & ✗ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Pruning at Initialization Algorithms.
\begin{table}
\begin{tabular}{l c} \hline \hline
**Algorithm** & **Layer-wise Sparsity \(s^{l}\ \forall\ l\in[0,N)\)** \\ \hline ER [12] & \(1-\frac{n^{l-1}+n^{l}}{n^{l-1}\times n^{l}}\) \\ ERK [14] & \(1-\frac{n^{l-1}+n^{l}+w^{l}+h^{l}}{n^{l-1}\times n^{l}\times w^{l}\times h^{l}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Random Pruning Algs.
for each architecture. We also consider the case where all data are taken together (this case is referred to as "General"). We report the results in Table 5.
The results show that different pruning algorithm categories generate SNNs with _different topological features_, which can be effectively classified with an average cross-validation balanced accuracy of \(\sim 0.9\). It is also clear that the network topologies become more _separable_ (i.e., the classification accuracy increases) with increasing sparsity ratio. For the classification of the pruning algorithms, the accuracy is computed over \(11\) classes, which means fewer samples per class are available for training (compared to the classification by algorithm categories). However, on average, it is still possible to reach an accuracy of \(\sim 0.7\). For both Sparsity-wise and Architecture-wise classification, the best results are achieved when all the topometrics are used together. In Appendix D, we report the feature importance scores for both classification tasks in the "General" case.
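As a rough sketch of this evaluation protocol (our own reconstruction from the description above and the caption of Table 5, with illustrative variable names; not the original code), the balanced accuracy could be estimated as follows, assuming `X` holds the averaged topometrics and `y` the integer-encoded class labels:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

def balanced_accuracy_cv(X: np.ndarray, y: np.ndarray, n_runs: int = 100) -> float:
    """Stratified 5-fold balanced accuracy, averaged over repeated runs."""
    scores = []
    for seed in range(n_runs):
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
        clf = XGBClassifier(random_state=seed)
        scores.append(
            cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy").mean()
        )
    return float(np.mean(scores))
```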
### Performance Prediction
The previous analysis showed that, from a classification perspective, different pruning algorithms (and categories thereof) generate network topologies with distinct topological features. However, this analysis does not reveal how those features are associated with the **performance drop** of SNNs. It is already known that: 1) the performance drop is not always proportional to the sparsity ratio, and 2) the performance of SNNs for some (sparsity, architecture, dataset) combinations can even be better than that of their dense counterparts [26; 25; 39]. Aiming at discovering associations between the topometrics and the performance drop, we conduct the following analysis. Starting from our pool of SNNs, we train a regression model and analyze its coefficient of determination \(R^{2}\) (computed as \(1-\frac{\sum_{i}(y_{i}-\hat{y_{i}})^{2}}{\sum_{i}(y_{i}-\bar{y})^{2}}\)), which has been shown to be the most informative measure for discovering associations between input features and predicted variables [10]. For the regression, we use as inputs the same topometrics introduced before, and compute the performance drop as \(1-\frac{accuracy_{s}}{accuracy_{d}}\), where \(s\) and \(d\) respectively correspond to the sparse and dense version of any given architecture. We conduct this analysis separately for each dataset, because the performance drop depends strongly on the degree of over-parametrization of the architecture with respect to the dataset.
Then, for each dataset, we study the association between the topometrics and the performance drop for both the Sparsity-wise and Architecture-wise cases. These two analyses allow us to investigate from a topological perspective: 1) what makes certain SNNs, given the same fraction of parameters w.r.t. the dense version, perform better than others, and 2) what topological properties, for a given architecture, make its sparse version perform worse.
The \(R^{2}\) coefficient values obtained using the _combination of all the proposed topometrics_ are shown in Table 6. The results for each single category of topometrics (_local_, _regional_ and _stability_) are available in Appendix D.2. Also for this study, the results reported are based on stratified cross-validation over \(100\) runs. To further assess the validity of our results, we also conducted an _ablation_ study. For the Sparsity-wise case, we calculated the \(R^{2}\) coefficient between architectures and corresponding performance drops separately for each value of sparsity ratio. For the Architecture-wise case, we calculated the \(R^{2}\) coefficient between sparsity ratios and performance drops separately for each architecture. It can be clearly noticed that our topological approach reaches an \(R^{2}\) coefficient much higher than that of the ablation studies, meaning that the proposed topometrics: 1) have a much higher predictive power than sparsity ratio and architecture alone, and 2) particularly for the Architecture-wise case, add valuable information that is not captured when considering only the sparsity.
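A minimal sketch of this regression analysis (again our own illustration with hypothetical names, assuming `X` holds the topometrics, `y` the performance drops, and `strata` the labels used to stratify the folds):

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBRegressor

def regression_r2(X: np.ndarray, y: np.ndarray, strata: np.ndarray,
                  n_runs: int = 100) -> float:
    """Mean R^2 of an XGBoost regressor over repeated stratified 5-fold splits."""
    r2s = []
    for seed in range(n_runs):
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
        for train_idx, test_idx in cv.split(X, strata):
            model = XGBRegressor(random_state=seed)
            model.fit(X[train_idx], y[train_idx])
            r2s.append(r2_score(y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(r2s))
```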
In addition, we analyzed the feature importance scores (using permutation importance) obtained during the regression analysis to find the most discriminative topometrics. Figure 2 (top-row) shows the feature importance for the Sparsity-wise case. The results have been averaged over \(100\) runs and then averaged over the three datasets (the results for each dataset are reported in Appendix D.2). For the Sparsity-wise case, the feature importance follows a clear pattern when increasing the sparsity ratio. Overall, the most discriminative feature turns out to be the number of motifs, i.e.,
\begin{table}
\begin{tabular}{l l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Classes**} & \multirow{2}{*}{**Topometrics**} & \multicolumn{3}{c}{**Accuracy**} & \multicolumn{3}{c}{**Accuracy (Sparsity-wise)**} & \multicolumn{3}{c}{**Accuracy (Architecture-wise)**} \\ \cline{3-11} & & **(General)** & 0.6 & 0.8 & 0.9 & 0.95 & 0.98 & Conv-6 & Resnet-20 & Wide-Resnet-28-2 & MobileNet-V2 \\ \hline \multirow{4}{*}{**Pruning**} & **Local** & 7.5\(\pm\)0.1 & 66\(\pm\)0.2 & 71\(\pm\)0.2 & 73\(\pm\)0.3 & 81\(\pm\)0.3 & 85\(\pm\)0.3 & 76\(\pm\)0.2 & 67\(\pm\)0.3 & 79\(\pm\)0.2 & 93\(\pm\)0.2 \\ & **Regional** & 8.3\(\pm\)0.2 & 74\(\pm\)0.3 & 85\(\pm\)0.3 & 86\(\pm\)0.3 & 88\(\pm\)0.3 & 91\(\pm\)0.2 & 87\(\pm\)0.2 & 73\(\pm\)0.3 & 84\(\pm\)0.3 & 93\(\pm\)0.2 \\ \cline{1-1} & **Stability** & 76\(\pm\)0.1 & 79\(\pm\)0.3 & 83\(\pm\)0.3 & 76\(\pm\)0.4 & 80\(\pm\)0.3 & 79\(\pm\)0.4 & 76\(\pm\)0.3 & 74\(\pm\)0.3 & 74\(\pm\)0.3 & 72\(\pm\)0.3 & 77\(\pm\)0.4 \\ \cline{1-1} & **Combination** & **.95\(\pm\)0.1** & **89\(\pm\)0.3** & **93\(\pm\)0.3** & **92\(\pm\)0.3** & **93\(\pm\)0.2** & **95\(\pm\)0.2** & **94\(\pm\)0.2** & **89\(\pm\)0.2** & **91\(\pm\)0.3** & **96\(\pm\)0.2** \\ \hline \multirow{4}{*}{**Pruning**} & **Local** & 39.0\(\pm\)0.1 & 31\(\pm\)0.2 & 39\(\pm\)0.2 & 40\(\pm\)0.3 & 40\(\pm\)0.3 & 50\(\pm\)0.4 & 36\(\pm\)0.3 & 33\(\pm\)0.3 & 34\(\pm\)0.3 & 64\(\pm\)0.3 \\ \cline{1-1} & **Regional** & 49\(\pm\)0.2 & 57\(\pm\)0.4 & 57\(\pm\)0.4 & 53\(\pm\)0.3 & 55\(\pm\)0.4 & 58\(\pm\)0.4 & 60\(\pm\)0.4 & 44\(\pm\)0.3 & 52\(\pm\)0.4 & 60\(\pm\)0.4 \\ \cline{1-1} & **Stability** & 57\(\pm\)0.2 & 67\(\pm\)0.3 & 70\(\pm\)0.3 & 61\(\pm\)0.4 & 63\(\pm\)0.3 & 60\(\pm\)0.4 & 58\(\pm\)0.3 & 55\(\pm\)0.3 & 55\(\pm\)0.3 & 58\(\pm\)0.3 \\ \cline{1-1} & **Combination** & **.72\(\pm\)0.2** & **74\(\pm\)0.3** & **80\(\pm\)0.3** & **73\(\pm\)0.3** & **72\(\pm\)0.3** & **77\(\pm\)0.3** & **73\(\pm\)0.3** & **65\(\pm\)0.3** & **66\(\pm\)0.4** & **71\(\pm\)0.3** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Cross-validation balanced accuracy, using stratified k-fold with \(k=5\), for both pruning algorithm and algorithm category classification. The values have been averaged over \(100\) runs.
the number of significant recurrent subgraphs of size \(3\) in the graph representation. Another clear trend regards the different categories of topometrics: as the sparsity ratio increases, the _local_ metrics start to be more discriminative, while the _regional_ and _stability_ ones follow the inverse trend. Different sparsity ratios remove a different portion of parameters (and the corresponding edges in the graph representation) and disconnect the networks in different ways: for this reason, the feature importance scores "evolve" with the sparsity ratios. For instance, the importance of _clusters_ and _removable connections_ increases when the networks are sparser, therefore such metrics start to be effective in the regression analysis.
The same analysis was done for the Architecture-wise case, see Figure 2 (bottom row), where the results previously discussed are confirmed yet again. Also in this case, the number of motifs is the most discriminative feature, followed by edge connectivity (no. of bridges). It is also interesting that in a smaller network such as Resnet-20 (whose number of parameters is only \(\sim 0.1-0.2\) times that of the other considered networks), the most discriminative feature turns out to be the number of removable connections. Overall, the metrics that are most strongly associated with the performance drop are the ones related to network connectivity.
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{
\begin{tabular}{c} **Input** \\ **features** \\ \end{tabular} } & \multicolumn{4}{c}{\(\boldsymbol{R^{2}}\) **(Sparsity-wise)**} & \multicolumn{4}{c}{\(\boldsymbol{R^{2}}\) **(Architecture-wise)**} \\ \cline{3-11} & & 0.6 & 0.8 & 0.9 & 0.95 & 0.98 & Conv-6 & Resnet-20 & Wide-Resnet-28-2 & MobileNet-V2 \\ \hline \multirow{2}{*}{**CIFAR-10**} & **Topometrics** & **73\(\pm\)0.04** & **76\(\pm\)0.07** & **81\(\pm\)0.05** & **80\(\pm\)0.09** & **77\(\pm\)0.07** & **89\(\pm\)0.03** & **92\(\pm\)0.02** & **89\(\pm\)0.05** \\ & **Architectures** & 02\(\pm\)0.02 & 03\(\pm\)0.02 & 11\(\pm\)0.06 & 15\(\pm\)0.07 & 49\(\pm\)0.07 & - & - & - & - & - \\ & **Sparsity ratios** & - & - & - & - & - & - & - & - & - \\ \hline \multirow{2}{*}{**CIFAR-100**} & **Topometrics** & **49\(\pm\)0.07** & **69\(\pm\)0.07** & **73\(\pm\)0.05** & **72\(\pm\)0.08** & **80\(\pm\)0.05** & **95\(\pm\)0.02** & **95\(\pm\)0.01** & **91\(\pm\)0.02** & **92\(\pm\)0.04** \\ & **Architectures** &.01\(\pm\)0.01 &.03\(\pm\)0.02 &.40\(\pm\)0.06 &.46\(\pm\)0.07 & 40\(\pm\)0.06 & - & - & - & - \\ & **Sparsity ratios** & - & - & - & - & - &.58\(\pm\)0.03 &.86\(\pm\)0.01 &.78\(\pm\)0.03 &.64\(\pm\)0.05 \\ \hline \multirow{2}{*}{**Tiny-ImageNet**} & **Topometrics** & **74\(\pm\)0.06** & **58\(\pm\)0.04** & **86\(\pm\)0.03** & **85\(\pm\)0.05** & **81\(\pm\)0.05** & **93\(\pm\)0.02** & **91\(\pm\)0.02** & **92\(\pm\)0.03** & **89\(\pm\)0.05** \\ & **Architectures** & 23\(\pm\)0.07 & 51\(\pm\)0.08 &.51\(\pm\)0.06 &.57\(\pm\)0.04 &.55\(\pm\)0.03 & - & - & - & - & - \\ & **Sparsity ratios** & - & - & - & - & - &.52\(\pm\)0.06 &.78\(\pm\)0.01 &.63\(\pm\)0.04 &.69\(\pm\)0.05 \\ \hline \hline \end{tabular}
\end{table}
Table 6: \(R^{2}\) computed using stratified k-fold with \(k=5\) over \(100\) runs.
Figure 2: Feature importance scores for the regression analysis in the **Sparsity-wise (top row)** and **Architecture-wise (bottom row)** case. Colors highlight the contribution of different sets of metrics: _local_ (red), _regional_ (green), _stability_ (blue).
## Conclusions
In this paper, we presented an in-depth analysis of the topologies of SNNs and their association with the performance drop. To do that, we studied the SNNs from a graph theory perspective, relying on a novel _unrolled input-aware_ graph encoding which correctly reflects the convolutional steps and links across layers. The main limitation of our proposed GE is the time and space complexity for the encoding creation: for each layer of the MGE, i.e., for each BGE, the time complexity is \(\mathcal{O}(c_{\textit{in}}\times c_{\textit{out}}\times step)\), while the space complexity is \(\mathcal{O}(L+R+E)\), where \(step=(\frac{l_{\textit{in}}-d_{\textit{in}}}{S}+1)^{2}\) and \(|E|=c_{\textit{in}}\times|W\setminus\{0\}|\times step\), assuming square feature maps and kernels.
On the other hand, we showed the practical applicability of our proposed GE through an analysis, both in terms of classification and prediction of the SNN performance based on topological metrics, of the most recent pruning algorithms from the literature.
Our findings are in line with the No-Free-Lunch-Theorem (NFLT) for pruning algorithms, i.e., _"No single method is SOTA at initialization. Depending on the network, dataset, and sparsity, there is a setting where each early pruning method reaches the highest accuracy."_[25]. We have in fact shown that, for the sake of accuracy prediction, the importance of most topometrics changes depending on the sparsity ratio and architecture. Even though association does not imply causation, our results suggest new hints about the NFLT, namely: 1) as we showed in our classification analysis, different pruning algorithms are designed differently and, by construction, generate SNNs with different topological features; 2) as shown in our performance prediction analysis, the topological features that are positively associated with the performance depend on the specific sparsity ratio and architecture, hence there is no guarantee that subnetworks found by any given pruning algorithm will perform well regardless of the sparsity ratio and architecture. Taken together, our analysis therefore sheds new light on the reasons why the NFLT may hold true. However, while previous studies focused on the effects of the weights in SNNs, here we were able to investigate the properties and performance of SNNs by looking only at their topology, regardless of the weight values.
|
2310.12856 | A Narrow Uniform Core with a Wide Structured Wing: Modeling the TeV and
Multi-wavelength Afterglows of GRB 221009A | The TeV afterglow of the BOAT GRB 221009A was interpreted as arising from a
narrow jet while the radio to X-ray afterglows were interpreted as arising from
a wide structured jet. However, there is no model explaining the TeV and
lower-energy multi-wavelength afterglows simultaneously. We here investigate a
two-component jet model, including a narrow uniform core with a wide structured
wing, to explain both the multi-wavelength afterglows that last up to 100 days.
We find that to explain the early TeV afterglow with the inverse-Compton
process, we need a circum-burst density higher than $\gtrsim 0.1{\rm cm^{-3}}$,
while the radio afterglow and the H.E.S.S. upper limit combine to constrain the
density to be lower at larger radii. Thus, a decreasing density profile with
radius is favored. Considering that the rising TeV light curve during the
afterglow onset favors a constant-density medium, we invoke a stratified
density profile, including a constant-density profile at small radii and a wind
density profile at large radii. We find that the two-component jet model with
such a stratified density profile can explain the TeV, X-ray and optical
afterglows of GRB 221009A, although the radio fluxes exceed the observed ones
by a factor of two at later epochs. The discrepancy in the radio afterglow
could be resolved by invoking some non-standard assumption about the
microphysics of afterglow shocks. The total kinetic energy of the two
components in our model is $\lesssim 10^{52}{\rm erg}$, significantly smaller
than that in the single structured jet models. | Jian-He Zheng, Xiang-Yu Wang, Ruo-Yu Liu, Bing Zhang | 2023-10-19T16:07:39Z | http://arxiv.org/abs/2310.12856v3 | # A Two-component Jet Model for the TeV and Multi-wavelength Afterglows of GRB 221009A
###### Abstract
The TeV afterglow of BOAT GRB 221009A is interpreted as arising from a narrow jet while the radio to X-ray afterglows are interpreted as arising from a wide structured jet. However, there is no model explaining the TeV and lower-energy multi-wavelength afterglows simultaneously. We here investigate a two-component jet model, including an inner narrow core and an outer wide wing with an angular structure, to explain both the early TeV afterglow and multi-wavelength afterglows that last up to 100 days. We find that the radio afterglow and the TeV upper limit imposed by H.E.S.S. observations combine to constrain the circum-burst density to be low at larger radii. Thus, a decreasing density profile with radius is favored. Considering that the rising TeV light curve during the afterglow onset favors a constant-density medium, we invoke a stratified density profile, including a constant-density profile at small radii and a wind density profile at large radii. We find that the two-component jet model with such a stratified density profile can explain the TeV, X-ray and optical afterglows of GRB 221009A, although the radio fluxes exceed the observed ones by a factor of two at later epochs. The discrepancy in the radio afterglow could be resolved by invoking some non-standard assumption about the microphysics of afterglow shocks, such as a decreasing fraction of accelerated particles with time. The total kinetic energy of the two components in our model is \(\lesssim 10^{52}\)erg, significantly smaller than that in the single structured jet models.
Gamma-ray burst (629) -- Gamma-ray astronomy (628) -- High energy astrophysics (739)
Jian-He Zheng, Xiang-Yu Wang, Ruo-Yu Liu, Bing Zhang
## 1 Introduction
GRB 221009A is the brightest burst ever observed. Due to its enormous energy (\(E_{\rm iso}=10^{55}\) erg) (An et al., 2023; Lesage et al., 2023) and proximity (\(z=0.151\)) (Castro-Tirado et al., 2022), GRB 221009A is an exceptionally rare event (Burns et al., 2023). The Large High Altitude Air Shower Observatory (LHAASO) observed GRB 221009A at the earliest epoch, covering both the prompt emission phase and the early afterglow in TeV band, and revealed the onset of afterglow emission in the TeV band for the first time(LHAASO Collaboration, 2023).
In addition, the temporal slope of the TeV light curve in the decaying phase steepens from \(\alpha=-1.12^{+0.01}_{-0.01}\) to \(\alpha=-2.21^{+0.30}_{-0.83}\) at \(t_{\rm b}=670^{+230}_{-110}\)s after the afterglow onset time \(T^{*}=T_{0}+226\)s, where \(T_{0}\) is the trigger time of this GRB\({}^{1}\), indicating that the opening angle of the jet of GRB 221009A is only \(\sim 0.8^{\circ}\) (LHAASO Collaboration, 2023). This reduces the beaming-corrected energy in gamma-rays to a level of \(10^{50}-10^{51}\)erg for GRB 221009A, which agrees well with the standard energy reservoir of GRB jets (Frail et al., 2001).
Footnote 1: We note \(F_{\nu}\propto\nu^{\beta}t^{\alpha}\) throughout this paper
It has been suggested that jets of GRBs may not be uniform, but characterized by a significant anisotropy of the angular distribution of the fireball energy around the axis (Rossi et al., 2002). Under
the assumption of a structured jet model, the small half opening angle of GRB 221009A implies a narrow core component, which is only responsible for the early afterglow emission before \(10^{4}\)s (LHAASO Collaboration, 2023), while the late-time (\(>10^{4}\)s) afterglow emission may require other jet components.
The X-ray afterglow of GRB 221009A features an initial power-law decay index of \(\alpha_{1}=-1.52\pm 0.01\), steepening to \(\alpha_{2}=-1.66\pm 0.01\) after \(0.82\pm 0.07\) days after the trigger, which is not consistent with standard predictions for the emission from a top-hat jet (O'Connor et al., 2023; Williams et al., 2023). O'Connor et al. (2023) interpreted the X-ray afterglow as due to a structured jet expanding into a constant-density medium, where the jet is composed by an inner component of angular size \(\theta_{b}\) with a shallow energy profile \(dE/d\Omega\propto\theta^{-a_{1}}\) and slightly steeper lateral structure at \(\theta>\theta_{\rm b}\) with \(dE/d\Omega\propto\theta^{-a_{2}}\)(\(a_{2}>a_{1}\)). Gill and Granot (2023) also explored a structured jet model to explain the multi-wavelength afterglow of GRB 221009A, but assuming a wind density profile for the surrounding medium. In both models, the early radio emission is attributed to electrons accelerated by the reverse shock, while the optical and X-ray afterglows, as well as the late radio afterglow, arise from the forward shock. A two-component jet model consisting of a narrow top-hat jet and a broader top-hat jet has been proposed to explain the multi-wavelength data from radio to GeV afterglows of GRB 221009A (Sato et al., 2023). Besides the two-component jet models, various other types of models were also proposed to explain the multi-wavelength data of GRB 221009A (Ren et al., 2023; Zhang et al., 2023).
The above models, however, did not take into account the early TeV data observed by LHAASO, which were not available at that time. In this work, we explore a two-component jet model, including a narrow top-hat jet (core) and a wider wing with an angular structure, to explain both the TeV afterglow measured by LHAASO and lower energy multi-wavelength afterglows of GRB 221009A.
## 2 The set-up of a two-component jet model
The afterglow of GRB 221009A exhibits two breaks in its light curves. The early sharp break in the TeV band is consistent with a jet break from a top-hat jet (LHAASO Collaboration, 2023), while the later shallow break could be due to the change of the angular profile \(dE/d\Omega\propto\theta^{-a}\) of a structured jet (O'Connor et al., 2023; Gill and Granot, 2023). This complex behaviour suggests a jet composed of an inner, narrow top-hat core component and an outer, wide wing component with an angular structure, which can be described by (Zhang and Wang, 2023)
\[\epsilon\equiv\frac{dE}{d\Omega}=\begin{cases}\epsilon_{\rm I},\qquad\theta< \theta_{\rm j},\\ \epsilon_{\rm II}f(\theta),\theta_{\rm j}<\theta<\Theta,\end{cases} \tag{1}\]
where \(\theta_{\rm j}\) is the opening angle of the narrow core, \(\Theta\) is the maximum angle of the structured wing, and \(f(\theta)\) is the structure function of the wing. The isotropic energy of the wing could be much smaller than that of the core (\(\epsilon_{\rm I}\gg\epsilon_{\rm II}\)), which can only be determined by the afterglow data. We assume a smooth broken power-law function for the structure of the wing, as given by (Granot and Kumar, 2003)
\[f(\theta)=\left[\left(\frac{\theta}{\theta_{\rm c,w}}\right)^{2a_{1}}+\left( \frac{\theta}{\theta_{\rm c,w}}\right)^{2a_{2}}\right]^{-1/2}, \tag{2}\]
where \(\theta_{\rm c,w}\) is the transition angle from a shallow angular profile to a steeper (\(a_{2}>a_{1}\)) angular profile of the wide wing. The structured function declines as \(f(\theta)\propto\theta^{-a_{1}}\) from \(\theta_{\rm j}\) to \(\theta_{\rm c,w}\) and then transfers into \(f(\theta)\propto\theta^{-a_{2}}\) after \(\theta_{\rm c,w}\). The shallow jet break happens when the observers see the edge of \(\theta_{\rm c,w}\).
Correspondingly, one could also define a jet structure of the angle-dependent initial Lorentz factor, i.e.
\[\Gamma_{0}(\theta)=\begin{cases}\Gamma_{\rm I,0},\qquad\theta<\theta_{\rm j}\\ \Gamma_{\rm II,0}g(\theta),\theta_{\rm j}<\theta<\Theta,\end{cases} \tag{3}\]
where \(\Gamma_{\rm I,0}\gg\Gamma_{\rm II,0}\) and \(g(\theta)\) is the structure function of the Lorentz factor profile.
The narrow jet core could be a Poynting-flux dominated jet (Dai et al., 2023; Yang et al., 2023) and the wide wing could be matter-dominated, as argued in Zhang and Wang (2023).
## 3 A stratified density profile
The slopes of afterglows produced by forward shocks depend on the density profile of the circum-burst medium. The density profile of the circum-burst medium is usually described by \(n(R)\propto R^{-k}\), where \(k=0\) corresponds to a homogeneous medium, while \(k=2\) corresponds to a stellar wind from the GRB progenitor (Dai and Lu, 1998; Chevalier and Li, 2000; Panaitescu and Kumar, 2000). In the wind case, for a constant mass loss rate \(\dot{M}\) and wind velocity \(v_{\rm w}\), one has \(n=Ar^{-2}\), where \(A=3\times 10^{35}A_{\star}{\rm cm}^{-1}\), scaled to \(A_{\star}=(\dot{M}/10^{-5}M_{\odot}{\rm yr}^{-1})(v_{\rm w}/10^{3}{\rm Kms}^{- 1})^{-1}\).
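For reference, the normalization \(A\) follows from mass conservation, \(\dot{M}=4\pi r^{2}\rho v_{\rm w}\) with \(\rho=nm_{\rm p}\); evaluating it for the fiducial \(\dot{M}=10^{-5}M_{\odot}{\rm yr}^{-1}\) and \(v_{\rm w}=10^{3}\,{\rm km\,s^{-1}}\) recovers the quoted coefficient:

\[A=\frac{\dot{M}}{4\pi m_{\rm p}v_{\rm w}}\simeq\frac{6.3\times 10^{20}\,{\rm g\,s^{-1}}}{4\pi\times 1.67\times 10^{-24}\,{\rm g}\times 10^{8}\,{\rm cm\,s^{-1}}}\simeq 3\times 10^{35}\,{\rm cm^{-1}}.\]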
The TeV light curve of GRB 221009A rises with a slope of \(\alpha=1.82^{+0.21}_{-0.18}\) before the peak (LHAASO Collaboration, 2023). This rising phase is
interpreted as the onset of the TeV afterglow, where the ejecta is coasting before deceleration. During the coasting phase, the bulk Lorentz factor of the afterglow shock is roughly constant. The flux resulting from the synchrotron self-Compton (SSC) process rises with a slope of \(\alpha=\frac{8-(p+2)k}{4}\) if the observed frequency lies above the peak frequency of the SSC spectrum (LHAASO Collaboration, 2023). The rising slope \(\alpha=1.82^{+0.21}_{-0.18}\) of the TeV light curve is consistent with a constant-density medium (i.e., \(k=0\)). Note that this conclusion applies only to the small radii where the early (\(t<18\)s) TeV emission is produced.
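As a quick check of this statement, during the coasting phase the predicted slope is

\[\alpha=\frac{8-(p+2)k}{4}=\begin{cases}2,&k=0,\\ 1-p/2\simeq-0.15,&k=2\ (p\simeq 2.3),\end{cases}\]

so only a roughly constant-density medium can reproduce the observed rise of \(\alpha\simeq 1.8\), whereas a wind-like profile would instead predict a declining flux.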
The density at a larger distance could be different and can only be constrained by late-time multi-wavelength data. Fermi-LAT measurement of GeV emission at \(T_{0}+20000\)s suggests a low density at large radii, as we show below. The High Energy Stereoscopic System (H.E.S.S.) began observations 2.5 days after the trigger, yielding an upper limit on the TeV flux (Aharonian et al., 2023). This upper limit implies a small TeV to keV flux ratio at 2.5 days, which also supports a low density at larger radii.
### Lower Limit on the Density at Small Radii
We assume that the TeV emissions observed by LHAASO are produced by the afterglow synchrotron self-Compton (SSC) process. Then we can use the TeV flux to constrain the density of the surrounding medium. The SSC flux without considering the Klein-Nishina (KN) effect can be expressed as (Sari and Esin, 2001).
\[F_{\nu}=\left\{\begin{array}{ll}F_{\rm m}^{\rm IC}\left(\frac{\nu}{\nu_{\rm m}^{\rm IC}}\right)^{\frac{1}{3}},&\nu<\nu_{\rm m}^{\rm IC}\\ F_{\rm m}^{\rm IC}\left(\frac{\nu}{\nu_{\rm m}^{\rm IC}}\right)^{-\frac{p-1}{2}},&\nu_{\rm m}^{\rm IC}<\nu<\nu_{\rm c}^{\rm IC}\\ F_{\rm m}^{\rm IC}\left(\frac{\nu_{\rm c}^{\rm IC}}{\nu_{\rm m}^{\rm IC}}\right)^{-\frac{p-1}{2}}\left(\frac{\nu}{\nu_{\rm c}^{\rm IC}}\right)^{-\frac{p}{2}},&\nu>\nu_{\rm c}^{\rm IC}\end{array}\right. \tag{4}\]
where \(F_{\rm m}^{\rm IC}=\tau_{\rm IC}F_{\nu,\rm max}\) is the peak flux density for SSC, which scales as \(F_{\rm m}^{\rm IC}\propto n_{0}^{5/4}\) in a constant-density medium, \(p\) is the spectral index for shock-accelerated electrons (\(dN_{\rm e}/d\gamma_{\rm e}^{\prime}\propto\gamma_{\rm e}^{\prime\,-p}\)), and \(\nu_{\rm m}^{\rm IC}\) and \(\nu_{\rm c}^{\rm IC}\) are the characteristic inverse Compton frequencies for the minimum Lorentz factor \(\gamma_{\rm m}^{\prime}\) and the cooling Lorentz factor \(\gamma_{\rm c}^{\prime}\), respectively (see the Appendix A.1 for more details).
We assume that only a fraction \(\xi_{\rm e}\) of shock-heated electrons are accelerated into a power-law form. The minimum break frequency in the SSC spectra is
\[h\nu_{\rm m}^{\rm IC}=1{\rm GeV}E_{\rm I,iso,55}^{3/4}\epsilon_{\rm e,-1.5} ^{4}\xi_{\rm e,0}^{-4}\epsilon_{\rm B,-4}^{1/2}n_{0,-0.5}^{-1/4}t_{1.5}^{-9/4}, \tag{5}\]
where \(n_{0}\) is the number density of the circum-burst medium, \(\epsilon_{\rm e}\) is the energy equipartition factor for non-thermal electrons, \(\epsilon_{\rm B}\) is the energy equipartition factors for the magnetic field (we use \(p=2.3\) to derive the coefficient in the equation). Hereafter, we adopt the convention that subscript numbers \(x\) indicate normalisation by \(10^{x}\) in cgs units.
LHAASO observations show that the spectral index in 0.2-7 TeV is \(\beta\simeq-1.4\), suggesting \(h\nu_{\rm m}^{\rm IC}\lesssim 200{\rm GeV}\). This imposes a limit on \(k_{\rm e}\equiv\epsilon_{\rm e}/\xi_{\rm e}\),
\[k_{\rm e}\leq 0.15E_{\rm I,iso,55}^{-3/16}\epsilon_{\rm B,-4}^{-1/8}n_{0,-0.5}^{ 1/16}. \tag{6}\]
In the spectral region \(\nu\geq\nu_{\rm m}^{\rm IC}\), the flux density at \(h\nu=300{\rm GeV}\) at 30s is given by
\[F_{\nu}(300{\rm GeV})=0.005\mu{\rm Jy}\,E_{\rm I,iso,55}^{\frac{7+3p}{8}}k_{\rm e,-1.5}^{2(p-1)}\xi_{\rm e,0}\,n_{0,-0.5}^{\frac{11-p}{8}}\epsilon_{\rm B,-4}^{\frac{p+1}{4}}. \tag{7}\]
Since the KN effect will only suppress the SSC process, the flux given by Equation (7) can be regarded as an upper limit for the observed flux density, which is \(F_{\nu,\rm obs}(300{\rm GeV})=0.0052\mu{\rm Jy}\) at \(T^{*}+30\)s (LHAASO Collaboration, 2023). Note that, if \(\nu_{\rm c}^{\rm IC}<300{\rm GeV}\), the spectrum is steeper than \(F_{\nu}\propto\nu^{(1-p)/2}\), then the inequality \(F_{\nu}(300{\rm GeV})\geq F_{\nu,\rm obs}(300{\rm GeV})\) is still applicable.
Considering that the afterglow synchrotron flux should not be greater than the flux observed by Fermi/LAT at 100 MeV at \(T^{*}+110\)s, we obtain \(\xi_{\rm e,0}k_{\rm e,-1.5}^{p-1}\leq E_{\rm I,iso,55}^{-\frac{p+2}{4}}\epsilon_{\rm B,-4}^{-\frac{p-2}{4}}\) (LHAASO Collaboration, 2023). Utilizing the above maximum values of \(\xi_{\rm e}k_{\rm e}^{p-1}\) and \(k_{\rm e}\), we obtain a lower limit on the circum-burst density at the shock radius corresponding to \(t=T^{*}+30\)s,
\[n_{0}\geq 0.1{\rm cm}^{-3}E_{\rm I,iso,55}^{-\frac{9-p}{21-p}}\epsilon_{\rm B,-4}^{-\frac{2(7-p)}{21-p}}. \tag{8}\]
Below we will show that the radio afterglow, GeV emission observed by Fermi/LAT and the H.E.S.S. TeV upper limit combine to constrain the circum-burst density to be lower than Equation (8) at larger radii.
### Upper Limit on the Density at Large Radii
The observed flux at 1 GeV is \({\cal F}_{\rm obs,GeV}\sim 2\times 10^{-10}{\rm ergcm}^{-2}{\rm s}^{-1}\) at \(T_{0}+20000\)s (Liu et al., 2023) and the flux at 1 keV at the same time is \({\cal F}_{\rm obs,keV}\sim 10^{-9}{\rm ergcm}^{-2}{\rm s}^{-1}\). Considering that the GeV flux of the afterglow is contributed by both the SSC emission and synchrotron emission while the flux at 1 keV is fully contributed by the synchrotron emission, the flux ratio between the SSC component and the synchrotron component should be smaller than 0.2, i.e., \({\cal F}_{\rm GeV}^{\rm IC}/{\cal F}_{\rm keV}^{\rm syn}\leq 0.2\). Using this flux ratio, we can obtain an upper limit on the circum-burst density, in combination with the radio afterglow data.
The characteristic cooling break in the SSC spectrum is \(h\nu_{\rm c}^{\rm IC}=100{\rm PeV}E_{\rm II,iso,54}^{-5/4}\epsilon_{\rm B,-4}^{-7/2}n_{0,-0.5}^{-9/4}t_{4.3}^{-1/4}\), while the
KN peak \(E^{\rm IC}_{\rm C,KN}=0.26{\rm TeV}E^{-1/4}_{\rm II,iso,54}\epsilon^{-1/4}_{\rm B,-4}n^{-3/4}_{0,-0.5}t^{-1/4}_{4.3}\) is around TeV. So the GeV band probably lies between \(\nu^{\rm IC}_{\rm m}\) and \(\nu^{\rm IC}_{\rm c}\) at \(T_{0}+20000\)s. The spectral index of X-ray emission measured by Swift XRT is \(\beta=-0.78\pm 0.011\) at \(T_{0}+26000\)s, consistent with the slow cooling phase. With the spectra regimes of \(\nu_{\rm m}<\nu_{\rm keV}<\nu_{\rm c}\) and \(\nu^{\rm IC}_{\rm m}<\nu_{\rm GeV}<\nu^{\rm IC}_{\rm c}\), the expected ratio between the SSC flux at 1 GeV and the synchrotron flux at 1 keV is
\[\frac{\mathcal{F}^{\rm IC}_{\rm GeV}}{\mathcal{F}^{\rm syn}_{\rm keV}}=0.019E ^{\frac{p+1}{8}}_{\rm II,iso,54}k^{p-1}_{\rm e,-1.5}n^{\frac{7-p}{8}}_{0,-0.5}. \tag{9}\]
The value of \(k_{\rm e}\) can be constrained by the spectral energy distribution at \(T_{0}+2.5\)d, as shown in Figure 1. The radio to X-ray afterglows are produced by the synchrotron emission of accelerated electrons. From Figure 1, one can see that the radio to X-ray emissions do not follow a single power law, indicating the presence of a spectral break between radio and X-ray frequencies. A plausible explanation is that the break \(\nu_{\rm m}\) lies at \(\nu_{\rm m}\geq 3\times 10^{12}\)Hz (the possibility that the break is the self-absorption frequency \(\nu_{a}\) is disfavored, see the Appendix B). The inequality arises because the radio emission could include contributions from both the forward shock and the reverse shock. As the break frequency \(\nu_{\rm m}\) is given by
\[\nu_{\rm m}=2.3{\rm GHz}E^{1/2}_{\rm II,iso,54}k^{2}_{\rm e,-1.5}\epsilon^{1/ 2}_{\rm B,-4}t^{-3/2}_{\rm 5.3}, \tag{10}\]
we obtain
\[k_{\rm e}\geq 1.1E^{-1/4}_{\rm II,iso,54}\epsilon^{-1/4}_{\rm B,-4}. \tag{11}\]
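Explicitly, setting \(\nu_{\rm m}\geq 3\times 10^{12}\)Hz at \(t\approx 2.5\)d (\(t_{5.3}\approx 1\)) in Equation (10) gives

\[k_{\rm e,-1.5}^{2}\geq\frac{3\times 10^{12}\,{\rm Hz}}{2.3\times 10^{9}\,{\rm Hz}}\,E_{\rm II,iso,54}^{-1/2}\epsilon_{\rm B,-4}^{-1/2}\ \Longrightarrow\ k_{\rm e}\gtrsim 1.1\,E_{\rm II,iso,54}^{-1/4}\epsilon_{\rm B,-4}^{-1/4},\]

which is the limit quoted in Equation (11).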
With this lower limit of \(k_{\rm e}\), we then obtain an upper limit on the circum-burst density by using Equation (9),
\[n_{0}\leq 0.007{\rm cm}^{-3}E^{-\frac{(3-p)}{7-p}}_{\rm II,iso,54}\epsilon^{ \frac{2(p-1)}{7-p}}_{\rm B,-4}. \tag{12}\]
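The algebra behind this limit is a direct substitution: inserting the lower limit (11) into Equation (9) and imposing \(\mathcal{F}^{\rm IC}_{\rm GeV}/\mathcal{F}^{\rm syn}_{\rm keV}\leq 0.2\),

\[0.019\,E_{\rm II,iso,54}^{\frac{p+1}{8}}\left(1.1\times 10^{1.5}\,E_{\rm II,iso,54}^{-1/4}\epsilon_{\rm B,-4}^{-1/4}\right)^{p-1}n_{0,-0.5}^{\frac{7-p}{8}}\leq 0.2,\]

and solving for \(n_{0}\) reproduces the \(E_{\rm II,iso}^{-(3-p)/(7-p)}\) and \(\epsilon_{\rm B}^{2(p-1)/(7-p)}\) scalings of Equation (12).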
Such a low density is dramatically different from the constraint we derived from the early TeV afterglow \(n_{0}\geq 0.1{\rm cm}^{-3}E^{-0.36}_{\rm I,iso,55}\epsilon^{-0.5}_{\rm B,-4}\). It is hard to reconcile the discrepancy by increasing \(\epsilon_{\rm B}\), because \(h\nu_{\rm c}\propto E^{-1/2}_{\rm I,iso}\epsilon^{-3/2}_{\rm B}n^{-1}_{0}\) is more sensitive to \(\epsilon_{\rm B}\) and the requirement \(h\nu_{\rm c}\geq 10\)keV at \(T_{0}+4000\)s leads to
\[n_{0}\leq 0.25{\rm cm}^{-3}E^{-1/2}_{\rm I,iso,55}\epsilon^{-3/2}_{\rm B,-4}. \tag{13}\]
The H.E.S.S. upper limit at 2.5 days also implies a low circum-burst density. In the TeV band, the flux is suppressed by the KN effect and internal \(\gamma\gamma\) absorption, so it can only be studied numerically. We model the SED of the afterglow at 2.5 days in Figure 1 with the synchrotron plus SSC emission, and find that the model can fit the data only when \(n_{0}<0.01{\rm cm}^{-3}\).
### A Stratified Density Profile: Transition from a Constant-density Medium to a Wind Medium
Therefore, we consider a stratified density profile given by
\[n(r)=\left\{\begin{array}{ll}n_{0},&r<r_{\rm c}\\ Ar^{-2}.&r\geq r_{\rm c}\end{array}\right. \tag{14}\]
The transition radius from constant-density region to the wind region occurs at \(r_{\rm c}=\sqrt{A/n_{0}}=5.5\times 10^{17}{\rm cm}A^{1/2}_{\star,-1}n^{-1/2} _{0,-1}\), corresponding to a transition time \(t_{\rm c}=210E^{-1}_{\rm I,iso,55}A^{2}_{\star,-1}n^{-1}_{0,-1}\)s. We assume the transition time is later than the jet break time \(T^{*}\)+670s, so the wind parameter should satisfy
\[A_{\star}\geq 0.17E^{1/2}_{\rm I,iso,55}n^{1/2}_{0,-1}. \tag{15}\]
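As a sanity check on the quoted numbers, with \(A_{\star}=0.1\) (so that \(A=3\times 10^{34}\,{\rm cm^{-1}}\)) and \(n_{0}=0.1\,{\rm cm^{-3}}\),

\[r_{\rm c}=\sqrt{A/n_{0}}=\left(\frac{3\times 10^{34}\,{\rm cm^{-1}}}{0.1\,{\rm cm^{-3}}}\right)^{1/2}\simeq 5.5\times 10^{17}\,{\rm cm},\]

consistent with the scaling given above.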
The standard wind-like profile (\(k=2\)) is based on the assumption of a constant mass loss rate \(\dot{M}\) and wind velocity \(v_{\rm w}\). However, the dynamics of the stellar wind right before the death of the massive star is highly uncertain. Strong deviations from a density slope of \(k=2\) may be expected in the region very close to the GRB progenitor, due to the short evolutionary timescales after core helium exhaustion and the effects of near-critical rotation, which can significantly alter wind properties. For example, if \(\dot{M}\propto t^{a}\) and \(v_{w}\propto t^{b}\), we have \(\rho\propto r^{-2+(a-b)/(1+b)}\). A special combination of mass loss rate and wind velocity (\(a=3b+2\), for which \((a-b)/(1+b)=2\)) during the last few hundred years of the star's life can lead to \(k=0\) in the
Figure 1: Spectra energy distribution of the afterglow emission at \(T_{0}+\) 2.5 days. The red points denote the 95% C.L. upper limits from H.E.S.S. Collaborations. The blue, red and black lines represent the afterglow models with density \(n_{0}=0.1{\rm cm}^{-3}\), \(n_{0}=0.01{\rm cm}^{-3}\) and \(n_{0}=0.001{\rm cm}^{-3}\), respectively. The corresponding values of \(\epsilon_{\rm B}\) are \(3\times 10^{-4}\), \(1\times 10^{-3}\) and \(4\times 10^{-3}\), respectively for the three lines. Other parameters are \(E_{\rm II,iso}=2.2\times 10^{53}\)erg, \(p=2.4\), \(\epsilon_{\rm e}=0.1\), and \(\xi_{\rm e}=0.1\).
region close to the GRB progenitor (Yoon et al., 2006; De Colle et al., 2012). In addition, if the GRB occurs in massive stellar clusters, the circum-burst medium is much more complicated due to the colliding wind effect (Mimica and Giannios, 2011). In this case, one would expect an enhanced density due to the shocked colliding winds and a freely expanding wind at a larger distance.
## 4 Model fits of the multi-wavelength afterglow data
We consider an on-axis two-component jet model with the angular distribution described in Equation (1) expanding into a stratified circum-burst medium (Equation (14)). The model uses shock dynamics given in the Appendix A.2. We assume that a fraction \(\xi_{\rm e}\) of shock-heated electrons are accelerated into a power law distribution with a spectral index of \(p\): \(dN_{\rm e}/d\gamma_{\rm e}^{\prime}\propto\gamma_{\rm e}^{-\,p}\), where \(\gamma_{\rm e}^{\prime}\) is the electron Lorentz factor. The complete KN cross section for the inverse Compton scattering has been considered and the internal \(\gamma\gamma\) absorption within the emitting region has been taken into account (see the Appendix A.2).
For the narrow core, we use the isotropic energy \(E_{\rm I,iso}=4\pi\epsilon_{\rm I}\) to solve the shock dynamics. For the wing component, since the energy has an angular distribution, it is reasonable to use the average isotropic energy in the solid angle between \(\theta_{\rm j}\) and \(\theta\) in the calculation, which is given by
\[\bar{E}_{\rm II,iso}(\theta)=\frac{4\pi\int_{\theta_{\rm j}}^{\theta}\epsilon_ {\rm II}f(\theta^{\prime})\sin\theta^{\prime}d\theta^{\prime}}{\int_{\theta_{ \rm j}}^{\theta}\sin\theta^{\prime}d\theta^{\prime}}. \tag{16}\]
\(\bar{E}_{\rm II,iso}(\theta)\) first increases from \(\theta_{\rm j}\) and then decreases as a power law, \(\bar{E}_{\rm II,iso}\propto\theta^{-a}\), when \(\theta\gg\theta_{\rm j}\).
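The asymptotic behaviour follows directly from Equation (16): for \(\theta\gg\theta_{\rm j}\), approximating \(\sin\theta^{\prime}\approx\theta^{\prime}\) and \(f(\theta^{\prime})\propto\theta^{\prime\,-a}\) with \(a<2\), and dropping the \(\theta_{\rm j}\)-dependent terms,

\[\bar{E}_{\rm II,iso}(\theta)\propto\frac{\int_{0}^{\theta}\theta^{\prime\,1-a}\,d\theta^{\prime}}{\int_{0}^{\theta}\theta^{\prime}\,d\theta^{\prime}}=\frac{2}{2-a}\,\theta^{-a},\]

which confirms the quoted power-law decline.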
### The Optical to TeV Afterglows
Figure 2: Modeling of the multi-wavelength data of GRB 221009A with the standard afterglow theory. Panel A displays the light curves in the energy band of TeV (0.3-5 TeV), keV (0.3-10 keV) and GeV (0.1-10 GeV). The upper limits after \(10^{5}\)s show the H.E.S.S. data in the energy range 0.65-10 TeV. The solid lines represent the sum of forward shock emission from the narrow core (dot-dashed lines) and the wide wing (dashed lines). Panel B displays the spectra between 0.2-7 TeV measured by LHAASO in different time intervals (LHAASO Collaboration, 2023). The light blue band and grey band denote the systematic uncertainties and EBL-related uncertainties, respectively. The solid line represents the sum of forward shock SSC emission of the narrow core and wide wing. Panel C displays the optical light curves in r band (red), i band (blue) and z band (purple). All optical data points are corrected only by the galactic extinction (Schlafly and Finkbeiner, 2011). The solid lines represent the sum of forward shock emission from the narrow core and the wide wing (dashed lines). For brevity, the lines for the emission from the narrow core are not shown. Panel D shows the radio light curves at 230GHz (pink square), 97.5GHz (green star), 15.8GHz (black circle) and 1.5GHz (red diamond). The solid lines represent the sum of forward shock emission from the narrow core, the forward shock emission from the wide wing, and the reverse shock emission of the wide wing. The dotted line represents the reverse shock emission of the wide wing.
The parameters of the angular profile of the wide wing can be estimated from the X-ray afterglow light curve analytically. For a wing with angular profile \(dE/d\Omega\propto\theta^{-a}\), the light curve of the synchrotron emission from the forward shock is given by (Beniamini et al., 2022; Zhang and Wang, 2023)
\[F_{\nu}\propto\left\{\begin{array}{ll}t^{-\frac{a}{3(4-a)}},&\nu<\nu_{m}\\ t^{-\frac{2(3p-1)-a(p-1)}{2(4-a)}},&\nu_{m}<\nu<\nu_{c}\\ t^{-\frac{2(3p-2)-a(p-2)}{2(4-a)}}.&\nu>\nu_{c}\end{array}\right. \tag{17}\]
Before 0.8 d, the power-law slope of the X-ray afterglow is \(\alpha=-1.52\), suggesting an angular profile index of \(a=0.2\). After 0.8 d, the slope becomes steeper with \(\alpha=-1.66\), leading to \(a=0.7\).
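For reference, taking the X-ray band to lie in the \(\nu_{m}<\nu<\nu_{c}\) regime and adopting \(p\simeq 2.3\) for this analytical estimate (the full fits in Table 1 use slightly different values), the second line of Equation (17) gives

\[\alpha=-\frac{2(3p-1)-a(p-1)}{2(4-a)}\simeq\begin{cases}-1.52,&a=0.2,\\ -1.65,&a=0.7,\end{cases}\]

matching the observed pre- and post-break X-ray slopes.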
The bulk Lorentz factor is \(\Gamma=23E_{\rm II,iso,54}^{1/4}A_{\star,-1}^{-1/4}\) at 0.8 d, suggesting that the transition angle from a shallow angular profile to a steeper angular profile is \(\theta_{\rm c,w}\sim\Gamma^{-1}\approx 2.5^{\circ}E_{\rm II,iso,54}^{-1/4}A_{ \star,-1}^{1/4}\).
We model the multi-wavelength afterglow data of GRB 221009A with the two-component jet model, as shown in Figure 2. We first assume that there is no lateral expansion for either jet component. We find that this model can explain the TeV, X-ray and optical afterglows, with the model parameter values given in Table 1.
The early TeV afterglow emission originates from the SSC emission of the narrow jet component. At times later than \(10^{4}\)s, the SSC emission from the wing component becomes dominant, and its flux is consistent with the upper limits imposed by H.E.S.S. observations. The X-ray afterglow at times later than \(10^{4}\)s is produced by the wing component through the synchrotron emission. The optical afterglow is also produced by the synchrotron emission of the wing component. The model flux is insufficient to explain the optical data after tens of days, which could be attributed to extra contribution from a supernova (Fulton et al., 2023; Levan et al., 2023; Blanchard et al., 2023).
The isotropic energy of the narrow core is \(4\pi\epsilon_{\rm I}=9\times 10^{54}\)erg, while the isotropic energy of the wing at \(\theta=\theta_{\rm c,w}\) is \(4\pi\epsilon_{\rm II}/\sqrt{2}=2.8\times 10^{53}\)erg. The indices of the angular profile of the wing, \(a_{1}=0\) and \(a_{2}=0.8\), are consistent with the above analytical estimate. The acceleration fraction \(\xi_{e}=0.15\) for the wide wing is an order of magnitude higher than that in O'Connor et al. (2023) and Gill and Granot (2023), which avoids overproducing the TeV flux measured by H.E.S.S. at 2.5 days.
The theoretical flux in the GeV band slightly exceeds the observational data at \(\sim 10^{4}\)s, during which both the narrow jet and the wide wing contribute to the flux. Semi-analytical models and numerical simulations have shown that lateral expansion may be important for narrow jets (Granot and Piran, 2012; Lu et al., 2020), so our calculation may overestimate the emission flux from the narrow core component after its jet break time if such lateral expansion occurs in structured jets (Gottlieb et al., 2021). The model considering the lateral expansion of the narrow core is shown in Figure 3, which now agrees better with the GeV data.
### The Radio Afterglow
The radio afterglow of GRB 221009A may be produced by both the forward shock emission and reverse shock emission. Since the narrow jet is likely to be Poynting-flux-dominated, we do not consider the reverse shock emission from this component. The reverse shock emission in our model comes from the wide wing.
The early radio data around 0.2 d exhibit a spectral shape \(F_{\nu}\propto\nu^{5/2}\), indicating that the observed frequency (\(\sim 15\)GHz) is between the self-absorption frequency and the minimum frequency, i.e., \(\nu_{\rm m}<\nu<\nu_{\rm a}\). This feature is inconsistent with the forward shock emission, which has a much higher \(\nu_{\rm m}\). Therefore, the early radio is probably dominated by the reverse shock.
Since the deceleration time of the wide jet is \(\sim\)3000s, the wide jet can be regarded as a thin shell. After the deceleration, the Lorentz factor of reverse shock follows the relation \(\Gamma_{3}\propto R^{-g}\). For the wind environment, we take \(g=1\), and the rising slope of reverse shock emission in the spectral regime of \(\nu_{\rm m,rs}<\nu<\nu_{\rm a,rs}\) is \(\alpha=\frac{5(8+5g)}{14(1+2g)}=1.55\)(Zou et al., 2005), which is close to the observed rising slope \(\alpha=1.4\)(Bright et al., 2023).
However, the observed decay slope of the radio afterglow, \(\alpha=-0.8\), is much shallower than the prediction by the reverse shock in a top-hat jet, which is \(\alpha\approx-2\). Zhang and Wang (2023) studied the reverse shock evolution in a structured jet and found the decay slope can be shallower when the initial Lorentz factor of the jet has an angular structure, i.e., \(\Gamma_{0}\propto\theta^{-\rm kr}\) (see Appendix C). Considering this effect, the decaying slope of the reverse shock emission is \(\alpha=-1.35\) for \(a=0\) and \(\alpha=-1.54\) for \(a=0.8\), as discussed in Appendix C.
From Figures 2 and 3, we can see that the model does not explain the radio afterglow data satisfactorily. The forward shock emission exceeds the high-frequency radio data by a factor of about 2 in the case assuming no lateral expansion for the narrow jet, and by a factor of 1.5 assuming lateral expansion.
Indeed, it is found that the light curves of radio afterglows of many GRBs cannot be explained satisfactorily by the standard afterglow theory (e.g. Levine et al., 2023). This could indicate our lack of knowledge about the radio afterglow. Below we discuss some non-standard scenarios that could possibly resolve the discrepancy in the radio afterglows of GRB 221009A.
## 5 Non-Standard Afterglow Model
In the standard model of afterglows, the microphysical parameters \(\epsilon_{\rm e},\epsilon_{\rm B},\xi_{\rm e}\) and \(p\) are usually assumed to be constant throughout the afterglow. This assumption simplifies the modeling and can explain the afterglows of many GRBs. However, for those GRBs that have been thoroughly investigated (e.g. GRB 130427A & GRB 170817A), time-evolving microphysical parameters have been proposed to explain some unusual afterglow behaviour (e.g. Maselli et al., 2014; Takahashi et al., 2022).
For GRB 221009A, the radio observations after 10 days impose a stringent limit on \(\nu_{\rm m}\) and \(k_{\rm e}\). At \(T_{0}+54\) days, the spectrum between X-ray and radio is not a single power-law, so \(\nu_{\rm m}\) is required to be higher than \(3\times 10^{11}\)Hz, which gives
\[k_{\rm e}\geq 1.7E_{\rm II,iso,54}^{-1/4}\epsilon_{\rm B,-3}^{-1/4}. \tag{18}\]
On the other hand, we obtain an upper limit of \(k_{\rm e}\lesssim 0.4\) at 2.5 d from the SED modeling for \(A_{\star}=0.17\) and \(E_{\rm II,iso}=10^{54}\)erg. The discrepancy of \(k_{\rm e}\) at the two times cannot be solved by increasing \(\epsilon_{\rm B}\) in Equation (18), because the X-ray spectrum at \(T_{0}+26000\)s requires \(h\nu_{\rm c}\geq 10\)keV, resulting in \(\epsilon_{\rm B}\leq 1.2\times 10^{-3}E_{\rm II,iso,54}^{1/4}A_{\star,-1}^{-4/3}\). Furthermore, the isotropic equivalent energy decreases with time as \(E_{\rm II,iso}\propto t^{-\frac{a}{4-a}}\) for a structured jet (see the Appendix C), so a higher value of \(k_{\rm e}\) is needed at 54 days. Thus, we speculate that \(k_{\rm e}\) may increase with time in GRB 221009A. As \(k_{\rm e}=\epsilon_{\rm e}/\xi_{\rm e}\), the increase of \(k_{\rm e}\) can be due to a decrease of \(\xi_{\rm e}\) or an increase of \(\epsilon_{\rm e}\).
### Decreasing Acceleration Fraction \(\xi_{\rm e}\)
The fraction of particles being accelerated in afterglow shocks depends on the conditions of the shock, including the bulk Lorentz factor, the magnetization parameter and the direction of the magnetic field (Sironi et al., 2015). Particle in cell (PIC) simulations find that the fraction of non-thermal electrons is about \(\xi_{\rm e}\sim 1-10\%\)(Spitkovsky, 2008; Sironi et al., 2013). These simulations are conducted with the bulk Lorentz factor \(\Gamma=10-20\), which is different from the condition at early time when \(\Gamma_{0}\geq 100\). On the other hand, the magnetization degree can influence the direction of the magnetic field, possibly changing the acceleration fraction as well (Sironi et al., 2013). Thus the acceleration fraction \(\xi_{\rm e}\) could be time-varying.
The high-frequency radio data (\(\nu>90\)GHz) lies above the power-law extrapolation of \(F_{\nu}\propto\nu^{-0.2}\) from 1-20 GHz (see Figure 5 in Laskar et al. (2023)), implying that the forward shock may contribute to these bands. A decreasing \(\xi_{\rm e}\) is helpful to avoid the overshoot in the radio flux at late times, because the synchrotron emission from
Figure 3: Same as Figure 2, but assuming that the narrow core has lateral expansion when the bulk Lorentz factor drops to \(\Gamma_{1}\leq\theta_{\rm j}^{-1}\).
the forward shock, \(F_{\nu}\propto\epsilon_{\rm e}^{-2/3}\xi_{\rm e}^{5/3}\), is sensitive to \(\xi_{\rm e}\) in the spectral regime of \(\nu<\nu_{\rm m}\). For the X-ray and optical band, the evolution of \(\xi_{\rm e}\) affects the light curve only slightly, as \(F_{\nu}\propto\epsilon_{\rm e}^{p-1}\xi_{\rm e}^{2-p}\).
We assume that the fraction of accelerated electrons decreases with time as \(\xi_{\rm e}\propto t^{-\alpha_{\xi}}\). In a wind environment, the light curve of the structured jet is
\[F_{\nu}\propto\left\{\begin{array}{ll}t^{-\frac{a}{3(4-a)}-\frac{5}{3}\alpha_{\xi}},&\nu<\nu_{m}\\ t^{-\frac{2(3p-1)-a(p-1)}{2(4-a)}+\alpha_{\xi}(p-2)},&\nu_{m}<\nu<\nu_{c}\\ t^{-\frac{2(3p-2)-a(p-2)}{2(4-a)}+\alpha_{\xi}(p-2)}.&\nu>\nu_{c}\end{array}\right. \tag{19}\]
The spectral index of X-ray afterglow measured by Swift XRT at later epochs is \(\beta\approx-0.78\), suggesting a power-law index of \(p\simeq 2.5\). After 0.8 d, the decay slopes of X-ray and high-frequency radio afterglows are \(\alpha=-1.66\) and \(\alpha=-0.78\) respectively. To fit the multi-wavelength data simultaneously, we need \(a=0.9\) and \(\alpha_{\xi}=0.4\). Because the bulk Lorentz factor of the afterglow shock decreases with time as \(\Gamma\propto t^{-1/(4-a)}\), we then derive \(\xi_{\rm e}\propto\Gamma^{-1.24}\). Before 0.8 d, the decay slope of the X-ray afterglow is \(\alpha=-1.52\), suggesting an angular profile index of \(a=0.2\). We assume \(\xi_{\rm e}=1\) when the Lorentz factor is greater than 60.
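As a consistency check of these numbers, inserting \(p=2.5\), \(a=0.9\) and \(\alpha_{\xi}=0.4\) into Equation (19) gives

\[\alpha_{\rm X}=-\frac{2(3p-1)-a(p-1)}{2(4-a)}+\alpha_{\xi}(p-2)\simeq-1.68,\qquad\alpha_{\rm radio}=-\frac{a}{3(4-a)}-\frac{5}{3}\alpha_{\xi}\simeq-0.76,\]

close to the observed decay slopes of \(-1.66\) and \(-0.78\), respectively.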
We model the multi-wavelength afterglows with a time-varying \(\xi_{\rm e}\), which is shown in Figure 4. The fit in the radio band is improved. For low-frequency radio afterglows, the model flux is still below the data at later times. The excess could arise from some extra low-velocity ejecta components in this GRB, which may produce mostly radio emissions.
Since \(\epsilon_{\rm e}\) and \(\xi_{\rm e}\) are coupled in the afterglow modeling, an alternative model is that \(\epsilon_{\rm e}\) is increasing with time. The main difference lies in that the X-ray and optical flux depends on \(\epsilon_{\rm e}\) as \(F_{\nu}\propto\epsilon_{\rm e}^{p-1}\). To fit the X-ray and optical light curves, we would need a steeper angular profile for the jet structure with \(a\approx 2\).
### Decaying Magnetic Field
In the standard afterglow shock model, we assume a homogeneous magnetic field in the downstream of the shock. Nonetheless, the realistic magnetic field may have a spatial distribution behind the shock. PIC simulations and theoretical analyses of relativistic collisionless shocks both suggest that the coherent length of magnetic fields is much smaller than the shock size of GRB afterglows (Chang et al., 2008; Lemoine, 2015; Sironi et al., 2015).
Since the cooling time of radio-emitting electrons is much longer than the shock dynamic time, these electrons may radiate most of their energy at the back of the blast wave, where the magnetic field has decayed to a low value (Lemoine, 2013; Wang et al., 2013). The standard model assuming a constant \(\epsilon_{\rm B}\) may overestimate the radio flux at later times. Taking a realistic magnetic field might reduce the radio flux and solve the discrepancy.
Figure 4: Same as Figure 2, but assuming that the acceleration fraction \(\xi_{\rm e}\) in the two components is decreasing with time.
## 6 Conclusions and Discussions
GRB 221009A, as the Brightest of All Time (BOAT) event, provides rich multi-wavelength afterglow data spanning from GHz to TeV. We find that the late-time multi-wavelength observations, including radio, X-ray, GeV and TeV, combine to constrain the density of the circum-burst medium to be lower than \(n_{0}<0.01\rm{cm^{-3}}\), while the LHAASO observation at early times requires a constant-density medium with \(n_{0}\geq 0.1{\rm cm^{-3}}E_{\rm I,iso,55}^{-\frac{9-p}{21-p}}\epsilon_{\rm B,-4}^{-\frac{2(7-p)}{21-p}}\) at small radii. Therefore, we propose a stratified density profile that incorporates a constant-density medium at small radii and a wind-like medium at large radii to explain the afterglows of GRB 221009A.
Motivated by the multi-wavelength data, we employ a two-component jet model, comprising a uniform narrow jet core and a structured wing. This model can explain the afterglows from optical to TeV bands, although the flux at high-frequency radio bands exceeds the data by a factor of two after the second day (see Figures 2 and 3). We find that the discrepancy could be resolved by invoking time-varying microphysical parameters of afterglow shocks (see Figure 4).
The model flux is lower than the observed flux at low-frequency radio bands (e.g., \(\nu=1.3\rm{GHz}\)) after \(\sim\)2 days (see Figure 4). The excess could be due to an extra electron component, possibly resulting from low-velocity ejecta components or other acceleration mechanisms. The hard spectrum of the low-frequency radio data, \(\beta\approx-0.2\), is unusual in GRB afterglows, suggesting a hard injection spectrum with \(p=1.4\) for electrons, which contradicts standard diffusive shock acceleration theory. Such a hard spectrum could result from some other acceleration mechanism, such as shear acceleration (Liu et al., 2017; Rieger and Duffy, 2022).
Compared to single structured jet models (O'Connor et al., 2023; Gill and Granot, 2023), the beaming-corrected kinetic energy in our two-component jet model is an order of magnitude lower. In our model, the beaming-corrected kinetic energy in the narrow core is \(E_{\rm{I,b}}=E_{\rm{I,iso}}(1-\cos\theta_{\rm{j}})=5\times 10^{50}\rm{erg}\) (LHAASO Collaboration, 2023). The kinetic energy in the structured wing is \(E_{\rm{II,b}}=2\int_{\theta_{\rm{j}}}^{\theta}\frac{dE}{d\Omega}d\Omega\), where \(\theta\) is the angle corresponding to the last observing time \(t\) before seeing the maximum angle \(\Theta\). The beaming-corrected energy increases as \(E_{\rm{II,b}}\propto\theta^{2-a}\propto t^{\frac{2-a}{4-a}}\) (Beniamini et al., 2022). We find the beaming-corrected energy of the structured wing is \(E_{\rm{II,b}}=5.7\times 10^{51}(t/100\rm{days})^{3/8}\rm{erg}\) for \(a_{2}=0.8\) and \(E_{\rm{II,b}}=4.9\times 10^{51}(t/100\rm{days})^{1/3}\rm{erg}\) for \(a_{2}=1\). These values are significantly lower than the energy budget required by single structured jet models, which are \(8\times 10^{52}(t/80\rm{days})^{0.372}\rm{erg}\) in the model of O'Connor et al. (2023) and \(4\times 10^{52}(t/100\rm{days})^{0.375}\rm{erg}\) in the model of Gill and Granot (2023), respectively.
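For the narrow core this is simple arithmetic:

\[E_{\rm I,b}=E_{\rm I,iso}\left(1-\cos\theta_{\rm j}\right)\simeq 9\times 10^{54}\,{\rm erg}\times\left(1-\cos 0.6^{\circ}\right)\simeq 9\times 10^{54}\times 5.5\times 10^{-5}\simeq 5\times 10^{50}\,{\rm erg}.\]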
The authors thank Hai-Ming Zhang, Liang-Duan Liu and Yu-Jia Wei for useful discussions. This work is supported by the National Key R&D Program of China under the Grant No. 2018YFA0404203, the National Natural Science Foundation of China (grant numbers 12121003, U2031105), China Manned Spaced Project (CMS-CSST-2021-B11).
## Appendix A Methods
### Analytical Methods
The analytical solution of the bulk Lorentz factor \(\Gamma\) and the radius \(R\) of relativistic blast wave in an arbitrary power-law density profile \(n=Ar^{-k}\) are give by (Granot and Sari, 2002)
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline & \(4\pi\epsilon_{\rm{I}}\)(erg) & \(\theta_{\rm{j}}\) & \(\Gamma_{\rm{I,0}}\) & \(\epsilon_{\rm{I,e}}\) & \(\epsilon_{\rm{I,B,-3}}\) & \(\xi_{\rm{I,e}}\) & \(p_{\rm{I}}\) & \(4\pi\epsilon_{\rm{II}}\)(erg) & \(\theta_{\rm{c,w}}\) & \(\Gamma_{\rm{II,0}}\) & \(a_{1}\) & \(a_{2}\) & \(\epsilon_{\rm{II,e}}\) & \(\epsilon_{\rm{II,B,-3}}\) & \(\xi_{\rm{II,e}}\) & \(p_{\rm{II}}\) \\ \hline FS1 & \(9\times 10^{54}\) & \(0.6^{\circ}\) & 560 & 0.04 & 1 & 1 & 2.2 & \(4\times 10^{53}\) & \(3^{\circ}\) & 60 & 0 & 0.8 & 0.06 & 2 & 0.15 & 2.4 \\ FS2 & \(9\times 10^{54}\) & \(0.6^{\circ}\) & 560 & 0.04 & 1 & — & 2.2 & \(4\times 10^{53}\) & \(3^{\circ}\) & 60 & 0 & 0.8 & 0.06 & 2 & — & 2.5 \\ RS & — & — & — & — & — & — & — & \(4\times 10^{53}\) & \(3^{\circ}\) & 60 & 0 & \(0.8/1.0^{\circ}\) & 0.04 & 0.5 & 0.01 & 2.2 \\ \hline \end{tabular}
* This parameter follows the forward shock.
\end{table}
Table 1: Parameter values used in the modeling of the multi-wavelength afterglow data. The subscript I and II represent the parameters for the narrow core and wide wing, respectively. Forward shock1 (FS1) denotes the standard forward shock model shown in Figure 2, while Forward shock2 (FS2) denotes the non-standard forward shock model with a time-varying \(\xi_{\rm{e}}\propto\Gamma^{-\alpha_{\Gamma}}\), which was shown in Figure 4. We use \(\alpha_{\rm{I,\Gamma}}=1.35\) and \(\alpha_{\rm{II,\Gamma}}=1.25\) in the non-standard case. The parameter values for the density profile are \(n_{0}=0.2\rm{cm^{-3}}\) and \(A_{*}=0.17\).
\[\Gamma=1.15^{2-k}\Bigg{(}\frac{\left(17-4k\right)E_{\rm iso}}{4^{5-k}\left(4-k \right)^{3-k}\pi Am_{\rm p}c^{5-k}t_{\rm z}^{3-k}}\Bigg{)}^{\frac{1}{2(4-k)}}, \qquad R=1.3^{-k-1}\Bigg{(}\frac{\left(17-4k\right)\left(4-k\right)E_{\rm iso}t _{\rm z}}{4\pi Am_{\rm p}c}\Bigg{)}^{\frac{1}{4-k}},\] (A1)
where \(t_{\rm z}=t/(1+z)\) is the time corrected for redshift, and 1.15 and 1.3 are numerical correction factors. We assume the injection spectrum of electrons is a single power law, \(dN_{\rm e}/d\gamma_{\rm e}^{\prime}\propto\gamma_{\rm e}^{\prime\,-p}\), where the prime marks the comoving frame of the shock. The minimum Lorentz factor and the cooling Lorentz factor are
\[\gamma_{\rm m}^{\prime}=\frac{\epsilon_{\rm e}}{\xi_{\rm e}}\frac{p-2}{p-1} \frac{m_{\rm p}}{m_{\rm e}}(\Gamma-1),\qquad\gamma_{\rm c}^{\prime}=\frac{6 \pi m_{\rm e}c}{\sigma_{\rm T}\Gamma B^{\prime 2}t_{\rm z}},\] (A2)
where \(\epsilon_{\rm e}\) is the equipartition factor of electrons, and \(\xi_{\rm e}\) is the fraction of accelerated electrons. The magnetic field in the comoving frame is \(B^{\prime}=\sqrt{8\pi\epsilon_{\rm B}nm_{\rm p}c^{2}(\Gamma-1)(\hat{\gamma}\Gamma+1)/(\hat{\gamma}-1)}\), where \(\hat{\gamma}\) is the adiabatic index of the shocked gas. We use the fitting formula for \(\hat{\gamma}\) from Pe'er (2012), which is \((5-1.21937z+0.18203z^{2}-0.96583z^{3}+2.32513z^{4}-2.39332z^{5}+1.07136z^{6})/3\), where \(z=\zeta/(0.24+\zeta)\), \(\zeta=(\frac{\Gamma\beta_{\rm sh}}{3})(\frac{\Gamma\beta_{\rm sh}+1.07(\Gamma\beta_{\rm sh})^{2}}{1+\Gamma\beta_{\rm sh}+1.07(\Gamma\beta_{\rm sh})^{2}})\), and \(\beta_{\rm sh}=\sqrt{1-\Gamma^{-2}}\) is the velocity of the shock.
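To make the analytic prescription above concrete, the following Python sketch evaluates Equations (A1) and (A2) together with the Pe'er (2012) fitting formula. Constants are in cgs units, and the example parameter values are placeholders loosely based on Table 1 rather than the exact fit values.

```python
import numpy as np

# Physical constants (cgs)
m_p, m_e, c, sigma_T = 1.6726e-24, 9.1094e-28, 2.9979e10, 6.6524e-25

def gamma_R(t, E_iso, A, k=0.0, z=0.0):
    """Bulk Lorentz factor and radius of the blast wave, Eq. (A1)."""
    t_z = t / (1.0 + z)
    Gamma = 1.15**(2 - k) * ((17 - 4 * k) * E_iso
            / (4**(5 - k) * (4 - k)**(3 - k) * np.pi * A * m_p
               * c**(5 - k) * t_z**(3 - k)))**(1.0 / (2 * (4 - k)))
    R = 1.3**(-k - 1) * ((17 - 4 * k) * (4 - k) * E_iso * t_z
            / (4 * np.pi * A * m_p * c))**(1.0 / (4 - k))
    return Gamma, R

def adiabatic_index(Gamma):
    """Pe'er (2012) fitting formula for the adiabatic index of the shocked gas."""
    gb = Gamma * np.sqrt(1.0 - Gamma**-2)                 # Gamma * beta_sh
    zeta = (gb / 3.0) * (gb + 1.07 * gb**2) / (1.0 + gb + 1.07 * gb**2)
    z = zeta / (0.24 + zeta)
    return (5 - 1.21937 * z + 0.18203 * z**2 - 0.96583 * z**3
            + 2.32513 * z**4 - 2.39332 * z**5 + 1.07136 * z**6) / 3.0

def gamma_m_c(Gamma, n, t, z=0.0, eps_e=0.04, eps_B=1e-3, xi_e=1.0, p=2.2):
    """Minimum and cooling electron Lorentz factors, Eq. (A2)."""
    t_z = t / (1.0 + z)
    gamma_m = (eps_e / xi_e) * (p - 2) / (p - 1) * (m_p / m_e) * (Gamma - 1.0)
    g_hat = adiabatic_index(Gamma)
    B2 = 8 * np.pi * eps_B * n * m_p * c**2 * (Gamma - 1) * (g_hat * Gamma + 1) / (g_hat - 1)
    gamma_c = 6 * np.pi * m_e * c / (sigma_T * Gamma * B2 * t_z)
    return gamma_m, gamma_c

# Example: constant-density medium (k = 0)
Gamma, R = gamma_R(t=1e5, E_iso=9e54, A=0.2, k=0.0)
print(Gamma, R, gamma_m_c(Gamma, n=0.2, t=1e5))
```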
In a constant-density medium (\(k=0\)), the characteristic break frequencies in the synchrotron emission spectrum are
\[\nu_{\rm m}=\frac{\Gamma\gamma_{\rm m}^{\prime\,2}\nu_{\rm L}^{\prime}}{1+z}=2.4\times 10^{11}{\rm Hz}E_{\rm iso,55}^{1/2}k_{\rm e,-1}^{1/2}\epsilon_{\rm B,-4}^{1/2}t_{5}^{-3/2},\quad\nu_{\rm c}=\frac{\Gamma\gamma_{\rm c}^{\prime\,2} \nu_{\rm L}^{\prime}}{1+z}=6.7\times 10^{17}{\rm Hz}E_{\rm iso,55}^{-1/2}n_{0,-0.5}^{-1} \epsilon_{\rm B,-4}^{-3/2}t_{5}^{-1/2},\] (A3)
where \(\nu_{\rm L}^{\prime}=eB^{\prime}/2\pi m_{\rm e}c\) is the Larmor frequency of electrons. The characteristic break energies for SSC emission are
\[h\nu_{\rm m}^{\rm IC}=2\gamma_{\rm m}^{\prime\,2}h\nu_{\rm m}=65\,{\rm GeV}\,E_{\rm iso,55}^{3/4}k_{\rm e,-1}^{4}\epsilon_{\rm B,-4}^{1/2}n_{0,-0.5}^{-1/4}t_{1.5}^{-9/4},\quad h\nu_{\rm c}^{\rm IC}=2\gamma_{\rm c}^{\prime\,2}h\nu_{\rm c}=28\,{\rm PeV}\,E_{\rm iso,55}^{-5/4}\epsilon_{\rm B,-4}^{-7/2}n_{0,-0.5}^{-9/4}t_{1.5}^{-1/4}.\] (A4)
If \(h\nu_{\rm c}^{\rm IC}\gtrsim\Gamma\gamma_{\rm c}^{\prime}m_{\rm e}c^{2}\), the cooling break energy of the SSC flux is \(E_{\rm c,KN}^{\rm IC}=0.2\Gamma\gamma_{\rm c}^{\prime}m_{\rm e}c^{2}=0.75\,{\rm TeV}\,E_{\rm iso,55}^{-1/4}\epsilon_{\rm B,-4}^{-1}n_{0,-0.5}^{-3/4}t_{1.5}^{-1/4}\) due to the KN effect (Nakar et al., 2009).
In the observer frame, the peak flux density of the synchrotron and SSC emission are
\[F_{\nu,{\rm max}}=(1+z)\frac{N_{\rm e}P_{\nu}}{4\pi D_{\rm L}^{2}}=7{\rm Jy}E_{ \rm iso,55}\xi_{\rm e,0}\epsilon_{\rm B,-4}^{1/2}n_{0,-0.5}^{1/2},\quad F_{\rm m }^{\rm IC}=\tau_{\rm IC}F_{\nu,{\rm max}}=0.19\mu{\rm Jy}E_{\rm iso,55}^{5/4} \xi_{\rm e,0}\epsilon_{\rm B,-4}^{1/2}n_{0,-0.5}^{5/4}t_{1.5}^{1/4},\] (A5)
where \(P_{\nu}=\sqrt{3}e^{3}\Gamma B^{\prime}/m_{\rm e}c^{2}\) is the spectral power of synchrotron emission, \(N_{\rm e}=\int 4\pi r^{2}\xi_{\rm e}n(r)dr\) is the number of accelerated electrons, \(D_{\rm L}=716\,{\rm Mpc}\) is the luminosity distance, and \(\tau_{\rm IC}=n_{0}\sigma_{\rm T}R/3\) is the optical depth for inverse Compton scattering.
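For quick estimates, the normalized closed-form scalings can be evaluated directly. The snippet below transcribes the constant-medium expressions of Equations (A3) and (A5) verbatim, with all inputs in the normalized units used there (e.g., E55 = \(E_{\rm iso}/10^{55}\) erg, t5 = \(t/10^{5}\) s).

```python
# Constant-density medium (k = 0); coefficients and exponents transcribed verbatim
# from Eqs. (A3) and (A5); all arguments are in the normalized units of those equations.
def nu_m(E55, ke_m1, epsB_m4, t5):
    return 2.4e11 * E55**0.5 * ke_m1**0.5 * epsB_m4**0.5 * t5**-1.5       # Hz

def nu_c(E55, n0_m05, epsB_m4, t5):
    return 6.7e17 * E55**-0.5 * n0_m05**-1.0 * epsB_m4**-1.5 * t5**-0.5   # Hz

def F_nu_max(E55, xie_0, epsB_m4, n0_m05):
    return 7.0 * E55 * xie_0 * epsB_m4**0.5 * n0_m05**0.5                 # Jy

def F_IC_max(E55, xie_0, epsB_m4, n0_m05, t15):
    return 0.19 * E55**1.25 * xie_0 * epsB_m4**0.5 * n0_m05**1.25 * t15**0.25   # micro-Jy
```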
### Numerical Methods
The dynamical evolution of the blast wave is described by (Huang et al., 1999)
\[\frac{d\Gamma}{dm_{\rm sw}}=-\frac{\Gamma^{2}-1}{M_{0}+\left[f+2\Gamma\left(1- f\right)\right]m_{\rm sw}},\qquad\frac{dR}{dt}=\frac{\beta_{\rm sh}c}{1-\beta_{ \rm sh}},\] (A6)
where \(M_{0}=E_{\rm iso}/[(\Gamma_{0}-1)c^{2}]\) is the initial mass of the ejecta, \(m_{\rm sw}=\int 4\pi r^{2}\rho(r)dr\) is the swept-up mass, and \(f\) is the radiative efficiency. We employ a constant isotropic energy \(E_{\rm iso}=4\pi\epsilon_{\rm I}\) for the narrow jet and the average isotropic energy \(E_{\rm iso}=\bar{E}_{\rm II,iso}(\theta)\) for the wide jet.
We adopt \(f=\epsilon_{\rm e}\left(t_{\rm syn}^{\prime-1}(\gamma_{\rm m}^{\prime})+t_{\rm IC}^{\prime-1}(\gamma_{\rm m}^{\prime})\right)/\left(t_{\rm syn}^{\prime-1}(\gamma_{\rm m}^{\prime})+t_{\rm IC}^{\prime-1}(\gamma_{\rm m}^{\prime})+t_{\rm ad}^{\prime-1}\right)\) in our calculations, where \(t_{\rm ad}^{\prime}=R/\Gamma\beta_{\rm sh}c\) is the timescale of adiabatic cooling, and \(t_{\rm syn}^{\prime}(\gamma_{\rm m}^{\prime})\) and \(t_{\rm IC}^{\prime}(\gamma_{\rm m}^{\prime})\) are the synchrotron and inverse-Compton cooling timescales at \(\gamma_{\rm m}^{\prime}\), respectively.
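A minimal explicit integration of Equation (A6) for a constant-density medium might look as follows. This is only a sketch (logarithmic time stepping, constant radiative efficiency, no redshift or equal-arrival-time-surface corrections), not the full afterglow code used for the figures.

```python
import numpy as np

m_p, c = 1.6726e-24, 2.9979e10  # cgs

def evolve_blast_wave(E_iso, Gamma0, n0, t_min=1.0, t_max=1e6, n_steps=100_000, f_rad=0.0):
    """Crude forward-Euler integration of Eq. (A6) in a constant-density medium."""
    M0 = E_iso / ((Gamma0 - 1.0) * c**2)                    # initial ejecta mass
    ts = np.logspace(np.log10(t_min), np.log10(t_max), n_steps)
    Gamma, R = np.empty(n_steps), np.empty(n_steps)
    Gamma[0], R[0], m_sw = Gamma0, 2.0 * Gamma0**2 * c * t_min, 0.0
    for i in range(1, n_steps):
        g, dt = Gamma[i - 1], ts[i] - ts[i - 1]
        beta = np.sqrt(1.0 - g**-2)
        dR = beta * c / (1.0 - beta) * dt                   # dR/dt of Eq. (A6)
        dm = 4.0 * np.pi * R[i - 1]**2 * n0 * m_p * dR      # swept-up mass increment
        dGamma = -(g * g - 1.0) * dm / (M0 + (f_rad + 2.0 * g * (1.0 - f_rad)) * m_sw)
        Gamma[i], R[i], m_sw = g + dGamma, R[i - 1] + dR, m_sw + dm
    return ts, R, Gamma

# e.g. a wide-wing-like component: ts, R, Gam = evolve_blast_wave(4e53, 60.0, 0.2)
```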
For the electron spectrum, we include both synchrotron and inverse-Compton cooling, with \(\gamma_{\rm c}^{\prime}=6\pi m_{\rm e}c/[\sigma_{\rm T}\Gamma B^{\prime 2}t_{\rm z}(1+f_{\rm KN}Y)]\), where \(Y\equiv P_{\rm SSC}/P_{\rm syn}\) is the ratio between the SSC power and the synchrotron power. The KN suppression factor is defined as \(f_{\rm KN}\equiv P_{\rm SSC}^{\rm KN}/P_{\rm SSC}^{\rm T}\), where \(P_{\rm SSC}^{\rm T}=\frac{4}{3}\sigma_{\rm T}c\gamma^{2}U_{\rm syn}^{\prime}\). The total energy loss rate of Compton scattering in the KN regime is (Blumenthal & Gould, 1970)
\[P_{\rm IC}^{\rm KN}=12\gamma^{2}\sigma_{\rm T}c\int_{0}^{\infty}\varepsilon^{ \prime}\frac{dn^{\prime}}{d\varepsilon^{\prime}}d\varepsilon^{\prime}\int_{0}^{1} \frac{qG\left(q,\Gamma_{e}\right)}{(1+\Gamma_{e}q)^{3}}dq,\] (A7)
where \(\Gamma_{e}=4\gamma\varepsilon^{\prime}/m_{\rm e}c^{2}\), \(G\left(q,\Gamma_{e}\right)\) is a function of KN cross section and \(dn^{\prime}/d\varepsilon^{\prime}\) is the differential number density of synchrotron photons.
\[G\left(q,\Gamma_{e}\right)=2q\ln q+\left(1+2q\right)\left(1-q\right)+\frac{ \Gamma_{e}^{2}q^{2}\left(1-q\right)}{2\left(1+\Gamma_{e}q\right)}.\] (A8)
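Once the comoving photon spectrum \(dn^{\prime}/d\varepsilon^{\prime}\) is specified, the KN energy-loss rate can be evaluated numerically. Below is a sketch of Equations (A7) and (A8), where photon_number_density is a user-supplied callable in cgs units.

```python
import numpy as np
from scipy.integrate import quad

sigma_T, m_e, c = 6.6524e-25, 9.1094e-28, 2.9979e10

def G(q, Gamma_e):
    """Kernel of Eq. (A8) (Blumenthal & Gould 1970)."""
    return (2.0 * q * np.log(q) + (1.0 + 2.0 * q) * (1.0 - q)
            + (Gamma_e * q)**2 * (1.0 - q) / (2.0 * (1.0 + Gamma_e * q)))

def P_IC_KN(gamma, photon_number_density, eps_min, eps_max):
    """Total Compton energy-loss rate of one electron, Eq. (A7)."""
    def outer(eps):
        Gamma_e = 4.0 * gamma * eps / (m_e * c**2)
        inner, _ = quad(lambda q: q * G(q, Gamma_e) / (1.0 + Gamma_e * q)**3, 1e-8, 1.0)
        return eps * photon_number_density(eps) * inner
    integral, _ = quad(outer, eps_min, eps_max)
    return 12.0 * gamma**2 * sigma_T * c * integral
```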
The spectral power of synchrotron emission is
\[P_{\nu}^{\prime}(\nu^{\prime})=\frac{\sqrt{3}e^{3}B^{\prime}}{m_{\rm e}c^{2}}\int_{\gamma_{\rm m}}^{\infty}\left(\frac{\nu^{\prime}}{\nu_{c}^{\prime}}\int_{\nu^{\prime}/\nu_{c}^{\prime}}^{\infty}K_{5/3}(z)dz\right)\frac{dN_{\rm e}}{d\gamma_{\rm e}^{\prime}}d\gamma_{\rm e}^{\prime},\] (A9)
where \(\nu_{c}^{\prime}=3\gamma_{\rm e}^{\prime\,2}\nu_{\rm L}^{\prime}/2\) is the critical frequency of an electron with Lorentz factor \(\gamma_{\rm e}^{\prime}\), and \(K_{5/3}(z)\) is the modified Bessel function. The spectra of SSC emission are calculated using the full expressions from Blumenthal & Gould (1970):
\[\frac{dN^{\prime}}{dE^{\prime}}=\frac{3\sigma_{\rm T}c}{4}\int_{\gamma_{\rm m }}^{\infty}\left(\int\frac{dn^{\prime}}{d\varepsilon^{\prime}}\frac{d \varepsilon^{\prime}}{\varepsilon^{\prime}}\frac{G(q,\Gamma_{\rm e})}{{ \gamma_{\rm e}^{\prime}}^{2}}\right)\frac{dN_{\rm e}}{d\gamma_{\rm e}^{\prime }}d\gamma_{\rm e}^{\prime}\] (A10)
where \(q=w/[\Gamma_{e}(1-w)]\) and \(w=E^{\prime}/\gamma_{\rm e}^{\prime}m_{\rm e}c^{2}\).
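For reference, a direct numerical evaluation of the synchrotron integral of Equation (A9) can be written as below; electron_spectrum is a user-supplied callable for \(dN_{\rm e}/d\gamma_{\rm e}^{\prime}\), and the SSC spectrum of Equation (A10) follows the same pattern using the G kernel given above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

e_chg, m_e, c = 4.8032e-10, 9.1094e-28, 2.9979e10  # cgs

def synchrotron_F(x):
    """Single-electron kernel F(x) = x * int_x^inf K_{5/3}(z) dz appearing in Eq. (A9)."""
    val, _ = quad(lambda z: kv(5.0 / 3.0, z), x, np.inf, limit=200)
    return x * val

def P_nu_prime(nu, B, electron_spectrum, gamma_m, gamma_max=1e9):
    """Comoving synchrotron spectral power, Eq. (A9)."""
    nu_L = e_chg * B / (2.0 * np.pi * m_e * c)
    def integrand(gamma):
        nu_crit = 1.5 * gamma**2 * nu_L          # critical frequency of one electron
        return synchrotron_F(nu / nu_crit) * electron_spectrum(gamma)
    val, _ = quad(integrand, gamma_m, gamma_max, limit=200)
    return np.sqrt(3.0) * e_chg**3 * B / (m_e * c**2) * val
```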
We also consider the internal \(\gamma\gamma\) absorption for VHE photons in this code. The optical depth due to internal \(\gamma\gamma\) absorption is
\[\tau_{\gamma\gamma,{\rm int}}(\varepsilon_{\gamma}^{\prime},\varepsilon^{ \prime})=\int_{\varepsilon_{\rm th}}\sigma_{\gamma\gamma}(\varepsilon_{\gamma} ^{\prime},\varepsilon^{\prime})\frac{R}{\Gamma}\frac{dn^{\prime}}{d\varepsilon ^{\prime}}d\varepsilon^{\prime}d\Omega,\] (A11)
where \(\sigma_{\gamma\gamma}\) is the pair-production cross section and \(\varepsilon_{\rm th}=2m_{\rm e}^{2}c^{4}/[\varepsilon_{\gamma}^{\prime}(1-\cos\theta)]\) is the threshold energy of pair production. The cross section is
\[\sigma_{\gamma\gamma}(\varepsilon_{\gamma}^{\prime},\varepsilon^{\prime})= \frac{3}{16}\sigma_{\rm T}\left(1-\beta_{\rm cm}^{2}\right)\left[2\beta_{\rm cm }\left(\beta_{\rm cm}^{2}-2\right)+\left(3-\beta_{\rm cm}^{4}\right)\ln\left( \frac{1+\beta_{\rm cm}}{1-\beta_{\rm cm}}\right)\right],\] (A12)
where \(\beta_{\rm cm}=\sqrt{1-2m_{\rm e}^{2}c^{4}/[\varepsilon_{\gamma}^{\prime}\varepsilon^{\prime}(1-\cos\theta)]}\). For an intrinsic flux density \(F_{\nu}\), the flux density after internal \(\gamma\gamma\) absorption is \(F_{\nu}\left(\frac{1-e^{-\tau_{\gamma\gamma,{\rm int}}}}{\tau_{\gamma\gamma,{\rm int}}}\right)\).
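The internal absorption step reduces to the cross section of Equation (A12) plus the attenuation factor quoted above; a compact sketch (photon energies in erg, fixed collision angle passed via cos_theta):

```python
import numpy as np

m_e, c, sigma_T = 9.1094e-28, 2.9979e10, 6.6524e-25

def sigma_gamma_gamma(eps_gamma, eps, cos_theta=-1.0):
    """Pair-production cross section, Eq. (A12)."""
    s = eps_gamma * eps * (1.0 - cos_theta) / (2.0 * (m_e * c**2)**2)
    if s <= 1.0:
        return 0.0                                   # below the pair-production threshold
    beta_cm = np.sqrt(1.0 - 1.0 / s)
    return (3.0 / 16.0) * sigma_T * (1.0 - beta_cm**2) * (
        2.0 * beta_cm * (beta_cm**2 - 2.0)
        + (3.0 - beta_cm**4) * np.log((1.0 + beta_cm) / (1.0 - beta_cm)))

def attenuate(F_nu, tau):
    """Apply the internal gamma-gamma attenuation factor to an intrinsic flux density."""
    return F_nu * (1.0 - np.exp(-tau)) / tau if tau > 0 else F_nu
```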
For Figure 1, we use the analytic expressions in Equations (A1) and (A2) to compute the dynamics and electron spectra, and then obtain the flux numerically, taking into account KN effects and internal absorption.
## Appendix B Absorption frequency as the break frequency
In the main text we use \(\nu_{\rm m}\) as the break frequency between the radio (97.5 GHz) and X-ray (keV) bands. Here we discuss the possibility that \(\nu_{\rm a}\) is the break frequency. The spectrum of synchrotron emission is also affected by self-absorption: the flux below the absorption frequency \(\nu_{\rm a}\) drops rapidly as \(F_{\nu}\propto\nu^{5/2}\). If the frequency break between X-ray and radio is attributed to self-absorption, \(\nu_{\rm a}\) is required to satisfy \(\nu_{\rm a}\geq 400\,\)GHz at \(T_{0}+10^{5}\,\)s. The expression for \(\nu_{\rm a}\) depends on the values of \(\nu_{\rm m}\) and \(\nu_{\rm c}\). We define \(\nu_{\rm a1}\) as the absorption frequency in the regime \(\nu_{\rm a}<\nu_{\rm m}<\nu_{\rm c}\). The values of \(\nu_{\rm a1}\) are
\[\nu_{\rm a1}=\left\{\begin{array}{ll}5.3{\rm GHz}E_{{\rm iso},55}^{1/5} \eta_{0,-1}^{3/5}k_{\rm e,-1}^{-1}\epsilon_{{\rm B},-4}^{1/5},&k=0\\ 0.09{\rm GHz}E_{{\rm iso},55}^{-2/5}A_{\star,-1}^{6/5}k_{\rm e,-1}^{-1} \epsilon_{{\rm B},-4}^{1/5}\epsilon_{5}^{-3/5}.&k=2\end{array}\right.\] (B13)
Clearly, \(\nu_{\rm a1}\) fails to satisfy the condition \(\nu_{\rm a}\geq 400\,\)GHz unless we employ a very small value of \(k_{\rm e}\), namely \(k_{\rm e}\sim 10^{-3}\) in a constant medium and \(k_{\rm e}\sim 10^{-5}\) in a wind-like medium. However, such parameters are extremely small and fail to produce sufficient flux when the isotropic energy is \(10^{55}\,\)erg. Although \(\nu_{\rm a1}\) is more sensitive to the density in a wind profile (\(\nu_{\rm a1}\propto A_{\star}^{6/5}\)), a large wind density \(A_{\star}>1\) enhances the TeV-to-keV ratio at later epochs, resulting in over-production of the TeV flux.
Moreover, applying such a small value of \(k_{\rm e}\) would yield a very small \(\nu_{\rm m}\). When \(\nu_{\rm m}\) is smaller than \(\nu_{\rm a}\), the expression for \(\nu_{\rm a1}\) is not applicable. We define \(\nu_{\rm a2}\) as the absorption frequency when \(\nu_{\rm m}<\nu_{\rm a}<\nu_{\rm c}\). The absorption frequency \(\nu_{\rm a2}\) evolves as \(\nu_{\rm a2}\propto t^{-\frac{3p+2}{2(p+4)}}\) in a constant medium and \(\nu_{\rm a2}\propto t^{-\frac{3(p+2)}{2(p+4)}}\) in a wind-like medium. Since \(\nu_{\rm a2}\) decreases more rapidly with time, an even larger absorption frequency would be required at early times, which corresponds to more extreme parameters.
## Appendix C Reverse Shock in a Structured Jet
In a structured jet, the angular energy distribution is \(dE/d\Omega\propto\theta^{-a}\) and the initial Lorentz factor \(\Gamma_{0}\) may have an angular distribution \(\Gamma_{0}\propto\theta^{-k_{\Gamma}}\)(Zhang and Wang, 2023). If \(k_{\Gamma}\leq 1\), the angular regions seen by the observer are all decelerated, since the radiative cone is \(\theta\sim\Gamma^{-1}\). Hence, the forward shock dynamics is independent of the distribution of \(\Gamma_{0}\). In this case, emission from the forward shock can be described by Equation (17). For the wind medium, the angular profile of the Lorentz factor of the shocked shell follows
\[\Gamma_{3}(\theta)=\Gamma_{0}(\theta)\left(\frac{R(\theta)}{R_{\rm{dec}}( \theta)}\right)^{-g}\] (C14)
where \(g=1\)(Zou et al., 2005; Gao et al., 2013) for a reverse shock in the thin-shell approximation and \(R_{\rm{dec}}\) is the deceleration radius of the ejecta, which scales as \(R_{\rm{dec}}(\theta)\propto E_{\rm{II,iso}}(\theta)\Gamma_{0}(\theta)^{-2}\).
Our model uses \(k_{\Gamma}=1\), so \(\Gamma_{3}\propto\theta^{-1}\). Using \(R(\theta)=2\Gamma_{3}(\theta)^{2}t\), one obtains \(\Gamma_{3}(\theta)\propto t^{-\frac{1}{4-a}}\), \(\Gamma_{0}(\theta)\propto t^{-\frac{1}{4-a}}\), and \(E_{\rm{II,iso}}(\theta)\propto t^{-\frac{a}{4-a}}\). The self-absorption frequency of the reverse shock is \(\nu_{\rm{a,rs}}\propto E_{\rm{II,iso}}^{\frac{\theta p-4}{p+4}}(\theta)\Gamma_{0}^{-\frac{2\theta p-16}{7(p+4)}}(\theta)t^{-\frac{13p+24}{7(p+4)}}\) (for \(\nu_{\rm{m,rs}}<\nu_{\rm{a,rs}}\)). We find \(\nu_{\rm{a,rs}}\propto t^{-1}\) when \(k_{\Gamma}=1\), which is close to the observed scaling relation \(\nu_{\rm{a,rs}}\propto t^{-1.08\pm 0.04}\) from Bright et al. (2023). Correspondingly, the minimum frequency is \(\nu_{\rm{m,rs}}\propto E_{\rm{II,iso}}^{\frac{\theta}{7}}(\theta)\Gamma_{0}^{-\frac{2\theta}{7}}(\theta)t^{-\frac{13}{7}}\propto t^{-1}\) and the peak flux density is \(F_{\nu,\rm{max,rs}}\propto E_{\rm{II,iso}}^{\frac{23}{21}}(\theta)\Gamma_{0}^{-\frac{29}{21}}(\theta)t^{-\frac{23}{21}}\propto t^{-\frac{3}{4-a}}\), respectively.
Thus the peak flux density at the self-absorption frequency, \(F_{\nu,\rm{a}}=F_{\nu,\rm{max}}(\frac{\nu_{\rm{a,rs}}}{\nu_{\rm{m,rs}}})^{\frac{1-p}{2}}\propto t^{-\frac{3}{4-a}}\), is consistent with the observed values \(F_{\nu,\rm{a}}\propto t^{-0.70\pm 0.02}\) when the jet structure is flat with \(a=0\). After the absorption frequency crosses the observing band, the reverse shock emission declines as \(F_{\nu}=F_{\nu,\rm{max}}(\frac{\nu}{\nu_{\rm{a,rs}}})^{\frac{1-p}{2}}\propto t^{-\frac{3}{4-a}-\frac{p-1}{2}}\). Assuming \(p=2.2\), the slopes of the decaying phase are \(F_{\nu}\propto t^{-1.35}\) for \(a=0\) and \(F_{\nu}\propto t^{-1.54}\) for \(a=0.8\).
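As a quick consistency check of the quoted slopes, the post-crossing temporal decay index implied by the scalings above can be computed directly:

```python
def rs_decay_slope(a, p):
    """Decay index of the reverse-shock flux after nu_a,rs crosses the band,
    F_nu ~ t^(-slope), for k_Gamma = 1 as assumed in this appendix."""
    return 3.0 / (4.0 - a) + (p - 1.0) / 2.0

print(rs_decay_slope(0.0, 2.2))   # ~1.35
print(rs_decay_slope(0.8, 2.2))   # ~1.54
```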
|
2302.13959 | Make Every Example Count: On the Stability and Utility of Self-Influence
for Learning from Noisy NLP Datasets | Increasingly larger datasets have become a standard ingredient to advancing
the state-of-the-art in NLP. However, data quality might have already become
the bottleneck to unlock further gains. Given the diversity and the sizes of
modern datasets, standard data filtering is not straight-forward to apply,
because of the multifacetedness of the harmful data and elusiveness of
filtering rules that would generalize across multiple tasks. We study the
fitness of task-agnostic self-influence scores of training examples for data
cleaning, analyze their efficacy in capturing naturally occurring outliers, and
investigate to what extent self-influence based data cleaning can improve
downstream performance in machine translation, question answering and text
classification, building up on recent approaches to self-influence calculation
and automated curriculum learning. | Irina Bejan, Artem Sokolov, Katja Filippova | 2023-02-27T17:00:06Z | http://arxiv.org/abs/2302.13959v2 | Make Every Example Count: On Stability and Utility of Self-Influence for Learning from Noisy NLP Datasets
###### Abstract
Increasingly larger datasets have become a standard ingredient to advancing the state of the art in NLP. However, data quality might have already become the bottleneck to unlock further gains. Given the diversity and the sizes of modern datasets, standard data filtering is not straight-forward to apply, because of the multifacetedness of the harmful data and elusiveness of filtering rules that would generalize across multiple tasks. We study the fitness of task-agnostic self-influence scores of training examples for data cleaning, analyze their efficacy in capturing naturally occurring outliers, and investigate to what extent self-influence based data cleaning can improve downstream performance in machine translation, question answering and text classification, building up on recent approaches to self-influence calculation and automated curriculum learning.
## 1 Introduction
Deep learning on increasingly larger and diverse data sources brought impressive advances in natural language processing (NLP); however, data quality might have become the major bottleneck to unlock further gains Kumar et al. (2020). NLP data is usually acquired via large-scale weakly-labeled data scraping or crowd-sourcing labels from non-expert human annotators, which are both error-prone Frenay and Verleysen (2014); Bowman and Dahl (2021). At the same time, mislabeled or ambiguous training data is also known to hurt models' performance through overfitting or memorization in overparameterized networks Zhang et al. (2016). Finally, not all data is equally easy to learn, and overly complex instances may hinder learning as well. All of those cases (label noise, out-of-distribution, ambiguous, or difficult-to-learn examples) can be covered by the umbrella term _outliers_, to highlight the fact that such an instance is harmful to learning and to downstream performance.
Cleaning training data by filtering out harmful instances is a standard approach that has led to improvements in the past Khayrallah and Koehn (2018); Peskov et al. (2019), but it has two problems. First, defining what is harmful in a task-agnostic way is hard and so, until recently, mostly task-dependent heuristics have been employed Wang et al. (2018); Junczys-Dowmunt (2018); Liu et al. (2018). More principled approaches attempt to formally define an influence of a training instance via the concept of influence functions (IFs) Cook and Weisberg (1980), which quantify the effect on the loss on a test point when removing an individual training point. For example, Koh and Liang (2017) used access to gradients and the Hessian of the loss to approximate the loss change at point \(z\) that would occur had a training point \(x\) been infinitesimally upweighted in the training set. IFs have been used for interpretability and debugging of machine learning models Koh and Liang (2017); Han et al. (2020), data poisoning attacks Koh et al. (2018) and detecting dataset errors Kong et al. (2022); Schioppa et al. (2021). In these works, it has been conjectured and empirically tested that filtering highly self-influential (\(z=x\)) points, i.e., the ones that would cause a large loss delta on themselves (i.e., they are "unsupported" by other data points and need to be memorized), does lead to improvements in downstream metrics in synthetic and real scenarios.
These improvements, however, contrast with the observations that IFs are sensitive to model and training hyperparameters in the general, \(z\neq x\), case Basu et al. (2020); K and Sogaard (2021), due to violation of the convexity assumption by IFs in deep learning: For a number of CNN-based architectures, Basu et al. (2020) showed that depth and width of the network, its architecture, training regularization and the stochastic approximations inside IF have strong effects on the IF accuracy and stability (measured on retrieved influential \(x\)s for
a fixed \(z\)), which are aggravated with the network size. K and Sogaard (2021) further found that IFs are sensitive to the parameter initialization, ordering of the training data and batch size. Both papers thus doubted that influence scores of training instances would be reliable for practical purposes, and that retraining after removing or fixing the flagged instances would lead to improvements.
The second problem, which affects self-influence based filtering too, is that it is not straightforward to come up with a filtering rule that would generalize across multiple tasks. Most scalar-based filtering schemes (incl. self-influence) prescribe setting a threshold cut-off value to delineate outliers from the rest of the data. This may be reasonable for datasets where the to-be-filtered data portion carries no apparent signal; however, in more realistic scenarios most of the data is at least somewhat useful even if it is an outlier. Applying threshold filtering in such a situation may lead to a performance decrease, as a portion of the useful signal will inevitably be lost.
In this paper, we study the stability of _self-influence scores_, which are task-agnostic and, if stable, would be an attractive candidate to serve as the data cleaning pipeline foundation; unlike general influence, the stability of self-influence has not been covered in detail by previous studies. We further analyze the efficacy of capturing naturally occurring outliers by IFs and investigate to what extent self-influence can improve downstream performance in large NLP tasks with Transformer architectures, building up on recent improvements in IF approximation accuracy and scalability with the Arnoldi iteration Based IFs (ABIF) (Schioppa et al., 2021), and automated curriculum learning (AutoCL) (Kreutzer et al., 2021).
First, we show that, unlike the general (\(z\neq x\)) influence scores, the self-influence (\(z=x\)) scores _are_ stable with respect to training and model hyperparameters, and across architecture variations. Next, we illustrate that threshold filtering based on self-influence, although capturing most of synthetic label noise, does not cleanly separate the naturally occurring outliers from the rest of the data, despite providing a partial ordering of data subsets by quality. This makes tuning of the filtering threshold largely ineffective for relatively clean datasets. Finally, we improve upon filtering in a way that does not involve a thresholding score or throwing away data, by relying on the bandit-based AutoCL that has access to ABIF self-influence scores and to a metric that tracks learning progress.
In more detail, our contributions are:
* **Stability of self-influence scores.** We start by measuring how stable self-influence scores are, since this is a prerequisite for both successful data filtering and data scheduling. To this end, in §5, we study correlation and overlap of data ranked by self-influence scores across different model states, i.e., different final states the training converges to as a function of varying batch size and random seeds of data sampling and IF calculation. We also explore the correlation between model prediction stability (defined below as _model churn_) and sensitivity to architecture changes. We find that self-influence scores are stable for a fixed architecture across common training hyperparameters, but care should be exercised in transferring findings between architectures of different capacity.
* **Effectiveness of self-influence scores.** In §6, we employ a suite of different in-distribution and out-of-distribution (o.o.d.) evaluation setups and show that filtering out highly self-influential examples is more effective for the o.o.d. setup. We hypothesize that self-influence capturing general outliers prevents learning systematic noise patterns that would otherwise artificially inflate performance in the in-distribution evaluation setup, making it harder to improve upon with filtering. Furthermore, we investigate what is captured by influence scores using both natural outliers and synthetic label noise, showing that naturally occurring noise can be spread among high and low influential samples, and thus the common top-X% filtering strategies can be ineffective.
* **Data filtering automation.** Datasets vary heavily in quality, and the amount of noise that can be removed using high self-influence scores without hindering performance is generally unknown beforehand; in particular, because high self-influence captures, in addition to outliers and mislabeled examples, under-represented training examples that can significantly improve accuracy (Feldman and Zhang, 2020). Filtering based on fixed percentages (Schioppa et al., 2021) can be costly to tune or inaccurate, while attempts to automate it using abrupt changes at the top of the ranking (Lam et al., 2022) have a low recall rate. To remedy this, in §7 we employ AutoCL to dynamically detect, during training, the harmful and/or the useful training data quantiles from the self-influence ranking, and to adjust on-the-fly the ratio of high or low influence examples to train on. This is more general than threshold filtering, which is a particular (static and hard-weighted) case of general (dynamic and soft-weighted) schedules.
## 2 Methods
Influence functions.An influence function \(I(x,z)\), is an approximation to the loss change at the test point \(z\) after an infinitesimal upweighting of a training point \(x\)(Koh and Liang, 2017),
\[I(x,z)=\langle\nabla_{\Theta}L(z),H^{-1}\nabla_{\Theta}L(x)\rangle, \tag{1}\]
where \(\nabla L(x)\) denotes the gradient of the loss function at the training point \(x\) and \(H=\nabla_{\Theta}^{2}L\) is the Hessian of the model at parameters \(\Theta\). For deep learning, the Hessian is impractical to compute exactly, so Koh and Liang (2017) proposed an approximate estimation procedure to calculate (1). Recently, Schioppa et al. (2021) proposed a more accurate and stable method that uses Arnoldi iteration to approximate the inverse Hessian in subspaces spanned by \(H\)'s largest eigenvalues. This enabled scaling up the computation of influence scores to hundreds of millions of training points and model parameters. We use their released code1 to compute \(I(x,y)\) and we refer to it as Arnoldi-based Influence Function (ABIF).
Footnote 1: github.com/google-research/jax-influence.
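As a toy illustration of Eq. (1) (not the ABIF implementation), influence and self-influence can be computed exactly for a small quadratic model, where per-example gradients and the Hessian are available in closed form; the damping term is an assumption added here only to keep the inverse well defined.

```python
import numpy as np

def influence(grad_z, grad_x, hessian, damping=1e-3):
    """I(x, z) of Eq. (1): <grad L(z), H^{-1} grad L(x)>, with damping for stability."""
    H = hessian + damping * np.eye(hessian.shape[0])
    return grad_z @ np.linalg.solve(H, grad_x)

def self_influence(grad_x, hessian, damping=1e-3):
    """Self-influence is the z = x special case of Eq. (1)."""
    return influence(grad_x, grad_x, hessian, damping)

# Toy model: L_i(theta) = 0.5 * (theta @ x_i - y_i)^2, so the Hessian is X^T X / n.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
theta = np.linalg.lstsq(X, y, rcond=None)[0]
grads = (X @ theta - y)[:, None] * X          # per-example gradients
H = X.T @ X / len(X)
scores = [self_influence(g, H) for g in grads]
```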
TracIn (Pruthi et al., 2020) is a gradient-only alternative influence definition that relies on multiple checkpoints to approximate by how much each \(x\)'s gradient changes model parameters and in turn the loss at \(z\); it chains \(\nabla L(x)\) and \(\nabla L(z)\) over \(C\) checkpoints to arrive at the estimate:
\[I_{T}(x,z)=\frac{1}{C}\sum_{i=1}^{C}\langle\nabla_{\Theta_{i}}L(x),\nabla_{ \Theta_{i}}L(z)\rangle. \tag{2}\]
Self-influence and outliers.For both influence definitions, the self-influence of a training point \(x\) can be derived from them setting \(z=x\). It has been conjectured that high values of self-influence indicate data outliers (Koh and Liang, 2017; Pruthi et al., 2020); intuitively, if removing \(x\) deteriorates the loss value on itself, then \(x\) should be different enough from the rest of data so that the prediction on \(x\) could not be learned from the rest of the training data and had be memorized. Note that grounding the influence definition in the magnitude of loss delta covers many possible causes for being an outlier, such as an unsystematic mislabeling (i.e., true noise), ambiguity (i.e., multiple possible labels, depending on the information missing from the input), being out-of-distribution, or being a difficult example (for the model) for other reasons.
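With \(z=x\), Eq. (2) reduces TracIn self-influence to an average squared gradient norm over the stored checkpoints. A minimal sketch, assuming per-example flattened gradients have already been extracted at each checkpoint:

```python
import numpy as np

def tracin_self_influence(per_checkpoint_grads):
    """TracIn self-influence of one example: mean of <g, g> over checkpoints (Eq. (2), z = x)."""
    return float(np.mean([g @ g for g in per_checkpoint_grads]))

def rank_by_self_influence(grads_per_example):
    """grads_per_example[i][c] is the flat gradient of example i at checkpoint c.
    Returns indices sorted from most to least self-influential, plus the scores."""
    scores = np.array([tracin_self_influence(gs) for gs in grads_per_example])
    return np.argsort(-scores), scores
```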
Automated curriculum learning(AutoCL) covers a range of algorithms, where not only the training data is presented to a neural network in a different order than random sampling, but also where this order is adapted alongside main training based on learning progress (Graves et al., 2017; Kreutzer et al., 2021). This is particularly useful in our situation, as we can learn (via the self-influence scores proxy) to ignore the outlying data samples and prioritize the most helpful ones, without having to choose apriori the percentage of data to filter.
We use the framing of curriculum learning as a multi-armed bandit problem (Graves et al., 2017), where arms represent distinct subsets of the data that are considered bandit actions and are played at each training step \(t\). A bandit learns alongside the main task which action \(a^{t}\) to play and informs the model, which trains on a uniformly sampled batch from that subset and sends back a scalar reward feedback for this choice: \(y^{t}=Y^{t}[a^{t}]\), where \(Y^{t}\) would be the unknown loss vector of all possible actions. The bandit learns over time by minimizing the regret \(R=\mathbb{E}[\sum_{t}y^{t}]-\min_{a}\sum_{t}Y^{t}_{a}\) of not having played the best-in-hindsight arm, using the EXP3 and EXP3S algorithms (Auer et al., 2002).
To quantify the learning progress, existing metrics look at the loss decrease or the increase in model complexity to compute the reward Graves et al. (2017); among those, in this work, we use the normalized prediction gain (_pgnorm_), \(1-\mathcal{L}(\theta^{t+1})/\mathcal{L}(\theta^{t})\), and the cosine similarity between the gradient of the training batch and that of the reward batch. These metrics can be evaluated either on a training batch or a development batch. As in (Kreutzer et al., 2021; Kumar et al., 2019), we calculate _pgnorm_ rewards on randomly sampled batches from the validation set.
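A simplified EXP3 loop with a _pgnorm_ reward could look as follows. This is a sketch rather than the training setup used later (it omits the EXP3S variant and reward rescaling details), and sample_batch, train_step and eval_loss are hypothetical helpers standing in for the actual training loop.

```python
import numpy as np

class Exp3:
    """Minimal EXP3 bandit (Auer et al., 2002) over data buckets; rewards assumed in [0, 1]."""
    def __init__(self, n_arms, gamma=0.01, seed=0):
        self.n, self.gamma = n_arms, gamma
        self.w = np.ones(n_arms)
        self.rng = np.random.default_rng(seed)

    def probs(self):
        return (1.0 - self.gamma) * self.w / self.w.sum() + self.gamma / self.n

    def pull(self):
        p = self.probs()
        return int(self.rng.choice(self.n, p=p)), p

    def update(self, arm, reward, p):
        self.w[arm] *= np.exp(self.gamma * (reward / p[arm]) / self.n)
        self.w /= self.w.max()                       # keep weights from overflowing

def pgnorm(loss_before, loss_after):
    """Normalized prediction gain used as the bandit reward."""
    return 1.0 - loss_after / loss_before

# Usage sketch with hypothetical helpers:
# bandit = Exp3(n_arms=10)
# arm, p = bandit.pull()
# loss_before = eval_loss(dev_batch); train_step(sample_batch(buckets[arm])); loss_after = eval_loss(dev_batch)
# bandit.update(arm, np.clip(pgnorm(loss_before, loss_after), 0.0, 1.0), p)
```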
## 3 Tasks and Datasets
Throughout this study, we investigate how self-influence methods perform and generalize across multiple NLP tasks, varying the tasks' nature, size, noise levels and model architectures: machine translation (Paracrawl), question answering (Natural Questions, TriviaQA) and text classification (Wikipedia Toxicity Subtypes).
MT:Paracrawl.We consider the German-English translation task from the Paracrawl corpus (Banon et al., 2020), which consists of 100M sentence pairs obtained via web-crawling and contains misaligned sentences, random text fragments, mismatches between entities, and other forms of noise. For the validation and test sets, we took the newstest sets from WMT17 and WMT16, respectively, to match the setup of Schioppa et al. (2021), who filtered this corpus with ABIF. We evaluate using BLEU as reported by SacreBLEU (Post, 2018).
QA:Natural Questions.The NQ dataset consists of real queries issued to the Google search engine, alongside Wikipedia fragments that could potentially contain the answer (Kwiatkowski et al., 2019). Each query can have a short answer (incl. empty) and a long-form answer, the latter requiring to predict spans from the fragments. Since we run our NQ experiments with a seq2seq model, we adopt a version of the dataset which only covers short answers from Guo et al. (2022), who split the official training set of 307k samples, to use 90% for training, 10% as the dev set for fine-tuning, and the official dev set of 7,830 samples as the test set. From a data quality perspective, being real user queries, NQ answers are relatively clean but contain a high degree of natural ambiguity, due to variability among annotators. About 33% of NQ answers are debatable (varying expert annotator feedback) and 16% are wrong, meaning the Wikipedia fragment provides no evidence for the ground-truth answer (Kwiatkowski et al., 2019).
QA:TriviaQA.This dataset includes 95k question-answer pairs authored by trivia enthusiasts, who gathered evidence documents for answering the question, while the questions are drawn from Wikipedia and Bing search results (Joshi et al., 2017). This is a particularly high quality supervised task, but is still difficult to learn: the input length is on average 10 times longer than in NQ, bringing additional challenges such as complex, compositional questions that require higher cross sentence reasoning. We evaluate both TriviaQA and NQ using the Exact-Match (EM) and F1 scores.
Classification:Wikipedia Toxicity.The dataset contains a collection of 223k human annotated comments that comes from an archive of Wikipedia talk page comments (Wulczyn et al., 2016), out of which 63k are the test set. While the original dataset covers a variety of toxicity subtypes, we only look at binary classification of comments into toxic and non-toxic as in (Ebert et al., 2022), and report accuracy and the F1 score.
## 4 Models and Influence Functions Setup
We experiment with three different architectures: the standard Transformer-base, two sizes of the state-of-the-art LongT5 architecture for long inputs, and the classic BERT-base model.
### Models
Transformer.For the machine translation task, we implement the standard, 6-layer, Transformer-base architecture (Vaswani et al., 2017) trained for 200k steps with a batch size of 128 using T5X (Roberts et al., 2022) on 16-core TPUv2 and fixed input length of 128 on Paracrawl dataset. For training, we use Adafactor, with a learning rate of 2.0 and the rsqrt decay (Vaswani et al., 2017) of 0.8, a dropout rate of 0.1 and a 32k SentencePiece model.
LongT5.For our QA tasks, we use the LongT5-base and LongT5-large architecture (Guo et al., 2022)2 implemented in T5X, using a 32k SentencePiece model, a batch size of 128 and AdaFactor
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline
**Dataset** & **Task** & **Noise** & **Model** & **Training** & **Architecture** & **Params** & **Train/Dev/Test** \\ \hline \hline Paracrawl & MT & very high & enc-dec & from scratch & Transformer-base & 60M & 100M/3k/3k \\ Natural Questions & QA & low & enc-dec & fine-tuning & LongT5-base/large & 220M/770M & 276k/31k/7.8k \\ TriviaQA & QA & very low & enc-dec & fine-tuning & LongT5-base & 220M & 88k/11k/11k \\ Wikipedia Toxicity & text-class. & high & enc & fine-tuning & BERT-base/T5-base & 110M/220M & 144k/16k/63k \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset and model statistics.
as in the original work. We fine-tune the models on NQ for 261k steps on a 16-core TPUv2 to convergence, with a learning rate of 0.001, dropout rate of 0.1 and a fixed input length of 4,096. We employ the same setup for TriviaQA, except we double the input length, to account for TriviaQA's much longer contexts than in NQ.
Bert.For the text toxicity classification, we experiment with the BERT-base architecture Devlin et al. (2019), and fine-tune for 35k steps with early stopping, a batch size of 32, learning rate of \(10^{-6}\), weight decay of \(5\cdot 10^{-6}\), input length of 128 and a dropout rate of 0.5.
T5.To use our T5X AutoCL implementation, we reframed the toxicity classification task as a seq2seq task in T5 Raffel et al. (2019) by predicting two tokens: _toxic_ and _safe_, treating other output tokens as misclassifications, and report F1 for the toxic class. The T5-base model is trained for 120k steps, with input length of 128, batch size of 64, dropout rate of 0.1, using Adafactor, with a learning rate of \(10^{-2}\) and decay rate of \(10^{-1}\).
### Self-Influence calculation
ABIF.To trade off memory/speed against accuracy, ABIF introduces several hyperparameters that may impact its accuracy, including the choice of layers whose gradients to use, the number of top-\(\tilde{p}\) eigenvalues to use, and the number of Arnoldi iterations. We use 30 projectors and 60 iterations to compute the NQ, TriviaQA and Paracrawl self-influence scores, and compare this to 100 projectors and 200 iterations in an ablation.
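To illustrate the projector idea (this is a dense toy sketch using the textbook Arnoldi iteration, not the jax-influence implementation), one can build a Krylov basis from Hessian-vector products, keep the top eigenpairs, and score self-influence in that subspace:

```python
import numpy as np

def arnoldi(matvec, dim, n_iters, seed=0):
    """Textbook Arnoldi iteration; assumes no breakdown (generic random start)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_iters + 1, dim))
    h = np.zeros((n_iters + 1, n_iters))
    q0 = rng.normal(size=dim)
    Q[0] = q0 / np.linalg.norm(q0)
    for j in range(n_iters):
        w = matvec(Q[j])
        for i in range(j + 1):
            h[i, j] = Q[i] @ w
            w = w - h[i, j] * Q[i]
        h[j + 1, j] = np.linalg.norm(w)
        Q[j + 1] = w / h[j + 1, j]
    return Q[:n_iters], h[:n_iters, :n_iters]

def top_eig_projectors(matvec, dim, n_iters=60, n_proj=30):
    """Approximate top eigenvalues / eigenvectors of a symmetric H from HVPs only."""
    Q, h = arnoldi(matvec, dim, n_iters)
    evals, evecs = np.linalg.eigh((h + h.T) / 2.0)     # symmetrize the small matrix
    idx = np.argsort(-np.abs(evals))[:n_proj]
    ritz = evecs[:, idx].T @ Q                         # rows ~ approximate eigenvectors
    return evals[idx], ritz

def projected_self_influence(grad, evals, projectors):
    """Self-influence restricted to the kept subspace: sum_i (v_i . g)^2 / lambda_i."""
    g = projectors @ grad
    return float(np.sum(g * g / evals))
```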
The question of which layers are more effective for influence functions is still open, with recent work showing particularly good results using the last or few last layers Han et al. (2020); Barshan et al. (2020), but also using the first layers Yeh et al. (2022). Therefore, we experiment with the two IF methods in three variants: _first_, _last_ and _all_, where _first_ indicates the first two layers of the encoder (and decoder), _last_ indicates the last two layers of the encoder (and decoder), while _all_ uses all model parameters, and draw a comparison between them. For the Transformer on Paracrawl experiments, _first_ and _last_ only include the first two encoder layers and the last two decoder layers.
TracIn.We employ TracIn as a baseline for NQ self-influence scoring, and use a fixed projection size of 1,024 and three checkpoints, one from the beginning of the training (at 60k steps), middle of training (140k) and the final checkpoint (260k). We could not scale TracIn to 100M Paracrawl examples, so we use its version by Schioppa et al. (2021), who reduce gradient dimensionality to 30 using random Gaussian matrices for the last checkpoint (\(C=1\)) at 100k steps.
## 5 Stability of Self-Influence
In this section, we investigate the stability of self-influence scores with respect to model states, ABIF-specific hyperparameters and model architecture.
We evaluate stability with Spearman rank correlation and the 90th percentile overlap (i.e. overlap between top-10% examples), suggested by K and Sogaard (2021) as an alternative to global correlation since one normally cares only about highly self-influential examples.
It is important to understand whether self-influence ranking, given that it is thought to be predictive of data quality, is an inherent data property or whether it is mainly rooted in the model configuration. In this regard, we look at the extent to which model stability and self-influence stability are interconnected. K and Sogaard (2021) showed that TracIn was significantly more stable to the changes in batch size and ordering of the data, but still sensitive to initialization. Indeed, as weight initialization has a strong impact on the training stability, which is known to affect Transformer-based models Devlin et al. (2019); Dodge et al. (2020), we need to quantify instability pertaining to the architecture. To this end, we use the _model churn_ metric Milani Fard et al. (2016), which is the joint expected percentage of errors on a test distribution. For example, if model A is right on 9% of the examples that B gets wrong and model B is right on 10% of the examples that A gets wrong, the churn is 19%. While changing weight initialization does not always impact accuracy, it can result in net-zero wins and losses, referred to as unnecessary churn.
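Both stability metrics, as well as churn, are simple to compute from the score and prediction arrays of two runs; a sketch:

```python
import numpy as np
from scipy.stats import spearmanr

def percentile_overlap(scores_a, scores_b, q=90):
    """Overlap (in %) between the top-(100-q)% examples of two self-influence rankings."""
    top_a = set(np.where(scores_a >= np.percentile(scores_a, q))[0])
    top_b = set(np.where(scores_b >= np.percentile(scores_b, q))[0])
    return 100.0 * len(top_a & top_b) / max(len(top_a), 1)

def rank_correlation(scores_a, scores_b):
    rho, _ = spearmanr(scores_a, scores_b)
    return rho

def churn(preds_a, preds_b, labels):
    """Share of test examples on which exactly one of the two models is correct."""
    correct_a, correct_b = np.asarray(preds_a) == labels, np.asarray(preds_b) == labels
    return float(np.mean(correct_a ^ correct_b))
```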
### Dependence on model states
We investigate on the NQ dataset if the self-influence scores are sensitive to the model state, i.e., initialization of the model, data ordering or batch size. Previously, K and Sogaard (2021) showed instability of general influence functions with regards to these model states, while here we turn attention to the question of _self_-influence stability, given its foundational role for data filtering.
Setup.We analyze the sensitivity to model state changes on two model architectures/tasks (LongT5-base on NQ and Transformer-base on Paracrawl), following the same method: We fine-tune the models two times, first freezing all hyper-parameters, and the second time, varying the targeted hyperparameters (batch size, data order and model initialization) to analyze the worst-case instability scenario. For both runs, we compute the self-influence scores for the training set using all three variants of ABIF (30 projectors) and TracIn, and pair the two model runs results to compare their stability. We found it too slow to evaluate TracIn when using _all_ layers, so only report results obtained with ABIF.
Results.From Table 2, we see that both methods are considerably more stable to changes in the model state, than in (K and Sogaard, 2021), where the maximum 90th percentile overlap was 32.77 for IFs and Spearman correlation below 0.07. Despite that the 90th percentile overlaps for Transformer are lower, we can see the ranking correlation is still high and believe that, because Paracrawl is a very noisy dataset (>90% of it is noise, as we show below), the overlaps are less informative.
The choice of layers has a significant impact on the stability, the last layers being more stable compared to the first layers, which is consistent with previous work (Han et al., 2020; Barshan et al., 2020) where the last layer also yielded better results. We believe that these results indicate that self-influence is robust enough to be relied on in detecting training outliers.
### Dependence on model architecture
Basu et al. (2020) found that network architecture, in particular its depth and width, impact the accuracy of IFs. Here, we investigate to what extent self-influence is sensitive to a broader set of model changes that affect model capacity and capabilities. In order for self-influence to surface dataset error/outliers, a low degree of instability across such changes would be necessary to avoid misattribution of self-influence stability to model architecture. We compare the self-influence scores resulted from:
* LongT5-base vs. LongT5-large: we fine-tune both models with the same hyperparameters configuration to analyze the sensitivity of self-influence to model size that increases from 220M to 770M parameters. Given that here the number of ABIF projectors is kept unchanged, which could be suboptimal for LongT5-large, we look at an alternative way of changing capacity in the next bullet point.
* Local vs. Transient-Global attention of LongT5: we fine-tune two LongT5-base models, each with a different attention, yet the same configuration of other hyperparameters, to analyze the sensitivity to increased capability at same model size.
Results.We summarize the results in Table 3. Changing the model capacity (size or attention) has a negative effect on the stability of self-influence scores, with the size hyperparameter affecting it less. With respect to layers, using the _first_ or _all_ configurations makes self-influence scores more stable to large capacity changes, in comparison to _last_ layers, which was more robust to training parameter modifications. We conclude, given the strong correlation between increase in churn and decrease in stability, that model instability is one of the contributors to the self-influence scores' instability. Importantly, self-influence scores appear to be particular to a model architecture/capacity and should be used with caution across differently-powered models. This is expected, as the architecture and model capacity, unlike training hyperparameters, define the loss landscape and so the loss changes under training data perturbations. In the following, we hence calculate and use self-influence scores for fixed architectures to minimize the chances of running into instabilities.
\begin{table}
\begin{tabular}{c c c c c c} \multirow{2}{*}{**Layers**} & \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**LongT5: Natural Questions**} & \multicolumn{2}{c}{**Transformer: Paracrawl**} \\ \cline{3-5} & & **90th \(\cap\)** & **Spearman** & **90th \(\cap\)** & **Spearman** \\ \hline \hline _first_ & ABIF & 77.78 & 0.781 & 40.71 & 0.727 \\ TracIn & 80.49 & 0.938 & 63.47 & 0.872 \\ \hline _last_ & ABIF & 87.67 & 0.933 & 51.03 & 0.726 \\ TracIn & 86.95 & 0.949 & 64.83 & 0.901 \\ \hline _all_ & ABIF & 78.03 & 0.804 & 44.58 & 0.771 \\ \end{tabular}
\end{table}
Table 2: Stability of self-influence estimates to changing model states (batch size, data ordering and model initialization), using ABIF and TracIn compared on LongT5-base trained on NQ, and Transformer trained on Paracrawl.
### ABIF-pertinent instability
Finally, given that ABIF is a newly developed method, we inspect if different choices of hyperparameters for ABIF can contribute to the instability, including layers choice, number of top eigenvalues (30 vs. 100) and the initialization seed, by recomputing the self-influence scores with various configurations using the fixed hyperparameters variant of the LongT5-base fine-tuned earlier.
Results.From Table 4 we can see that contributions that pertain to ABIF itself are not of concern, since the scores are largely unaffected by different choices of its parameters. The 90th percentile overlap is lower, despite the almost perfect correlation, because the overlap is sensitive to insignificant movements in the vicinity of the cut-off value.
## 6 Effectiveness of Self-Influence Scores
The impact of filtering highly self-influential examples on the downstream performance, together with the recall of synthetically perturbed training samples among the highly self-influential ones, has been used to measure the correctness of influence functions (Guo et al., 2020; Schioppa et al., 2021), given that the ground-truth estimate via leave-one-out is unfeasible to compute even for medium-sized models. Previous work was, however, inconsistent in matching the distribution of the downstream evaluation set to that of the training data: Koh and Liang (2017); Kocijan and Bowman (2020) use in-distribution evaluation sets, while Guo et al. (2020); Schioppa et al. (2021) focus on o.o.d. evaluation. We are interested in whether filtering of highly self-influential examples is more helpful for o.o.d. evaluation (by removing true outliers) or if it can also improve performance on test sets distributed as the training data (which may contain the same errors as the train set).
Setup.To evaluate the downstream performance of filtering, we calculated the self-influence scores using all three layer settings of ABIF, sorted them to retrieve the highly self-influential examples, and experimented with different cut-offs (2%, 5%, 10%, 20%, 50%,..) given that the ratio of (harmful) outliers in each dataset is unknown. Then we retrained on the filtered data and report the best results obtained across the layers choices.
We consider three tasks with same distribution evaluation (on NQ, TriviaQA and Toxicity) and three o.o.d. setups (NQ, Paracrawl and Toxicity):
* Training a Transformer-base model on Paracrawl using the same setup as above, and evaluating on the newstest16 dataset.
* Fine-tuning the LongT5-base on NQ as before, but evaluating on the TriviaQA dataset to make it o.o.d. To align the task definitions, we only keep the normalized answer from TriviaQA's answers list, whereas usually the metrics are computed
\begin{table}
\begin{tabular}{c c c}
**Layers** & **90th** \(\cap\) & **Spearman** \\ \hline \hline \multicolumn{3}{c}{number of projectors} \\ \hline _first_ & 96.79 & 0.99 \\ _last_ & 96.74 & 0.99 \\ _all_ & 93.90 & 0.99 \\ \hline \hline \multicolumn{3}{c}{initialization} \\ \hline _first_ & 96.21 & 0.99 \\ _last_ & 96.92 & 0.99 \\ _all_ & 97.08 & 0.99 \\ \end{tabular}
\end{table}
Table 4: Stability of self-influence estimates with respect to ABIF hyperparameters using LongT5 on NQ.
\begin{table}
\begin{tabular}{c c c c c c}
**Model A** & **Model B** & **Churn** & **Layer** & **90th** \(\cap\) & **Spearman** \\ \hline \hline LongT5-base & LongT5-base & & _first_ & 77.78 & 0.781 \\ TGIobal attention & TGIobal attention & & & _all_ & 78.03 & 0.804 \\ \(|B|\)**=128,** & \(|B|\)**=64,** & & _last_ & 87.67 & 0.933 \\ \(seed_{shuf}\)**=0,**\(seed_{init}\)**=0** & \(seed_{shuf}\)**=43,**\(seed_{init}\)**=43** & & & _last_ & 87.67 & 0.933 \\ \hline LongT5-**base** & LongT5-**large** & & & _first_ & 68.06 & 0.630 \\ TGIobal attention & TGIobal attention & & & & & \\ \(|B|\)**=128,** & \(|B|\)**=128,** & 12.77\% & _all_ & 67.89 & 0.621 \\ \(seed_{shuf}\)**=0,**\(seed_{init}\)**=0 & \(seed_{shuf}\)**=0,**\(seed_{init}\)**=0** & & & _last_ & 42.00 & 0.432 \\ \hline LongT5-base & LongT5-base & & & _first_ & 61.19 & 0.591 \\
**TGIobal attention** & **Local attention** & & & & & \\ \(|B|\)**=128,** & \(|B|\)**=128,** & 13.54\% & _all_ & 60.05 & 0.591 \\ \(seed_{shuf}\)**=0,**\(seed_{init}\)**=0 & \(seed_{shuf}\)**=0,**\(seed_{init}\)**=0** & & & _last_ & 41.92 & 0.292 \\ \end{tabular}
\end{table}
Table 3: Relation between model stability and its architecture, capacity or training hyperparameters, on the NQ dataset. Bold marks differences between models A and B. The first group of ABIF results is from Table 2.
against each of the given answers and the maximum score is reported.
* Fine-tuning on Wikipedia Toxicity, but evaluating on the o.o.d. Civil Comments (CivilC) development set, which consists of 97k web comments (Borkan et al., 2019).
Results.From Table 5 we see that self-influence filtering using ABIF brings larger improvements in the o.o.d. evaluation setup: up to +9 BLEU points on Paracrawl-newstest16 and +3 F1 points on the NQ-TriviaQA setup, with a negligible improvement in the Toxicity-CivilC case, which shows that training on cleaner datasets improves performance on an o.o.d. test set. However, in the in-distribution setup, TriviaQA and NQ trained on full data always outperform filtering, which is not a surprising result given that both are high quality datasets; filtering also brings only very small improvements on Wikipedia Toxicity, which is expected to be very noisy. This shows that the common heuristic of filtering highly self-influential examples (Koh and Liang, 2017) might not be the best fit for in-distribution tasks, and we explore why below.
### Noise captured by high influence scores
Here we study how naturally occurring noise, as annotated by human experts, is partitioned by the self-influence rankings. Additionally, we compare to synthetic noise, which has previously been shown to be accurately retrieved by high self-influence in Schioppa et al. (2021), to see if a significant difference occurs.
Setup.We use the 5-way annotations done by expert human annotators on a set of 205 uniformly sampled NQ examples, released with the original paper (Kwiatkowski et al., 2019). The examples were annotated as correct, debatable and wrong, but we treat the debatable entries as belonging to the correct bucket, as there is no strong evidence that suggests they are wrong. We compute the self-influence scores on the _first_ and the _last_ layers and analyze how the natural noise is ranked.
For comparison, we looked at the ability to retrieve synthetic noise via highly self-influential examples and compute the recall in the top-10%/20%/30% of self-influence scores. We alter the original data by uniformly sampling 10% of the dataset and shuffling the labels of those examples, ensuring all labels were changed from their initial values. This is important because a significant amount of questions have no answer (they are not answerable), because they are natural search queries.
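The corruption and recall computation can be sketched as follows; the label-replacement loop is one way to implement the shuffling so that every corrupted example is guaranteed to end up with a different label.

```python
import numpy as np

def corrupt_labels(labels, frac=0.1, seed=0):
    """Replace the labels of a uniformly sampled `frac` of examples with labels taken
    from other examples, guaranteeing that each new label differs from the original."""
    rng = np.random.default_rng(seed)
    labels = list(labels)
    n = len(labels)
    idx = rng.choice(n, size=int(frac * n), replace=False)
    noisy = labels.copy()
    for i in idx:
        j = int(rng.integers(n))
        while labels[j] == labels[i]:          # assumes labels are not all identical
            j = int(rng.integers(n))
        noisy[i] = labels[j]
    return noisy, {int(i) for i in idx}

def recall_at_top(scores, corrupted_idx, top_frac=0.1):
    """Share of corrupted examples that land in the top `top_frac` of self-influence."""
    k = int(top_frac * len(scores))
    top = set(np.argsort(-np.asarray(scores))[:k].tolist())
    return len(top & corrupted_idx) / len(corrupted_idx)
```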
Results.The synthetic noise retrieval confirms previous findings with a high recall, as 29% of the synthetic noise lands in the top-20% ranks and 94% in the top-30%, when using _last_ layers (vs. 84% in top-30% for _first_ layers). We hypothesize that the synthetic noise is not predominantly in top-10% because other forms of outliers are already present in the dataset that are more harmful to the model.
The behaviour of natural noise is considerably different. It barely comes up among highly self-influential buckets as we can see in Figure 1, but is distributed predominantly among the lowest and mid-influential buckets. Examples annotated as having wrong labels are absent from the top-20%. Additionally, we see that input noise does not affect the model as much as label noise, given that examples with wrong input are almost evenly distributed in the lower 80% of the data. These results suggest that manual tuning of the filtering threshold of self-influence rankings may not find a percentile range which, if removed, improves performance.
## 7 Automated Filtering of Outliers
As we showed above, using self-influence to filter noisy training data has the drawback that choosing the right cut-off threshold is difficult. Offline filtering algorithms were proposed to identify a
\begin{table}
\begin{tabular}{c c c c c c c}
**Train** & **Eval** & **\% Used** & **BLEU** & **Acc** & **EM** & **F1** \\ \hline \hline \multicolumn{6}{c}{In-Distribution test set} \\ \hline \multirow{2}{*}{NQ} & \multirow{2}{*}{NQ} & **100\%** & - & - & **59.05** & **63.97** \\ & & 90\% & - & - & 58.49 & 63.06 \\ & & **100\%** & - & - & **78.08** & **80.17** \\ TriviaQA & TriviaQA & 98\% & - & - & 77.49 & 79.72 \\ & & 90\% & - & - & 74.64 & 76.96 \\ & & 100\% & - & 92.61 & - & 95.8 \\ Toxicity & Toxicity & **95\%** & - & **93.15** & - & **96.13** \\ & & 90\% & - & 92.86 & - & 95.97 \\ \hline \hline \multicolumn{6}{c}{Out-of-Distribution test set} \\ \hline \multirow{2}{*}{Paracrawl} & \multirow{2}{*}{newstest16} & 100\% & 21.36 & - & - & - \\ & & **10\%** & **30.45** & - & - & - \\ & & 100\% & - & - & 16.06 & 20.55 \\ \multirow{2}{*}{NQ} & \multirow{2}{*}{TriviaQA} & **95\%** & - & - & **18.96** & **23.52** \\ & & 90\% & - & - & 17.52 & 21.85 \\ \multirow{2}{*}{Toxicity} & \multirow{2}{*}{CivilC} & 100\% & - & 95.28 & - & 97.51 \\ & & **90\%** & - & **95.42** & - & **97.59** \\ \end{tabular}
\end{table}
Table 5: Performance of percentile filtering of outliers (highly self-influential examples as per ABIF self-influence scores) on in- and out-of-distribution test sets. We report the maximum over _all_, _first_, and _last_ settings.
threshold (Junczys-Dowmunt et al., 2018), but trial-and-error search for a fixed threshold, such as top-K or top-X%, based on the downstream performance, remains popular, although it is costly and does not generalize across datasets. Lam et al. (2022) explored clustering strategies and identifying abrupt changes in the magnitude of the sorted self-influence scores, but this resulted in a low recall rate. Hence, we explore the ability of automated curriculum learning to dynamically identify the noisy parts of the data based on buckets of self-influence scores, without committing to completely removing any of them.
First, we validate how different the quality signal is from each individual self-influence bucket. In Figure 2, we see that the highest self-influence bucket has a visibly lower (though not zero) performance compared to the rest of the buckets, which are in turn difficult to separate, possibly because they contain data of different grades of usefulness. A positive aspect of AutoCL is that, due to exploration, the model will regularly visit all buckets, and may dynamically up- or down-weight buckets depending on the current needs of the model, overcoming the rigidity of fixed-threshold filtering.
Setup.We verify the feasibility of this approach using two definitions of self-influence, given by ABIF and by TracIn, and check that the findings are consistent across three datasets (NQ, Paracrawl and Toxicity on T5). We map the actions of the bandit to equal-sized data buckets obtained by splitting the data according to the self-influence ranking into a predefined number of buckets, as sketched below. We first explore 10 buckets, which should allow the bandit to at least match the performance of filtering, and then consider more granular setups (20 and 25 buckets) which would be infeasible to manually test against filtering. We expect the bandit not to use much of the highly self-influential buckets, nor the lowest bucket, which prior work found to be less useful because of its simplicity (Feldman and Zhang, 2020; Schioppa et al., 2021). Following (Schioppa et al., 2021), we report BLEU at 10k and 200k steps.
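Constructing the arms from a self-influence ranking is straightforward; a sketch of the bucket construction and of sampling a batch for the arm chosen by the bandit:

```python
import numpy as np

def make_buckets(self_influence_scores, n_buckets=10):
    """Split example indices into equal-sized buckets by self-influence rank;
    bucket 0 holds the lowest-influence examples, the last bucket the highest."""
    order = np.argsort(np.asarray(self_influence_scores))      # ascending influence
    return [b.tolist() for b in np.array_split(order, n_buckets)]

def sample_from_bucket(buckets, arm, batch_size, rng):
    """Uniformly sample a training batch (with replacement) from the chosen bucket."""
    return rng.choice(buckets[arm], size=batch_size, replace=True).tolist()
```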
We use a fixed learning rate of 0.001 and an exploration rate of 0.01, tuned between two variants of the algorithm (EXP3 and EXP3S) and the reward function used (_pgnorm_ vs. _cosine similarity_ between the gradients of the training batch and the reward batch). As baselines, we use the filtering methods from the previous sections.
\begin{table}
\begin{tabular}{c c c c c} & **Config** & **Method** & **BLEU@10k** & **BLEU@200k** \\ \hline \hline \multirow{4}{*}{**C**} & Baseline & & 13.75 & 21.36 \\ & Filtered 90\% & ABIF & **26.8** & 30.45 \\ & Filtered 90\% & TracIn(1) & 22.10 & 27.87 \\ & AutoCL - 5 bins & ABIF & 21.45 & 27.48 \\ & AutoCL - 10 bins & ABIF & 25.50 & 30.45 \\ & AutoCL - 25 bins & ABIF & 24.13 & **31.38** \\ & AutoCL - 25 bins & TracIn(1) & 18.60 & 29.33 \\ \hline \hline \multirow{4}{*}{**C**} & **Config** & **Method** & **EM** & **F1** \\ \cline{2-5} & Baseline & & 59.05 & 63.97 \\ & Filtered 10\% & ABIF & 58.49 & 63.06 \\ & Filtered 10\% & TracIn(3) & 58.09 & 62.75 \\ & AutoCL - 10 bins & ABIF & **59.72** & **64.33** \\ & AutoCL - 25 bins & ABIF & 59.20 & 64.01 \\ & AutoCL - 25 bins & TracIn(3) & 59.59 & 64.32 \\ \hline \hline \multirow{4}{*}{**C**} & **Config** & **Method** & **Acc** & **F1** \\ \cline{2-5} & Baseline & & 91.73 & 67.37 \\ \cline{1-1} & Filtered 5\% & ABIF & 91.80 & 67.22 \\ \cline{1-1} & AutoCL - 10 bins & ABIF & **93.61** & **70.56** \\ \cline{1-1} & AutoCL - 25 bins & ABIF & 92.09 & 67.58 \\ \end{tabular}
\end{table}
Table 6: Performance improvements from removing noisy samples with threshold filtering and with AutoCL, using ABIF and TracIn (value of \(C\) in brackets).
Figure 1: Distribution of the correct and incorrect examples annotated by expert annotators in 5 equally-sized buckets (quantiles) computed based on the self-influence ranking and ordered from low (0) to high influence (4). We present results for self-influence scores computed using the _last_ layers (right) and the _first_ layers (left). A distinction is made between examples with only wrong input (context), with only wrong label, and with both of them wrong.
Results.From Table 6, we see that AutoCL strongly outperforms filtering methods when given enough bucket granularity, for very noisy Paracrawl, noisy Toxicity and low-noise NQ, and regardless of the task and influence function definition. We improve over filtering on Paracrawl by +1 BLEU point, on Toxicity by +3.2 F1 points, and on NQ by +1.2 F1 points. Given that ABIF led to results similar to TracIn for NQ and Paracrawl, we only look at ABIF for Toxicity. It is interesting to note that, initially (@10k), filtering with a good threshold outperforms AutoCL, which requires time to learn which buckets are useful.
We check in Figure 3 whether the multi-armed bandit's learned policy (softmax probabilities of choosing a data bucket) is interpretable. In general, there is no incentive for the policy to be interpretable, as it targets loss improvements only and may undertake redundant switches between neighboring buckets in setups with high bucket granularity while reaching the same end performance. That said, for Paracrawl, the model quickly learns to drop the top-92% of the training examples as ranked by self-influence, which almost matches our best filtering threshold of 90%, instead training \(\nicefrac{{2}}{{3}}\) of the time on the bottom-4%/8% and only \(\nicefrac{{1}}{{3}}\) of the time on the top-4%, which corresponds to the lowest influence and is known to be less useful to the model. For NQ, there is more bucket switching: the model initially uses the mid-influence buckets (top-50%/80%), followed by the high-influence outlier buckets (top-80%), which are quickly dropped, and then continues alternating between the lowest buckets (bottom-20%). For Toxicity, the policy is not interpretable (though the high-influence bucket is heavily used at all stages) but still brings more gains compared to filtering. As TriviaQA is a very clean, human-curated dataset, filtering does not improve over the baseline, while AutoCL brings only nominal and insignificant gains (see §A).
Finally, one might wonder whether the AutoCL improvements are due to self-influence or to the increased data scheduling flexibility, i.e., whether a simpler example-difficulty signal would suffice to attain similar gains. In §A, we run AutoCL on NQ/TriviaQA buckets defined via signals based on QA domain knowledge (context length, word rarity, context-question lexical overlap and question type) and find that self-influence is indeed crucial for the observed gains.
## 8 Related Work
Dataset error detection.Deep learning models' performance can be sensitive to noisy datasets Kumar et al. (2020); Zhang et al. (2016); Khayrallah and Koehn (2018), and various instance-based scoring methods have been proposed, along with sorting and thresholding to filter out noisy training examples, including bilingual cross-entropy difference to rank parallel corpora Axelrod et al. (2011), "forgetting events", where an individual training example transitions from being classified correctly to incorrectly over the course of learning Toneva et al. (2018), or area-under-margin Pleiss et al. (2020), computed as the difference between logit values in classification tasks. Influence functions were used to detect dataset errors Koh and Liang (2017); Schioppa et al. (2021), by looking at self-influence, i.e., how much a training point influences its own loss. However, these methods led to varying degrees of success: in the case of IFs, Koh and Liang (2017) and Guo et al. (2020) improve performance by directly fixing/augmenting the mislabeled, highly self-influential training examples, but filtering out the lowest- or highest-influence examples did not outperform training on the full data in other studies Guo et al. (2020); Kocijan and Bowman (2020). At the same time, it brought consistent performance gains on an o.o.d. task for MT Schioppa et al. (2021), raising the question of whether filtering helps improve performance on in-distribution test sets or is more effective for o.o.d. generalization.
Harmful noise vs. useful noise.A shortcoming of filtering/data selection is that noise is not always harmful. Prior work has shown that training
Figure 2: LongT5-base performance trained individually on each self-influence bin, from B1 (bottom-10%) to B10 (top-10%), of the NQ dataset.
with noise, or with injected artificial noise, can increase the stability of the models vis-a-vis noisy data (Vaibhav et al., 2019; Belinkov and Bisk, 2017; Heigold et al., 2018), which can be caused by differences in noise types and their interaction with the target task. Al Sharou et al. (2021) noted that only "harmful noise" that affects the performance of the system or does not carry the intended meaning should be removed, while "useful noise" should be kept or even added to the training data if it naturally occurs at test time (Rolnick et al., 2017). Moreover, due to a lack of suitable datasets, the performance of noise detection methods was evaluated on synthetic noise, but Jiang et al. (2020) found that synthetic noise harms DNNs' capability to generalize much more than real noise. We look in particular at whether that holds true for influence functions and analyze what kind of noise is captured by highly self-influential examples.
Dynamic data schedules.Given this limitation of filtering methods, different training schedules that account for the training dynamics have been developed, inspired by Bengio et al. (2009) and van der Wees et al. (2017). Swayamdipta et al. (2020) proposed an "easy-to-hard" training schedule based on the mean (confidence) and standard deviation (variability) of the gold label probabilities over the training epochs, where _hard_ points to samples with low confidence and low variability. Wang et al. (2018) propose using a small amount of trusted data to help the model measure noise and do online data selection to train on gradually noise-reduced data batches, resembling active learning. Similarly, Nguyen et al. (2020) use a self-ensemble for label filtering during training, by gradually allowing supervision from clean labels and stopping learning on the filtered noisy labels. Jiang et al. (2020) develop a method that uses curriculum learning and vicinal risk minimization to handle both real and synthetic noise. We note that curriculum-based methods have been more effective than filtering, but they have some inherent limitations, such as defining "easy" and "hard" or designing an effective training schedule following these definitions. To overcome this limitation, we use automated curriculum learning (Graves et al., 2017; Kreutzer et al., 2021), which employs a multi-armed bandit to learn the most effective training schedule.
## 9 Conclusion
We proposed a general method for improving downstream model performance in the presence of noisy training data based on self-influence and multi-armed bandit curriculum learning, without relying on threshold-based filtering. First, we showed that self-influence scores of training instances calculated with the Arnoldi-iteration-based procedure (Schioppa et al., 2021) are stable with respect to varying training hyperparameters such as batch size, random seeds, and the procedure's hyperparameters, and thus can be a reliable foundation for data cleaning methods. We further demonstrated that not all data outliers from the human point of view receive similar self-influence scores, which necessitates generalizing threshold filtering to dynamic data reweighing. Finally, we showed that dynamic reweighing based on a multi-armed bandit that picks self-influence buckets to sample batches from improves over threshold filtering on both noisy and clean datasets.
Figure 3: Learned AutoCL policies, showing the softmax probabilities attributed to each bucket (where B0 is the lowest-influence bin). The policies are extracted from Paracrawl (left) using 25 bins (only B0-B3 shown in the legend), and from NQ (middle) and Toxicity (right) using 10 bins.
2306.08130 | Quasi-Periodic Peak Energy Oscillations in X-ray Bursts from SGR
J1935+2154 | Magnetars are young neutron stars powered by the strongest magnetic fields in
the Universe (10$^{13-15}$ G). Their transient X-ray emission usually manifests
as short (a few hundred milliseconds), bright, energetic ($\sim$ 10$^{40-41}$
erg) X-ray bursts. Since its discovery in 2014, magnetar J1935+2154 has become
one of the most prolific magnetars, exhibiting very active bursting episodes,
and other fascinating events such as pulse timing anti-glitches and Fast Radio
Bursts. Here, we present evidence for possible 42 Hz (24 ms) quasi-periodic
oscillations in the $\nu F_{\nu}$ spectrum peak energy (Ep) identified in a
unique burst detected with the Fermi Gamma-ray Burst Monitor in January 2022.
While quasi-periodic oscillations have been previously reported in the
intensity of magnetar burst lightcurves, quasi-periodic oscillations in the Ep
have not. We also find an additional event from the same outburst that appears
to exhibit similar character in Ep, albeit of lower statistical quality. For
these two exceptional transients, such Ep oscillations can be explained by
magnetospheric density and pressure perturbations. For burst-emitting plasma
consisting purely of $e^+e^-$ pairs, these acoustic modes propagate along a
highly magnetized flux tube of length up to around $L\sim 130$ neutron star
radii, with $L$ being lower if ions are present in the emission zone. Detailed
time-resolved analyses of other magnetar bursts are encouraged to evaluate the
rarity of these events and their underlying mechanisms. | Oliver J. Roberts, Matthew G. Baring, Daniela Huppenkothen, Ersin Gogus, Yuki Kaneko, Chryssa Kouveliotou, Lin Lin, Alexander J. van der Horst, George Younes | 2023-06-13T20:50:44Z | http://arxiv.org/abs/2306.08130v1 | # Quasi-Periodic Peak Energy Oscillations in X-ray Bursts from SGR J1935+2154
###### Abstract
Magnetars are young neutron stars powered by the strongest magnetic fields in the Universe (\(10^{13-15}\) G). Their transient X-ray emission usually manifests as short (a few hundred milliseconds), bright, energetic (\(\sim 10^{40-41}\) erg) X-ray bursts. Since its discovery in 2014, magnetar SGR J1935+2154 has become one of the most prolific magnetars, exhibiting very active bursting episodes, and other fascinating events such as pulse timing anti-glitches and Fast Radio Bursts. Here, we present evidence for possible 42 Hz (24 ms) quasi-periodic oscillations in the \(\nu F_{\nu}\) spectrum peak energy (\(E_{\rm p}\)) identified in a unique burst detected with the Fermi Gamma-ray Burst Monitor in January 2022. While quasi-periodic oscillations have been previously reported in the intensity of magnetar burst lightcurves, quasi-periodic oscillations in the \(E_{\rm p}\) have not. We also find an additional event from the same outburst that appears to exhibit similar character in \(E_{\rm p}\), albeit of lower statistical quality. For these two exceptional transients, such \(E_{\rm p}\) oscillations can be explained by magnetospheric density and pressure perturbations. For burst-emitting plasma consisting purely of \(e^{+}e^{-}\) pairs, these acoustic modes propagate along a highly magnetized flux tube of length up to around \(L\sim 130\) neutron star radii, with \(L\) being lower if ions are present in the emission zone. Detailed time-resolved analyses of other magnetar bursts are encouraged to evaluate the rarity of these events and their underlying mechanisms.
Neutron stars (1108); Magnetars (992); Compact objects (288); Soft gamma-ray repeaters (1471)
## 1 Introduction
SGR J1935+2154 (hereafter SGR J1935) was discovered in 2014 (Stamatikos et al., 2014) with the _Burst Alert Telescope_ (BAT) on _the Neil Gehrels Swift Observatory_ (hereafter _Swift_), when the source emitted a few short X-ray bursts. Further observations of the source with the _Swift_ X-ray Telescope (XRT), _Chandra_ and _XMM-Newton_ observatories found a spin period of 3.24 s and a period derivative of 1.43\(\times 10^{-11}\) s s\({}^{-1}\), indicating a dipole magnetic field strength of \(\sim\)2.2 \(\times\)\(10^{14}\) G (Israel et al., 2016) at the equator, which confirmed the magnetar nature of the source. The source has exhibited prolific bursting behavior since its discovery, with multiple burst-active episodes in 2015 and 2016 (Younes et al., 2017), 2018, 2019 and 2020 (Lin et al., 2020), and in 2021 and 2022 (Roberts, 2023). Comprehensive investigations of SGR J1935 bursts, using data from the \(Fermi\) Gamma-ray Space Telescope's Gamma-ray Burst Monitor (_Fermi_/GBM) and _Swift_/BAT during these active episodes, showed that the magnetar became progressively more burst-active in every subsequent outburst episode (Lin et al., 2020). The
2020 SGR J1935 outburst was the most energetic, producing a "burst storm" on April 27\({}^{th}\) detected with multiple instruments, including _Fermi_/GBM (Kaneko et al., 2021) and NICER (Younes et al., 2020), the latter reporting a burst rate maximum of 0.2 bursts/s over a \(\sim\)1200 s pointed observation. A hard burst with a \(\nu\)F\({}_{\nu}\) peak energy (\(E_{\rm p}\)) reaching up to \(\sim\)80 keV on April 28\({}^{th}\), which was contemporaneous with a Fast Radio Burst (FRB; Ridnaia et al., 2021; CHIME/FRB Collaboration et al., 2020; Bochenek et al., 2020; Mereghetti et al., 2020), confirmed that at least one magnetar, SGR J1935, was a source of these exotic radio transients. SGR J1935 also underwent a radio-pulsar phase in October 2022, a few days after an anti-glitch (i.e., sudden spin down) was inferred from X-ray monitoring observations (Younes et al., 2022). Two additional FRB-like bursts from the source were detected during the same period (Kirsten et al., 2021).
In this paper, we present \(E_{\rm p}\) oscillations found in two bursts from SGR J1935, detected on the 12\({}^{th}\) and 16\({}^{th}\) of January, 2022. These observations open up new territory for probing the physics of magnetar bursts and the character of neutron star magnetospheres. In Section 2, we introduce the data analysis methods and spectral analysis. In Section 3, we identify, through visual inspection of the brightest bursts from SGR J1935 (\(f\geq 1\times 10^{-6}\)), two candidates that appear to exhibit \(E_{\rm p}\) oscillations. We present the time-integrated spectral fits for each burst, the time-resolved energy spectra, a periodicity search and temporal analysis. We present a theoretical interpretation of these results in Section 4, and summarize our findings in Section 5.
## 2 Data Analyses
We use the triggered Time-Tagged Event (TTE) data (time resolution of \(\sim\)2 \(\mu\)s, 128 pseudo-logarithmically spaced energy channels), obtained with the _Fermi_ Gamma-ray Burst Monitor's (GBM's) thallium-doped Sodium Iodide (NaI) detectors with good detector-to-source angles (\(\leq\)60\({}^{\circ}\)). These detectors have an effective spectral range of \(\sim\)8\(-\)900 keV, which nicely overlaps with the energy range of normal magnetar burst spectra (\(\leq\)300 keV). More information on the _Fermi_/GBM instrument and its data types can be found in Meegan et al. (2009) and von Kienlin et al. (2020).
### Spectral Analysis
We performed spectral analysis on bright SGR J1935 bursts emitted during the source activations from 2019 through 2022. While analyzing the _Fermi_/GBM TTE data, we found two bursts (labeled throughout this study as "Burst 1" and "Burst 2") that appear to show variations in their \(\nu\)F\({}_{\nu}\) peak or \(E_{\rm p}\) values. We performed a time-resolved spectral analysis of these bursts using a power-law with an exponential cutoff, commonly referred to as a Comptonized (COMPT) model. This model is defined as:
\[F(E)=A\Bigg{(}\frac{E}{E_{\rm piv}}\Bigg{)}^{\alpha}\mathrm{exp}\left[-\frac{( \alpha+2)E}{E_{\rm p}}\right], \tag{1}\]
where \(A\) is the amplitude in ph/s/cm\({}^{2}\)/keV, \(E_{\rm p}\) is the \(\nu\)F\({}_{\nu}\) peak energy in keV, \(\alpha\) is the power-law index, and \(E_{\rm piv}\) is the pivot energy, fixed at 100 keV in this study.
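As a minimal illustration of Eq. (1), the snippet below evaluates the COMPT photon model over the NaI band used here; the parameter values are placeholders rather than fit results, and the final line simply checks that the \(\nu F_{\nu}\) spectrum peaks at \(E_{\rm p}\) by construction.

```python
# Minimal sketch of the Comptonized (COMPT) photon model of Eq. (1);
# the parameter values below are illustrative, not fit results.
import numpy as np


def compt(E_keV, A, alpha, E_p, E_piv=100.0):
    """Photon flux (ph/s/cm^2/keV) at energy E_keV for the COMPT model."""
    E = np.asarray(E_keV, dtype=float)
    return A * (E / E_piv) ** alpha * np.exp(-(alpha + 2.0) * E / E_p)


E = np.geomspace(8.0, 900.0, 500)               # NaI band used in this analysis
nuFnu = E ** 2 * compt(E, A=1.0, alpha=-0.5, E_p=35.0)
print(E[np.argmax(nuFnu)])                      # the nuFnu spectrum peaks at ~E_p
```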
In addition to performing time-integrated and time-resolved spectral analysis using the COMPT model, other models such as a simple power-law (PL), an Optically-Thin Thermal Bremsstrahlung (OTTB), a single black-body (BB) and a double black-body (BB+BB), were also used. These were fit to the burst spectra using RMFIT (v4.4.2), developed specifically for the analysis of _GBM_ data1. We selected background intervals devoid of bursts several seconds before and after each event, fitting each interval with a polynomial function to model the background during the burst, which was subsequently subtracted. The Detector Response Matrices (DRMs) for triggered bursts are available as a part of the publicly accessible data products. For untriggered bursts, we generated DRMs using GBMSP v2.0. For the spectral analysis of all the bursts in our sample, we used the NaI-detector energy range of \(8-900\) keV. We neglect an energy interval of \(\sim\)4 keV centered around 35 keV, where the k-edge feature from Iodine in the NaI detectors appears in the data, as this has not been modeled perfectly (Bissaldi et al., 2009). The exclusion of this small portion of data from our analysis does not affect the results, but improves the statistics.
Footnote 1: [https://fermi.gsfc.nasa.gov/ssc/data/analysis/rmfit](https://fermi.gsfc.nasa.gov/ssc/data/analysis/rmfit)
We use the Bayesian Information Criterion (BIC; Schwarz, 1978; Liddle, 2007) to determine preferred models for the bursts in this study, which is often used for model comparisons with the maximum likelihood statistics. We calculated BIC for each spectral fit as follows:
\[\mathrm{BIC}=-2\ln\mathcal{L}_{\rm max}+k\ln N=\mathrm{CSTAT}+k\ln N, \tag{2}\]
where \(\mathcal{L}_{\rm max}\) is the maximum likelihood, \(k\) is the number of free parameters in the spectral model and \(N\) is the number of data points. We then calculated \(\Delta\)BIC for each pair of the four models to compare the posterior probabilities of the two models in each pair. We employ \(|\Delta\)BIC\(|\geq\) 12 to select a preferred model, which corresponds to a Bayes factor of \(\sim\)400, indicating that the posterior probability of the model with the smaller BIC value is higher by \(>99.7\%\) (Kass & Raftery, 1995; Anderson & Burnham, 2004; Liddle, 2007).
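For clarity, the model-selection rule can be written in a few lines; the helper names below are illustrative only.

```python
# Sketch of the selection rule described above: Delta BIC = BIC_COMPT - BIC_BB+BB,
# with |Delta BIC| >= 12 (Bayes factor ~400, ~3 sigma) required to prefer a model.
import math


def bic(cstat, n_free_params, n_data_points):
    return cstat + n_free_params * math.log(n_data_points)


def preferred_model(bic_compt, bic_bbbb, threshold=12.0):
    delta = bic_compt - bic_bbbb
    if delta <= -threshold:
        return "COMPT"
    if delta >= threshold:
        return "BB+BB"
    return "inconclusive"
```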
We only report the parameters from the COMPT and BB+BB model fits to our data, as they fit the data better than the PL, OTTB and BB models. We compare the BIC values between the COMPT and BB+BB model fits by defining \(\Delta\)BIC = BIC\({}_{\rm COMPT}\) - BIC\({}_{\rm BB+BB}\), where a BB+BB model is the preferred fit to the burst spectrum when \(\Delta\)BIC \(\geq\) 12, and a COMPT model is the preferred fit if \(\Delta\)BIC \(\leq\)\(-\)12, at 3-\(\sigma\) significance. The duration and spectral parameters from fitting the candidates using the aforementioned models, are shown in Table 1.
### Localization and Burst History
In the following sections we discuss the time-history and localization of the two burst outliers detected in January 2022. The most recent event (Burst 1; see Table 1) was localized to a Right Ascension (RA) and Declination (\(\delta\)) of 294.2\({}^{\circ}\) and 23.8\({}^{\circ}\) respectively, within the 1-\(\sigma\) error (3.6\({}^{\circ}\)) of the known position of SGR J1935+2154 (RA: 293.7\({}^{\circ}\), \(\delta\): 21.9\({}^{\circ}\) (Israel et al., 2016)). Similarly, the other event (Burst 2, which occurred around 3.8 days earlier than Burst 1) was localized to RA: 291.8\({}^{\circ}\), \(\delta\): 24.5\({}^{\circ}\), with an error of 4.5\({}^{\circ}\) (1-\(\sigma\)), 3.3\({}^{\circ}\) from the SGR J1935+2154 position. Both events are consistent, within their errors, with an origin from SGR J1935; they occurred during a heightened period of burst activity from the source, detected by multiple instruments (e.g., Roberts & Fermi GBM Team (2022); Ridnaia et al. (2022); Kozyrev et al. (2022)). We used a Bayesian block algorithm, similar to that used in Lin et al. (2020), to determine the duration of each burst (\(\tau_{BB}\)), shown in Table 1.
## 3 Results
The time-integrated COMPT and BB+BB spectral fits for Burst 1 and Burst 2 are presented in Table 1, noting that the spectral analysis for Burst 1 was performed separately for the whole event (which includes the faint, weak emission), as well as solely for the brighter, second emission episode. The fits were performed over the energy range \(10-1000\) keV, though most burst counts above background were at energies \(\lesssim 100\) keV. We find that a COMPT model fit to the spectrum is preferred for both candidates.
### Sgr J1935 \(E_{\rm p}\) Oscillation Candidates
The best example of fluctuations in \(E_{\rm p}\) is Burst 1, which triggered _Fermi_/GBM at 14:09:38.94 UTC on January 16\({}^{th}\), 2022, and subsequently assigned the burst number, bn220116590. The burst appears in the data as a faint pulse followed about 100 ms later by a very bright pulse, shown in the left panel of Fig. 1, with a total duration of 693 ms. The time-integrated spectral results for both the whole burst and the brighter, second emission episode are shown separately in Table 1. We use NaI detectors 0, 1, 2, 3 and 5, as these all have good detector-source angles (\(\leq\)60\({}^{\circ}\)). Upon fitting the brighter, second pulse, we find that the overall \(E_{\rm p}\) has a soft-hard-soft trend over that entire emission duration. When using finer temporal binning, the spectrum suggests possible periodic perturbations in \(E_{\rm p}\) (right panel of Fig. 1). Using the distance to the supernova remnant G57.2+0.8 (purported to be the host of SGR J1935), of 9 kpc (Pavlovic et al., 2013; Zhong et al., 2020), we derive E\({}_{iso}\) and L\({}_{iso}\) values of 3.9\(\times\)10\({}^{40}\) erg and 1.9\(\times\)10\({}^{41}\) erg/s, respectively for the second, 200 ms emission episode.
The next best example of an event with fluctuations in \(E_{\rm p}\) is Burst 2, which triggered _Fermi_/GBM at 19:58:04.04 UTC on January 12\({}^{th}\), 2022, and was subsequently assigned the burst number, bn22012832. Burst 2 appears as a bright burst with a duration of 254 ms. We use NaI detectors 0, 1, 3 and 5, as these all had good detector-source angles (\(<\)60\({}^{\circ}\)). When fitting a COMPT model to the spectrum, the general \(E_{\rm p}\) behavior of the burst is a quick rise followed by a slow decay, with weak perturbations on a similar timescale to that observed in Burst 1 (see Fig. 1). The time-integrated spectral results for this burst are shown in Table 1. Using the distance to SGR J1935 of 9 kpc, we derive E\({}_{iso}\) and L\({}_{iso}\) values of 2.2\(\times\)10\({}^{40}\) erg and 8.8\(\times\)10\({}^{40}\) erg/s, respectively for this 254 ms event.
Comparing both burst spectra, the burst spectrum of Burst 1 is found to have a clear ingress and egress in \(E_{\rm p}\) overall (i.e., the envelope), peaking at around 50 ms after the onset of the second emission episode (25 ms in the right panel of Fig. 1)2. This is markedly different from the flatter overall \(E_{\rm p}\) behavior of Burst 2 and likely implies different viewing
geometries of the source environment. While a BB+BB model was not found to be the preferred fit to either burst spectrum, we calculate the BB region sizes to try to put constraints on the flux tube geometry. For Burst 1, we find the BB regions to be 283 \(\pm\) 52 km\({}^{2}\) and 31 \(\pm\) 3 km\({}^{2}\) for \(kT_{1}\) and \(kT_{2}\), respectively. Similarly, we calculate the BB regions for Burst 2 to be 359 \(\pm\) 125 km\({}^{2}\) and 57 \(\pm\) 8 km\({}^{2}\) for \(kT_{1}\) and \(kT_{2}\), respectively. The \(kT_{1}\) region size is used to provide a constraint on the diameter of the cross section of the activated flux tube, as described in Section 4.3.
The light curve and \(E_{\rm p}\) time traces in Figure 1 display similar character in their general shapes, or envelopes, suggesting that there is a significant correlation between the flux \(\mathcal{F}\) and \(E_{\rm p}\). To explore this, we binned the flux and \(E_{\rm p}\) data in 4 ms time intervals spanning each event from its rapid rise through its decay. A correlation of \(\mathcal{F}\,\propto\,E_{\rm p}^{3.2\pm 0.2}\) was found, with a Spearman rank order correlation coefficient and chance probability of 0.91 and 5.50\(\times 10^{-29}\), respectively. This differed considerably from the \(\mathcal{F}\,\propto\,E_{\rm p}^{2}\) correlation obtained by Roberts et al. (2021) for the GRB 200415A magnetar giant flare from the Sculptor galaxy, a correlation that is nominally a signature of Doppler beaming/boosting from ultra-relativistic outflows from a rotating star. Thus, we conclude here that the plasma that generated these two special bursts was likely moving only mildly relativistically along the active field lines, with the radiation being only somewhat anisotropic at each emission locale.
Figure 1: **Left:** The lightcurves of Burst 1 (bn220116590) and Burst 2 (bn22012832) plotted alongside each other within a 500 ms window starting at -130 ms. The trigger time for each event is the zero time. The first, weaker emission episode in Burst 1 lasts about 100 ms, starting at \(\sim\)125 ms. **Right:** A \(\sim\)175 ms window showing the \(E_{\rm p}\) behavior of Burst 1 (pink) and Burst 2 (purple). Burst 1 is shifted in time by -166 ms in order to show the similar \(E_{\rm p}\) oscillation period of 24 ms (42 Hz), highlighted by the dashed grey lines. The temporal binning for both panels is 4 ms.
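The correlation measurement can be reproduced schematically as follows; the arrays are synthetic stand-ins for the 4 ms binned flux and \(E_{\rm p}\) values, and the fit is a simple least-squares slope in log-log space.

```python
# Sketch of the flux--E_p correlation test: Spearman rank correlation plus a
# power-law index from a straight-line fit in log-log space. The data below are
# synthetic placeholders, not the measured 4 ms values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ep = rng.uniform(25.0, 45.0, size=50)                                 # keV (placeholder)
flux = 1e-6 * (ep / 30.0) ** 3.2 * rng.lognormal(0.0, 0.1, size=50)   # synthetic flux

rho, p_chance = stats.spearmanr(flux, ep)
slope, intercept, r, p, err = stats.linregress(np.log10(ep), np.log10(flux))
print(f"Spearman rho = {rho:.2f} (p = {p_chance:.1e}); flux ~ Ep^{slope:.1f} +/- {err:.1f}")
```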
### Periodicity Search for Burst 1
We performed three complementary searches of the time series of \(E_{\rm p}\) values of Burst 1 at both 2 ms and 4 ms resolution for statistical evidence of a periodic or quasi-periodic oscillation (QPO): two Fourier approaches and a more sophisticated Gaussian Process technique. As Burst 2 was found to be weaker than Burst 1, we did not search its time series for spectral QPOs (abbreviated hereafter QPSOs3). To search for periodic signals in the \(E_{\rm p}\) lightcurves, we follow the procedure of Huppenkothen et al. (2013). We first generate a periodogram from the \(E_{\rm p}\) time series, fit a power-law noise model to this periodogram, and subsequently find the highest outlier. The significance is calibrated using 1000 simulated periodograms based on posterior distributions of the power-law model parameters generated using Markov Chain Monte Carlo simulations. We assumed wide, flat priors for the power-law index and the logarithm of the power-law amplitude, and a normal prior with a mean of 2 and a width of 0.5 for the white-noise level. We find no statistical evidence to reject the null hypothesis, namely that the periodogram is consistent with stochastic variability following a power-law power spectrum, and thus find no evidence for a periodic signal in the data using this analysis.
Footnote 3: In order to differentiate between traditional QPOs and the spectral QPOs found in this study, we named the latter with the abbreviation, QPSO (quasi-periodic spectral oscillations) throughout the remainder of the paper.
Next, we search for QPSOs by comparing the power-law model for the periodogram to a model consisting of a power-law model and a QPSO - parametrized as a Lorentzian - through a Likelihood Ratio Test (LRT). Once again, the LRT is calibrated via 1000 simulations generated from an MCMC sample of the null hypothesis (the power-law model). Here, we find weak evidence in the 2 ms data (\(p=0.039\)) that the observations might not be drawn from the null hypothesis. As the \(E_{\rm p}\) values are spectral model parameters associated with uncertainties, and because Fourier periodograms by default do not take uncertainties into account, we check the effects of incorporating the uncertainties through simulations. Here, for each point in the time series, we draw from a normal distribution with the mean given by the best-fit \(E_{\rm p}\) value, and the standard deviation given by the \(1\sigma\) parameter uncertainties. In this way, we draw 1000 time series. Subsequently, we compute the LRT between the models with and without a QPSO for each simulated time series, and compare the resulting distribution to the distribution generated from the MCMC simulations of only the power-law model. We find that, overall, while not exactly the same, the LRT of simulations generated from the best-fit values and their uncertainties have a similar distribution to the LRTs generated from the power-law model
Figure 2: Searching for (quasi-)periodicities in \(E_{\rm p}\) with a Gaussian Process-based model. **Left:** E\({}_{\rm p}\) values with uncertainties (black) as a function of time, along with posterior draws of the skew-Gaussian mean function (green) and posterior draws of the combined model of skew-Gaussian and damped random walk in orange. Overall, this model, containing only aperiodic stochastic variability on top of the overall skew-Gaussian trend, fits the \(E_{\rm p}\) very well. **Middle:** as in the left panel, but for a model where the damped random walk was replaced by a QPSO, parametrized as a stochastically driven, damped harmonic oscillator. This model, too, describes the data adequately well. The resulting Bayes factor does not allow us to favour one model over the other. **Right:** the posterior probability density for the period of the QPSO in the model shown in the middle panel, showing that the time series constrains the QPSO period well.
only, with no QPSO present. The periodogram of the best-fit values is a mild outlier (\(p\sim 0.04\)) with respect to both distributions, and thus we conclude that there is weak evidence for a rejection of the null hypothesis that the periodogram was generated by pure noise.
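A stripped-down version of this Fourier-domain comparison is sketched below: a power-law noise model and a power-law plus Lorentzian (QPSO) model are fit to the \(E_{\rm p}\) periodogram through the Whittle likelihood, and their likelihood-ratio statistic is formed. The \(E_{\rm p}\) series shown is a synthetic placeholder, and in the actual analysis the statistic is calibrated against MCMC-simulated periodograms as described above rather than interpreted directly.

```python
# Minimal sketch of the likelihood-ratio test described above; the E_p series is
# a synthetic placeholder and the LRT value must be calibrated with simulations.
import numpy as np
from scipy.optimize import minimize

dt = 0.004                                        # 4 ms bins
rng = np.random.default_rng(1)
ep = 35.0 + rng.normal(0.0, 2.0, size=64)         # placeholder E_p series (keV)

freq = np.fft.rfftfreq(ep.size, d=dt)[1:]         # drop the zero frequency
power = np.abs(np.fft.rfft(ep - ep.mean()))[1:] ** 2


def pl_model(f, amp, index, noise):
    return amp * f ** (-index) + noise


def qpo_model(f, amp, index, noise, lor_amp, f0, width):
    lorentzian = lor_amp * (width / 2.0) / np.pi / ((f - f0) ** 2 + (width / 2.0) ** 2)
    return pl_model(f, amp, index, noise) + lorentzian


def neg_whittle(params, model):
    m = model(freq, *params)
    if np.any(m <= 0.0):
        return np.inf
    return np.sum(np.log(m) + power / m)          # Whittle log-likelihood (up to a constant)


fit_pl = minimize(neg_whittle, x0=[1.0, 1.0, power.mean()],
                  args=(pl_model,), method="Nelder-Mead")
fit_qpo = minimize(neg_whittle, x0=[1.0, 1.0, power.mean(), 1.0, 42.0, 5.0],
                   args=(qpo_model,), method="Nelder-Mead")
print(f"LRT = {2.0 * (fit_pl.fun - fit_qpo.fun):.2f}")
```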
Fourier-based methods have a number of shortcomings in this context. Firstly, they assume a stationary process, which the \(E_{\rm p}\) time series of a magnetar burst is not, and which is known to bias the significance of QPO detections (Hubner et al., 2022). Secondly, as explained above, they cannot robustly take into account the uncertainties on the \(E_{\rm p}\) value. Finally, null hypothesis tests can only ever reject a null hypothesis, and thus cannot yield direct evidence _for_ the detection of a QPSO. With this in mind, we implement the Gaussian Process-based method recently introduced in Hubner et al. (2022). Gaussian Processes encompass a class of time domain-based models that addresses many of the above shortcomings. They enable direct (Bayesian) model comparison via comparison of model evidences in the form of Bayes factors. They can natively include non-stationarity via mean functions, and they include prescriptions to take into account data uncertainties. We choose a skew-Gaussian function as a mean model to account for the broad trend in the time series, and implement three different hypotheses for the variability seen on shorter timescales than that trend: a damped random walk in the \(E_{\rm p}\) variate to parametrize stochastic, aperiodic variability; a stochastically driven, damped harmonic oscillator as a QPSO model (corresponding to a Lorentzian in the Fourier domain); and a model combining both damped random walk and Lorentzian.
For all models, we implement wide, uninformative priors as described in Hubner et al. (2022), sample the posterior of each model via Dynamic Nested Sampling (Speagle, 2020), and finally compute Bayes factors to compare all models. Unlike null hypothesis tests, Bayes factors directly compare two models to each other, and thus can be used to yield evidence for or against a given model. We find that the QPSO-only and the red noise-only models produce very similar evidences, \(\log(B)=0.018\) for both 2 ms and 4 ms time series. This is smaller than the uncertainties in the calculation of the evidences \(\sigma_{B}\sim 0.2\), and both models thus describe the data equally well. We note that both are mildly preferred over the model containing both red noise and a QPSO, \(\log(B)=2\), likely owing to the increase in model parameters (and thus prior volume) of the more complex model. In Figure 2, we present the 2 ms time series along with posterior draws from the model containing only the mean function and red noise, and for the model containing only the mean function and a QPSO. We find that indeed, both are able to describe the observed data well. We also note, however, that the posterior probability density for the QPSO's period parameter is narrow and unimodal, \(P=0.02415^{+0.0023}_{-0.0018}\) seconds, suggesting that the QPSO is well-constrained.
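To illustrate the QPSO branch of this comparison, the sketch below evaluates the likelihood of a stochastically driven, damped harmonic oscillator model for a (detrended) \(E_{\rm p}\) series; the use of the celerite2 package and the placeholder data are assumptions for illustration only, and the published analysis instead samples full posteriors with nested sampling and compares evidences.

```python
# Hedged sketch of the QPSO (stochastically driven, damped harmonic oscillator)
# Gaussian-Process model; the skew-Gaussian mean is assumed to have been
# subtracted already, and the data and hyperparameters below are placeholders.
import numpy as np
import celerite2
from celerite2 import terms

t = np.arange(64) * 0.002                          # 2 ms sampling (s)
rng = np.random.default_rng(2)
ep_detrended = rng.normal(0.0, 2.0, size=t.size)   # placeholder residual E_p (keV)
ep_err = np.full(t.size, 1.5)                      # placeholder 1-sigma errors (keV)


def sho_log_like(period_s, quality=10.0, amplitude=4.0):
    kernel = terms.SHOTerm(S0=amplitude, w0=2.0 * np.pi / period_s, Q=quality)
    gp = celerite2.GaussianProcess(kernel, mean=0.0)
    gp.compute(t, yerr=ep_err)
    return gp.log_likelihood(ep_detrended)


# Coarse grid over the oscillation period; the posterior in the text peaks near 24 ms.
periods = np.linspace(0.010, 0.060, 51)
best = max(periods, key=sho_log_like)
print(f"best-fit period on this grid: {best * 1e3:.1f} ms")
```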
## 4 Discussion
To recap, the time-integrated spectra of the two selected bright bursts from SGR J1935+2154 were found to prefer a non-thermal fit, and values of \(E_{\rm iso}\) and \(L_{\rm iso}\) for these two transients were comparable to those of typical energetic magnetar bursts. Our time-resolved analysis of \(E_{\rm p}\) in the brighter event (Burst 1) uncovered suggestive evidence of \(E_{\rm p}\) fluctuations, in contrast to its lightcurve morphology, which does not exhibit traditional QPO signatures. Careful and thorough analysis of the periodicity in these \(E_{\rm p}\) oscillations uncovered a well-constrained, 24 ms (i.e., 42 Hz) QPSO, the first time such possible periodic spectral fluctuations have been observed. Here we address some nuances of the QPSO analysis, summarize prior evidence for light curve QPOs in bursts, and deliver a theoretical interpretation for what could generate QPSOs in these two events.
### Bayes Factors in QPSO Probes
It is important to mention that there are several caveats concerning the Bayes factors adopted in the \(E_{\rm p}\) oscillation analysis. First, the models that were compared were purely empirical choices, and in the absence of a physical model capable of generating the data we see, were our only option for searching for QPSOs in this kind of data. The damped random walk model makes strong assumptions about the data-generating process; it is worth bearing in mind that its standard assumptions originate in studies of black holes, not magnetars. We expect that a more physically-motivated model for generating the variability would render the Bayes factor a more reliable metric for model selection. Second, Bayes factors are known to be very sensitive to prior choices. Here we set wide, uninformative priors on the model parameters, so that our lack of physical knowledge of the data-generating process likely impacts the significance of the result. Third, a statistically sound analysis should rely on more than a single number. Here, we jointly considered the evidence from both the Fourier-based analysis, which yielded a mild outlier, and the Gaussian Process analysis, which yielded an inconclusive Bayes factor, but a large _effect size_ in the form of a narrow, well-constrained posterior on the QPSO period. For a classical noise process, we would expect the period posterior to largely reproduce the wide, flat, uninformative prior rather than be heavily concentrated and unimodal, as is seen here.
Finally, it is important to note the impact of chosen priors on the models to be compared, which in the Gaussian approach were the damped random walk and the Lorentzian QPSO. In the analysis above, we have considered the Bayes factor in isolation, which in practice is equivalent to assigning equal probability to both the damped random walk and the Lorentzian model. If QPSOs are physically expected, as discussed at length in Section 4.3, then priors change and the confidence in the existence of the 24 ms QPSO in Burst 1 rises accordingly. The combination of these considerations leads us to conclude that this candidate discovery is worth reporting, especially in the context of the observation and analysis of future bright bursts, where improved count statistics and longer durations may yield more confident detections.
### QPOs in Magnetar Bursts
Quasi-periodic oscillations with frequencies of \(\sim\)18-625 Hz have been identified in light curves from Galactic magnetar giant flares from SGR 1806-20 and SGR 1900+14 (Strohmayer and Watts, 2005, 2006; Huppenkothen et al., 2014; Miller et al., 2019). Recently, QPOs have also been identified in the tail (7 to 160 ms) of extragalactic magnetar giant flare GRB 200415A (183\(\pm\)20 Hz at 2.5\(\sigma\) significance, p-value of 2.3\(\times 10^{-2}\); see Roberts et al., 2021), and during the peak (3 ms) of GRB 200415A (2132 and 4250 Hz; see Castro-Tirado et al., 2021). Huppenkothen et al. (2014c) identified a weak QPO signal centered at \(\sim\)260 Hz in an SGR J1550-5418 burst, and more significant QPOs centered at \(\sim\)93 and \(\sim\)127 Hz when using stacking of the same data. A QPO centered at \(\sim\)57 Hz was also found by stacking 30 short, individual bursts from SGR 1806-20 (Huppenkothen et al., 2014a). More recently, a 40 Hz QPO at 3.4\(\sigma\) (p-value of 2.9\(\times 10^{-4}\)) was reported from SGR J1935+2154 in data from the Hard X-ray Modulation Telescope (HXMT; an instrument on Insight) over an energy interval of 18-50 keV (Li et al., 2022). This unusual burst was observed contemporaneously with FRB 200428 during the 2020 outburst, suggesting that some FRBs are related to strong oscillation processes in neutron stars.
All these QPOs were identified in the intensity of the lightcurve over frequencies predominantly from 10-2000 Hz. These are most likely associated with global torsional/axial oscillations of the neutron star, particularly in the lower frequency band below 150 Hz (Levin, 2007; Huppenkothen et al., 2014b; Miller et al., 2019). However, models to explain spectral fluctuations (QPSOs) at similar frequencies are lacking, likely primarily because such a phenomenon has not been reported before. Consequently, we now present an original theoretical interpretation of the QPSOs (\(E_{\rm p}\) oscillations) suggested by the spectro-temporal analysis above.
### Magnetar Burst Acoustics
The emission region for the highly luminous bursts necessarily is highly optically thick to Thomson scattering due to the high densities of the radiating charges (e.g., Lin et al., 2011, 2012). Thus the microphysical fluctuation (Thomson diffusion) timescales should be very small. In addition, the Alfven mode frequencies lie below the ion plasma frequency \(\omega_{p}=\sqrt{4\pi e^{2}n_{p}/m_{p}}\), which is likely \(\sim 10^{14}\,\)Hz for plasma number densities of \(n_{p}\sim 10^{22}\,\)cm\({}^{-3}\) that can be deduced from burst radiation efficiency arguments (Baring and Harding, 2007). While very low frequency (long wavelength) Alfven waves can be contemplated, the high plasma density effectively scatters and absorbs them. The result is a decoherence that indicates that Alfven modes do not drive the \(E_{\rm p}\) oscillations that are clearly a macro-scale phenomenon. The high opacity implies that the emission environment should approximate, to a considerable extent, a quasi-thermodynamic ensemble of photons, pairs and possibly also ions.
The natural manifestation of thermodynamic fluctuations in such a highly magnetic environment is via density ( \(\rho\,\)) perturbations along surface-to-surface magnetic flux tubes accompanied by pressure ( \(P\,\)) variations mediated by the gaseous equation of state (EOS), i.e. acoustic oscillations. Pressure changes/fluctuations will then generate varying values for the observed photon \(E_{\rm p}\) value. Generally, the anticipated chaotic nature of the burst region should preclude clear oscillation signatures if the emission zone straddles many field lines. Yet, occasionally, the activated flux tube could possess a range of small cross sectional areas \(A\), and then pressure wave signatures might not be totally obscured by the environmental chaos. The radiating plasma "bounces" between flux tube footpoints, with adiabatic expansion and compression along the tube seeding the \(E_{\rm p}\) oscillations. We explore here the implications of this picture.
A fairly "constrained" magnetic flux tube with a small cross section can serve as an acoustic cavity for the burst zone, so that natural oscillations can be permitted along its length on timescales of arclength \(\mathcal{S}\) of the tube divided by the sound speed \(c_{s}\). This should be commensurate with the favored \(24\,\)ms period in the QPO analysis of the \(E_{\rm p}\) variability. Observe, that the discussion above indicated that the historical explanations of QPOs in the light curves of magnetar giant flares (e.g., Strohmayer and Watts, 2005, 2006) have considered sub-surface seismic modes. In
principle, these could provide a driver for the \(E_{\rm p}\) fluctuations. Yet, any periodic imprint of sub-surface drivers for burst activation is generally obscured by the high opacity of the flux tube, _unless_ the seismic mode frequency approximately coincides with the natural frequency of the acoustic cavity, in which case the sub-surface driving is resonant. This may or may not be the case for the bursts studied here.
Returning to the acoustics, since the radiation efficiency of the burst region is likely small, and the photon peak energies are well below the electron rest mass energy, the system pressure is putatively dominated by the gas contribution. The sound speed can then be _estimated_ using the formalism provided by Synge (1957) for a 3D relativistic Maxwell-Boltzmann distribution of a single species. For the purposes of this discussion, we will presume that \(\,e^{\pm}\,\) pairs constitute the gas. Note that the pair distributions strictly should be treated as one-dimensional due to rampant cyclotron/synchrotron cooling perpendicular to the strong magnetic fields. Yet addressing this nuance via the introduction of 1D Maxwellians does not qualitatively alter the conclusions drawn here.
For relativistic pairs, the sound speed is expressed in Eq. (316) of Synge (1957) in terms of modified Bessel functions whose argument is \(\,1/\Theta\,\), where \(\,\Theta=kT/m_{e}c^{2}\,\) is the dimensionless temperature. The observed values of \(\,E_{\rm p}\,\) in the 30-35 keV range (see Figs. 1 and 2) for the photons suggest \(\,kT\sim 10\,\)keV for the temperature of the photon-pair conglomerate. This translates to \(\,\Theta\sim 0.02\,\), for which the sound speed approximately satisfies the familiar non-relativistic expectation \(\,c_{s}^{2}/c^{2}\approx 5\Theta/3\,\) (i.e. \(\,\gamma P/\rho\,\) for adiabatic index \(\,\gamma=5/3\,\)). It then follows that the 3D sound speed realizes \(\,c_{s}\approx 0.176c\,\). The addition of moderate radiation pressure will increase this somewhat, but it will still fall well short of the relativistic EOS result \(\,c_{s}=c/\sqrt{3}\,\).
With this evaluation, the natural timescale for sound propagation over a neutron star radius is \(\,R_{\rm NS}/c_{s}\sim 0.18\,\)ms for pair plasma (with \(\,R_{\rm NS}=10^{6}\,\)cm). Accordingly, a \(\,24\,\)ms fluctuation might signal acoustic propagation on a flux tube of length around \(\,\mathcal{S}\sim 130R_{\rm NS}\,\). The length might be somewhat smaller because of the adiabatic cooling along the flux tube that lowers both \(\,\Theta\,\) and \(\,c_{s}\,\) away from its footpoints at the stellar surface. Given magnetic flux conservation along such tubes, the cross sectional area \(A\) rises at high altitudes and inversely as the field strength \(B\), so that the plasma density scales roughly as \(n_{p}\propto A^{-1}\propto B\) as it flows along a flux tube. The obvious deliverable here is that if a flux tube with a relatively small cross sectional area constitutes the active burst zone, then observations of \(E_{\rm p}\) fluctuations can provide a measure of the flux tube length. The arclength \(\,\mathcal{S}\,\) of a dipolar field line from footpoint to footpoint in flat spacetime geometry is stated in Eq. (23) of Wadiasingh et al. (2018). This can easily be expressed in terms of the footpoint colatitude \(\,\theta_{\rm f}\,\), and in the domain of polar field lines, \(\,\theta_{\rm f}\ll 1\,\) yields the approximation \(\,\mathcal{S}\approx 2.76R_{\rm NS}/\theta_{\rm f}^{2}\,\). This dipolar result would indicate footpoint colatitudes \(\,\theta_{\rm f}\sim 8^{\circ}\,\) for a \(\,24\,\)ms fluctuation. This estimate increases somewhat (i.e. moving away from the pole) when twists are introduced to modify the field morphology, thereby lengthening field loops (profoundly near the pole: Hu et al., 2022). This \(\,\theta_{\rm f}\,\) estimate for an activated flux tube is broadly consistent with those obtained in axisymmetric plasma simulations of twisted magnetospheres (e.g., Chen & Beloborodov, 2017). We note that if ions are abundant in the burst zone, then the sound speed will drop considerably, thereby decreasing the inferred flux tube length \(\,\mathcal{S}\,\) and moving its locale somewhat towards the magnetic equator.
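These order-of-magnitude estimates are straightforward to check numerically; the short script below assumes a pure \(e^{+}e^{-}\) plasma at \(kT\sim 10\) keV and the non-relativistic sound-speed limit quoted above.

```python
# Numerical check of the estimates above (pure pair plasma, c_s^2/c^2 ~ 5*Theta/3);
# constants are rounded and the 24 ms period is taken from the QPSO analysis.
import math

c = 3.0e10                                # cm/s
R_NS = 1.0e6                              # cm
theta = 10.0 / 511.0                      # Theta = kT / (m_e c^2) for kT ~ 10 keV
c_s = c * math.sqrt(5.0 * theta / 3.0)    # ~0.18c (full Synge expression gives ~0.176c)

print(f"sound crossing time over R_NS: {R_NS / c_s * 1e3:.2f} ms")    # ~0.18 ms
S = c_s * 0.024                            # flux-tube length for a 24 ms oscillation
print(f"flux-tube length: {S / R_NS:.0f} R_NS")                       # ~130 R_NS
theta_f = math.sqrt(2.76 * R_NS / S)       # dipolar footpoint colatitude
print(f"footpoint colatitude: {math.degrees(theta_f):.1f} deg")       # ~8 deg
```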
The two-blackbody emission region area estimates of \(\,\sim 280-360\,\)km\({}^{2}\) for the \(\,T_{1}\sim 5\,\)keV cool Planck component (see Section 3.1) can be employed to estimate the typical cross sectional diameter \(\,d\,\) of the flux tube; the cool region most likely corresponds to the major portion of the tube's length \(\,\mathcal{S}\,\) centered on its equatorial apex. Assuming that the tube has an approximately circular cross section, its external observer-facing surface area is of order \(\,\pi\mathcal{S}\,d/2\,\), so that one deduces that the typical cross sectional diameter of the tube _near its apex_ is around \(\,1.4-1.8\,R_{\rm NS}\ll\mathcal{S}\,\), so that the flux tube is appropriately slender. The corresponding tube diameter near its surface footpoints is a small fraction of \(\,R_{\rm NS}\,\) for both dipolar and twisted field geometries.
Thus, with this intriguing and serendipitous observation, if \(E_{\rm p}\) fluctuations can be confirmed in bursts, they enable a path to discerning their emission locale. Yet the norm will be the circumstance where the active region possesses a large number of flux tubes of different lengths and spanning a larger cross sectional area \(\,A\,\), thereby providing destructive interference (noisiness) that obscures signature QPSO periods for acoustic oscillations. Thus, we expect that observations of \(E_{\rm p}\) oscillations will be rare in the magnetar burst database, as appears to be the case.
## 5 Summary
In this study, we have presented the possible discovery of fluctuations of the spectral peak energy \(E_{\rm p}\) from two bursts from SGR J1935+2154 during an outburst in early 2022. Time-integrated analysis of both bursts using _Fermi_/GBM data found that their spectra were non-thermal. Time-resolved analysis of \(E_{\rm p}\) in both events provided initial evidence of \(E_{\rm p}\) oscillations (QPSOs) over the brightest part of each event, while their lightcurve morphology
presented no indication of classical QPOs. Careful periodicity analysis supported this QPSO hypothesis through the suggestion of a well-constrained, 24 ms (42 Hz) QPSO in one of the events (Burst 1). We conjecture that these heretofore unprecedented events can be explained by density and pressure perturbations propagating along a highly magnetized flux tube that serves as an acoustic cavity. This tube is nominally of a length of \(\mathcal{S}\lesssim 130\) neutron star radii for a purely pair plasma in the emission zone, though the \(\mathcal{S}\) estimate drops considerably if ions are present and thereby lower the sound speed.
Both Burst 1 and Burst 2 occurred within \(\sim 4\) days of each other, during the same outburst in January 2022. The spectroscopic and lightcurve data shown in Fig. 1 show tantalizingly similar properties for the bursts. This suggests, perhaps speculatively, that both events and their acoustic oscillations arise either along the same flux tube, or a pair of flux tubes of comparable dimensions anchored on similar magnetic field lines. This would then provide an interesting constraint on the activation timescales over which the environment of the magnetosphere of SGR J1935+2154 (and perhaps, magnetars in general), change.
Detailed time-resolved analyses of other magnetar bursts are encouraged in order to find more examples of acoustic oscillations from magnetar sources. However, we note that obtaining similar findings is difficult: a burst must be sufficiently bright and long to provide enough oscillation cycles with which to confidently identify similar or varied QPSO frequencies above the noise level. This and the expected rarity of the proposed mechanism may make such instances rare when searching magnetar burst databases. Additionally, we use this study to advocate for physically accurate null hypothesis models to validate the detection of any future QPOs and QPSOs, as the current literature and models do not describe magnetars sufficiently well.
O.J.R. gratefully acknowledges NASA funding through contract 80MSFC17M0022. M.G.B. acknowledges the generous support of the National Aeronautics and Space Administration through grants 80NSSC22K0777 and 80NSSC22K1576. D.H. is supported by the Women In Science Excel (WISE) programme of the Netherlands Organisation for Scientific Research (NWO). E.G. and Y.K. acknowledge the support from the Scientific and Technological Research Council of Turkey (TUBITAK project number 121F266).
|
2305.11131 | Parameterized Complexity of Equality MinCSP | We study the parameterized complexity of MinCSP for so-called equality
languages, i.e., for finite languages over an infinite domain such as
$\mathbb{N}$, where the relations are defined via first-order formulas whose
only predicate is $=$. This is an important class of languages that forms the
starting point of all study of infinite-domain CSPs under the commonly used
approach pioneered by Bodirsky, i.e., languages defined as reducts of finitely
bounded homogeneous structures. Moreover, MinCSP over equality languages forms
a natural class of optimisation problems in its own right, covering such
problems as Edge Multicut, Steiner Multicut and (under singleton expansion)
Edge Multiway Cut. We classify MinCSP$(\Gamma)$ for every finite equality
language $\Gamma$, under the natural parameter, as either FPT, W[1]-hard but
admitting a constant-factor FPT-approximation, or not admitting a
constant-factor FPT-approximation unless FPT=W[2]. In particular, we describe
an FPT case that slightly generalises Multicut, and show a constant-factor
FPT-approximation for Disjunctive Multicut, the generalisation of Multicut
where the ``cut requests'' come as disjunctions over $d = O(1)$ individual cut
requests $s_i \neq t_i$. We also consider singleton expansions of equality
languages, i.e., enriching an equality language with the capability for
assignment constraints $(x=i)$ for either finitely or infinitely many constants
$i \in \mathbb{N}$, and fully characterize the complexity of the resulting
MinCSP. | George Osipov, Magnus Wahlström | 2023-05-18T17:23:40Z | http://arxiv.org/abs/2305.11131v1 | # Parameterized Complexity of Equality MinCSP+
###### Abstract
We study the parameterized complexity of MinCSP for so-called _equality languages_, i.e., for finite languages over an infinite domain such as \(\mathbb{N}\), where the relations are defined via first-order formulas whose only predicate is \(=\). This is an important class of languages that forms the starting point of all study of infinite-domain CSPs under the commonly used approach pioneered by Bodirsky, i.e., languages defined as reducts of finitely bounded homogeneous structures. Moreover, MinCSP over equality languages forms a natural class of optimisation problems in its own right, covering such problems as Edge Multicut, Steiner Multicut and (under singleton expansion) Edge Multiway Cut. We classify MinCSP(\(\Gamma\)) for every finite equality language \(\Gamma\), under the natural parameter, as either FPT, W[1]-hard but admitting a constant-factor FPT-approximation, or not admitting a constant-factor FPT-approximation unless FPT=W[2]. In particular, we describe an FPT case that slightly generalises Multicut, and show a constant-factor FPT-approximation for Disjunctive Multicut, the generalisation of Multicut where the "cut requests" come as disjunctions over \(d=O(1)\) individual cut requests \(s_{i}\neq t_{i}\). We also consider _singleton expansions_ of equality languages, i.e., enriching an equality language with the capability for assignment constraints \((x=i)\), \(i\in\mathbb{N}\), for either a finite or infinitely many constants \(i\), and fully characterize the complexity of the resulting MinCSP.
###### Contents
* 1 Introduction
* 1.1 Related work
* 1.2 Our results
* 2 Preliminaries
* 3 Classifications for Equality Constraint Languages
* 3.1 Polynomial-Time Complexity of Equality CSP
* 3.2 Polynomial-Time Complexity of Equality MinCSP
* 3.3 Parameterized Complexity of Equality MinCSP
* 3.4 Approximation of Equality MinCSP
* 4 Reductions
* 4.1 Expressive Power of Some Relations
* 4.2 Hardness from Split Paired Cut
* 4.3 Hardness of ODD3 and NAE3
* 4.4 Reductions and Multicut Variants
* 5 Triple Multicut
* 6 Disjunctive and Steiner Multicut
* 6.1 Main Loop of the Disjunctive Multicut Algorithm
* 6.2 Simplification Procedure
* 6.2.1 Initial Phase
* 6.2.2 Random Covering of Shadow
* 6.3 An Improved Algorithm for Steiner Multicut
* 7 Singleton Expansion
* 7.1 The first step
* 7.2 Strictly negative languages
* 7.3 Constant languages
* 7.3.1 At most two singletons
* 7.3.2 At least three singletons
* 8 Discussion
Introduction
Let \(D\) be a fixed domain, and let \(\Gamma\) be a finite set of finitary relations over \(D\). \(\Gamma\) is referred to as a _constraint language_. A _constraint_ over \(\Gamma\) is a pair \((R,X)\), less formally written \(R(X)\), where \(R\in\Gamma\) is a relation of some arity \(r\) and \(X=(x_{1},\ldots,x_{r})\) is a tuple of variables. It is _satisfied_ by an assignment \(\alpha\) if \((\alpha(x_{1}),\ldots,\alpha(x_{r}))\in R\). For a constraint language \(\Gamma\), the _constraint satisfaction problem_ over \(\Gamma\), CSP\((\Gamma)\), is the problem where an instance \(I\) is a collection of constraints over \(\Gamma\), on some set of variables \(V\), and the question is if there is an assignment such that all constraints in \(I\) are satisfied. In the optimization variant MinCSP\((\Gamma)\), the input also contains an integer \(k\) and the question is whether there is an assignment such that all but at most \(k\) constraints are satisfied. Less formally, a constraint language \(\Gamma\) determines the "type of constraints" allowed in an instance of CSP\((\Gamma)\) or MinCSP\((\Gamma)\), and varying the constraint language defines problems of varying complexity (such as \(k\)-SAT, \(k\)-Colouring, \(st\)-Min Cut, etc.). After decades-long investigations, _dichotomy theorems_ have been established for these problems: for every constraint language over a finite domain, CSP\((\Gamma)\) and MinCSP\((\Gamma)\) are each either in P or NP-complete, and the characterizations are known [17, 46, 44, 36]. For fixed cases, such as the Boolean domain \(D=\{0,1\}\), _parameterized_ dichotomies are also known, characterizing every problem MinCSP\((\Gamma)\) as either FPT or W[1]-hard [33], and similarly for approximate FPT algorithms [13]. This work represents significant advancements in our understanding of tractable and intractable computational problems (classical or parameterized).
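As a small illustration of these definitions (not part of the formal development), the sketch below represents constraints as relation/variable-tuple pairs and computes the MinCSP cost of an assignment as the number of violated constraints; the relation names and the toy instance are ours.

```python
# Toy sketch of the definitions above: a constraint is a pair (R, X), with R given
# here as a predicate on value tuples; the MinCSP cost of an assignment is the
# number of constraints it violates. Names and the instance are illustrative.
def EQ(values):
    return values[0] == values[1]


def NEQ(values):
    return values[0] != values[1]


def violated(constraints, assignment):
    """Count constraints (R, X) not satisfied by the assignment."""
    return sum(1 for relation, variables in constraints
               if not relation(tuple(assignment[x] for x in variables)))


instance = [(EQ, ("x", "y")), (EQ, ("y", "z")), (NEQ, ("x", "z"))]
print(violated(instance, {"x": 0, "y": 0, "z": 1}))   # 1: only y = z is violated
```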
But as highlighted by Bodirsky [3, 9], there are also many problems from a range of application domains that do not lend themselves to a formulation in the above CSP framework, yet which can be formulated via CSPs over structures with _infinite_ domains. Unfortunately, CSPs with fixed templates over infinite domains are not as well-behaved as over finite domains; it is known that the problem CSP\((\Gamma)\) over an infinite domain can have any computational complexity (including being intermediate), making any dichotomy impossible [6, 9]. There are also questions of how an arbitrary infinite-domain relation would be represented. The approach used by Bodirsky, which is the standard approach for the study of infinite-domain CSPs, is to consider a language \(\Gamma\) as a _reduct of a finitely bounded homogeneous structure_. Less technically, consider a structure, for example \((\mathbb{Q},<)\) or \((\mathbb{Z},<)\), and let \(\Gamma\) be a finite language where every relation in \(\Gamma\) has a quantifier-free first-order definition over the structure; i.e., \(\Gamma\) is a _first-order reduct_ of the structure.1 For such languages a dichotomy is plausible, and many cases have been settled, including _temporal_ CSPs, i.e., first-order reducts of \((\mathbb{Q},<)\)[8]; _discrete temporal_ CSPs, i.e., first-order reducts of \((\mathbb{Z},<)\)[10]; CSPs over the universal random graph [12]; and many more.
Footnote 1: The definition can be assumed to be quantifier-free since these structures admit quantifier elimination.
Our goal is to study the parameterized complexity of MinCSPs over such structures. Many important problems in parameterized complexity, which are not well handled by CSP optimization frameworks over finite-domain CSPs, can be expressed very simply in this setting. For example, the MinCSP with domain \(\mathbb{Q}\) and the single relation \(<\) is equivalent to the Directed Feedback Arc Set problem, i.e., given a digraph \(D\) and an integer \(k\), find a set \(X\) of at most \(k\) arcs from \(D\) such that \(D-X\) is acyclic. (Here, the vertices of \(D\) become variables, the arcs constraints, and the topological order of \(D-X\) becomes an assignment which violates at most \(|X|\) constraints.) Other examples include Subset Directed Feedback Arc Set, which corresponds to MinCSP\((<,\leq)\), and Symmetric Directed Multicut which corresponds to MinCSP\((\leq,\neq)\). The former is another important FPT problem [22], while FPT status of the latter is open [27].
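As an illustration of the first correspondence, the following sketch (Python, illustrative only) treats every arc of a digraph as a soft \(<\)-constraint; an assignment is any numerical ordering of the vertices, and its cost is the number of back-arcs, i.e., the size of the feedback arc set induced by that ordering.

```python
from itertools import permutations

def mincsp_lt_cost(arcs, alpha):
    # each arc (u, v) is one soft constraint "u < v"; count the violated ones
    return sum(not (alpha[u] < alpha[v]) for (u, v) in arcs)

arcs = [("a", "b"), ("b", "c"), ("c", "a")]  # a directed triangle
best = min(
    mincsp_lt_cost(arcs, {v: i for i, v in enumerate(order)})
    for order in permutations("abc")
)
print(best)  # 1: every ordering of a directed 3-cycle leaves exactly one back-arc
```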
The structure we study in this paper is \((\mathbb{N},=)\). The relations definable over this structure are called _equality constraint languages_. Here, \(\mathbb{N}\) is an arbitrary, countably infinite domain; first-order reducts of \((\mathbb{N},=)\) are simply relations definable by a quantifier-free first-order formula whose only predicate is \(=\). Equivalently, relations in an equality language accept or reject an assignment to their arguments purely based on the partition that the assignment induces. Since every first-order formula is allowed to use equality in this framework, equality languages are contained in _every_ other class of languages studied in the framework. Hence, characterizing the complexity of equality languages is a prerequisite for studying any other structure.
Moreover, the setting also covers problems that are important in their own right, as it captures undirected _graph separation_ problems. In particular, (Vertex/Edge) Multicut is defined as follows. The input is a graph \(G\), an integer \(k\), and a set of _cut requests_\(\mathcal{T}\subseteq\binom{V(G)}{2}\), and the task is to find a set \(X\) of at most \(k\) vertices, respectively edges, such that for every cut request \(st\in\mathcal{T}\), vertices \(s\) and \(t\) are in different connected components in \(G-X\). Multicut is FPT parameterized by \(k\) -- a breakthrough result, settling a long-open question [43, 15]. As with the above examples, there appears to be no natural way of capturing Multicut as a finite-domain CSP optimization problem.2 However, it naturally corresponds to MinCSP\((=,\neq)\) over domain \(\mathbb{N}\), where edges correspond to soft \(=\)-constraints and cut requests to crisp \(\neq\)-constraints. Another classic problem is Multiway Cut, which is the special case of Multicut where the cut requests are \(\mathcal{T}=\binom{T}{2}\) for a set \(T\) of _terminal vertices_ in the graph. Multiway Cut was among the first graph separation problems shown to be FPT [42], and remains a relevant problem, e.g., for the question of _polynomial kernelization_[37, 45]. While Multiway Cut is not directly captured by an equality CSP, it is captured by the _singleton expansion_ of the setting, i.e., adding "assignment constraints" (see later).
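The Multicut correspondence mentioned above can be sketched directly (Python, illustrative names): edges become soft equality constraints and cut requests become crisp disequalities.

```python
def multicut_to_mincsp(edges, cut_requests):
    # deleting a soft equality corresponds to deleting the edge; the connected
    # components of the remaining graph give a satisfying assignment
    soft = [("eq", u, v) for (u, v) in edges]
    crisp = [("neq", s, t) for (s, t) in cut_requests]
    return soft, crisp

soft, crisp = multicut_to_mincsp(
    edges=[("s", "x"), ("x", "t")],
    cut_requests=[("s", "t")],
)
print(len(soft), len(crisp))  # 2 soft equalities, 1 crisp disequality
```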
We characterize \(\textsc{MinCSP}(\Gamma)\) for any equality language \(\Gamma\) as being in P or NP-hard, FPT or W[1]-hard, and in terms of admitting constant-factor FPT approximations. We also characterize the complexity for singleton expansions of \(\Gamma\).
### Related work
Bodirsky and Kara [7] characterized \(\textsc{CSP}(\Gamma)\) as in P or NP-hard for every equality language \(\Gamma\). Bodirsky, Chen and Pinsker [5] characterized the structure of equality languages up to pp-definitions (_primitive positive definitions_, see Section 2); these are too coarse to preserve the parameterized complexity of a problem, but their results are very useful as a guide to our search. For much more material on CSPs over infinite domains, see Bodirsky [4]. Singleton expansions (under different names) are discussed by Bodirsky [4] and Jonsson [29]. We have taken the term from Barto et al. [1].
Many variations on cut problems have been considered, and have been particularly important in parameterized complexity [24] (see also [23]). We cover Multicut, Multiway Cut and _Steiner cut_. Given a graph \(G\) and a set of terminals \(T\subseteq V(G)\), a Steiner cut is an edge cut in \(G\) that separates \(T\), i.e., a cut \(Z\) such that some pair of vertices in \(T\) is disconnected in \(G-Z\). Steiner Cut is the problem of finding a minimum Steiner cut. This can clearly be solved in polynomial time; in fact, using advanced methods, it can even be computed in _near-linear_ time [38, 21]. Steiner Multicut is the generalization where the input contains a set \(\mathcal{T}=\{T_{1},\ldots,T_{t}\}\) of terminal sets and the task is to find a smallest-possible cut that separates every set \(T_{i}\). Clearly, this is NP-hard if \(t\geq 3\). Bringmann et al. [16] considered parameterized variations of this and showed, among other results, that Edge Steiner Multicut is FPT even for terminal sets \(T_{i}\) of unbounded size, if the parameter includes both \(t\) and the cut size \(k\). On the other hand, parameterized by \(k\) alone, Steiner Multicut is W[1]-hard for terminal sets of size \(|T_{i}|=3\).
Other parameterized CSP dichotomies directly relevant to our work are the dichotomies for Boolean MinCSP as having constant factor FPT-approximations or being W[1]-hard to approximate [13] (with additional results in a later preprint [14]) and the recent FPT/W[1]-hardness dichotomy [33].
The area of FPT approximations has seen significant activity in recent years, especially regarding lower bounds on FPT approximations [28, 30, 2, 40]. In particular, we will need that there is no constant-factor FPT-approximation for Nearest Codeword in Boolean codes unless FPT=W[1][14, 2], or for Hitting Set unless FPT=W[2][40]. Lokshtanov et al. [41] considered fast FPT-approximations for problems whose FPT algorithms are slow; in particular, our result for Steiner Multicut builds on their algorithm giving an \(O^{*}(2^{O(k)})\)-time 2-approximation for Vertex Multicut.
### Our results
We study the classical and parameterized complexity of MinCSP(\(\Gamma\)) for every finite equality language \(\Gamma\), as well as for singleton expansions of equality languages. We consider both exact FPT-algorithms and constant-factor FPT-approximations. We give an overview of our results here; for details, see the body of the paper.
Unsurprisingly, MinCSP(\(\Gamma\)) for an equality language \(\Gamma\) is NP-hard except in trivial cases, since MinCSP(\(=,\neq\)) already corresponds to Edge Multicut. Specifically, MinCSP(\(\Gamma\)) is in P if \(\Gamma\) is _constant_, in which case every relation in \(\Gamma\) contains the tuple \((1,\ldots,1)\), or _strictly negative_, in which case every relation in \(\Gamma\) of arity \(r\) contains the tuple \((1,\ldots,r)\), i.e., every tuple with pairwise distinct values (proper definitions of the terms are found in Section 3). In all other cases, MinCSP(\(\Gamma\)) is NP-hard.
**Theorem 1** (Theorem 18 and Corollary 32).: _Let \(\Gamma\) be an equality constraint language. Then MinCSP(\(\Gamma\)) is in P if \(\Gamma\) is constant or strictly negative. Otherwise, MinCSP(\(\Gamma\)) is NP-hard and has no constant-factor approximation under the Unique Games Conjecture._
For our FPT results, we introduce the following generalization of Vertex Multicut.
Vertex Multicut with Deletable Triples (aka Triple Multicut)
Instance: A graph \(G\), a collection \(\mathcal{T}\subseteq\binom{V(G)}{3}\) of vertex triples, and an integer \(k\).
Parameter: \(k\).
Question: Are there subsets \(Z_{V}\subseteq V(G)\) and \(Z_{\mathcal{T}}\subseteq\mathcal{T}\) such that \(|Z_{V}|+|Z_{\mathcal{T}}|\leq k\) and every connected component of \(G-Z_{V}\) intersects every triple in \(\mathcal{T}\setminus Z_{\mathcal{T}}\) in at most one vertex?
Note that this is a proper generalization of Vertex Multicut. On the one hand, for any cut request \(uv\) in a Vertex Multicut instance we can create a triple \(uvz\) for an auxiliary vertex \(z\) not connected to the rest of the graph. On the other hand, there is no apparent way to implement triples \(uvw\) in Vertex Multicut with the condition that the whole triple can be ignored at unit cost. We show that Triple Multicut is FPT.
**Theorem 2** (Theorem 42).: Triple Multicut_is fixed-parameter tractable._
For the FPT cases of MinCSP\((\Gamma)\), let \(\mathtt{NEQ}_{3}\) be the ternary relation which contains all tuples with three distinct values, and let a _split_ constraint be a constraint \(R\) of some arity \(p+q\) for \(p,q\geq 0\), defined (up to argument order) by
\[R(x_{1},\ldots,x_{p},y_{1},\ldots,y_{q})\equiv\bigwedge_{i,j\in[p]}(x_{i}=x_{j} )\wedge\bigwedge_{i\in[p],j\in[q]}(x_{i}\neq y_{j}).\]
We note that MinCSP\((\Gamma)\) with split constraints reduces to Vertex Multicut. A split constraint \(R(u_{1},\ldots,u_{p},v_{1},\ldots,v_{q})\) can be represented by introducing a new vertex \(c\), adding edges \(cu_{i}\) for every \(i\in[p]\) and cut requests \(cv_{j}\) for every \(j\in[q]\). Furthermore, a constraint \(\mathtt{NEQ}_{3}(u,v,w)\) naturally corresponds to a triple \(uvw\in\mathcal{T}\). Thus MinCSP\((\Gamma)\) reduces to Triple Multicut if every relation is either split or \(\mathtt{NEQ}_{3}\), and hence is FPT.
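A structural sketch of this gadget (Python, with hypothetical names): it only records the graph material produced (gadget vertices, edges, cut requests, and triples) and leaves the soft/crisp bookkeeping aside.

```python
# Each split constraint (u_1..u_p ; v_1..v_q) gets a fresh vertex c with edges c-u_i
# and cut requests c-v_j; each NEQ3(u, v, w) constraint becomes the triple {u, v, w}.

def split_and_neq3_to_graph(split_constraints, neq3_constraints):
    edges, requests = [], []
    for idx, (us, vs) in enumerate(split_constraints):
        c = f"c{idx}"                        # fresh gadget vertex for this constraint
        edges += [(c, u) for u in us]        # forces c to be equal to every u_i
        requests += [(c, v) for v in vs]     # forces c to be different from every v_j
    triples = [tuple(t) for t in neq3_constraints]
    return edges, requests, triples

edges, requests, triples = split_and_neq3_to_graph(
    split_constraints=[(("x1", "x2"), ("y1",))],   # x1 = x2, x1 != y1, x2 != y1
    neq3_constraints=[("a", "b", "c")],
)
print(edges, requests, triples)
```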
**Theorem 3** (from Theorem 20).: _Let \(\Gamma\) be an equality constraint language that is not constant or strictly negative. Then MinCSP\((\Gamma)\) is FPT if every relation in \(\Gamma\) is either split or \(\mathtt{NEQ}_{3}\), and W[1]-hard otherwise._
Next, we describe the cases with constant-factor FPT-approximations. Again, we introduce a new problem to capture this. Let \(G\) be a graph. A subset \(L\subseteq\binom{V(G)}{2}\) of pairs is a _request list_, and a set of vertices \(X\subseteq V(G)\)_satisfies_\(L\) if there is a pair \(st\in L\) separated by \(X\). For a graph \(G\) and a collection of request lists \(\mathcal{L}\), let \(\operatorname{cost}(G,\mathcal{L})\) be the minimum size of a set \(X\subseteq V(G)\) that satisfies all lists in \(\mathcal{L}\).
Disjunctive Multicut
Instance: A graph \(G\), a collection \(\mathcal{L}\) of request lists, each of size at most \(d\), and an integer \(k\).
Parameter: \(k\).
Question: Is \(\operatorname{cost}(G,\mathcal{L})\leq k\)?
Note that Steiner Multicut is the special case of Disjunctive Multicut where each request list is \(L=\binom{T_{i}}{2}\) for some terminal set \(T_{i}\). Our main algorithmic contribution is an FPT-approximation for Disjunctive Multicut.
**Theorem 4** (Theorems 44 and 37).: _Let \(d\in\mathbb{N}\) be a constant. Disjunctive Multicut with request lists of length at most \(d\) has a constant-factor FPT-approximation parameterized by \(k\). Steiner Multicut where every terminal set \(T_{i}\) has \(|T_{i}|\leq d\) has an \(O^{*}(2^{O(k)})\)-time 2-approximation._
This precisely describes the FPT-approximable cases of MinCSP\((\Gamma)\): For every equality constraint language \(\Gamma\) such that CSP\((\Gamma)\) is in P, either MinCSP\((\Gamma)\) reduces to Disjunctive Multicut in an immediate way (up to a constant-factor approximation loss), implying a constant-factor FPT-approximation, or there is a cost-preserving reduction from Hitting Set to MinCSP\((\Gamma)\). We refer to the latter as MinCSP\((\Gamma)\) being Hitting Set-hard.
**Theorem 5** (from Theorem 33).: _Let \(\Gamma\) be an equality constraint language such that CSP\((\Gamma)\) is in \(P\). Then either MinCSP\((\Gamma)\) reduces to Disjunctive Multicut and has a constant-factor FPT-approximation, or MinCSP\((\Gamma)\) is Hitting Set-hard._
Singleton expansion.In addition to the above (main) results, we also investigate the effect of adding constants to an equality language motivated by the problem Multiway Cut. More precisely, for an equality language \(\Gamma\), we investigate the effect of adding some number of unary singleton relations \(\{(i)\}\) to \(\Gamma\). This is equivalent to allowing "assignment constraints" (\(x=i\)) in MinCSP\((\Gamma)\). We consider adding either a finite number of singletons, or every singleton relation. For an equality language \(\Gamma\) and an integer \(c\in\mathbb{N}\), \(c\geq 1\), we define \(\Gamma^{+}_{c}=\Gamma\cup\{(i)\mid i\in[c]\}\) as the language \(\Gamma\) with \(c\) different singletons added, and let \(\Gamma^{+}\) denote \(\Gamma\) with every singleton \(\{(i)\}\), \(i\in\mathbb{N}\) added. Edge Multiway Cut corresponds to MinCSP\((\Gamma^{+})\) over the language \(\Gamma=\{=\}\), and \(s\)-Edge Multiway Cut, the special case with \(s\) terminals, corresponds to MinCSP\((\Gamma^{+}_{s})\). By a _singleton expansion of \(\Gamma\)_ we refer to either the language \(\Gamma^{\prime}=\Gamma^{+}\) or \(\Gamma^{\prime}=\Gamma^{+}_{c}\) for some \(c\in\mathbb{N}\).
As the first step of the characterization, we observe that if \(\Gamma\) can express \(=\) and \(\neq\), then the singleton expansion adds no power, i.e., MinCSP\((\Gamma^{+})\) reduces back to MinCSP\((\Gamma)\) by introducing variables \(c_{1},\ldots,c_{m}\) for arbitrarily many constants, adding constraints \(c_{i}\neq c_{j}\) whenever \(i\neq j\), and using constraints \(x=c_{i}\) in place of assignments \(x=i\). For the rest of the characterization, we thus study the cases where \(\Gamma\) either cannot express equality or cannot express disequality. We defer the explicit characterization of the cases to the main text, but in summary we get the following result. We say that a language \(\Gamma\) is _positive conjunctive_ if every relation \(R\in\Gamma\) can be defined as a conjunction of clauses (\(x_{i}=x_{j}\)). In the below, _is equivalent to_ refers to there being cost-preserving reductions in both directions (see Section 2).
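The first step of the characterization, simulating assignment constraints with fresh variables that are pairwise forced apart, can be sketched as follows (Python, illustrative names).

```python
def simulate_singletons(assignment_constraints, num_constants):
    # fresh variables c_1, ..., c_m playing the role of the constants
    consts = [f"c{i}" for i in range(1, num_constants + 1)]
    # crisp disequalities force the constants to take pairwise distinct values
    crisp = [("neq", consts[i], consts[j])
             for i in range(num_constants) for j in range(i + 1, num_constants)]
    # each assignment constraint (x = i) becomes x = c_i with the same soft/crisp status
    translated = [("eq", x, consts[i - 1]) for (x, i) in assignment_constraints]
    return consts, crisp, translated

consts, crisp, translated = simulate_singletons([("x", 1), ("y", 2)], num_constants=2)
print(crisp)       # [('neq', 'c1', 'c2')]
print(translated)  # [('eq', 'x', 'c1'), ('eq', 'y', 'c2')]
```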
**Theorem 6**.: _Let \(\Gamma\) be an equality constraint language and let \(\Gamma^{\prime}\) be a singleton expansion of \(\Gamma\). Then one of the following cases applies._
* MinCSP(\(\Gamma^{\prime}\)) _is equivalent to_ MinCSP(\(\Gamma\))__
* MinCSP(\(\Gamma^{\prime}\)) _is trivial, i.e., always satisfiable_
* MinCSP(\(\Gamma^{\prime}\)) _is equivalent to the MinCSP over a singleton expansion of the empty language_ \(\Delta=\emptyset\)_, in which case_ MinCSP(\(\Gamma^{\prime}\)) _is in_ \(P\)__
* MinCSP(\(\Gamma^{\prime}\)) _is equivalent to_ MinCSP(\(\Delta\)) _for a Boolean language_ \(\Delta\)__
* \(\Gamma\) _is strictly negative,_ MinCSP(\(\Gamma^{\prime}\)) _is NP-hard but FPT and has a constant-factor approximation_
* MinCSP(\(\Gamma^{\prime}\)) _is equivalent to_ MinCSP(\(\Delta^{\prime}\)) _where_ \(\Delta^{\prime}\) _is a singleton expansion of a positive conjunctive language_ \(\Delta\)__
* MinCSP(\(\Gamma^{\prime}\)) _is_ Hitting Set_-hard,_ \(\Gamma\) _is Horn and_ CSP(\(\Gamma^{\prime}\)) _is in_ \(P\)__
* CSP(\(\Gamma^{\prime}\)) _is NP-hard_
Note the distinction between \(\Gamma\)_is positive conjunctive_ and MinCSP(\(\Gamma^{\prime}\)) _is equivalent to_ MinCSP(\(\Delta^{\prime}\)) _where_ \(\Delta\) _is positive conjunctive._ This distinction is the main subtlety of the result. Consider a relation \(R(x_{1},\ldots,x_{r})\equiv(|\{x_{1},\ldots,x_{r}\}|\neq r)\). For \(\{R\}_{c}^{+}\) with \(c<r\) there is never a need to use more than \(c\) distinct values in an assignment, hence \(R\) becomes "effectively trivial". But the language \(\{R\}_{c}^{+}\) for \(c\geq r\) is intractable. Finally, we note the cases of singleton expansions of a positive conjunctive language. In particular, every such case reduces to Multiway Cut up to a constant-factor loss.
**Theorem 7**.: _Let \(\Gamma\) be a positive conjunctive language and \(\Gamma^{\prime}\) a singleton expansion of \(\Gamma\) with at least three added singleton relations. Then MinCSP(\(\Gamma^{\prime}\)) is NP-hard but has a constant-factor approximation. Furthermore, MinCSP(\(\Gamma^{\prime}\)) is FPT if \(\Gamma\) is split, otherwise W[1]-hard._
Roadmap.Section 2 contains technical preliminaries. Section 3 properly defines equality constraint languages and gives the classification with proofs deferred to later sections. Section 4 contains hardness reductions and reductions from MinCSP(\(\Gamma\)) to various cut problems. Section 5 gives the FPT algorithm for Triple Multicut. Section 6 gives the FPT approximation algorithms. Section 7 contains the classification for singleton expansions. Section 8 concludes the paper.
## 2 Preliminaries
Graph Separation. Let \(G\) be an undirected graph. Denote the vertex set of \(G\) by \(V(G)\) and the edge set by \(E(G)\). For a subset of edges/vertices \(X\) in \(G\), let \(G-X\) denote the graph obtained by removing the elements of \(X\) from \(G\), i.e. \(G-X=(V(G),E(G)\setminus X)\) if \(X\subseteq E(G)\) and \(G-X=G[V(G)\setminus X]\) if \(X\subseteq V(G)\). A _cut request_ is a pair of vertices \(st\in\binom{V(G)}{2}\), and an \(st\)-cut/\(st\)-separator is a subset of edges/vertices \(X\) such that \(G-X\) contains no path connecting \(s\) and \(t\). We write that \(X\)_fulfills_\(st\) if \(X\) is an \(st\)-cut/\(st\)-separator. We implicitly allow the inputs to cut problems such as Multiway Cut and Multicut to contain undeletable edges/vertices: for edges, with a parameter of \(k\), we can include \(k+1\) parallel copies; for vertices, we can replace a vertex \(v\) with a clique of size \(k+1\), where every member of the clique has the same neighbourhood as \(v\).
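The two tricks for undeletable elements can be sketched as follows (Python, illustrative names); plain edge lists stand in for multigraphs.

```python
def undeletable_edge(edges, e, k):
    # together with its original copy, e now appears k+1 times, so no budget-k
    # solution can afford to delete all copies
    return edges + [e] * k

def undeletable_vertex(edges, v, k):
    # replace v by a (k+1)-clique whose members inherit the neighbourhood of v
    copies = [f"{v}_{i}" for i in range(k + 1)]
    new_edges = []
    for (a, b) in edges:
        ends_a = copies if a == v else [a]
        ends_b = copies if b == v else [b]
        new_edges += [(x, y) for x in ends_a for y in ends_b]
    clique = [(copies[i], copies[j]) for i in range(k + 1) for j in range(i + 1, k + 1)]
    return new_edges + clique

print(len(undeletable_vertex([("v", "a"), ("v", "b")], "v", k=2)))  # 9 edges
```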
Parameterized Complexity. A _parameterized_ problem is a subset of \(\Sigma^{*}\times\mathbb{N}\), where \(\Sigma\) is the input alphabet. The parameterized complexity class FPT contains problems decidable in \(f(k)\cdot n^{O(1)}\) time, where \(f\) is a computable function and \(n\) is the bit-size of the instance. Let \(L_{1},L_{2}\subseteq\Sigma^{*}\times\mathbb{N}\) be two parameterized problems. A mapping \(F:\Sigma^{*}\times\mathbb{N}\rightarrow\Sigma^{*}\times\mathbb{N}\) is an _FPT-reduction_ from \(L_{1}\) to \(L_{2}\) if
* \((x,k)\in L_{1}\) if and only if \(F((x,k))\in L_{2}\),
* the mapping can be computed in \(f(k)\cdot n^{O(1)}\) time for some computable function \(f\), and
* there is a computable function \(g:\mathbb{N}\rightarrow\mathbb{N}\) such that for all \((x,k)\in\Sigma^{*}\times\mathbb{N}\), if \((x^{\prime},k^{\prime})=F((x,k))\), then \(k^{\prime}\leq g(k)\).
The classes W[1] and W[2] contain all problems that are FPT-reducible to Clique and Hitting Set, respectively, parameterized by the solution size. These problems are not in FPT under the standard assumptions FPT\(\neq\)W[1] and FPT\(\neq\)W[2]. For a thorough treatment of parameterized complexity we refer to [24].
Constraint Satisfaction.Fix a _domain_\(D\). A relation \(R\) of _arity_\(r\) is a subset of tuples in \(D^{r}\), i.e. \(R\subseteq D^{r}\). We write \(=\) and \(\neq\) to denote the binary equality and disequality relations over \(D\), i.e. \(\{(a,b)\in D^{2}:a=b\}\) and \(\{(a,b)\in D^{2}:a\neq b\}\), respectively. A _constraint language_\(\Gamma\) is a set of relations over a domain \(D\). A _constraint_ is defined by a relation \(R\) and a tuple of variables \(\mathbf{x}=(x_{1},\ldots,x_{r})\), where \(r\) is the arity of \(R\). It is often written as \(R(\mathbf{x})\) or \(R(x_{1},\ldots,x_{r})\). An assignment \(\alpha:\{x_{1},\ldots,x_{r}\}\to D\)_satisfies_ the constraint if \(\alpha(\mathbf{x})=(\alpha(x_{1}),\ldots,\alpha(x_{r}))\in R\), and _violates_ the constraint if \(\alpha(\mathbf{x})\notin R\).
Constraint Satisfaction Problem for \(\Gamma\) (\(\mathrm{CSP}(\Gamma)\))
Instance: A set of variables \(V\) and a collection \(I\) of constraints over \(\Gamma\) with variables from \(V\).
Question: Is there an assignment \(\alpha:V\to D\) that satisfies all constraints in \(I\)?
MinCSP is an optimization version of the problem seeking an assignment that minimizes the number of violated constraints. In this problem, constraints are allowed to be _crisp_ or _soft_. The _cost of an assignment_ \(\alpha\) in an instance \(I\) of CSP is infinite if it violates a crisp constraint, and equals the number of violated soft constraints otherwise. The _cost of an instance_ \(I\), denoted by \(\mathrm{cost}(I)\), is the minimum cost of any assignment to \(I\).
MinCSP(\(\Gamma\))
Instance: An instance \(I\) of \(\mathrm{CSP}(\Gamma)\) with soft and crisp constraints, and an integer \(k\).
Parameter: \(k\).
Question: Is \(\mathrm{cost}(I)\leq k\)?
Next, we recall a useful notion that captures local reductions between CSPs.
**Definition 8**.: Let \(\Gamma\) be a constraint language over \(D\) and \(R\subseteq D^{r}\) be a relation. A _primitive positive definition (pp-definition)_ of \(R\) in \(\Gamma\) is an instance \(C_{R}\) of \(\mathrm{CSP}(\Gamma,=)\) with primary variables \(\mathbf{x}\), auxiliary variables \(\mathbf{y}\) and the following properties:
1. if \(\alpha\) satisfies \(C_{R}\), then it satisfies \(R(\mathbf{x})\),
2. if \(\alpha\) satisfies \(R(\mathbf{x})\), then there exists an extension of \(\alpha\) to \(\mathbf{y}\) that satisfies \(C_{R}\).
Informally, pp-definitions can be used to simulate \(R\) using the relations available in \(\Gamma\) and equality: every constraint using \(R\) can be replaced by a gadget based on the pp-definition, resulting in an equivalent instance. The type of reductions captured by pp-definitions is however incompatible with MinCSP because the reductions do not preserve assignment costs. For example, consider the double-equality relation \(R=\{(a,a,b,b):a,b\in D\}\) and its pp-definition \(C_{R}=\{x_{1}=x_{2},x_{3}=x_{4}\}\). The cost of assignment \((1,2,1,2)\) is one in \(R(x_{1},x_{2},x_{3},x_{4})\) but two in \(C_{R}\). This motivates the following definition.
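Before turning to that definition, here is a quick numerical check of the cost mismatch in the example above (a minimal Python sketch).

```python
# Under (x1, x2, x3, x4) = (1, 2, 1, 2), the single constraint R(x1, x2, x3, x4) is
# violated once, but its pp-definition {x1 = x2, x3 = x4} is violated twice.

alpha = {"x1": 1, "x2": 2, "x3": 1, "x4": 2}

r_violations = 0 if (alpha["x1"] == alpha["x2"] and alpha["x3"] == alpha["x4"]) else 1
cr_violations = (alpha["x1"] != alpha["x2"]) + (alpha["x3"] != alpha["x4"])

print(r_violations, cr_violations)  # 1 2
```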
**Definition 9**.: Let \(\Gamma\) be a constraint language over \(D\) and \(R\subseteq D^{r}\) be a relation. An _implementation_ of \(R\) in \(\Gamma\) is a pp-definition of \(R\) with primary variables \(\mathbf{x}\), auxiliary variables \(\mathbf{y}\) and an additional property: if \(\alpha\) violates \(R(\mathbf{x})\), there exists an extension of \(\alpha\) to \(\mathbf{y}\) of cost one.
Although pp-definitions do not preserve costs, they can be used to simulate crisp constraints in MinCSP instances.
**Proposition 10** (Proposition 5.2 in [34]).: _Let \(\Gamma\) be a constraint language over a domain \(D\) and \(R\) be a relation over \(D\). Then the following hold._
1. _If_ \(\Gamma\) _pp-defines_ \(R\)_, then there is an FPT-reduction from_ \(\textsc{MinCSP}(\Gamma,R)\) _restricted to instances with only crisp_ \(R\)_-constraints to_ \(\textsc{MinCSP}(\Gamma,=)\)_._
2. _If_ \(\Gamma\) _implements_ \(R\)_, then there is an FPT-reduction from_ \(\textsc{MinCSP}(\Gamma,R)\) _to_ \(\textsc{MinCSP}(\Gamma,=)\)_._
Approximation.A minimization problem over an alphabet \(\Sigma\) is a triple \((\mathcal{I},\mathrm{sol},\mathrm{cost})\), where
* \(\mathcal{I}\subseteq\Sigma^{*}\) is the set of instances,
* \(\mathrm{sol}:\mathcal{I}\rightarrow 2^{\Sigma^{*}}\) is a function that maps each instance \(I\in\mathcal{I}\) to its set of solutions \(\mathrm{sol}(I)\), and
* \(\mathrm{cost}:\mathcal{I}\times\Sigma^{*}\rightarrow\mathbb{Z}_{\geq 0}\) is a function that takes an instance \(I\in\mathcal{I}\) and a solution \(X\in\mathrm{sol}(I)\) as input, and returns a non-negative integer cost of the solution.
Define \(\mathrm{cost}(I):=\min\{\mathrm{cost}(I,X):X\in\mathrm{sol}(I)\}\). A constant-factor approximation algorithm with factor \(c\geq 1\) takes an instance \(I\in\mathcal{I}\) and an integer \(k\in\mathbb{N}\), and returns 'yes' if \(\mathrm{cost}(I)\leq k\) and 'no' if \(\mathrm{cost}(I)>c\cdot k\). A _cost-preserving reduction_ from a problem \(A=(\mathcal{I}_{A},\mathrm{sol}_{A},\mathrm{cost}_{A})\) to \(B=(\mathcal{I}_{B},\mathrm{sol}_{B},\mathrm{cost}_{B})\) is a pair of polynomial-time computable functions \(F\) and \(G\) such that
* for every \(I\in\mathcal{I}_{A}\), we have \(F(I)\in\mathcal{I}_{B}\) with \(\mathrm{cost}_{A}(I)=\mathrm{cost}_{B}(F(I))\), and
* for every \(I\in\mathcal{I}_{A}\) and \(Y\in\mathrm{sol}(F(I))\), we have \(G(I,Y)\in\mathrm{sol}(I)\), and \(\mathrm{cost}_{A}(I,G(I,Y))\leq\mathrm{cost}_{B}(F(I),Y)\).
If there is a cost-preserving reduction from \(A\) to \(B\), and \(B\) admits a constant-factor polynomial-time/fpt approximation algorithm, then \(A\) also admits a constant-factor polynomial-time/fpt approximation algorithm.
## 3 Classifications for Equality Constraint Languages
This section offers a bird's eye view of complexity classifications of MinCSP over equality constraint languages, including our main results: parameterized complexity and parameterized approximation classifications for equality constraint languages and their singleton expansions. The aim is to introduce the necessary definitions, present some basic observations, and give an overview of the proof strategies. All technical proofs are deferred to subsequent sections.
Fix an infinite (countable) domain, e.g. the set of natural numbers \(\mathbb{N}\). A set of relations \(\Gamma\) over \(\mathbb{N}\) is an _equality constraint language_ if all its relations are preserved by every permutation of the domain. Syntactically, equality constraint relations can be defined by Boolean formulas using the equality relation, conjunction, disjunction, and negation, e.g.
\[R(x_{1},x_{2},x_{3})\equiv(x_{1}=x_{2}\wedge x_{2}\neq x_{3})\vee(x_{2}=x_{3} \wedge x_{1}\neq x_{2})\vee(x_{1}=x_{3}\wedge x_{2}\neq x_{3}).\]
is an equality constraint relation. Atomic formulas \(x_{i}=x_{j}\) and \(x_{i}\neq x_{j}\) are referred to as _positive_ and _negative literals_, respectively. We only consider proper relations, i.e. relations that are neither empty nor complete. Moreover, we assume that relations do not have redundant arguments, i.e. if the arity of \(R\) is \(r\), every definition of \(R\) is a formula on \(r\) variables. The relation-defining formulas can be converted into conjunctive normal form (CNF), and we refer to the conjuncts (disjunctions of literals) of a CNF formula as _clauses_.
There is another way to define equality constraint relations. Recall that these relations are closed under all automorphisms of \(\mathbb{N}\). One can think of the orbits of tuples in a relation under the action of automorphisms as partitions of indices, and define the relation by the list of non-isomorphic tuples in it. With this in mind, we invoke a definition.
**Definition 11**.: Let \(\mathbf{a},\mathbf{b}\in\mathbb{N}^{r}\) be two tuples. We say that \(\mathbf{a}\)_refines_\(\mathbf{b}\) if \(\mathbf{a}_{i}=\mathbf{a}_{j}\) implies \(\mathbf{b}_{i}=\mathbf{b}_{j}\) for all \(i,j\in\{1,\ldots,r\}\). Furthermore, if there exist indices \(i,j\in\{1,\ldots,r\}\) such that \(\mathbf{a}_{i}\neq\mathbf{a}_{j}\) and \(\mathbf{b}_{i}=\mathbf{b}_{j}\), then \(\mathbf{a}\)_strictly refines_\(\mathbf{b}\).
For example, tuple \((1,1,2,3,4)\) strictly refines \((5,5,5,6,7)\). Refinement is a partial order on the tuples.
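The refinement relation of Definition 11 is straightforward to check mechanically; the following small sketch (Python) verifies the example above.

```python
def refines(a, b):
    # a refines b: whenever two entries of a coincide, the same entries of b coincide
    n = len(a)
    return all(b[i] == b[j] for i in range(n) for j in range(n) if a[i] == a[j])

def strictly_refines(a, b):
    # strict: additionally, b merges some pair of entries that a keeps apart
    n = len(a)
    return refines(a, b) and any(
        a[i] != a[j] and b[i] == b[j] for i in range(n) for j in range(n)
    )

print(strictly_refines((1, 1, 2, 3, 4), (5, 5, 5, 6, 7)))  # True
```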
### Polynomial-Time Complexity of Equality CSP
Polynomial-time complexity of CSP for equality constraint languages was classified by Bodirsky and Kara [7]. To describe the dividing line between tractable and NP-hard problems, we recall the necessary definitions.
**Definition 12**.: Let \(R\) be an equality constraint relation.
* \(R\) is _constant_ if it contains the tuple \((1,\ldots,1)\).
* \(R\) is _Horn_ if it is definable by a CNF formula with at most one positive literal in each clause.
An equality constraint language is constant/Horn if all its relations are constant/Horn, respectively.
**Example 13**.: Consider two relations defined by the formulas \((x_{1}=x_{2}\lor x_{3}=x_{4})\) and \((x_{1}=x_{2}\lor x_{3}\neq x_{4})\wedge(x_{1}\neq x_{3})\). The first formula is not Horn since it has two positive literals in a single clause, while the second relation is Horn. It is not hard to show that the relation defined by the first formula is in fact not Horn, i.e. it admits no equivalent Horn formulation. The first formula defines a constant relation, while the second does not.
Every constraint using a constant relation is satisfied by setting all variables to the same value, so every instance of CSP for constant languages is trivially consistent. To check if an instance of CSP over a Horn language is consistent, one can first propagate positive unit clauses (\(x=y\)) by identifying variables \(x\) and \(y\), removing falsified literals \(x\neq x\) from every clause, and repeating the procedure until either an empty clause is derived or there are no more positive unit clauses to propagate. If an empty clause is derived, then we have a no-instance. Otherwise, we obtain a Horn instance without positive unit clauses, which is satisfiable by any assignment of distinct values to all variables. Bodirsky and Kara [7] proved that constant and Horn are the only polynomial-time solvable cases.
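The propagation procedure described above can be sketched as follows (Python); the clause encoding and the union-find bookkeeping are illustrative choices, not taken from the paper.

```python
def horn_equality_sat(clauses):
    # each clause is a list of literals (sign, x, y) with sign "=" or "!=";
    # Horn means at most one "=" literal per clause
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    clauses = [list(c) for c in clauses]
    changed = True
    while changed:
        changed = False
        for c in clauses:
            live = [(s, x, y) for (s, x, y) in c
                    if not (s == "!=" and find(x) == find(y))]
            if not live:
                return False                      # an empty clause was derived
            if len(live) != len(c):
                c[:] = live
                changed = True
            if len(live) == 1 and live[0][0] == "=":
                _, x, y = live[0]
                if find(x) != find(y):            # propagate the positive unit clause
                    union(x, y)
                    changed = True
    return True  # assign pairwise distinct values to the remaining classes

print(horn_equality_sat([[("=", "x", "y")], [("!=", "x", "y"), ("!=", "y", "z")]]))  # True
print(horn_equality_sat([[("=", "x", "y")], [("!=", "x", "y")]]))                    # False
```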
**Theorem 14** (Theorem 1 in [7]).: _Let \(\Gamma\) be an equality constraint language. Then \(\mathrm{CSP}(\Gamma)\) is solvable in polynomial time if \(\Gamma\) is either constant or Horn, and is NP-complete otherwise._
### Polynomial-Time Complexity of Equality MinCSP
MinCSP for equality constraint languages is NP-hard even for very simple languages: indeed, MinCSP(\(=,\neq\)) is NP-hard by a simple reduction from Edge Multicut. As it turns out, every MinCSP for an equality constraint language is either solvable in polynomial time by a trivial algorithm or NP-hard. Towards proving this, we define another class of equality languages.
**Definition 15**.: An equality constraint relation is _strictly negative_ if it admits a CNF definition with only negative literals. An equality constraint language is strictly negative if all its relations are strictly negative.
Strictly negative relations are a special case of Horn relations. The following lemma implies NP-hardness of MinCSP for almost all equality constraint languages.
**Lemma 16** (Section 4.1).: _Let \(R\) be an equality constraint relation._
1. _If_ \(R\) _is not constant, then_ \(R\) _implements_ \(x_{1}\neq x_{2}\)_._
2. _If_ \(R\) _is Horn but not strictly negative, then_ \(R\) _implements_ \(x_{1}=x_{2}\)_._
We use Lemma 16 to prove NP-hardness. In fact, we prove a slightly stronger result.
**Lemma 17** (Section 4.4).: _Let \(\Gamma\) be an equality constraint language that implements binary equality and pp-defines binary disequality relations. Then there is a cost-preserving reduction from Edge Multicut to MinCSP(\(\Gamma\))._
The classification follows by observing that if \(\Gamma\) is neither Horn nor constant, then even CSP(\(\Gamma\)) is NP-hard.
**Theorem 18**.: _Let \(\Gamma\) be an equality constraint language. MinCSP(\(\Gamma\)) is solvable in polynomial time if \(\Gamma\) is constant or strictly negative, and is NP-hard otherwise._
### Parameterized Complexity of Equality MinCSP
We classify the parameterized complexity of MinCSP(\(\Gamma\)) for every finite equality constraint language \(\Gamma\). By Theorem 18, we may focus on Horn equality languages \(\Gamma\) that are neither constant nor strictly negative. To present the results, we need more definitions.
**Definition 19**.: Let \(R\) be an equality constraint relation.
* \(R\) is _negative_ if it is definable by a CNF formula with positive literals occurring only in singleton clauses.
* \(R\) is _conjunctive_ if it is definable by a CNF formula without disjunction.
* \(R\) is _split_ if it is definable by a CNF formula \(\bigwedge_{p,p^{\prime}\in P}(x_{p}=x_{p^{\prime}})\wedge\bigwedge_{p\in P,q \in Q}(x_{p}\neq x_{q})\), where \(R\) is a relation of arity \(p+q\) and \(P\uplus Q\) is a partition of the indices \(\{1,\ldots,p+q\}\).
We remark that negative relations are Horn, conjunctive relations are negative, and split relations are conjunctive. We also define several important relations, see Table 1. In particular, we need the 'not-all-equal' relation \(\mathsf{NAE}_{3}(x_{1},x_{2},x_{3})\equiv(x_{1}\neq x_{2}\lor x_{2}\neq x_{3})\) to state the main theorem.
**Theorem 20**.: _Let \(\Gamma\) be a finite Horn equality constraint language. Assume \(\Gamma\) is neither constant nor strictly negative._
1. _If_ \(\Gamma\) _is not negative, then_ MinCSP__\((\Gamma)\) _is W[2]-hard._
2. _If_ \(\Gamma\) _is negative and contains any relation that is neither split nor_ \(\mathsf{NEQ}_{3}\)_, then_ MinCSP__\((\Gamma)\) _is W[1]-hard._
3. _If_ \(\Gamma\) _contains only split relations and_ \(\mathsf{NEQ}_{3}\)_, then_ MinCSP__\((\Gamma)\) _is in FPT._
We present the proof of Theorem 20 in four parts: one for part 1, two for part 2, and one for part 3. The second point is split into two cases based on whether \(\Gamma\) is conjunctive or not. Throughout the section, we assume by Lemma 16 that \(\Gamma\) implements \(=\) and \(\neq\).
Case 1: Not negative. For non-negative languages, we recall a result of [5].
**Theorem 21** (Theorem 67 of [5]).: _If \(R\) is a Horn equality constraint relation that is not negative, then \(\{R,=,\neq\}\) pp-defines \(\mathsf{ODD}_{3}\)._
Combined with the following lemma and the fact that Hitting Set is W[2]-hard, this yields Theorem 20.1.
**Lemma 22** (Section 4.3).: _There is a polynomial-time reduction that takes an instance \((V,\mathcal{E},k)\) of Hitting Set, and produces an instance \((I,k)\) of \(\textsc{MinCSP}(\mathsf{ODD}_{3},=,\neq)\) where every \(\mathsf{ODD}_{3}\)-constraint is crisp. Furthermore, \((V,\mathcal{E},k)\) is a yes-instance if and only if \((I,k)\) is a yes-instance._
Proof Sketch.: Let \(V=\{1,\ldots,n\}\), and construct an instance \((I,k)\) of \(\textsc{MinCSP}(\mathsf{ODD}_{3},=,\neq)\) by introducing variables \(x_{1},\ldots,x_{n}\) and \(z\), and add soft constraints \(x_{i}=z\) for all \(i\in[n]\). For every subset \(e=\{a_{1},\ldots,a_{\ell}\}\in\mathcal{E}\), introduce auxiliary variables \(y_{2},\ldots,y_{\ell}\) and crisp constraints \(\mathsf{ODD}_{3}(x_{a_{1}},x_{a_{2}},y_{2})\), \(\mathsf{ODD}_{3}(y_{i-1},x_{a_{i}},y_{i})\) for all \(3\leq i\leq\ell\), and \(x_{a_{1}}\neq y_{\ell}\). Correctness follows by observing that the crisp constraints introduced for each \(e\in\mathcal{E}\) imply that not all variables \(x_{a_{1}},\ldots,x_{a_{\ell}}\) are equal. Moreover, to satisfy these constraints, it is sufficient to break a soft \(x_{a_{i}}=z\) constraint. Thus, \(X\subseteq V\) is a hitting set if and only if \(I-\{x_{i}=z:i\in X\}\) is consistent.
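A structural sketch of this construction (Python, illustrative names); it merely lists the constraints that the reduction would create.

```python
def hitting_set_to_mincsp(universe, sets):
    soft = [("eq", f"x{i}", "z") for i in universe]     # one soft equality per element
    crisp = []
    for s_idx, e in enumerate(sets):
        a = list(e)
        prev = f"x{a[0]}"
        for i in range(1, len(a)):
            y = f"y{s_idx}_{i}"                         # auxiliary chain variable
            crisp.append(("ODD3", prev, f"x{a[i]}", y))
            prev = y
        crisp.append(("neq", f"x{a[0]}", prev))         # forbids all of x_{a_1..a_l} being equal
    return soft, crisp

soft, crisp = hitting_set_to_mincsp(universe=[1, 2, 3], sets=[[1, 2], [2, 3]])
print(len(soft), len(crisp))  # 3 soft equalities, 4 crisp constraints
```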
Case 2a: Negative, not conjunctive. Recall ternary 'not-all-equals' \(\mathsf{NAE}_{3}\) and the relation \(R^{\vee}_{\neq,\neq}\) from Table 1. Our hardness results rely on the following two lemmas.
**Lemma 23** (Section 4.2).: \(\textsc{MinCSP}(R^{\vee}_{\neq,\neq},=)\) _is W[1]-hard even restricted to instances with only crisp \(R^{\vee}_{\neq,\neq}\)-constraints._
**Lemma 24** (Section 4.3).: \(\textsc{MinCSP}(\mathsf{NAE}_{3},=)\) _is W[1]-hard even restricted to instances with only crisp \(\mathsf{NAE}_{3}\)-constraints._
Armed with Lemmas 23 and 24, we show that every negative non-conjunctive language pp-defines \(\mathsf{NAE}_{3}\) or \(R^{\vee}_{\neq,\neq}\), thus proving Theorem 20.2 for all negative non-conjunctive languages.
**Lemma 25** (Section 4.1).: _Let \(R\) be a negative non-conjunctive equality constraint relation. Then \(\{R,=,\neq\}\) pp-defines \(R^{\vee}_{\neq,\neq}\) or \(\mathsf{NAE}_{3}\)._
Case 2b: Negative conjunctive, neither split nor \(\mathsf{NEQ}_{3}\). Let \(R\) be a conjunctive relation of arity \(r\). Define an edge-coloured graph \(G_{R}\) with vertices \(\{1,\ldots,r\}\). Add a blue edge \(ij\) whenever \(R\) implies \(x_{i}=x_{j}\), and a red edge \(ij\) whenever \(R\) implies \(x_{i}\neq x_{j}\). Note that blue edges in \(G_{R}\) form cliques because the equality relation is transitive, and red edges connect all members of one clique to all members of another. The graph \(G_{R}\) for a split relation \(R\) with indices \(\{1,\ldots,r\}\) partitioned into \(P\uplus Q\) consists of a clique of blue edges on \(P\) and a biclique of red edges on \((P,Q)\). The graph \(G_{R}\) for \(R=\mathsf{NEQ}_{3}\) is a red triangle.
| Name | CNF formula | Tuples | Complexity |
| --- | --- | --- | --- |
| \(\mathsf{EQ}_{3}\) | \((x_{1}=x_{2})\wedge(x_{2}=x_{3})\wedge(x_{1}=x_{3})\) | \((1,1,1)\) | FPT |
| — | \((x_{1}=x_{2})\) | \((1,1,1),(1,1,2)\) | FPT |
| — | \((x_{1}\neq x_{3})\wedge(x_{2}\neq x_{3})\) | \((1,1,2),(1,2,3)\) | FPT |
| \(\mathsf{NEQ}_{3}\) | \((x_{1}\neq x_{2})\wedge(x_{2}\neq x_{3})\wedge(x_{1}\neq x_{3})\) | \((1,2,3)\) | FPT |
| — | \((x_{2}\neq x_{3})\) | \((1,1,2),(1,2,1),(1,2,3)\) | FPT |
| — | \((x_{1}=x_{2})\wedge(x_{1}\neq x_{3})\wedge(x_{2}\neq x_{3})\) | \((1,1,2)\) | FPT |
| \(\mathsf{ODD}_{3}\) | \((x_{1}=x_{2}\lor x_{1}\neq x_{3})\wedge(x_{1}=x_{2}\lor x_{2}\neq x_{3})\wedge(x_{1}\neq x_{2}\lor x_{2}=x_{3})\) | \((1,1,1),(1,2,3)\) | Hitting Set-hard |
| — | \((x_{1}=x_{2}\lor x_{1}\neq x_{3})\wedge(x_{1}=x_{2}\lor x_{2}\neq x_{3})\) | \((1,1,1),(1,1,2),(1,2,3)\) | Hitting Set-hard |
| — | \((x_{1}\neq x_{2}\lor x_{2}=x_{3})\) | excludes \((1,1,2)\) | Hitting Set-hard |
| \(\mathsf{NAE}_{3}\) | \((x_{1}\neq x_{2}\lor x_{2}\neq x_{3})\) | excludes \((1,1,1)\) | W[1]-hard, FPA |
| \(R^{\vee}_{\neq,\neq}\) | \((x_{1}\neq x_{2}\lor x_{3}\neq x_{4})\wedge(x_{1}\neq x_{3})\wedge(x_{1}\neq x_{4})\wedge(x_{2}\neq x_{3})\wedge(x_{2}\neq x_{4})\) | \((1,2,3,3),(1,1,2,3),(1,2,3,4)\) | W[1]-hard, FPA |
| \(R^{\wedge}_{\neq,\neq}\) | \((x_{1}\neq x_{2})\wedge(x_{3}\neq x_{4})\) | too many to list here | W[1]-hard, FPA |
| \(R^{\wedge}_{=,\neq}\) | \((x_{1}=x_{2})\wedge(x_{3}\neq x_{4})\) | \((1,1,1,2),(1,1,2,1),(1,1,2,3)\) | W[1]-hard, FPA |

Table 1: Several Horn relations \(R\) and the complexity of \(\textsc{MinCSP}(R,=,\neq)\). The first part of the table contains all ternary Horn relations up to permutation of indices.
If \(R\) is neither split nor \(\mathsf{NEQ}_{3}\), then \(G_{R}\) contains two independent edges, which we assume are \(\{1,2\}\) and \(\{3,4\}\) without loss of generality, and every edge \(uv\in E(G_{R})\) with \(u\in\{1,2\}\) and \(v\in\{3,4\}\) is red. By considering only \(x_{1},x_{2},x_{3},x_{4}\) as primary variables and depending on the colours of edges \(\{1,2\}\) and \(\{3,4\}\), the projection of \(R\) onto \(\{1,2,3,4\}\) is one of the relations
\[(x_{1}=x_{2})\wedge(x_{3}=x_{4})\wedge\bigwedge_{i\in A,j\in B}(x_{i}\neq x_{j}),\] \[(x_{1}=x_{2})\wedge(x_{3}\neq x_{4})\wedge\bigwedge_{i\in A,j\in B}(x_{i}\neq x_{j}),\ \text{or}\] \[(x_{1}\neq x_{2})\wedge(x_{3}\neq x_{4})\wedge\bigwedge_{i\in A,j\in B}(x_{i}\neq x_{j})\]
for some \(A\subseteq\{1,2\}\) and \(B\subseteq\{3,4\}\). We refer to the relations defined by the formulas above as \((=,=)\)_-relations_, \((=,\neq)\)_-relations_ and \((\neq,\neq)\)_-relations_, respectively. Examples of such relations with \(A=B=\emptyset\) are \(R^{\wedge}_{=,=}\), \(R^{\wedge}_{=,\neq}\) and \(R^{\wedge}_{\neq,\neq}\) from Table 1.
**Observation 26**.: _If \(R\) is a conjunctive equality constraint relation, and \(R\) is neither split nor \(\mathsf{NEQ}_{3}\), then \(R\) implements an \((=,=)\)-relation, an \((=,\neq)\)-relation or a \((\neq,\neq)\)-relation._
We prove that \(\textsc{MinCSP}(R,=,\neq)\) is W[1]-hard if \(R\) is either an \((=,=)\)-relation, an \((=,\neq)\)-relation or a \((\neq,\neq)\)-relation. The reductions in all three cases are from Split Paired Cut (defined in Section 4), and we reuse some gadgets in the proofs, but the results are better presented as three separate lemmas.
**Lemma 27** (Section 4.2).: _If \(R\) is an \((=,=)\)-relation, then \(\textsc{MinCSP}(R,=,\neq)\) is W[1]-hard._
**Lemma 28** (Section 4.2).: _If \(R\) is an \((=,\neq)\)-relation, then \(\textsc{MinCSP}(R,=,\neq)\) is W[1]-hard._
**Lemma 29** (Section 4.2).: _If \(R\) is a \((\neq,\neq)\)-relation, then \(\textsc{MinCSP}(R,=,\neq)\) is W[1]-hard._
Observation 26 and Lemmas 27, 28 and 29 complete the proof of Theorem 20.2.
Case 3: Split and \(\mathsf{NEQ}_{3}\). If \(\Gamma\) only contains split relations and \(\mathsf{NEQ}_{3}\), we show that the problem is in FPT via a reduction from \(\textsc{MinCSP}(\Gamma)\) to Triple Multicut.
**Lemma 30** (Section 4.4).: _Let \(\Gamma\) be an equality constraint language where every relation is either split or \(\mathsf{NEQ}_{3}\). There is a polynomial-time reduction that takes an instance \((I,k)\) of \(\textsc{MinCSP}(\Gamma)\) and produces an instance \((G,\mathcal{T},k)\) of Triple Multicut such that \((I,k)\) is a yes-instance if and only if \((G,\mathcal{T},k)\) is a yes-instance._
We solve Triple Multicut by reducing it to \(\textsc{MinCSP}(\Delta)\) for a certain Boolean constraint language \(\Delta\) (i.e. the domain of \(\Delta\) is {0,1}). Then we show that \(\textsc{MinCSP}(\Delta)\) is fixed-parameter tractable using the full classification by [33].
**Theorem 31** (Theorem 42 in Section 5).: Triple Multicut _is fixed-parameter tractable._
Combining Lemma 30 with Theorem 42 completes the proof of Theorem 20.3.
### Approximation of Equality MinCSP
Under the Unique Games Conjecture (UGC) of Khot [31], Edge Multicut is NP-hard to approximate within any constant [19]. By Lemmas 16 and 17, there is a cost-preserving reduction from Edge Multicut to \(\textsc{MinCSP}(\Gamma)\) whenever \(\Gamma\) is Horn and \(\textsc{MinCSP}(\Gamma)\) is NP-hard.
**Corollary 32**.: _Let \(\Gamma\) be a Horn equality constraint language. If \(\textsc{MinCSP}(\Gamma)\) is NP-hard, then, assuming UGC, it is NP-hard to approximate \(\textsc{MinCSP}(\Gamma)\) in polynomial time within any constant._
Motivated by this hardness result, we study constant-factor approximation algorithms running in fpt time. Let \(\Gamma\) be a Horn equality constraint language. By Theorem 21 and Lemma 22, if \(\Gamma\) is not negative, then \(\textsc{MinCSP}(\Gamma)\) admits a cost-preserving reduction from Hitting Set. Unless FPT=W[2], Hitting Set does not admit a constant-factor fpt approximation [39], so there is little hope for obtaining constant-factor fpt approximation algorithms for \(\textsc{MinCSP}(\Gamma)\) when \(\Gamma\) is not negative. It turns out that this hardness result is tight for equality \(\textsc{MinCSP}(\Gamma)\).
**Theorem 33**.: _Let \(\Gamma\) be a Horn equality constraint language. \(\textsc{MinCSP}(\Gamma)\) admits a constant-factor approximation in fpt time if \(\Gamma\) is negative, and is Hitting Set-hard otherwise._
We remark that the approximation factor depends on the language \(\Gamma\). The following fact is used to obtain the approximation algorithm.
**Lemma 34** (See e.g. Lemma 10 of [13]).: _Let \(\Gamma\) be a constraint language that pp-defines a relation \(R\). If \(\textsc{MinCSP}(\Gamma,=)\) admits constant-factor fpt-approximation, then \(\textsc{MinCSP}(\Gamma,R)\) also admits constant-factor fpt-approximation._
The final approximation factor in the lemma above depends on the number of constraints in the pp-definition of \(R\) in \(\Gamma\). Define relations \(R_{d}^{\neq}(x_{1},y_{1},\ldots,x_{d},y_{d})\equiv\bigvee_{i=1}^{d}x_{i}\neq y_{i}\) for all \(d\in\mathbb{N}\). Observe that every negative relation admits a (quantifier-free) pp-definition in \(\{R_{d}^{\neq},=\}\) for some \(d\in\mathbb{N}\): the pp-definition contains a constraint for each clause, and \(d\) is upper-bounded by the number of literals in a largest clause. By Lemma 34, showing an fpt-approximation algorithm for MinCSP\((R_{d}^{\neq},=)\) is sufficient to prove Theorem 33.
We solve MinCSP\((R_{d}^{\neq},=)\) by a cost-preserving reduction to Disjunctive Multicut.
**Lemma 35** (Section 4.4).: _There is a polynomial-time algorithm that takes an instance \(I\) of CSP\((R_{d}^{\neq},=)\) as input, and produces a graph \(G\) and a collection of request lists \(\mathcal{L}\) such that \(\max_{L\in\mathcal{L}}|L|=d+1\) and \(\operatorname{cost}(I)=\operatorname{cost}(G,\mathcal{L})\)._
We prove that Disjunctive Multicut has a constant-factor fpt-approximation.
**Theorem 36** (Theorem 44 in Section 6).: _For every constant \(d\), Disjunctive Multicut with request lists of length at most \(d\) admits an \(f(d)\)-factor fpt-approximation algorithm for some function \(f\)._
We remark that we do not optimize the function \(f\) or the running time. Note that Steiner Multicut is a special case of Disjunctive Multicut where each list is a clique. This additional structure allows us to obtain a much simpler algorithm for this case.
**Theorem 37** (Section 6).: Steiner Multicut _with requests of constant size is \(2\)-approximable in \(O^{*}(2^{O(k)})\) time._
## 4 Reductions
We present several reductions grouped into four parts. Section 4.1 contains some pp-definition and implementation results, proving Lemmas 16 and 25. Section 4.2 contains W[1]-hardness proofs by reduction from Split Paired Cut to MinCSP\((R,=,\neq)\) where \(R\) is an \((=,=)\)-relation (Lemma 27), \(R\) is a \((\neq,\neq)\)-relation (Lemma 29), \(R\) is an \((=,\neq)\)-relation (Lemma 28), and \(R=R_{\neq,\neq}^{\vee}\) (Lemma 23). Section 4.3 provides W[1]- and W[2]-hardness proofs for MinCSP\((\text{\sc{NAE}}_{3},=,\neq)\) and MinCSP\((\text{\sc{ODD}}_{3},=,\neq)\), respectively, proving Lemmas 24 and 22. Finally, in Section 4.4 we present a reduction from Edge Multicut to MinCSP\((=,\neq)\) (Lemma 17) and reductions from MinCSP problems to Triple Multicut and Disjunctive Multicut, supporting the positive results in the classification of exact fpt and fpt-approximation complexity (specifically, Lemmas 30 and 35).
### Expressive Power of Some Relations
Let \(R\) be an equality constraint relation, and let \(\phi_{R}\) be a CNF formula that defines \(R\). We say that \(\phi_{R}\) is _reduced_ if removing any clause or literal alters the defined relation.
**Observation 38**.: _Let \(\phi_{R}\) be a reduced definition of an equality constraint relation \(R\). Suppose \(\phi_{R}\) contains a clause \(C=\bigvee_{s=1}^{t}x_{i_{s}}\odot_{s}x_{j_{s}}\) where \(\odot_{s}\in\{=,\neq\}\). Then for every \(1\leq u\leq t\), there is a tuple in \(R\) that satisfies the literal \(x_{i_{u}}\odot_{u}x_{j_{u}}\) and violates \(x_{i_{s}}\odot_{s}x_{j_{s}}\) for all \(s\neq u\)._
We can now prove Lemma 16 which states that a non-constant language implements \(x_{1}\neq x_{2}\), and a Horn, but not strictly negative language implements \(x_{1}=x_{2}\).
Proof of Lemma 16.: First, suppose \(R\subseteq\mathbb{N}^{r}\) is not constant and let \(\mathbf{a}=(\mathbf{a}_{1},\ldots,\mathbf{a}_{r})\) be a least refined tuple in \(R\), i.e. \(\mathbf{a}\) does not strictly refine any tuple in \(R\). Since \(R\) is not constant, the number of distinct entries in \(\mathbf{a}\) is at least two. By permuting indices, assume that \(\mathbf{a}_{1}\neq\mathbf{a}_{2}\) and consider the constraint \(R(x_{\mathbf{a}_{1}},\ldots,x_{\mathbf{a}_{r}})\). For example, if \(\mathbf{a}=(1,2,2,3,2,3)\), then we consider \(R(x_{1},x_{2},x_{2},x_{3},x_{2},x_{3})\). Suppose \(\mathbf{b}\) is a tuple of values that satisfies the constraint. Observe that \(\mathbf{a}_{i}=\mathbf{a}_{j}\) implies \(\mathbf{b}_{i}=\mathbf{b}_{j}\) since \(x_{\mathbf{a}_{i}}\) and \(x_{\mathbf{a}_{j}}\) denote the same variable. Hence, \(\mathbf{a}\) refines \(\mathbf{b}\). By the choice of \(\mathbf{a}\), the refinement is not strict, hence \(\mathbf{a}_{1}\neq\mathbf{a}_{2}\) implies \(\mathbf{b}_{1}\neq\mathbf{b}_{2}\).
Now suppose \(R\) is Horn and not strictly negative, and \(\phi_{R}\) is a reduced CNF definition of \(R\). Since \(R\) is not strictly negative, \(\phi_{R}\) contains a clause \(C\) with a positive literal. By permuting indices, assume \(C\) is \((x_{1}=x_{2}\vee\bigvee_{s=1}^{t}x_{i_{s}}\neq x_{j_{s}})\). By Observation 38, there is a tuple \(\mathbf{a}\in R\) with \(\mathbf{a}_{1}=\mathbf{a}_{2}\) and \(\mathbf{a}_{i_{s}}=\mathbf{a}_{j_{s}}\) for all \(1\leq s\leq t\). Consider an instance \(R(\mathbf{x})\), where \(\mathbf{x}\) is a tuple of variables such that \(\mathbf{x}_{i_{s}}=\mathbf{x}_{j_{s}}\) for all \(1\leq s\leq t\), while all other variables are distinct. Let \(\mathbf{x}_{1},\mathbf{x}_{2}\) be the primary variables, and all remaining variables be auxiliary. Since \(\mathbf{a}\in R\), constraint \(R(\mathbf{x})\) is consistent. Moreover, by identifying \(\mathbf{x}_{i_{s}}\) and \(\mathbf{x}_{j_{s}}\) for all \(s\), we falsify every literal in \(C\) except for \(\mathbf{x}_{1}=\mathbf{x}_{2}\), hence \(R(\mathbf{x})\) implies \(\mathbf{x}_{1}=\mathbf{x}_{2}\).
We use a consequence of Proposition 68 of [5].
**Proposition 39**.: _Let \(R\) be a negative relation of arity \(r\), and let \(1\leq i_{1},\ldots,i_{t}\leq r\) be a set of indices. The projection of \(R\) onto indices \(i_{1},\ldots,i_{t}\) is a negative relation._
Proof of Lemma 25.: Let \(\phi_{R}\) be a CNF definition of \(R\) with the minimum number of literals. Then \(\phi_{R}\) contains a clause \(C=\bigvee_{s=1}^{t}x_{i_{s}}\neq x_{j_{s}}\) with \(t\geq 2\). Define formula \(\phi^{\prime}=\phi_{R}\wedge\bigwedge_{s=3}^{t}(x_{i_{s}}=x_{j_{s}})\), and relation \(R^{\prime}\) obtained by projecting all tuples satisfying \(\phi^{\prime}\) onto \(i_{1},j_{1},i_{2},j_{2}\). By Proposition 39, \(R^{\prime}\) is essentially negative. Note that \(\phi^{\prime}\) implies \(x_{i_{1}}\neq x_{j_{1}}\lor x_{i_{2}}\neq x_{j_{2}}\). Furthermore, \(\phi_{R}\) implies \(C^{\prime}=R^{\prime}(x_{i_{1}},x_{i_{2}},x_{j_{1}},x_{j_{2}})\vee\bigvee_{s=3 }^{t}x_{i_{s}}\neq x_{j_{s}}\). By minimality of \(\phi_{R}\), no clause of the formula \(C^{\prime}\) subsumes the clause \(C\), so \(R^{\prime}(x_{i_{1}},x_{i_{2}},x_{j_{1}},x_{j_{2}})\) implies neither \(x_{i_{1}}\neq x_{j_{1}}\) nor \(x_{i_{2}}\neq x_{j_{2}}\). We proceed with two cases based on the cardinality of \(\{i_{1},j_{1},i_{2},j_{2}\}\).
If \(|\{i_{1},j_{1},i_{2},j_{2}\}|=3\), then \(R^{\prime}\) is an essentially ternary relation. Without loss of generality, assume \(j_{1}=i_{2}\) and note that \((1,1,1)\notin R^{\prime}\). Since \(R^{\prime}\) is negative, \(R^{\prime}(x_{1},x_{2},x_{3})\) implies \(x_{1}\neq x_{2}\), \(x_{2}\neq x_{3}\), or \(\mathsf{NAE}_{3}(x_{1},x_{2},x_{3})\). The first two formulas are ruled out by minimality of \(\phi_{R}\), hence \(R^{\prime}=\mathsf{NAE}_{3}\).
If \(|\{i_{1},j_{1},i_{2},j_{2}\}|=4\), let indices \(p\) and \(q\) range over \(\{i_{1},j_{1}\}\) and \(\{i_{2},j_{2}\}\), respectively. If \(\phi_{R}\) implies \(x_{p}=x_{q}\) for some \(p\) and \(q\), then we reduce to the previous case since \(R^{\prime}\) is an essentially ternary relation. Otherwise, note that \((1,1,2,2)\notin R^{\prime}\), \(R^{\prime}\) is negative and \(\phi_{R}\) does not imply \(x_{p}=x_{q}\) for any \(p,q\), hence \(R^{\prime}(x_{1},x_{2},x_{3},x_{4})\) implies \(x_{1}\neq x_{2}\), \(x_{3}\neq x_{4}\) or \(x_{1}\neq x_{2}\lor x_{3}\neq x_{4}\). The first two formulas are ruled out by minimality of \(\phi_{R}\). Thus, \(R^{\prime}(x_{1},x_{2},x_{3},x_{4})\wedge\bigwedge_{p,q}(x_{p}\neq x_{q})\) is a pp-definition of \(R^{\vee}_{\neq,\neq}\).
### Hardness from Split Paired Cut
We start with the following problem.
Split Paired Cut
Instance: Graphs \(G_{1}\) and \(G_{2}\) with vertices \(s_{1},t_{1}\in V(G_{1})\) and \(s_{2},t_{2}\in V(G_{2})\), a set \(\mathcal{P}\) of pairs \(\{e_{1},e_{2}\}\) with \(e_{1}\in E(G_{1})\) and \(e_{2}\in E(G_{2})\), and an integer \(k\).
Parameter: \(k\).
Question: Is there a subset \(X\subseteq\mathcal{P}\) with \(|X|\leq k\) such that \(\bigcup X\) contains an \(s_{1}t_{1}\)-cut in \(G_{1}\) and an \(s_{2}t_{2}\)-cut in \(G_{2}\)?
Split Paired Cut is W[1]-hard (see Lemma 6.1 in [26]). There is a simple reduction from Split Paired Cut to \(\textsc{MinCSP}(R,=,\neq)\) where \(R\) is a \((=,=)\)-relation.
Proof of Lemma 27.: Let \((G_{1},G_{2},s_{1},t_{1},s_{2},t_{2},\mathcal{P},k)\) be an instance of Split Paired Cut. Construct an instance \((I,k)\) of \(\textsc{MinCSP}(R,=,\neq)\) as follows. Let \(V(I)=V(G_{1})\cup V(G_{2})\). For every edge \(uv\in E(G_{1})\cup E(G_{2})\), add crisp constraint \(u=v\) to \(I\). Add crisp constraints \(s_{1}\neq t_{1}\) and \(s_{2}\neq t_{2}\) to \(I\). Finally, pair up equality constraints according to the pairs in \(\mathcal{P}\). For every pair \(\{u_{1}v_{1},u_{2}v_{2}\}\in\mathcal{P}\), remove constraints \(u_{1}=v_{1}\) and \(u_{2}=v_{2}\) from \(I\) and add a soft constraint \(R(u_{1},v_{1},u_{2},v_{2})\). This completes the construction. Note that all unpaired edges correspond to crisp equality constraints in \(I\). We proceed with the correctness proof.
For one direction, assume \(X\subseteq\mathcal{P}\) is a solution to \((G_{1},G_{2},s_{1},t_{1},s_{2},t_{2},\mathcal{P},k)\). Define \(X^{\prime}\subseteq C(I)\) that contains \(R(u_{1},v_{1},u_{2},v_{2})\) for every pair \(\{u_{1}v_{1},u_{2}v_{2}\}\) in \(X\). Note that \(|X^{\prime}|=|X|\leq k\). We claim that \(I-X^{\prime}\) is satisfied by the assignment \(\alpha\) defined as follows. Let \(\alpha(s_{1})=1\), \(\alpha(t_{1})=2\), \(\alpha(s_{2})=4\) and \(\alpha(t_{2})=5\). Propagate these values to variables connected to \(s_{1},t_{1},s_{2},t_{2}\) by equality constraints; for the remaining variables \(v\), set \(\alpha(v)=3\) if \(v\in V(G_{1})\) and \(\alpha(v)=6\) if \(v\in V(G_{2})\). Since \(\bigcup X\) contains an \(s_{1}t_{1}\)-cut in \(G_{1}\) and an \(s_{2}t_{2}\)-cut in \(G_{2}\), the assignment \(\alpha\) is well-defined. It satisfies \(\alpha(v_{1})\neq\alpha(v_{2})\) for all \(v_{1}\in V(G_{1})\) and \(v_{2}\in V(G_{2})\). Furthermore, it satisfies crisp constraints \(s_{1}\neq t_{1}\) and \(s_{2}\neq t_{2}\), hence \(I-X^{\prime}\) is consistent.
For the other direction, let \(Z\) be a solution to \((I,k)\). By construction, only \(R\)-constraints are soft in \(I\), hence \(Z\) only contains \(R\)-constraints. Define \(Z^{\prime}\subseteq\mathcal{P}\) containing \(\{u_{1}v_{1},u_{2}v_{2}\}\) for all \(R(u_{1},v_{1},u_{2},v_{2})\) in \(Z\). Note that \(|Z^{\prime}|=|Z|\leq k\). Since \(s_{1}\neq t_{1}\) and \(s_{2}\neq t_{2}\) are crisp in \(I\), \(s_{i}\) and \(t_{i}\) are not connected by equality constraints in \(I-Z\), thus \(\bigcup Z^{\prime}\) is an \(s_{i}t_{i}\)-cut in \(G_{i}\), and it is a union of \(k\) pairs in \(\mathcal{P}\) by definition. Hence, \(Z^{\prime}\) is a solution to the instance of Split Paired Cut.
Further reductions in this section share a choice gadget. Let \(S=\{s_{1},\ldots,s_{t}\}\) be a set. Define an instance \(W(S)\) of \(\mathrm{CSP}(=,\neq)\) as follows. Introduce \(2t+1\) variables \(v_{0},\ldots,v_{2t}\). In what follows, indices are identified modulo \(2t+1\), e.g. \(v_{0}=v_{2t+1}\). Connect variables in a double-cycle of equalities, i.e. add two copies of soft constraint \(v_{i}=v_{i+1}\) for all \(0\leq i\leq 2t\). We regard each \(v_{i}=v_{i+1}\) as a single constraint of cost two. The _forward partner_ of a variable \(v_{i}\) is \(f(v_{i}):=v_{i+t}\), i.e., the variable that is \(t\) steps ahead of \(v_{i}\) on the cycle. Add constraints \(v_{i}\neq f(v_{i})\) for all \(0\leq i\leq 2t\), making them soft if \(1\leq i\leq t\) and crisp otherwise. Note that \(v_{0}\neq v_{t}\) is crisp. See Figure 1 for an illustration.
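A sketch of the gadget as a constraint list (Python, illustrative encoding).

```python
def choice_gadget(t):
    n = 2 * t + 1
    constraints = []
    for i in range(n):
        # double edge of the cycle, regarded as one equality constraint of cost two
        constraints.append(("eq", i, (i + 1) % n, "soft, cost 2"))
    for i in range(n):
        status = "soft" if 1 <= i <= t else "crisp"
        constraints.append(("neq", i, (i + t) % n, status))
    return constraints

W = choice_gadget(3)
print(len(W))       # 14 constraints for t = 3: 7 equalities and 7 disequalities
print(W[7][3])      # the disequality v_0 != v_3 (i.e. v_0 != f(v_0)) is crisp
```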
**Lemma 40**.: _Let \(S\) be a set of size at least two and \(W(S)\) be the choice gadget. Then \(\mathrm{cost}(W(S))=5\) and every optimal solution deletes \(v_{i-1}=v_{i}\), \(v_{i}\neq f(v_{i})\) and \(f(v_{i})=f(v_{i+1})\) for some \(i\in\{1,\ldots,t\}\)._
Proof.: First, we claim that every optimal solution consists of two equality constraints and a disequality constraint. Note that \(v_{0}\) and \(v_{t}\) are connected by two disjoint paths of equality constraints, so one constraint of cost two has to be deleted from each. After this, the cycle is split into two paths, and the longest of them has
at least \(\lceil\frac{2t+1-2}{2}\rceil=t\) edges. Then there is a pair of variables \(v_{i}\) and \(f(v_{i})\) still connected on the longer path. To see that deleting two equalities and a disequality suffices, note that \(\{v_{i-1}=v_{i},v_{i}\neq f(v_{i}),f(v_{i})=f(v_{i+1})\}\) for any \(1\leq i\leq t\) is a solution.
To show every optimal solution is of the form above, observe that the crisp constraint \(v_{0}\neq v_{t}\) implies that every solution has to delete \(v_{j-1}=v_{j}\) for some \(1\leq j\leq t\). By construction, there are crisp constraints \(f(v_{j})\neq v_{j-1}\) and \(f(v_{j+1})\neq v_{j}\) in \(W(S)\), and there are paths of equality constraints connecting \(f(v_{j}),v_{j-1}\) and \(f(v_{j+1}),v_{j}\) in \(W(S)-\{v_{j-1}=v_{j}\}\) which intersect only in \(f(v_{j})=f(v_{j+1})\), so this constraint has to be deleted. Now \(W(S)-\{v_{j-1}=v_{j},f(v_{j})=f(v_{j+1})\}\) contains a path connecting \(v_{j}\) and \(f(v_{j})\), and the remaining budget of one is only sufficient to delete the soft constraint \(v_{j}\neq f(v_{j})\).
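A brute-force sanity check of Lemma 40 for \(t=2\) (Python): enumerating all assignments over a five-value domain (five values suffice for five variables), subject to the crisp constraints, the minimum cost is indeed \(5\).

```python
from itertools import product

t = 2
n = 2 * t + 1
eq = [(i, (i + 1) % n) for i in range(n)]                    # soft equalities, cost 2 each
neq = [(i, (i + t) % n) for i in range(n)]                   # constraints v_i != f(v_i)
soft_neq = [(i, j) for (i, j) in neq if 1 <= i <= t]         # cost 1 each
crisp_neq = [(i, j) for (i, j) in neq if not (1 <= i <= t)]

best = min(
    sum(2 for (i, j) in eq if a[i] != a[j]) + sum(1 for (i, j) in soft_neq if a[i] == a[j])
    for a in product(range(n), repeat=n)
    if all(a[i] != a[j] for (i, j) in crisp_neq)
)
print(best)  # 5
```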
We interpret deleting \(v_{i-1}=v_{i}\), \(v_{i}\neq f(v_{i})\) and \(f(v_{i})=f(v_{i+1})\) from \(W(S)\) as choosing element \(s_{i}\) from the set \(S\). For the next proofs, we remark that by the construction in Lemma 5.7 of [32], we may assume that graphs \((G_{1},G_{2})\) in an instance of Split Paired Cut come with two maxflows \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) that partition \(E(G_{1})\) and \(E(G_{2})\), respectively, into \(k\) pairwise edge-disjoint paths. We are now ready to show that MinCSP\((R,=,\neq)\) is W[1]-hard if \(R\) is a \((\neq,\neq)\)-relation.
Proof of Lemma 29.: Let \((G_{1},G_{2},s_{1},t_{1},s_{2},t_{2},\mathcal{P},k)\) be an instance of Split Paired Cut. Assume \(k=2\ell\), and \(\mathcal{F}_{i}\) for \(i\in\{1,2\}\) are \(s_{i}t_{i}\)-maxflows in \(G_{i}\) partitioning \(E(G_{i})\) into \(k\) pairwise edge-disjoint paths. Construct an instance \((I,k^{\prime})\) of MinCSP\((R,=,\neq)\) with \(k^{\prime}=9\ell\) as follows. Start by creating a variable for every vertex in \(V(G_{1})\cup V(G_{2})\) with the same name. For each \(i\in\{1,2\}\), consider a path \(P\in\mathcal{F}_{i}\), and let \(p\) be the number of edges on \(P\). Create a choice gadget \(W(P)\) for every \(P\) with variables \(v_{0}^{P},\ldots,v_{p}^{P}\) following the path, and fresh variables \(v_{j}^{P}\) for \(p<j\leq 2p\) added to the instance. Observe that variables may appear on several paths in \(\mathcal{F}_{i}\). In particular, \(v_{0}^{P}=s_{i}\) and \(v_{p}^{P}=t_{i}\) for every \(P\in\mathcal{F}_{i}\), so we have crisp constraints \(s_{i}\neq t_{i}\). Furthermore, since \(\mathcal{F}_{i}\) partitions \(E(G_{i})\), the construction contains a copy of graphs \(G_{1}\) and \(G_{2}\) with equality constraints for edges. Now we pair up edges according to \(\mathcal{P}\). For every pair \(\{e_{1},e_{2}\}\in\mathcal{P}\), let \(P\in\mathcal{F}_{1}\) and \(Q\in\mathcal{F}_{2}\) be the paths such that \(e_{1}\in P\) and \(e_{2}\in Q\), and suppose \(e_{1}=v_{i-1}^{P}v_{i}^{P}\) and \(e_{2}=v_{j-1}^{Q}v_{j}^{Q}\). Pair up soft constraints \(v_{i}^{P}\neq f(v_{i}^{P})\) and \(v_{j}^{Q}\neq f(v_{j}^{Q})\), i.e. replace individual constraints with one soft constraint
\[R(v_{i}^{P},f(v_{i}^{P}),v_{j}^{Q},f(v_{j}^{Q})).\]
Finally, if an edge \(uv\in E(G)\) does not appear in any pair of \(\mathcal{P}\), make constraint \(u=v\) crisp in \(I\). This completes the construction.
For one direction, suppose \(X\subseteq\mathcal{P}\) is a solution to \((G_{1},G_{2},s_{1},t_{1},s_{2},t_{2},\mathcal{P},k)\). Define \(X^{\prime}\subseteq C(I)\) as follows. For every pair \(\{e_{1},e_{2}\}\in X\), let \(P\in\mathcal{F}_{1}\) and \(Q\in\mathcal{F}_{2}\) be the paths such that \(e_{1}\in P\) and \(e_{2}\in Q\), and suppose \(e_{1}=v_{i-1}^{P}v_{i}^{P}\) and \(e_{2}=v_{j-1}^{Q}v_{j}^{Q}\). Add constraints \(v_{i-1}^{P}=v_{i}^{P}\), \(f(v_{i}^{P})=f(v_{i+1}^{P})\), \(v_{j-1}^{Q}=v_{j}^{Q}\), \(f(v_{j}^{Q})=f(v_{j+1}^{Q})\) and \(R(v_{i}^{P},f(v_{i}^{P}),v_{j}^{Q},f(v_{j}^{Q}))\) to \(X^{\prime}\). Note that \(X^{\prime}\) only contains soft constraints, and \(|X|=2\ell\) implies \(|X^{\prime}|=(2\cdot 4+1)\ell=9\ell\). Moreover, by construction of \(X^{\prime}\) and Lemma 40, instances \(W(P)-X^{\prime}\) are consistent for every \(P\in\mathcal{F}_{1}\cup\mathcal{F}_{2}\). We claim that \(I-X^{\prime}\) is consistent. Consider the graph \(G^{\prime}=G-\bigcup X\). Observe that \(X_{i}=\{e_{i}:\{e_{1},e_{2}\}\in X\}\) is an \(s_{i}t_{i}\)-cut, \(|X_{i}|=|X|=2\ell\) and the \(s_{i}t_{i}\)-maxflow in \(G_{i}\) equals \(2\ell\). Hence, \(s_{1}\), \(t_{1}\), \(s_{2}\) and \(t_{2}\) lie in four distinct components of \(G^{\prime}\). Define an assignment that maps the components of \(G^{\prime}\) to distinct values. This assignment clearly satisfies all crisp constraints. It also satisfies the constraints \(R(v_{i}^{P},f(v_{i}^{P}),v_{j}^{Q},f(v_{j}^{Q}))\) that remain in \(I-X^{\prime}\) for any \((\neq,\neq)\)-relation \(R\), since it satisfies \(v_{i}^{P}\neq v_{j}^{Q}\), \(f(v_{i}^{P})\neq v_{j}^{Q}\), \(v_{i}^{P}\neq f(v_{j}^{Q})\) and \(f(v_{i}^{P})\neq f(v_{j}^{Q})\). We conclude that \(I-X^{\prime}\) is consistent.
For the opposite direction, suppose \(Z\subseteq C(I)\) is a solution to \((I,k^{\prime})\). Define \(Z^{\prime}\subseteq\mathcal{P}\) containing the pair \(\{v_{i-1}^{P}v_{i}^{P},\,v_{j-1}^{Q}v_{j}^{Q}\}\) for every constraint \(R(v_{i}^{P},f(v_{i}^{P}),v_{j}^{Q},f(v_{j}^{Q}))\) in \(Z\). We claim that \(Z^{\prime}\) consists of at most \(\ell\) pairs. We have \(2\ell\) gadgets \(W(P)\) in \(I\), one for each path \(P\in\mathcal{F}_{1}\cup\mathcal{F}_{2}\), so by Lemma 40, \(Z\) contains two equality constraints from every gadget \(W(P)\), and deleting each costs two. Since the gadgets are constraint-disjoint, this amounts to \(8\ell\) equality constraints in total. The remaining budget is \(\ell\), so the remaining constraints in \(Z\) are \((\neq,\neq)\)-constraints. By construction, \(Z^{\prime}\) then consists of at most \(\ell\) pairs. Finally, note that \(\{e_{i}:\{e_{1},e_{2}\}\in Z^{\prime}\}\) are \(s_{i}t_{i}\)-cuts because \(s_{i}\neq t_{i}\) are crisp constraints in \(I\), hence \(Z^{\prime}\) is a solution to the instance of Split Paired Cut.

Figure 1: An illustration of a choice gadget for \(t=3\). Black arcs represent equality constraints of cost \(2\), dashed red edges – soft disequality constraints, and bold red edges – crisp disequality constraints.
Now we prove that MinCSP\((R,=,\neq)\) for an \((=,\neq)\)-relation \(R\) is W[1]-hard by providing a hybrid of the previous two reductions.
Proof of Lemma 28.: Let \((G_{1},G_{2},s_{1},t_{1},s_{2},t_{2},\mathcal{P},k)\) be an instance of Split Paired Cut. Assume \(k=2\ell\), and \(\mathcal{F}_{2}\) is an \(s_{2}t_{2}\)-maxflow in \(G_{2}\) that partitions \(E(G_{2})\) into \(k\) pairwise edge-disjoint paths. Construct an instance \((I,k^{\prime})\) of MinCSP\((R,=,\neq)\) as follows. Start by creating a variable for every vertex in \(V(G_{1})\cup V(G_{2})\) with the same name. For every edge \(uv\in E(G_{1})\), add constraint \(u=v\) to \(I\). For every path \(P\in\mathcal{F}_{2}\) with \(p\) edges, create a choice gadget \(W(P)\) with variables \(v^{P}_{0},\ldots,v^{P}_{p}\) following the path, and fresh variables \(v^{P}_{j}\) for \(p<j\leq 2p\) added to \(I\). Now we pair up edges according to \(\mathcal{P}\). For every pair \(\{e_{1},e_{2}\}\in\mathcal{P}\), let \(e_{1}=uv\) and \(e_{2}=v^{P}_{j-1}v^{P}_{j}\), where \(P\) is the path in \(\mathcal{F}_{2}\) such that \(e_{2}\in P\). Pair up the constraints \(u=v\) and \(v^{P}_{j}\neq f(v^{P}_{j})\), i.e. remove the individual constraints and add one soft constraint \(R(u,v,v^{P}_{j},f(v^{P}_{j}))\). Finally, if an edge \(uv\in E(G)\) does not appear in any pair of \(\mathcal{P}\), make constraint \(u=v\) crisp in \(I\), and set \(k^{\prime}=5\ell\). This completes the construction.
The correctness proof is analogous to the proofs of Lemmas 27 and 29 and follows from the observation that satisfying all choice gadgets \(W(P)\) for \(P\in\mathcal{F}_{2}\) requires deleting \(5\ell\) constraints, and the choices exactly determine which \(R\)-constraints are picked.
Finally, we prove that MinCSP\((R^{\vee}_{\neq,\neq},=,\neq)\) is W[1]-hard by reduction from the following problem.
**Multicoloured Independent Set (MIS)**
Instance: A graph \(G\), a partition \(V(G)=V_{1}\uplus\cdots\uplus V_{k}\), and an integer \(k\).
Parameter: \(k\).
Question: Is there an independent set in \(G\) with one vertex from each \(V_{i}\)?
Proof of Lemma 23.: Let \((G,V_{1}\uplus\cdots\uplus V_{k},k)\) be an instance of MIS. Enumerate vertices in each set \(V_{i}\). Construct an instance \((I,k^{\prime})\) of MinCSP\((R^{\vee}_{\neq,\neq},=,\neq)\) as follows. Create a choice gadget \(W_{i}=W(V_{i})\) for each \(i\in[k]\). Let the variables in the gadget be \(x^{i}_{0},\ldots,x^{i}_{2|V_{i}|}\). Add crisp disequality constraint \(x^{i}_{0}\neq x^{i}_{|V_{i}|}\). For every edge \(uv\in E(G)\), assume \(u\) is vertex \(j\) of \(V_{i}\), \(v\) is vertex \(j^{\prime}\) of \(V_{i^{\prime}}\), and add a crisp constraint
\[R^{\vee}_{\neq,\neq}(x^{i}_{j},f(x^{i}_{j}),x^{i^{\prime}}_{j^{\prime}},f(x^{i ^{\prime}}_{j^{\prime}})).\]
Finally, set the budget to \(k^{\prime}=5k\). This completes the reduction.
For one direction, suppose \(X\subseteq V(G)\) is a solution to \((G,V_{1}\uplus\cdots\uplus V_{k},k)\). Construct a subset \(X^{\prime}\) of constraints in \(I\) by adding \(x^{i}_{j-1}=x^{i}_{j}\), \(f(x^{i}_{j})=f(x^{i}_{j+1})\) and \(x^{i}_{j}\neq f(x^{i}_{j})\) to \(X^{\prime}\) whenever \(X\) contains vertex \(j\) from \(V_{i}\). Note that \(|X|=k\) implies \(|X^{\prime}|=5k\) because each equality is present in two copies. Since \(X^{\prime}\) separates \(x^{i}_{0}\) and \(x^{i}_{|V_{i}|}\) in every gadget, we can define an assignment that maps all variables in \(\{x^{i}_{0},\,x^{i}_{|V_{i}|}:i\in[k]\}\) to distinct values, and propagate through equality constraints. We claim that this assignment satisfies \(I-X^{\prime}\). Observe that, by the choice of \(X^{\prime}\), equality constraints in \(W_{i}-X^{\prime}\) imply \(x^{i}_{j}=f(x^{i}_{j})\) for exactly one value of \(j\). Suppose for contradiction that a crisp constraint \(R^{\vee}_{\neq,\neq}(x^{i}_{j},f(x^{i}_{j}),x^{i^{\prime}}_{j^{\prime}},f(x^{i^{\prime}}_{j^{\prime}}))\) is violated. Then we have \(x^{i}_{j}=f(x^{i}_{j})\) and \(x^{i^{\prime}}_{j^{\prime}}=f(x^{i^{\prime}}_{j^{\prime}})\), hence vertex \(j\) of \(V_{i}\) and vertex \(j^{\prime}\) of \(V_{i^{\prime}}\) are in \(X\). However, by construction of \(I\), these vertices are connected by an edge in \(G\), contradicting the fact that \(X\) is an independent set.
For the opposite direction, let \(Z\subseteq C(I)\) be a solution to \((I,k^{\prime})\). Since each gadget requires deletions of cost \(5\) by Lemma 40 and the budget is \(k^{\prime}=5k\), the solution \(Z\) deletes an optimal solution from each gadget \(W_{i}\), and by Lemma 40 there remains exactly one pair \(x^{i}_{j}\) and \(f(x^{i}_{j})\) connected by a path of equalities in \(I-Z\). Construct \(Z^{\prime}\subseteq V(G)\) by adding vertex \(j\) of \(V_{i}\) to \(Z^{\prime}\) if the pair \(x^{i}_{j},f(x^{i}_{j})\) remains connected. By construction, \(|Z^{\prime}|=k\). It remains to show that \(Z^{\prime}\) is an independent set. For the sake of contradiction, suppose \(u,v\in Z^{\prime}\), \(u\) is vertex \(j^{\prime}\) of \(V_{i^{\prime}}\), \(v\) is vertex \(j\) of \(V_{i}\), and \(uv\in E(G)\). By construction, \(I\) contains a crisp constraint \(R^{\vee}_{\neq,\neq}(x^{i}_{j},f(x^{i}_{j}),x^{i^{\prime}}_{j^{\prime}},f(x^{i^{\prime}}_{j^{\prime}}))\). However, \(I-Z\) implies \(x^{i}_{j}=f(x^{i}_{j})\) and \(x^{i^{\prime}}_{j^{\prime}}=f(x^{i^{\prime}}_{j^{\prime}})\), so we arrive at a contradiction.
### Hardness for \(\mathsf{ODD}_{3}\) and \(\mathsf{NAE}_{3}\)
We prove that if an equality constraint language \(\Gamma\) pp-defines \(\mathsf{ODD}_{3}\) or \(\mathsf{NAE}_{3}\), then MinCSP\((\Gamma,=,\neq)\) is W[2]- and W[1]-hard, respectively. The reductions are from Hitting Set and Steiner Multicut. We start with a cost-preserving reduction that takes an instance of Hitting Set and produces in polynomial time an instance of MinCSP\((\mathsf{ODD}_{3},=,\neq)\) with strict \(\mathsf{ODD}_{3}\)-constraints.
Proof of Lemma 22.: Let \((V,\mathcal{E},k)\) be an instance of Hitting Set. Assume \(V=\{1,\ldots,n\}\). Construct an instance \((I,k)\) of MinCSP\((\textsf{ODD}_{3},=,\neq)\) as follows. First, introduce variables \(x_{1},\ldots,x_{n}\) and \(z\), and add soft constraints \(x_{i}=z\) for all \(i\in[n]\). For every subset \(e=\{a_{1},\ldots,a_{\ell}\}\in\mathcal{E}\), introduce auxiliary variables \(y_{2},\ldots,y_{\ell}\) and the following crisp constraints:
1. \(\textsf{ODD}_{3}(x_{a_{1}},x_{a_{2}},y_{2})\),
2. \(\textsf{ODD}_{3}(y_{i-1},x_{a_{i}},y_{i})\) for all \(3\leq i\leq\ell\), and
3. \(x_{a_{1}}\neq y_{\ell}\).
This completes the reduction.
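The chain construction above is mechanical, so a short illustrative sketch may help; the variable names and the tuple encoding below are ours, and \(\textsf{ODD}_{3}\) is treated as an opaque relation symbol.

```python
# Sketch of the Hitting Set -> MinCSP(ODD_3, =, !=) reduction described above.
# Constraints are stored as (relation, *scope); the encoding is illustrative only.
def hitting_set_to_mincsp(n, family):
    soft = [("eq", f"x{i}", "z") for i in range(1, n + 1)]      # soft x_i = z
    crisp = []
    for s, e in enumerate(family):                              # e = [a_1, ..., a_l], l >= 2
        prev = f"x{e[0]}"
        for pos in range(1, len(e)):                            # chain of crisp ODD_3 constraints
            y = f"y{s}_{pos + 1}"
            crisp.append(("ODD3", prev, f"x{e[pos]}", y))
            prev = y
        crisp.append(("neq", f"x{e[0]}", prev))                 # crisp x_{a_1} != y_l
    return soft, crisp

soft, crisp = hitting_set_to_mincsp(4, [[1, 2, 3], [2, 4]])
assert len(soft) == 4 and len(crisp) == 5
```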
To show correctness, first assume \(X\) is a solution to \((I,k)\). Define a subset \(X^{\prime}\subseteq[n]\) containing all indices \(i\in[n]\) such that \(x_{i}=z\) is in \(X\). We claim that \(X^{\prime}\) intersects every set in \(\mathcal{E}\). For the sake of contradiction, assume \(e\cap X^{\prime}=\emptyset\) for some subset \(e=\{a_{1},\ldots,a_{\ell}\}\in\mathcal{E}\). Then \(I-X\) contains constraints \(x_{a_{i}}=z\) for all \(i\in[\ell]\), implying that \(x_{a_{i}}=x_{a_{j}}\) for all \(a_{i},a_{j}\in e\). Now consider the crisp constraints introduced for \(e\) in \(I\). Constraint \(\textsf{ODD}_{3}(x_{a_{1}},x_{a_{2}},y_{2})\) together with \(x_{a_{1}}=x_{a_{2}}=z\) implies that \(y_{2}=z\). Constraints \(\textsf{ODD}_{3}(y_{i-1},x_{a_{i}},y_{i})\) imply that \(y_{i}=z\) for \(3\leq i\leq\ell\). However, then \(x_{a_{1}}=z=y_{\ell}\), which is a contradiction.
For the other direction, assume \(Z\subseteq[n]\) intersects every subset in \(\mathcal{E}\). By renaming elements, we may assume that \(Z=\{1,\ldots,k^{\prime}\}\) for some \(k^{\prime}\leq k\). We define an assignment \(\alpha_{Z}:V(I)\rightarrow\mathbb{N}\) as follows. First, let \(\alpha_{Z}(x_{i})=i\) if \(1\leq i\leq k^{\prime}\), \(\alpha_{Z}(x_{j})=0\) if \(j>k^{\prime}\), and \(\alpha_{Z}(z)=0\). By definition, \(\alpha_{Z}\) satisfies all but \(k^{\prime}\) soft constraints \(x_{i}=z\). Now we extend \(\alpha_{Z}\) to the auxiliary variables so that it satisfies all crisp constraints. Consider \(e=\{a_{1},\ldots,a_{\ell}\}\in\mathcal{E}\). Since \(Z\) intersects \(e\), there is an index \(1\leq i<\ell\) such that \(\alpha_{Z}(x_{a_{i}})\neq\alpha_{Z}(x_{a_{i+1}})\). Let \(i\) be the minimal such index, and notice that \(\alpha_{Z}(x_{a_{1}})=\cdots=\alpha_{Z}(x_{a_{i}})\). Set \(\alpha_{Z}(y_{j})=\alpha_{Z}(x_{a_{j}})\) for \(2\leq j\leq i\). For variables \(y_{j}\) with \(j>i\), we choose values that are pairwise distinct and different from those assigned to \(x_{1},\ldots,x_{n}\), for instance \(\alpha_{Z}(y_{j})=k+j\). To check that \(\alpha_{Z}\) satisfies all crisp constraints corresponding to subset \(e\), note that the constraints using \(\textsf{ODD}_{3}\) whose scope only contains variables among \(x_{a_{1}},\ldots,x_{a_{i}}\) and \(y_{2},\ldots,y_{i}\) are satisfied because all their variables are assigned the same value, while in the remaining constraints all variables are assigned distinct values. Hence, the reduction is correct.
Now we show that if an equality constraint language \(\Gamma\) pp-defines \(\textsf{NAE}_{3}\), then there is a reduction from Steiner Multicut to MinCSP\((\Gamma,=,\neq)\).
Proof of Lemma 24.: Let \((G,\mathcal{T},k)\) be an instance of (Edge) Steiner Multicut, where \(\mathcal{T}=(T_{1},\ldots,T_{k})\) and \(|T_{i}|=3\) for all \(i\). Create an instance \((I,k)\) of MinCSP\((\textsf{NAE}_{3},=)\) as follows. Introduce a variable for every vertex in \(V(G)\). Add a soft binary equality constraint \(u=v\) for every edge \(\{u,v\}\in E(G)\), and a crisp constraint \(\textsf{NAE}_{3}(x_{i},y_{i},z_{i})\) for every subset \(T_{i}=\{x_{i},y_{i},z_{i}\}\). This completes the reduction.
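As the reduction only relabels edges and terminal triples, a sketch (with our own encoding) is almost immediate:

```python
# Sketch of the Steiner Multicut -> MinCSP(NAE_3, =) reduction described above
# (constraint encoding is ours and purely illustrative).
def steiner_multicut_to_mincsp(edges, triples):
    soft = [("eq", u, v) for (u, v) in edges]                 # soft u = v per edge of G
    crisp = [("NAE3", x, y, z) for (x, y, z) in triples]      # crisp NAE_3 per terminal set
    return soft, crisp
```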
To argue correctness, assume first that \(Z\) is a solution to \((I,k)\). Note that the only soft constraints in \(I\) are binary equalities, so the set \(S_{Z}\) of edges in \(G\) corresponding to constraints in \(Z\) is well-defined. We claim that \(S_{Z}\) is a solution for \((G,\mathcal{T},k)\). Consider a triple \(\{x,y,z\}\in\mathcal{T}\). If all vertices \(x,y,z\) are in the same connected component of \(G-S_{Z}\), then binary equality constraints in \(I-Z\) force \(x\), \(y\) and \(z\) to take the same value, violating a crisp constraint \(\textsf{NAE}_{3}(x,y,z)\), which leads to a contradiction.
For the other direction, assume \(S\) is a solution to \((G,\mathcal{T},k)\), and \(Z_{S}\) is the set of corresponding binary equality constraints in \(I\). Consider an assignment \(\alpha:V(G)\rightarrow\mathbb{N}\) that is constant on the connected components of \(G-S\) and assigns distinct values to distinct components. We claim that \(\alpha\) satisfies \(I-Z_{S}\), therefore \(I-Z_{S}\) is consistent. All binary equality constraints in \(I-Z_{S}\) are satisfied by \(\alpha\) since the assigned value is the same for any pair of connected variables. All constraints \(\textsf{NAE}_{3}(x,y,z)\) are satisfied because \(S\) separates at least one pair of variables in \(\{x,y,z\}\), and variables in that pair are assigned different values by \(\alpha\). Thus, the reduction is correct.
### Reductions and Multicut Variants
We start by showing that if \(=\) and \(\neq\) are available in an equality constraint language \(\Gamma\), then Edge Multicut reduces to MinCSP\((\Gamma)\).
Proof of Lemma 17.: By Lemma 16, \(\Gamma\) implements both \(=\) and \(\neq\). Let \((G,\mathcal{T},k)\) be an arbitrary instance of Edge Multicut, and create an instance \((I,k)\) of MinCSP\((=,\neq)\) with \(V(G)\) as the set of variables, soft constraints \(u=v\) for every edge \(uv\in E(G)\), and crisp constraints \(s\neq t\) for every cut request \(st\in\mathcal{T}\). Clearly, the reduction requires polynomial time and leaves the parameter unchanged. If \(X\) is a solution to \((G,\mathcal{T},k)\), let \(X^{\prime}=\{u=v:uv\in X\}\) and observe that an assignment mapping distinct connected components of \(G-X\) to distinct values satisfies \(I-X^{\prime}\). On the other hand, if \(Z\) is a solution to \(I\), let \(Z^{\prime}=\{uv:u=v\in Z\}\) and suppose for the sake of contradiction that \(G-Z^{\prime}\) contains a path connecting some terminal pair \(st\in\mathcal{T}\). Then \(I-Z\) contains a path of \(=\)-constraints connecting \(s\) and \(t\), and a crisp constraint \(s\neq t\), contradicting that \(I-Z\) is consistent.
For algorithmic purposes, we need reductions from MinCSP to several variants of Vertex Multicut. The first one reduces MinCSP\((\Gamma)\) with split and \(\mathsf{NEQ}_{3}\) relations to Triple Multicut.
Proof of Lemma 30.: Enumerate variables in \(V(I)\) as \(x_{1},\ldots,x_{n}\). Introduce a vertex \(v_{i}\) in \(G\) for every variable \(x_{i}\in V(I)\). Introduce a dummy vertex \(w\). Consider a constraint \(c\in C(I)\) that uses a split relation of arity \(r\), and let \(P\uplus Q\) be the partition of \(\{1,\ldots,r\}\). Introduce vertex \(z_{c}\) in \(G\), connect it by edges to \(v_{p}\) for all \(p\in P\), and add triples \(z_{c}v_{q}w\) to \(\mathcal{T}\) for all \(q\in Q\). Note that \(w\) is disconnected from all other vertices, so the triple only requests to separate \(z_{c}\) and \(v_{q}\). For every constraint of the form \(\mathsf{NEQ}_{3}(x_{h},x_{i},x_{j})\) in \(I\), add triple \(v_{h}v_{i}v_{j}\) to \(\mathcal{T}\). Finally, make vertices \(v_{i}\) undeletable by replacing each \(v_{i}\) with copies \(v_{i}^{(1)},\ldots,v_{i}^{(k+1)}\) that have the same neighbourhood \(N(v_{i})\). To avoid cumbersome notation, we will regard the copies \(v_{i}^{(1)},\ldots,v_{i}^{(k+1)}\) as a single undeletable vertex \(v_{i}\) in the remainder of the proof. Observe that, by construction, vertices \(v_{i}\) and \(v_{j}\) are connected in \(G\) if and only if the constraints in \(I\) imply \(x_{i}=x_{j}\).
For one direction, assume \((I,k)\) is a yes-instance, and let \(X\) be a solution. Let \(Z_{V}\) contain vertices \(z_{c}\) for all constraints \(c\in X\) that use a split relation, and let \(Z_{\mathcal{T}}\) contain triples \(v_{h}v_{i}v_{j}\) for all constraints of the form \(\mathsf{NEQ}_{3}(x_{h},x_{i},x_{j})\) in \(X\). Clearly, \(|Z_{V}|+|Z_{\mathcal{T}}|\leq|X|\leq k\). We claim that \((Z_{V},Z_{\mathcal{T}})\) is a solution to \((G,\mathcal{T},k)\), i.e. every connected component of \(G-Z_{V}\) intersects every triple in \(\mathcal{T}\setminus Z_{\mathcal{T}}\) in at most one vertex. Suppose for the sake of contradiction that there is a triple \(v_{h}v_{i}v_{j}\in\mathcal{T}\setminus Z_{\mathcal{T}}\) such that \(v_{i}\) and \(v_{j}\) are connected in \(G-Z_{V}\). Then the constraints in \(I-X\) imply \(x_{i}=x_{j}\). Moreover, since \(v_{h}v_{i}v_{j}\notin Z_{\mathcal{T}}\), constraint \(\mathsf{NEQ}_{3}(x_{h},x_{i},x_{j})\) is present in \(I-X\), leading to a contradiction.
For the opposite direction, assume \((G,\mathcal{T},k)\) is a yes-instance, and \((Z^{\prime}_{V},Z^{\prime}_{\mathcal{T}})\) is a solution. Note that \(|Z^{\prime}_{V}|\leq k\) implies that \(Z^{\prime}_{V}\) does not contain undeletable vertices \(v_{i}\) for any \(i\in[n]\) (or more precisely, for every undeletable vertex \(v_{i}\), at least one copy of \(v_{i}\) is untouched by \(Z^{\prime}_{V}\)). Moreover, we can assume that \(Z^{\prime}_{\mathcal{T}}\) does not contain triples involving the dummy vertex \(w\): every such triple is of the form \(z_{c}v_{q}w\), so we can replace it by deleting \(z_{c}\) instead. In other words, \((Z^{\prime}_{V}\cup\{z_{c}\},Z^{\prime}_{\mathcal{T}}\setminus\{z_{c}v_{q}w\})\) is still a solution of the same size. Define \(X^{\prime}\subseteq C(I)\) as \(X^{\prime}=\{c:z_{c}\in Z^{\prime}_{V}\}\cup\{\mathsf{NEQ}_{3}(x_{h},x_{i},x_{j}):v_{h}v_{i}v_{j}\in Z^{\prime}_{\mathcal{T}}\}\) and pick any assignment \(\alpha:V(I)\rightarrow\mathbb{N}\) such that \(\alpha(x_{i})=\alpha(x_{j})\) if and only if \(v_{i}\) and \(v_{j}\) are connected in \(G-Z^{\prime}_{V}\). We claim that \(\alpha\) satisfies \(I-X^{\prime}\). Consider a constraint \(c\) in \(I-X^{\prime}\). First, suppose \(c\) uses a split relation. Since \(c\notin X^{\prime}\), we have \(z_{c}\in V(G)\setminus Z^{\prime}_{V}\). If \(c\) implies \(x_{i}=x_{j}\), then \(v_{i}\) and \(v_{j}\) are connected through \(z_{c}\) in \(G-Z^{\prime}_{V}\), hence \(\alpha(x_{i})=\alpha(x_{j})\). If \(c\) implies \(x_{i}\neq x_{j}\), assume by symmetry that \(z_{c}\) is adjacent to \(v_{i}\), and there is a triple \(z_{c}v_{j}w\) in \(\mathcal{T}\). By our assumption, \(z_{c}v_{j}w\notin Z^{\prime}_{\mathcal{T}}\), hence \(z_{c}\) and \(v_{j}\) are disconnected in \(G-Z^{\prime}_{V}\); since \(v_{i}\) and \(z_{c}\) are connected, \(v_{i}\) and \(v_{j}\) lie in distinct components, so \(\alpha(x_{i})\neq\alpha(x_{j})\). Finally, suppose \(c\) is of the form \(\mathsf{NEQ}_{3}(x_{h},x_{i},x_{j})\). Then \(v_{h}v_{i}v_{j}\) is in \(\mathcal{T}\setminus Z^{\prime}_{\mathcal{T}}\), so \(v_{h}\), \(v_{i}\) and \(v_{j}\) appear in distinct components of \(G-Z^{\prime}_{V}\), and \(\alpha\) assigns distinct values to them.
Another reduction, used to obtain a constant-factor fpt-approximation for negative equality constraint languages, starts with an instance of MinCSP\((R^{\uplus}_{d},=)\), where \(R^{\uplus}_{d}(x_{1},y_{1},\ldots,x_{d},y_{d})\equiv\bigvee_{i=1}^{d}x_{i}\neq y_{i}\), and produces an instance of Disjunctive Multicut with the same cost.
Proof of Lemma 35.: Introduce a crisp vertex \(x_{v}\) in \(G\) for every variable \(v\) in \(I\) and a soft vertex \(z_{c}\) for every constraint \(c\) in \(I\). For every constraint \(c\) in \(I\) of the form \(u=v\), add edges \(x_{u}z_{c}\) and \(x_{v}z_{c}\) in \(G\). For every constraint \(c\) in \(I\) of the form \(R^{\uplus}_{d}(u_{1},v_{1},\ldots,u_{d},v_{d})\), create a list request \(\{x_{u_{1}}x_{v_{1}},\ldots,x_{u_{d}}x_{v_{d}},z_{c}z_{c}\}\) in \(\mathcal{L}\). Observe that every list has \(d+1\) requests. This completes the construction. Clearly, it requires polynomial time.
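A sketch of this construction, assuming networkx and our own vertex and request encoding, could look as follows; here `eq_constraints` lists the \(u=v\) constraints and `disj_constraints` lists the tuples of pairs \((u_{i},v_{i})\) occurring in \(R^{\uplus}_{d}\)-constraints.

```python
# Sketch of the reduction from Lemma 35 (networkx graph; names are ours).
import networkx as nx

def to_disjunctive_multicut(eq_constraints, disj_constraints):
    G = nx.Graph()
    undeletable, deletable, lists = set(), set(), []
    for c, (u, v) in enumerate(eq_constraints):            # constraint u = v
        zc = ("z", "eq", c)                                 # deletable middle vertex z_c
        deletable.add(zc)
        undeletable.update({("x", u), ("x", v)})
        G.add_edges_from([(("x", u), zc), (("x", v), zc)])
    for c, pairs in enumerate(disj_constraints):            # constraint R(u_1,v_1,...,u_d,v_d)
        zc = ("z", "disj", c)
        deletable.add(zc)
        G.add_node(zc)
        for (u, v) in pairs:
            undeletable.update({("x", u), ("x", v)})
            G.add_nodes_from([("x", u), ("x", v)])
        lists.append([(("x", u), ("x", v)) for (u, v) in pairs] + [(zc, zc)])
    return G, undeletable, deletable, lists
```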
To show that the reduction preserves costs, assume \(X\) is a solution to \(I\), and let \(X^{\prime}=\{z_{c}:c\in X\}\), noting that \(|X^{\prime}|=|X|\). We claim that \(X^{\prime}\) satisfies all lists in \(\mathcal{L}\). Observe that all edges in \(G\) correspond to equality constraints in \(I\), which implies that \(x_{u}\) and \(x_{v}\) are connected in \(G-X^{\prime}\) if and only if the constraints in \(I-X\) imply that \(u=v\). Now, consider a list request \(\{x_{u_{1}}x_{v_{1}},\ldots,x_{u_{d}}x_{v_{d}},z_{c}z_{c}\}\) in \(\mathcal{L}\). If the constraint \(c\) is in \(X\), then \(z_{c}\) is in \(X^{\prime}\) and \(X^{\prime}\) satisfies the request. Otherwise, there is \(i\) such that the constraints in \(I-X\) are consistent with \(u_{i}\neq v_{i}\), hence \(x_{u_{i}}\) and \(x_{v_{i}}\) are disconnected in \(G-X^{\prime}\).
For the opposite direction, assume \(Y\subseteq V^{1}(G)\) satisfies \(\mathcal{L}\) and let \(Y^{\prime}=\{c:z_{c}\in Y\}\). Observe that \(|Y|=|Y^{\prime}|\). We claim that \(I-Y^{\prime}\) is consistent. Similarly to the previous part of the proof, observe that \(x_{u}\) and \(x_{v}\) are connected in \(G-Y\) if and only if the constraints in \(I-Y^{\prime}\) imply \(u=v\). Then, for every request list \(\{x_{u_{1}}x_{v_{1}},\ldots,x_{u_{d}}x_{v_{d}},z_{c}z_{c}\}\) in \(\mathcal{L}\), either \(z_{c}\in Y\), in which case \(c\in Y^{\prime}\), or there is \(i\) such that \(x_{u_{i}}\) and \(x_{v_{i}}\) are disconnected in \(G-Y\), in which case \(I-Y^{\prime}\) is consistent with \(u_{i}\neq v_{i}\). Hence, \(I-Y^{\prime}\) is satisfied by any assignment that maps variables \(u\) and \(v\) to the same value if and only if \(x_{u}\) and \(x_{v}\) are connected in \(G-Y\).
Recall that the _Gaifman graph_ of a Boolean relation \(R\) is the graph with vertex set \(\{1,\ldots,r\}\), where \(r\) is the arity of \(R\), and edges \(ij\) for every pair of indices such that \(R(x_{1},\ldots,x_{r})\) implies a \(2\)-clause involving \(x_{i}\) and \(x_{j}\). A graph is \(2K_{2}\)-free if no four vertices induce a subgraph with two independent edges.
**Theorem 41** (Theorem 1.2 of [33]).: _Let \(\Delta\) be a finite bijunctive Boolean constraint language such that the Gaifman graphs of all relations in \(\Delta\) are \(2K_{2}\)-free. Then MinCSP\((\Delta)\) is in FPT._
We are ready to present the algorithm.
**Theorem 42**.: Triple Multicut _is fixed-parameter tractable._
Proof.: Let \((G,\mathcal{T},k)\) be an instance of Triple Multicut. By iterative compression, we obtain \(X_{V}\subseteq V(G)\) and \(X_{\mathcal{T}}\subseteq\mathcal{T}\) such that \(|X_{V}|+|X_{\mathcal{T}}|\leq k+1\) and all components of \(G-X_{V}\) intersect triples in \(\mathcal{T}\setminus X_{\mathcal{T}}\) in at most one vertex. Moreover, by branching on the intersection, we can assume that a hypothetical optimal solution \((Z_{V},Z_{\mathcal{T}})\) to \((G,\mathcal{T},k)\) is disjoint from \((X_{V},X_{\mathcal{T}})\). Let \(X=X_{V}\cup\bigcup_{uvw\in X_{\mathcal{T}}}\{u,v,w\}\) and guess the partition of the vertices in \(X\) into connected components of \(G-Z_{V}\). Identify vertices that belong to the same component, and enumerate them via the bijective mapping \(\alpha:X\rightarrow\{1,\ldots,d\}\). Observe that for every triple \(uvw\in X_{\mathcal{T}}\), values \(\alpha(u)\), \(\alpha(v)\) and \(\alpha(w)\) are distinct since \(X_{\mathcal{T}}\cap Z_{\mathcal{T}}=\emptyset\). Create an instance \(I_{\alpha}\) of Boolean MinCSP as follows.
1. Introduce variables \(v_{i}\) and \(\hat{v_{i}}\) for every \(v\in V(G)\) and \(i\in[d]\).
2. For every vertex \(v\in V(G)\), add soft constraint \(\bigwedge_{i<j}(\neg v_{i}\vee\neg v_{j})\land\bigwedge_{i}(v_{i}\rightarrow\hat {v}_{i})\).
3. For every vertex \(v\in X\), add crisp constraints \(v_{\alpha(v)}\), \(\hat{v}_{\alpha(v)}\), and \(\neg v_{j}\), \(\neg\hat{v}_{j}\) for all \(j\neq\alpha(v)\).
4. For every edge \(uv\in E(G)\) and \(i\in[d]\), add crisp constraints \(\hat{u_{i}}\to v_{i}\) and \(\hat{v_{i}}\to u_{i}\).
5. For every triple \(uvw\in\mathcal{T}\) and \(i\in[d]\), add soft constraints \((\neg\hat{u_{i}}\vee\neg\hat{v_{i}})\land(\neg\hat{v_{i}}\vee\neg\hat{w_{i}} )\land(\neg\hat{u_{i}}\vee\neg\hat{w_{i}})\).
This completes the reduction. Observe that all relations used in \(I_{\alpha}\) are bijunctive. Their Gaifman graphs are cliques with pendant edges attached to all vertices (case 2), edgeless (case 3), single edges (case 4), or triangles (case 5). These graphs are \(2K_{2}\)-free, therefore, we can decide whether \((I_{\alpha},k)\) is a yes-instance in fpt time by Theorem 41.
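To illustrate the five steps, here is a hedged sketch of building \(I_{\alpha}\); the clause encoding, the helper names, and the \(0\)-based indexing of \(\alpha\) (a dictionary mapping the identified vertices of \(X\) to \(\{0,\ldots,d-1\}\)) are our own choices.

```python
# A sketch of the construction of I_alpha (step numbers refer to the list above).
# A literal is a pair (variable_name, polarity); a constraint is a list of clauses.
def build_instance(vertices, edges, triples, X, alpha, d):
    def var(x, i):  return (f"{x}_{i}", True)        # literal for variable v_i
    def hat(x, i):  return (f"{x}_{i}_hat", True)    # literal for variable \hat{v}_i
    def neg(lit):   return (lit[0], not lit[1])

    soft, crisp = [], []
    for x in vertices:                                             # step 2
        cls = [[neg(var(x, i)), neg(var(x, j))]
               for i in range(d) for j in range(i + 1, d)]
        cls += [[neg(var(x, i)), hat(x, i)] for i in range(d)]     # v_i -> hat v_i
        soft.append(cls)                                           # one soft constraint per vertex
    for x in X:                                                    # step 3
        ax = alpha[x]
        crisp += [[var(x, ax)], [hat(x, ax)]]
        crisp += [[neg(var(x, j))] for j in range(d) if j != ax]
        crisp += [[neg(hat(x, j))] for j in range(d) if j != ax]
    for (u, w) in edges:                                           # step 4
        for i in range(d):
            crisp += [[neg(hat(u, i)), var(w, i)], [neg(hat(w, i)), var(u, i)]]
    for (a, b, c) in triples:                                      # step 5
        for i in range(d):
            soft.append([[neg(hat(a, i)), neg(hat(b, i))],
                         [neg(hat(b, i)), neg(hat(c, i))],
                         [neg(hat(a, i)), neg(hat(c, i))]])
    return soft, crisp
```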
The intuitive idea behind the reduction is that deleting a vertex from the graph corresponds to deleting the constraint of type 2 for that vertex from the MinCSP instance. If the constraint for \(v\in V(G)\) is present, then the crisp constraints can force at most one variable in \(\{\hat{v}_{i}:i\in[d]\}\) to true by a path of implications. We can interpret \(\hat{v}_{i}\) being forced to true as placing \(v\) into the \(i\)th connected component of the resulting graph. Crisp constraints of type 3 ensure that the vertices of \(X\) are partitioned into components according to \(\alpha\). Crisp constraints of type 4 ensure that neighbouring vertices are placed into the same components. If no variable in \(\{\hat{v}_{i}:i\in[d]\}\) is forced to true, then \(v\) ends up in the same component as in \(G-X_{V}\), and, by iterative compression, it is not part of any violated triple. On the other hand, if a constraint of type 2 for a vertex \(v\) is deleted, then the Boolean assignment mapping all \(v_{i}\) to 1 and all \(\hat{v}_{i}\) to 0 is compatible with all other constraints involving any of these variables. Finally, constraints of type 5 ensure that no two vertices of an undeleted triple are placed into the same component.
To show correctness formally, we first assume that \((G,\mathcal{T},k)\) is a yes-instance and \((Z_{V},Z_{\mathcal{T}})\) is a solution that respects \(\alpha\), i.e. if \(\alpha(x)\neq\alpha(y)\), then \(Z_{V}\) disconnects \(x\) and \(y\) in \(G\). Define an assignment \(\varphi:V(I_{\alpha})\rightarrow\{0,1\}\) as follows. For every \(v\in V(G)\setminus Z_{V}\) and \(i\in[d]\), set \(\varphi(v_{i})=\varphi(\hat{v}_{i})=1\) if \(v\) is connected to \(\alpha^{-1}(i)\) in \(G-Z_{V}\), and let \(\varphi(v_{i})=\varphi(\hat{v}_{i})=0\) otherwise. For \(v\in Z_{V}\) and \(i\in[d]\), set \(\varphi(v_{i})=1\) and \(\varphi(\hat{v}_{i})=0\). We claim that \(\varphi\) satisfies all crisp constraints in \(I_{\alpha}\), and violates at most \(|Z_{V}|+|Z_{\mathcal{T}}|\leq k\) soft constraints. Constraints introduced in step 3 are satisfied because \((Z_{V},Z_{\mathcal{T}})\) respects \(\alpha\). To see that constraints introduced in step 4 are satisfied, we need to consider several cases. If \(u,v\in V(G)\setminus Z_{V}\) and both are reachable in \(G-Z_{V}\) from \(\alpha^{-1}(i)\), then the constraints are satisfied because \(\varphi(u_{i})=\varphi(\hat{u}_{i})=1=\varphi(v_{i})=\varphi(\hat{v}_{i})\). If \(u,v\in V(G)\setminus Z_{V}\) and both are unreachable in \(G-Z_{V}\) from \(\alpha^{-1}(i)\), then the constraints are satisfied because \(\varphi(u_{i})=\varphi(\hat{u}_{i})=0=\varphi(v_{i})=\varphi(\hat{v}_{i})\). This completes the cases with \(u,v\in V(G)\setminus Z_{V}\) because \(uv\) is an edge in \(G\). Finally, if \(v\in Z_{V}\), then \(\varphi(v_{i})=1\) and \(\varphi(\hat{v}_{i})=0\), so \(\varphi\) satisfies all constraints. Now consider soft constraints, starting with step 2. These constraints are satisfied by \(\varphi\) for all \(v\in V(G)\setminus Z_{V}\): since \(Z_{V}\) respects \(\alpha\), \(v\) is reachable from \(\alpha^{-1}(i)\) for at most one \(i\in[d]\); moreover, \(\varphi(v_{i})=\varphi(\hat{v}_{i})\) by definition, so the implications \(v_{i}\rightarrow\hat{v}_{i}\) are satisfied. Hence, \(\varphi\) violates at most \(|Z_{V}|\) such constraints. Further, consider a triple \(uvw\in\mathcal{T}\setminus Z_{\mathcal{T}}\). We claim that \(\varphi\) satisfies all constraints introduced in step 5 for \(uvw\). Suppose for the sake of contradiction that \(\varphi(\hat{u}_{i})=\varphi(\hat{v}_{i})=1\) for some \(i\in[d]\). Then, by definition of \(\varphi\), \(\alpha^{-1}(i)\) reaches both \(u\) and \(v\) in \(G-Z_{V}\), contradicting the fact that \((Z_{V},Z_{\mathcal{T}})\) is a solution. Finally, for every \(uvw\in Z_{\mathcal{T}}\) there can be at most one \(i\in[d]\) for which the introduced constraint is violated by \(\varphi\), since each such violation requires two out of the variables \(\hat{u}_{i},\hat{v}_{i},\hat{w}_{i}\) to be assigned 1.
Now suppose \(\varphi^{\prime}\) satisfies all crisp constraints and violates at most \(k\) soft constraints in \(I_{\alpha}\). Define \(Z^{\prime}_{V}\) as the set of vertices \(v\in V(G)\) such that the constraint for \(v\) introduced in step 2 is violated by \(\varphi^{\prime}\). Define \(Z^{\prime}_{\mathcal{T}}\) as the set of triples \(uvw\in\mathcal{T}\) such that a constraint for \(uvw\) introduced in step 5 is violated by \(\varphi^{\prime}\). Observe that \(|Z^{\prime}_{V}|+|Z^{\prime}_{\mathcal{T}}|\leq k\). We claim that \((Z^{\prime}_{V},Z^{\prime}_{\mathcal{T}})\) is a solution to \((G,\mathcal{T},k)\). First, observe that \((Z^{\prime}_{V},Z^{\prime}_{\mathcal{T}})\) respects \(\alpha\)
i.e. \(\alpha(u)\neq\alpha(v)\) implies that \(u\) and \(v\) are disconnected in \(G-Z^{\prime}_{V}\). Indeed, if this is not the case, then the constraints of \(I_{\alpha}\) imply that \(\varphi^{\prime}(\hat{u}_{\alpha(v)})=1\), which violates the crisp constraint \(\neg\hat{u}_{\alpha(v)}\). Now we show that for every triple \(uvw\in\mathcal{T}\setminus Z^{\prime}_{\mathcal{T}}\), vertices \(u\), \(v\) and \(w\) occur in distinct connected components of \(G-Z^{\prime}_{V}\). Suppose for the sake of contradiction that there is a triple \(uvw\in\mathcal{T}\setminus Z^{\prime}_{\mathcal{T}}\) such that \(u\) and \(v\) are connected in \(G-Z^{\prime}_{V}\). If there exists \(i\in[d]\) such that \(\alpha^{-1}(i)\) reaches \(u\) and \(v\), then the crisp constraints imply that \(\varphi^{\prime}(\hat{u}_{i})=\varphi^{\prime}(\hat{v}_{i})=1\), and the clause \((\neg\hat{u}_{i}\vee\neg\hat{v}_{i})\) is violated, which implies \(uvw\in Z^{\prime}_{\mathcal{T}}\), a contradiction. Otherwise, \(u\) and \(v\) are disconnected from \(X\) in \(G-Z^{\prime}_{V}\). Then, there is a path from \(u\) to \(v\) in \(G-(Z^{\prime}_{V}\cup X)\), and, consequently, in \(G-X\), which implies \(uvw\in X_{\mathcal{T}}\). However, values \(\alpha(u)\), \(\alpha(v)\) and \(\alpha(w)\) are distinct, and \(Z^{\prime}_{V}\) respects \(\alpha\), which is a contradiction. This completes the proof.
## 6 Disjunctive and Steiner Multicut
We show that two generalizations of Vertex Multicut, Disjunctive Multicut and Steiner Multicut, are constant-factor fpt-approximable, proving Theorems 44 and 37, respectively. Section 6.1 presents the main loop of the Disjunctive Multicut algorithm, while Section 6.2 is dedicated to the most technical subroutine of the algorithm that involves _randomized covering of shadow_[43]. In Section 6.3 we present a simpler and more efficient algorithm for Steiner Multicut that avoids shadow covering using the idea of [41].
### Main Loop of the Disjunctive Multicut Algorithm
Let \(G\) be a graph with vertices \(V(G)=V^{\infty}(G)\uplus V^{1}(G)\) partitioned into undeletable and deletable, respectively. A _request list_ is a set \(L\) of unordered pairs of (not necessarily distinct) vertices of \(G\), and a set of vertices \(X\subseteq V(G)\) _satisfies_ \(L\) if there is a pair \(st\in L\) separated by \(X\). This includes the possibility that \(s\in X\) or \(t\in X\). For a graph \(G\) and a collection of request lists \(\mathcal{L}\), we let \(\operatorname{cost}(G,\mathcal{L})\) be the minimum size of a set \(X\subseteq V^{1}(G)\) that satisfies all lists in \(\mathcal{L}\). Disjunctive Multicut asks, given an instance \((G,\mathcal{L},k)\), whether \(\operatorname{cost}(G,\mathcal{L})\leq k\).
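A small sketch (assuming networkx) of checking whether a candidate vertex set satisfies a request list, and hence whether it is a solution; it deliberately ignores the restriction to \(V^{1}(G)\).

```python
# Sketch (networkx): does the vertex set X satisfy the request list L?
# Vertices contained in X count as separated.
import networkx as nx

def satisfies(G, X, L):
    H = G.copy()
    H.remove_nodes_from(X)
    return any(s in X or t in X or not nx.has_path(H, s, t) for (s, t) in L)

def is_solution(G, X, lists):
    return all(satisfies(G, X, L) for L in lists)
```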
The Disjunctive Multicut problem generalizes not only Multicut (which is a special case with \(d=1\)) but also \(d\)-Hitting Set. To see the latter, take an edgeless graph \(G\) and make every request a _singleton_, i.e. a pair \(ss\) for a vertex \(s\in V(G)\). The only way to satisfy a singleton \(ss\) is to delete the vertex \(s\) itself, and the only way to satisfy a list of singletons is to delete one of the vertices in it.
The intuitive idea behind the approximation algorithm for Disjunctive Multicut is to iteratively simplify the instance \((G,\mathcal{L},k)\), making it closer to Bounded Hitting Set after each iteration. Roughly, we make progress if the maximum number of non-singleton requests in a list decreases. In each iteration, the goal is to find a set of \(O(k)\) vertices whose deletion, combined with some branching steps, simplifies every request list. This process can continue for \(O(d)\) steps until we obtain an instance of Bounded Hitting Set, which can be solved in fpt time by branching. The instance may increase in the process, but finally we obtain a solution of \(\operatorname{cost}\,f(d)\cdot k\) for some function \(f\). We do not optimize for \(f\) in our proofs. Observe also that in the context of constant-factor fpt approximability, some dependence on \(d\) is unavoidable since the problem with unbounded \(d\) generalizes Hitting Set.
Formally, for a request list \(L\), let \(\mu_{1}(L)\) and \(\mu_{2}(L)\) be the number of singleton and non-singleton cut requests in \(L\), respectively. Define the measure of a list \(L\) as \(\mu(L)=\mu_{1}(L)+3\mu_{2}(L)=|L|+2\mu_{2}(L)\), and extend it to a collection of list requests \(\mathcal{L}\) by taking the maximum, i.e. \(\mu(\mathcal{L})=\max_{L\in\mathcal{L}}\mu(L)\). Observe that \(\mu(\mathcal{L})\leq 3d\) for any instance of Disjunctive Multicut. Further, let \(V(L)=\bigcup_{st\in L}\{s,t\}\) denote the set of vertices in a list \(L\), and let \(\nu(\mathcal{L})=\max_{L\in\mathcal{L}}\mu_{1}(L)+2\mu_{2}(L)\) be an upper bound on the maximum number of vertex occurrences in a list of \(\mathcal{L}\). For example, a list with one singleton and two non-singleton requests has \(\mu(L)=1+3\cdot 2=7\) and contributes \(1+2\cdot 2=5\) to \(\nu(\mathcal{L})\). The workhorse of the approximation algorithm is the following lemma.
**Lemma 43**.: _There is a randomized algorithm Simplify that takes an instance \((G,\mathcal{L},k)\) of Disjunctive Multicut as input, and in \(O^{*}(2^{O(k)})\) time produces a graph \(G^{\prime}\) and a collection of requests \(\mathcal{L}^{\prime}\) such that \(|V(G^{\prime})|\leq|V(G)|\), \(\nu(\mathcal{L}^{\prime})\leq\nu(\mathcal{L})\), \(|\mathcal{L}^{\prime}|\leq k^{2}|\mathcal{L}|\), and \(\mu(\mathcal{L}^{\prime})\leq\mu(\mathcal{L})-1\). Moreover, the following holds._
* _If_ \(\operatorname{cost}(G,\mathcal{L})\leq k\)_, then, with probability_ \(2^{-O(k^{2})}\)_, we have_ \(\operatorname{cost}(G^{\prime},\mathcal{L}^{\prime})\leq 2k\)_._
* _If_ \(\operatorname{cost}(G,\mathcal{L})>3k\)_, then we have_ \(\operatorname{cost}(G^{\prime},\mathcal{L}^{\prime})>2k\)_._
Randomization in Lemma 43 comes from the use of the _random covering of shadow_ of [43, 22]. They also provide a derandomized version of this procedure, so our algorithm can be derandomized as well. We postpone the proof of Lemma 43 until Section 6.2 since it requires introduction of some technical machinery. For now, we show how to prove Theorem 44 using the result of the lemma.
**Theorem 44**.: Disjunctive Multicut _is constant-factor fpt-approximable._
Proof.: Let \((G,\mathcal{L},k)\) be an instance of Disjunctive Multicut. Repeat the following steps until \(\mu_{2}(\mathcal{L})=0\). Apply the algorithm of Lemma 43 to \((G,\mathcal{L},k)\), obtaining a new graph \(G^{\prime}\) and a new collection of lists \(\mathcal{L}^{\prime}\), and set \((G,\mathcal{L},k):=(G^{\prime},\mathcal{L}^{\prime},2k)\). When \(\mu_{2}(\mathcal{L})=0\), let \(W=\{vv:v\in V(G)\}\) be the set of singleton cut requests
for every vertex in \(V(G)\). Check whether \((W,\mathcal{L},k)\) is a yes-instance of Hitting Set - if yes, accept, otherwise reject. See Algorithm 1 for the pseudocode.
```
 1: procedure SolveDJMC\((G,\mathcal{L},k)\)
 2:   while \(\mu_{2}(\mathcal{L})>0\) do
 3:     \((G,\mathcal{L})\leftarrow\textsc{Simplify}(G,\mathcal{L},k)\)
 4:     if Simplify rejects then
 5:       reject
 6:     \(k\gets 2k\)
 7:   \(W\leftarrow\{vv:v\in V(G)\}\)
 8:   if SolveHittingSet\((W,\mathcal{L},k)\) accepts then
 9:     accept
10:   else
11:     reject
```
**Algorithm 1** Main Loop.
To argue correctness, let \((G,\mathcal{L},k)\) be the input instance and \((G^{\prime},\mathcal{L}^{\prime},k^{\prime})\) be the instance obtained after simplification. By induction and Lemma 43, we have \(|V(G^{\prime})|\leq|V(G)|\) and \(\nu(\mathcal{L}^{\prime})\leq\nu(\mathcal{L})\). Since \(\nu(\mathcal{L}^{\prime})\leq\nu(\mathcal{L})\leq 2d\) and \(\mu_{2}(\mathcal{L}^{\prime})=0\), every list in \(\mathcal{L}^{\prime}\) has at most \(2d\) requests. Let \(r\) be the number of calls to Simplify performed by the algorithm. Note that \(r\leq\mu(\mathcal{L})\leq 3d\) since the measure decreases by at least one with each iteration, and define \(k^{\prime}=2^{r}k\). The lists in \(\mathcal{L}^{\prime}\) only contain singletons, thus \((G^{\prime},\mathcal{L}^{\prime},k^{\prime})\) is essentially an instance of Hitting Set with sets of size at most \(2d\). Moreover, \(|\mathcal{L}^{\prime}|\leq k^{2r}|\mathcal{L}|\), so the number of lists is bounded by \(k^{6d}|\mathcal{L}|\). We can solve \((G^{\prime},\mathcal{L}^{\prime},k^{\prime})\) in \(O^{*}((2d)^{k^{\prime}})\) time by branching (see, for example, Chapter 3 in [24]). For the other direction, suppose \(\mathrm{cost}(G,\mathcal{L})\leq k\). By Lemma 43 and induction, we have \(\mathrm{cost}(G^{\prime},\mathcal{L}^{\prime})\leq 2^{r}k\leq k^{\prime}\) with probability \(2^{-O(rk^{2})}\), and the algorithm accepts. If \(\mathrm{cost}(G,\mathcal{L})>3k\), then \(\mathrm{cost}(G^{\prime},\mathcal{L}^{\prime})>k^{\prime}\) and the algorithm rejects.
### Simplification Procedure
In this section we prove Lemma 43. We start by iterative compression and guessing. Then we delete at most \(k\) vertices from the graph and modify it, obtaining an instance amenable to the main technical tool of the section - the _shadow covering_ technique.
#### 6.2.1 Initial Phase
Let \((G,\mathcal{L},k)\) be an instance of Disjunctive Multicut. By iterative compression, assume we have a set \(X\subseteq V(G)\) that satisfies all lists in \(\mathcal{L}\) and \(|X|=c\cdot k+1\), where \(c:=c(d)\) is the approximation factor. Assume \(Z\) is an optimal solution to \((G,\mathcal{L})\), i.e. \(|Z|\leq k\) and \(Z\) satisfies all lists in \(\mathcal{L}\). Guess the intersection \(W=X\cap Z\), and let \(G^{\prime}=G-W\), \(X^{\prime}=X\setminus W\), and \(Z^{\prime}=Z\setminus W\). Construct \(\mathcal{L}^{\prime}\) starting with \(\mathcal{L}\) and removing all lists satisfied by \(W\). Further, guess the partition \(\mathcal{X}=(X_{1},\ldots,X_{\ell})\) of \(X^{\prime}\) into the connected components of \(G^{\prime}-Z^{\prime}\), identify the vertices in each subset \(X_{i}\) into a single vertex \(x_{i}\), and redefine \(X^{\prime}\) accordingly. Note that the probability of our guesses being correct up to this point is \(2^{-O(k\log k)}\). Also, these steps can be derandomized by creating \(2^{O(k\log k)}\) branches.
Now compute a minimum \(\mathcal{X}\)-multiway cut in \(G^{\prime}\), i.e. a set \(M\subseteq V^{1}(G^{\prime})\) that separates every pair of vertices \(x_{i}\) and \(x_{j}\) in \(X^{\prime}\). Note that \(Z^{\prime}\) is an \(\mathcal{X}\)-multiway cut by the definition of \(\mathcal{X}\), so \(|M|\leq|Z^{\prime}|\leq k\). Such a set \(M\) can be computed in \(O^{*}(2^{k})\) time using the algorithm of [25]. If no \(\mathcal{X}\)-multiway cut of size at most \(k\) exists, then abort the branch and make another guess for \(\mathcal{X}\). If an \(\mathcal{X}\)-multiway cut \(M\) of size at most \(k\) is obtained, remove the vertices in \(M\) from \(G^{\prime}\), along with the lists in \(\mathcal{L}^{\prime}\) satisfied by \(M\). This completes the initial phase of the algorithm. Properties of the resulting instance are summarized below.
**Lemma 45**.: _After the initial phase we obtain a graph \(G^{\prime}\), a family of list requests \(\mathcal{L}^{\prime}\), and a subset of vertices \(X^{\prime}\subseteq V(G^{\prime})\) such that \(|V(G^{\prime})|\leq|V(G)|\), \(\nu(\mathcal{L}^{\prime})\leq\nu(\mathcal{L})\), \(\mu(\mathcal{L}^{\prime})\leq\mu(\mathcal{L})\), and \(|X^{\prime}|\in O(k)\). The set \(X^{\prime}\) satisfies all lists in \(\mathcal{L}^{\prime}\) and intersects each connected component of \(G^{\prime}\) in at most one vertex. Moreover, the following hold._
* \(\mathrm{cost}(G,\mathcal{L})\leq k+\mathrm{cost}(G^{\prime},\mathcal{L}^{\prime})\)_._
* _If_ \(\mathrm{cost}(G,\mathcal{L})\leq k\)_, then, with probability_ \(2^{-O(k\log k)}\)_, we have_ \(\mathrm{cost}(G^{\prime},\mathcal{L}^{\prime})\leq k\)_. Moreover, there is a set_ \(Z^{\prime}\subseteq V(G^{\prime})\)_,_ \(|Z^{\prime}|\leq k\) _that satisfies all lists in_ \(\mathcal{L}^{\prime}\) _and is disjoint from_ \(X^{\prime}\)_._
Proof.: All statements apart from the last two are immediate from the construction. For the first statement, note that an optimal solution to \((G^{\prime},\mathcal{L}^{\prime},k)\) combined with the \(\mathcal{X}\)-multiway cut \(M\) has size at most \(2k\) and satisfies all lists in \(\mathcal{L}^{\prime}\).
To see that \(\mathrm{cost}(G,\mathcal{L})\leq k\) implies \(\mathrm{cost}(G^{\prime},\mathcal{L}^{\prime})\leq k\) with probability \(2^{-O(k\log k)}\), observe that, assuming our guesses for \(W\) and \(\mathcal{X}\) are correct, there is an optimal solution \(Z\) to \((G,\mathcal{L},k)\) such that \(W=Z\cap X\) and the connected components of \(G-Z\) partition the vertices of \(X\) according to \(\mathcal{X}\). Then, \(Z^{\prime}=Z\setminus W\) is a solution to \((G^{\prime},\mathcal{L}^{\prime})\), \(|Z^{\prime}|\leq k\), and \(Z^{\prime}\cap X^{\prime}=\emptyset\).
#### 6.2.2 Random Covering of Shadow
Random covering of shadow is a powerful tool introduced by [43] and sharpened by [22]. We use the latter work as our starting point. Although [22] present their theorems in terms of directed graphs, their results are applicable to our setting by considering undirected edges as bidirectional, i.e. replacing every edge \(uv\) with a pair of antiparallel arcs \((u,v)\) and \((v,u)\). Consider a graph \(G\) with vertices partitioned into deletable and undeletable subsets, i.e. \(V(G)=V^{1}(G)\uplus V^{\infty}(G)\). Let \(\mathcal{F}=(F_{1},\dots,F_{q})\) be a family of connected subgraphs of \(G\). An \(\mathcal{F}\)_-transversal_ is a set of vertices \(T\) that intersects every subgraph \(F_{i}\) in \(\mathcal{F}\). If \(T\) is an \(\mathcal{F}\)-transversal, we say that \(\mathcal{F}\) is _\(T\)-connected_. For every \(W\subseteq V(G)\), the _shadow of \(W\) (with respect to \(T\))_ is the subset of vertices disconnected from \(T\) in \(G-W\). We state it for the case \(T\subseteq V^{\infty}(G)\) which suffices for our applications.
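For intuition, the shadow itself is straightforward to compute; the following sketch (assuming networkx) is only illustrative and plays no role in the algorithm of Theorem 46 below.

```python
# Sketch (networkx): the shadow of W with respect to T, i.e. the vertices of
# G - W that cannot reach T.
import networkx as nx

def shadow(G, T, W):
    H = G.copy()
    H.remove_nodes_from(W)
    reach = set()
    for t in T:
        if t in H:
            reach |= nx.node_connected_component(H, t)
    return set(G.nodes) - set(W) - reach
```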
**Theorem 46** (Random Covering of Shadow, Theorem 3.5 in [22]).: _There is an algorithm RandomCover that takes a graph \(G\), a subset \(T\subseteq V^{\infty}(G)\) and an integer \(k\) as input, and in \(O^{*}(4^{k})\) time outputs a set \(S\subseteq V(G)\) such that the following holds. For any family \(\mathcal{F}\) of \(T\)-connected subgraphs, if there is an \(\mathcal{F}\)-transversal of size at most \(k\) in \(V^{1}(G)\), then with probability \(2^{-O(k^{2})}\), there exists an \(\mathcal{F}\)-transversal \(Y\subseteq V^{1}(G)\) of size at most \(k\) such that_
1. \(Y\cap S=\emptyset\)_, and_
2. \(S\) _covers the shadow of_ \(Y\) _with respect to_ \(T\)_._
The following consequence is convenient for our purposes.
**Corollary 47**.: _Let \(S\) and \(Y\) be the shadow-covering set and the \(\mathcal{F}\)-transversal from Theorem 46, respectively. Define \(R=V(G)\setminus S\) to be the complement of \(S\). Then \(Y\subseteq R\) and, for every vertex \(v\in R\), either \(v\in Y\) or \(v\) is connected to \(T\) in \(G-Y\)._
Proof.: By definition, \(R\) is the complement of \(S\), hence \(Y\cap S=\emptyset\) implies that \(Y\subseteq R\). Since \(S\) covers the shadow of \(Y\) with respect to \(T\), the set \(R\) is outside the shadow. Hence, a vertex \(v\in R\) is either connected to \(T\) in \(G-Y\) or it is contained in \(Y\).
Note that if a vertex \(v\in N(S)\) and \(v\) is undeletable, then \(v\in R\) and \(v\notin Y\), hence \(v\) is connected to \(T\) in \(G-Y\). Since \(Y\cap S=\emptyset\), every vertex in \(N(v)\cap S\) is also connected to \(T\) in \(G-Y\), so we can remove \(N(v)\cap S\) from \(S\) (and add it to \(R\) instead). By applying this procedure to exhaustion, we may assume that no vertex in \(N(S)\) is undeletable.
With the random covering of shadow at our disposal, we return to Disjunctive Multicut. By Lemma 45, we can start with an instance \((G,\mathcal{L},k)\) and a set \(X\subseteq V(G)\) such that \(|X|\in O(k)\), \(X\) satisfies all lists in \(\mathcal{L}\), every connected component of \(G\) intersects \(X\) in at most one vertex, and there is an optimal solution \(Z\) disjoint from \(X\). Let \(\mathcal{T}:=\mathcal{T}(G,\mathcal{L},X,Z)\) be the set of cut requests in \(\bigcup\mathcal{L}\) satisfied by both \(X\) and \(Z\). Define \(\mathcal{F}\) as the set of \(st\)-walks for all \(st\in\mathcal{T}\). Observe that an \(\mathcal{F}\)-transversal is precisely a \(\mathcal{T}\)-multicut. Apply the algorithm from Theorem 46 to \((G,X,k)\). Since \(X\) and \(Z\) are \(\mathcal{F}\)-transversals and \(|Z|\leq k\) by assumption, Theorem 46 and Corollary 47 imply that we can obtain a set \(R\subseteq V(G)\) in fpt time such that, with probability \(2^{-O(k^{2})}\), there is an \(\mathcal{F}\)-transversal \(Y\subseteq R\) of size at most \(k\), and every vertex in \(R\setminus Y\) is connected to \(X\) in \(G-Y\).
For every vertex \(v\in V^{1}(G)\setminus X\), define a set of vertices \(R_{v}\subseteq R\setminus X\) as follows:
* if \(v\) is disconnected from \(X\), then let \(R_{v}=\emptyset\);
* if \(v\in N(X)\) or \(v\in R\), then let \(R_{v}=\{v\}\);
* otherwise, let \(R_{v}=R\cap N(H)\), where \(H\) is the component of \(G[S]\) containing \(v\).
Note that, by definition, the set \(R_{v}\) is an \(Xv\)-separator in \(G\). Moreover, we have ensured that \(N(S)\) does not contain undeletable vertices, so \(R_{v}\) does not contain undeletable vertices. In a certain sense, the sets \(R_{v}\) are the only \(Xv\)-separators that \(Y\) needs to use. This idea is made precise in the following lemma.
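The case distinction defining \(R_{v}\) can be sketched as follows (assuming networkx, with \(X\) and \(R\) given as vertex sets and the deletable/undeletable distinction ignored); Lemma 48 below is what the algorithm actually relies on.

```python
# Sketch (networkx) of the sets R_v; X and R are vertex sets, and the
# deletable/undeletable restriction on v is ignored for simplicity.
import networkx as nx

def compute_R_v(G, X, R):
    S = set(G.nodes) - set(R)
    comp_of = {}
    for comp in nx.connected_components(G.subgraph(S)):
        for u in comp:
            comp_of[u] = comp
    neighbours_of_X = set().union(*(set(G[x]) for x in X)) if X else set()
    R_v = {}
    for v in set(G.nodes) - set(X):
        if not any(nx.has_path(G, v, x) for x in X):
            R_v[v] = set()                    # v is disconnected from X
        elif v in neighbours_of_X or v in R:
            R_v[v] = {v}
        else:                                 # v lies in S; take the R-neighbours of its component
            H = comp_of[v]
            R_v[v] = set(R) & set().union(*(set(G[u]) for u in H))
    return R_v
```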
**Lemma 48**.: _Let \(G\) be a graph. Let \(X\) and \(Y\) be disjoint subsets of \(V(G)\) such that \(X\) intersects every connected component of \(G\) in at most one vertex. Suppose \(R\subseteq V(G)\) is such that \(Y\subseteq R\) and all vertices in \(R\setminus Y\) are connected to \(X\) in \(G-Y\). If a vertex \(s\) is disconnected from \(X\) in \(G-Y\), then \(R_{s}\subseteq Y\)._
Proof.: Assume that \(s\) is connected to a vertex \(x\in X\) in \(G\), as otherwise the conclusion holds vacuously. Since \(s\) is disconnected from \(X\) in \(G-Y\), the set \(Y\) is an \(xs\)-separator. If \(s\in N(x)\), then clearly \(R_{s}=\{s\}\subseteq Y\). Otherwise, if \(s\in R\), then \(R_{s}=\{s\}\subseteq Y\) because every vertex in \(R\setminus Y\) is connected to \(X\) in \(G-Y\). Finally, if \(s\notin N(x)\cup R\), then we claim that \(R_{s}=R\cap N(H)\) is contained in \(Y\), where \(H\) is the connected component of \(s\) in \(G-R\). Suppose for the sake of contradiction that \(R_{s}\nsubseteq Y\) and there is \(v\in R_{s}\setminus Y\). Then \(v\in R\setminus Y\), so \(v\) is connected to \(x\) in \(G-Y\). However, since \(v\in N(H)\), it has a neighbour in \(H\), hence \(v\) is also connected to \(s\) in \(G-Y\) via \(H\), contradicting that \(Y\) is an \(xs\)-separator.
Now we compute a simplified collection of lists \(\mathcal{L}^{\prime}\). Start by adding all lists in \(\mathcal{L}\) to \(\mathcal{L}^{\prime}\). Remove every singleton request \(xx\) such that \(x\in X\) from every list of \(\mathcal{L}^{\prime}\). For every list \(L\in\mathcal{L}^{\prime}\) not shortened this way, let \(st\in L\) be a non-singleton cut request satisfied by \(X\). Consider \(R_{s}\) and \(R_{t}\) and apply one of the following rules.
1. If \(|R_{s}|>k\) and \(|R_{t}|>k\), remove \(st\) from \(L\).
2. If \(|R_{s}|\leq k\) and \(|R_{t}|>k\), replace \(L\) with sets \((L\setminus\{st\})\cup\{aa\}\) for all \(a\in R_{s}\).
3. If \(|R_{s}|>k\) and \(|R_{t}|\leq k\), replace \(L\) with sets \((L\setminus\{st\})\cup\{bb\}\) for all \(b\in R_{t}\).
4. If \(|R_{s}|\leq k\) and \(|R_{t}|\leq k\), replace \(L\) with sets \((L\setminus\{st\})\cup\{aa,bb\}\) for all \(a\in R_{s}\), \(b\in R_{t}\).
Finally, make vertices in \(X\) undeletable, obtaining a new graph \(G^{\prime}\). This completes the simplification step. Note that each list in \(\mathcal{L}\) is processed once, so the running time of the last step is polynomial.
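The four rules can be sketched as a single function (encoding ours): given a list \(L\), the request \(st\in L\) satisfied by \(X\), and the sets \(R_{s}\) and \(R_{t}\), it returns the derived lists; note that a list is replaced by at most \(k^{2}\) new lists, matching the bound \(|\mathcal{L}^{\prime}|\leq k^{2}|\mathcal{L}|\) used below.

```python
# Sketch of the four list-reduction rules above; requests are pairs (s, t),
# singletons are pairs (a, a), and the encoding is ours.
def apply_rules(L, st, R_s, R_t, k):
    rest = [r for r in L if r != st]
    if len(R_s) > k and len(R_t) > k:                 # rule 1: drop the request
        return [rest]
    if len(R_s) <= k and len(R_t) > k:                # rule 2: branch over R_s
        return [rest + [(a, a)] for a in R_s]
    if len(R_s) > k and len(R_t) <= k:                # rule 3: branch over R_t
        return [rest + [(b, b)] for b in R_t]
    return [rest + [(a, a), (b, b)] for a in R_s for b in R_t]   # rule 4
```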
Now we prove some properties of \(G^{\prime}\) and \(\mathcal{L}^{\prime}\) obtained above. Note that \(|V(G^{\prime})|=|V(G)|\). Since every list in \(\mathcal{L}\) is processed once and replaced by at most \(k^{2}\) new lists, we have \(|\mathcal{L}^{\prime}|\leq k^{2}|\mathcal{L}|\). To see that \(\nu(\mathcal{L}^{\prime})\leq\nu(\mathcal{L})\) and \(\mu(\mathcal{L}^{\prime})\leq\mu(\mathcal{L})-1\), observe that every reduction rule replaces a list \(L\) with new lists that have one non-singleton request fewer (so \(3\mu_{2}\) decreases by \(3\)) and at most two additional singleton requests (so \(\mu_{1}\) increases by at most \(2\)). Moreover, in every list of \(\mathcal{L}\) there is a cut request satisfied by \(X\), so no list of \(\mathcal{L}\) remains unchanged in \(\mathcal{L}^{\prime}\). We prove the remaining observations in separate lemmas.
**Lemma 49**.: _If \(\operatorname{cost}(G,\mathcal{L})\leq k\), then, with probability \(2^{-O(k^{2})}\), we have \(\operatorname{cost}(G^{\prime},\mathcal{L}^{\prime})\leq 2k\)._
Proof.: Let \(Z\) be a solution to \((G,\mathcal{L},k)\), i.e. \(|Z|\leq k\) and \(Z\) satisfies all requests in \(\mathcal{L}\). By Lemma 45, we may assume that \(Z\cap X=\emptyset\). Let \(R\) be the complement of the shadow covering set obtained by Theorem 46 and Corollary 47. With probability \(2^{-O(k^{2})}\), there exists an \(\mathcal{F}\)-transversal \(Y\subseteq R\) of size at most \(k\). We claim that \(Y\cup Z\) satisfies all lists in \(\mathcal{L}^{\prime}\). Since \(|Y\cup Z|\leq 2k\), this suffices to prove the lemma.
Consider a list \(L\in\mathcal{L}\). We show that \(Z\cup Y\) satisfies every list obtained from \(L\) by the reduction rules. If \(L\) contains \(xx\) for some \(x\in X\), then \(Z\) satisfies \(L\setminus\{xx\}\) because \(X\cap Z=\emptyset\). If \(L\) contains a request satisfied by \(Z\) but not \(X\), then this request remains in every list derived from \(L\), and \(Z\) satisfies all these lists. There is one remaining case: when \(X\) and \(Z\) satisfy the same non-singleton request \(st\) in \(L\). Then \(Y\) also satisfies \(st\) since \(st\in\mathcal{T}\) and \(Y\) is a \(\mathcal{T}\)-multicut. We claim that \(Y\) satisfies all lists derived from \(L\) in this case. Note that there is a unique \(x\in X\) that lies on every \(st\)-walk, and \(Y\) separates \(x\) from \(s\) or \(t\). Assume by symmetry that \(Y\) is an \(xs\)-separator. By Lemma 48, we obtain \(R_{s}\subseteq Y\), and since \(|Y|\leq k\), we have \(|R_{s}|\leq k\). Thus, every list derived from \(L\) contains a singleton request \(aa\) for some \(a\in R_{s}\), and \(Y\) satisfies every such list.
Now we show the remaining direction.
**Lemma 50**.: _If \(\operatorname{cost}(G,\mathcal{L})>2k\), then we have \(\operatorname{cost}(G^{\prime},\mathcal{L}^{\prime})>2k\)._
Proof.: We prove the contrapositive. Suppose \(\operatorname{cost}(G^{\prime},\mathcal{L}^{\prime})\leq 2k\) and \(Z^{\prime}\) is an optimal solution to \((G^{\prime},\mathcal{L}^{\prime},2k)\). We claim that \(Z^{\prime}\) is also a solution to \((G,\mathcal{L},2k)\). Consider a list \(L\in\mathcal{L}\). It suffices to show that if \(Z^{\prime}\) satisfies a list \(L^{\prime}\) derived from \(L\) by one of the reduction rules, then \(Z^{\prime}\) satisfies \(L\) as well. If \(L^{\prime}\) is derived from \(L\) by removing \(xx\) for some \(x\in X\), then \(Z^{\prime}\) satisfies \(L\setminus\{xx\}\) because the vertices in \(X\) are undeletable in \(G^{\prime}\), so \(Z^{\prime}\cap X=\emptyset\). If \(L^{\prime}\) is derived from \(L\) by removing some request \(st\in\mathcal{T}\), then we need to consider several cases. If \(Z^{\prime}\) satisfies \(L\setminus\{st\}\), then it clearly satisfies \(L\). Otherwise, we claim that \(R_{s}\subseteq Z^{\prime}\) or \(R_{t}\subseteq Z^{\prime}\). Suppose all lists derived from \(L\) after removing \(st\) have singletons \(aa\) for all \(a\in R_{s}\) added to them. Since \(Z^{\prime}\) satisfies all these lists and does not satisfy \(L\setminus\{st\}\), we have \(R_{s}\subseteq Z^{\prime}\). The cases when singletons \(bb\) for all \(b\in R_{t}\) are added to the derived lists follow by an analogous argument. In either case, \(Z^{\prime}\) contains an \(Xs\)-separator or an \(Xt\)-separator; since every \(st\)-walk passes through a vertex of \(X\), this implies that \(Z^{\prime}\) separates \(s\) from \(t\), hence \(Z^{\prime}\) satisfies \(L\). This completes the case analysis.
We are now ready to prove Lemma 43.
Proof of Lemma 43.: Suppose \((G,\mathcal{L},k)\) is a yes-instance of Disjunctive Multicut. By Lemma 45, after the initial phase we obtain \(G^{\prime}\), \(\mathcal{L}^{\prime}\) such that \(|V(G^{\prime})|\leq|V(G)|\), \(\nu(\mathcal{L}^{\prime})\leq\nu(\mathcal{L})\), \(\mu(\mathcal{L}^{\prime})\leq\mu(\mathcal{L})\), and \(\operatorname{cost}(G^{\prime},\mathcal{L}^{\prime})\leq k\). Moreover, we obtain a set \(X^{\prime}\subseteq V(G^{\prime})\), \(|X^{\prime}|\in O(k)\), that satisfies all lists in \(\mathcal{L}^{\prime}\), intersects every component of \(G^{\prime}\) in at most one vertex, and is disjoint from an optimum solution \(Z^{\prime}\) to \((G^{\prime},\mathcal{L}^{\prime},k)\). Now we apply random covering of shadow and the list reduction rules to \(G^{\prime},\mathcal{L}^{\prime},X^{\prime}\), obtaining a new graph \(G^{\prime\prime}\) and a new set of lists \(\mathcal{L}^{\prime\prime}\). By Lemma 49, with probability \(2^{-O(k^{2})}\), we have \(\operatorname{cost}(G^{\prime\prime},\mathcal{L}^{\prime\prime})\leq 2k\). This proves one statement of Lemma 43.
For the second statement of Lemma 43, suppose \(\operatorname{cost}(G^{\prime\prime},\mathcal{L}^{\prime\prime})\leq 2k\). By Lemma 50, we have \(\operatorname{cost}(G^{\prime},\mathcal{L}^{\prime})\leq 2k\). By Lemma 45, \(\operatorname{cost}(G^{\prime},\mathcal{L}^{\prime})\leq 2k\) implies that \(\operatorname{cost}(G,\mathcal{L})\leq 2k+k\leq 3k\), and we are done.
### An Improved Algorithm for Steiner Multicut
We present a simpler and faster algorithm for (Vertex) Steiner Multicut, proving Theorem 37. Our approximation algorithm builds upon the \(O^{*}(2^{O(k)})\)-time 2-approximation for Vertex Multicut of [41]. Note that Vertex Multicut is the special case of Steiner Multicut with \(p=2\). We only need to change one subroutine in their algorithm, so we describe the complete procedure informally, invoking relevant results from [41] and proving that our modified subroutine allows us to handle the cases with \(p>2\). Our goal is to reduce the problem to Strict Steiner Multicut, which is a special case of Steiner Multicut where the input comes with a designated vertex \(x\) such that \(\{x\}\) satisfies all requests, and the goal is to find a multicut of size at most \(k\) that does not include \(x\). As we show in the sequel, this problem can be solved in single-exponential fpt time.
Let \((G,\mathcal{T},k)\) be an instance of Steiner Multicut. Start by iterative compression and branching, which allows us to assume access to a set \(X\) of size at most \(2k+1\) that satisfies all subsets in \(\mathcal{T}\). Further, we may assume that a hypothetical optimal solution \(Z\) is disjoint from \(X\). Let \(\mathcal{X}\) be the partition of \(X\) into connected components of \(G-Z\), and \(Z^{\prime}\subseteq Z\) be a minimal subset of \(Z\) that is an \(\mathcal{X}\)-multiway cut. One can guess \(|Z^{\prime}|\) in polynomial time. Using the method of two important separators (Lemma 3.2 of [41]) and guessing partial information about \(\mathcal{X}\) using a divide-and-conquer approach (as in Theorem 3.2 of [41]), one can find a subset \(M\) of at most \(2|Z^{\prime}|\) vertices such that \(X\) is partitioned into the connected components of \(G-M\) according to \(\mathcal{X}\). The remainder of the problem can be solved by \(|Z-Z^{\prime}|=k-|Z^{\prime}|\) deletions. Since \(M\) is an \(\mathcal{X}\)-multiway cut, the vertices of \(X\) contained in each connected component of \(G-M\) can be identified. Thus, the problem reduces to solving several instances of Strict Steiner Multicut, one for each connected component of \(G-M\). To summarize, Theorem 3.2 of [41] leads to the following observation.
**Observation 51**.: _If Strict Steiner Multicut is solvable in \(O^{*}(2^{O(k)})\) time, then Steiner Multicut is approximable within a factor of \(2\) in \(O^{*}(2^{O(k)})\) time._
For the case with \(p=2\), [41] invoke the algorithm for Digraph Pair Cut of [37]. We generalize this algorithm to handle \(p>2\). For this, we need to invoke a definition.
**Definition 52** (Section 3.2 in [37]).: Let \(G=(V,E)\) be a graph and fix \(v\in V(G)\). A set \(W\subseteq V\) is _closest to \(v\)_ if \(v\notin W\) and \(W\) is the unique minimum \(vW\)-separator.
Intuitively, the closest \(vW\)-separator is distinguished among all \(vW\)-separators by the property that its deletion minimizes the subset of vertices reachable from \(v\). The uniqueness of the closest minimum \(vW\)-separator follows by submodularity of cuts. Such a set can be computed in polynomial time (see e.g. Section 3.2 of [37]).
**Lemma 53**.: Strict Steiner Multicut _with sets of size \(p\) is solvable in \(O^{*}(p^{k})\) time._
Proof.: Let \((G,\mathcal{T},k)\) be an instance of Strict Steiner Multicut with designated vertex \(x\in V(G)\) that satisfies all subsets in \(\mathcal{T}\). Our goal is to compute a set of vertices \(W\subseteq V(G)\) such that \(|W|\leq k\), \(x\notin W\) and \(W\) satisfies all subsets in \(\mathcal{T}\).
Initialize a set \(Y=\emptyset\) and continue with the following steps.
1. Compute the minimum \(xY\)-separator \(W\) closest to \(x\).
2. If \(|W|>k\), reject the instance.
3. If \(W\) satisfies all subsets in \(\mathcal{T}\), accept the instance.
4. Otherwise, pick a subset \(T_{i}=\{t_{1},\ldots,t_{p}\}\) that is not satisfied by \(W\). Branch in \(p\) directions, adding a different element of \(T_{i}\) to \(Y\) in each branch.
Since \(W\) in step (1) is chosen to be a closest separator, the maxflow between \(x\) and \(Y\) is increased in step (4) by adding a vertex to \(Y\). Hence, the depth of the recursion tree is at most \(k\), and the branching factor is \(p\), which yields the running time \(O^{*}(p^{k})\).
To prove correctness, the key observation is that a set \(W\subseteq V(G)\) satisfies a subset \(T_{i}\in\mathcal{T}\) if and only if at least one vertex \(t\in T_{i}\) is separated from \(x\) in \(G-W\). Clearly, if all vertices of \(T_{i}\) are connected to \(x\) in \(G-W\), then \(W\) does not satisfy \(T_{i}\). Moreover, if \(W\) is an \(xt\)-separator for some \(t\in T_{i}\), then either there is a vertex \(s\in T_{i}\setminus\{t\}\) reachable from \(x\) but not from \(t\) in \(G-W\), or the set \(T_{i}\) is completely separated from \(x\) in \(G-W\). In the first case, \(W\) is an \(st\)-separator for \(\{s,t\}\subseteq T_{i}\), so it satisfies \(T_{i}\). In the second case, recall that \(x\) satisfies \(T_{i}\), so there is a pair of vertices in \(T_{i}\) such that all paths connecting them contain \(x\). Thus, separating \(x\) and \(T_{i}\) satisfies \(T_{i}\).
It remains to show that if \((G,\mathcal{T},k)\) is a yes-instance, then it has a solution that is closest to \(x\). Suppose \(Z\) is a solution to \((G,\mathcal{T},k)\) and let \(Z^{\prime}\) be the minimum \(xZ\)-separator closest to \(x\). Clearly, \(|Z^{\prime}|\leq|Z|\leq k\) and if \(Z\) separates \(x\) from \(t\in V(G)\), then so does \(Z^{\prime}\). Hence, \(Z^{\prime}\) is a solution as well.
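For intuition, the branching scheme above can also be written out in code. The sketch below (ours, not part of the formal argument) uses networkx and, for brevity, takes an arbitrary minimum \(x\)-\(Y\) separator instead of the one closest to \(x\); the closest-separator choice is what guarantees the \(O^{*}(p^{k})\) bound, so this simplified version is only correct, not necessarily fast. All helper names are hypothetical.

```python
# Illustrative sketch of the branching algorithm behind Lemma 53.
import networkx as nx

def minimum_x_Y_separator(G, x, Y):
    """Some minimum vertex set separating x from Y (vertices of Y may themselves be deleted)."""
    if x in Y:
        return None                       # x can never be separated from itself
    if not Y:
        return set()
    H = G.copy()
    H.add_node("_sink")
    for y in Y:
        H.add_edge(y, "_sink")
    return set(nx.minimum_node_cut(H, x, "_sink"))

def satisfied(G, W, x, T_i):
    """W satisfies T_i iff some terminal of T_i is deleted or unreachable from x in G - W."""
    H = G.copy()
    H.remove_nodes_from(W)
    reachable = nx.node_connected_component(H, x)
    return any(t in W or t not in reachable for t in T_i)

def strict_steiner_multicut(G, requests, k, x, Y=frozenset()):
    """Return a multicut of size at most k avoiding x, or None if this branch fails."""
    W = minimum_x_Y_separator(G, x, Y)
    if W is None or len(W) > k:
        return None                       # step (2): reject this branch
    violated = [T_i for T_i in requests if not satisfied(G, W, x, T_i)]
    if not violated:
        return W                          # step (3): accept
    for t in violated[0]:                 # step (4): branch in p directions
        solution = strict_steiner_multicut(G, requests, k, x, Y | {t})
        if solution is not None:
            return solution
    return None
```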
Combining Observation 51 and Lemma 53 proves Theorem 37.
## 7 Singleton Expansion
In this section we study the complexity of the _singleton expansion_ of equality languages, i.e., informally, the effect of enriching an equality language \(\Gamma\) by assignment constraints (\(v=i\)). Viewed as a constraint language, allowing constraints (\(v=i\)) for some constant \(i\) corresponds to adding the singleton unary relation \(R_{i}=\{(i)\}\) to \(\Gamma\), hence the term singleton expansion. For completeness, we consider both adding just a constant number of such relations, and infinitely many.
More formally, let \(c\in\mathbb{N}\). Define \(\Gamma_{c}=\{\{(1)\},\ldots,\{(c)\}\}\) and \(\Gamma_{\mathbb{N}}=\{\{(i)\}\mid i\in\mathbb{N}\}\). For an equality language \(\Gamma\), define \(\Gamma^{+}=\Gamma\cup\Gamma_{\mathbb{N}}\) and for \(c\in\mathbb{N}\) define \(\Gamma^{+}_{c}=\Gamma\cup\Gamma_{c}\). For an equality language \(\Gamma\), a _singleton expansion of \(\Gamma\)_ is either the language \(\Gamma^{+}\) or \(\Gamma^{+}_{c}\) for \(c\in\mathbb{N}\). For every equality language \(\Gamma\) and every singleton expansion \(\Gamma^{\prime}\) of \(\Gamma\), we study the complexity of \(\textsc{MinCSP}(\Gamma^{\prime})\).
We note that if \(\Gamma=\{=\}\), then \(\textsc{MinCSP}(\Gamma^{+})\) naturally captures the problem Edge Multiway Cut. Let \((G,\mathcal{T},k)\) be an instance of Edge Multiway Cut with terminal set \(\mathcal{T}=\{t_{1},\ldots,t_{p}\}\). Then we create an equivalent instance of \(\textsc{MinCSP}(\Gamma^{+})\) over variable set \(V(G)\setminus\mathcal{T}\) as follows. First, we assume there are no edges in \(G[\mathcal{T}]\), as any such edge must be deleted anyway. Next, for every edge \(t_{i}v\in E(G)\) with \(v\in V(G)\setminus\mathcal{T}\) and \(t_{i}\in\mathcal{T}\), add a soft constraint (\(v=i\)), and for every edge \(uv\in E(G-\mathcal{T})\) add a soft constraint (\(u=v\)). Clearly, this reduction can also be employed in reverse, i.e., for every instance of \(\textsc{MinCSP}(\Gamma^{+})\) we can create an equivalent instance of Edge Multiway Cut. In a similar way, \(\textsc{MinCSP}(=,\Gamma_{2})\) corresponds to \(st\)-Min Cut and is in P, and \(\textsc{MinCSP}(=,\Gamma_{s})\) for \(s\in\mathbb{N}\) corresponds to \(s\)-Edge Multiway Cut, the restriction of Edge Multiway Cut to \(s\) terminals, which for \(s\geq 3\) is NP-hard but FPT.
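The encoding just described is entirely mechanical; a minimal sketch (ours), assuming networkx graphs and an ad hoc tuple representation of soft constraints, could look as follows.

```python
# Illustrative encoding of Edge Multiway Cut as soft MinCSP constraints over {=} with singletons.
import networkx as nx

def multiway_cut_to_mincsp(G: nx.Graph, terminals):
    """('assign', v, i) stands for the soft constraint (v = i); ('eq', u, v) for (u = v).
    Assumes G has no edge between two terminals (such edges must be deleted anyway)."""
    index = {t: i + 1 for i, t in enumerate(terminals)}
    constraints = []
    for u, v in G.edges():
        if u in index and v in index:
            raise ValueError("delete terminal-terminal edges before encoding")
        if u in index:
            constraints.append(("assign", v, index[u]))
        elif v in index:
            constraints.append(("assign", u, index[v]))
        else:
            constraints.append(("eq", u, v))
    return constraints
```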
In this sense, studying singleton expansions of equality languages allows us to consider MinCSPs that may be intermediate between Multiway Cut and Multicut.
Unfortunately, our main conclusion is that nothing novel occurs in this range. Let us first provide the explicit characterization of all properties under consideration for \(\textsc{MinCSP}(\Gamma^{+})\), since these cases are relatively manageable. Say that \(\Gamma\) is _positive conjunctive_ if \(\Gamma\) is both conjunctive and constant, i.e., every relation \(R\in\Gamma\) is defined as a conjunction of positive literals, and _positive conjunctive and connected_ if it is additionally split, i.e., the literals (\(x_{i}=x_{j}\)) in the definition of a relation \(R(x_{1},\ldots,x_{r})\in\Gamma\), if viewed as edges \(x_{i}x_{j}\) in a graph, form a connected graph.
**Theorem 54**.: _Let \(\Gamma\) be an equality language. The following hold._
* \(\textsc{CSP}(\Gamma^{+})\) _is in P if_ \(\Gamma\) _is Horn, and NP-hard otherwise_
* \(\textsc{MinCSP}(\Gamma^{+})\) _is NP-hard_
* \(\textsc{MinCSP}(\Gamma^{+})\) _has a polynomial-time constant-factor approximation if_ \(\Gamma\) _is strictly negative or if_ \(\Gamma\) _is positive conjunctive, and otherwise it has no constant-factor approximation under UGC_
* \(\textsc{MinCSP}(\Gamma^{+})\) _is FPT if either every relation in_ \(\Gamma\) _is_ \(\mathsf{NEQ}_{3}\) _or split, or_ \(\Gamma\) _is strictly negative, otherwise it is W[1]-hard_
* \(\textsc{MinCSP}(\Gamma^{+})\) _has a constant-factor FPT approximation if_ \(\Gamma\) _is negative, and otherwise it is_ Hitting Set_-hard_
The cases for \(\Gamma^{+}_{c}\), \(c\in\mathbb{N}\) are more involved, and require additional definitions. Let us provide the statements in multiple steps. Recall that \(\textsc{MinCSP}(\Gamma)\) and \(\textsc{MinCSP}(\Delta)\) are _equivalent_ if there are cost-preserving reductions in between the problems in both directions. We first observe that if \(\Gamma\) implements \(\neq\) and \(=\), then we can "emulate" arbitrarily many constants via auxiliary variables (Lemma 58), and \(\textsc{MinCSP}(\Gamma^{\prime})\) maps back to \(\textsc{MinCSP}(\Gamma)\) for any singleton expansion \(\Gamma^{\prime}\) of \(\Gamma\). Now Theorem 14 and Lemma 16 give the following.
**Lemma 55**.: _Let \(\Gamma^{\prime}\) be a singleton expansion of an equality language \(\Gamma\). If \(\Gamma\) is not Horn or constant, then \(\textsc{CSP}(\Gamma^{\prime})\) is NP-hard. If \(\Gamma\) is Horn but not strictly negative or constant, then \(\textsc{MinCSP}(\Gamma^{\prime})\) is equivalent to \(\textsc{MinCSP}(\Gamma)\). Otherwise \(\Gamma\) is strictly negative or constant (and not both)._
For strictly negative languages \(\Gamma\), all the positive properties mentioned in Theorem 54 of course translate to \(\textsc{MinCSP}(\Gamma^{\prime})\) for any singleton expansion \(\Gamma^{\prime}\) of \(\Gamma\).
**Lemma 56**.: _Let \(\Gamma^{\prime}\) be a singleton expansion of a finite strictly negative equality language \(\Gamma\). Then \(\textsc{CSP}(\Gamma^{\prime})\) is in P and \(\textsc{MinCSP}(\Gamma^{\prime})\) is NP-hard, FPT, and has a constant-factor approximation._
Finally, we assume that \(\Gamma\) is constant. For \(c\in\mathbb{N}\), let the \(c\)_-slice_ of \(\Gamma\) be the language
\[\Delta=\{R\cap[c]^{r(R)}\mid R\in\Gamma\}\]
where \(r(R)\) is the arity of \(R\). For \(c=2\), we will also interpret this as a Boolean language. The language \(\Delta\) is _trivial_ if for every relation \(R\in\Delta\), of arity \(r\), either \(R=\emptyset\) or \(R=[c]^{r}\). We generalize the notions of _positive
conjunctive and positive conjunctive and connected_ to \(\Delta\) in the natural way. Finally, a Boolean language is _affine_ if every relation in it can be modelled as the set of solutions to a system of affine linear equations over \(\mathrm{GF}(2)\). We state the remaining cases.
**Theorem 57**.: _Let \(\Gamma\) be a constant equality language and \(c\in\mathbb{N}\). Let \(\Delta\) be the \(c\)-slice of \(\Gamma\). If \(c=1\) then \(\textsc{MinCSP}(\Gamma_{c}^{+})\) is in P. Otherwise the following hold._
* _If_ \(\Gamma\) _has a retraction to domain_ \([c]\) _then_ \(\textsc{MinCSP}(\Gamma_{c}^{+})\) _is equivalent to_ \(\textsc{MinCSP}(\Delta_{c}^{+})\)_._
* _If_ \(\Gamma\) _has no retraction to domain_ \([c]\) _but is Horn, then_ \(\mathrm{CSP}(\Gamma_{c}^{+})\) _is in P but_ \(\textsc{MinCSP}(\Gamma_{c}^{+})\) _is_ \(\textsc{Hitting Set}\)_-hard_
* _Otherwise_ \(\mathrm{CSP}(\Gamma_{c}^{+})\) _is NP-hard._
_Furthermore, assume that \(\Gamma\) has a retraction to domain \([c]\). If \(c\geq 3\), then this implies that \(\Delta\) is positive conjunctive. Furthermore the following hold._
* _If_ \(\Delta\) _is trivial, then_ \(\textsc{MinCSP}(\Gamma_{c}^{+})\) _is in P_
* _If_ \(\Delta\) _is non-trivial, positive conjunctive and connected, then_ \(\textsc{MinCSP}(\Gamma_{c}^{+})\) _is in P for_ \(c=2\)_, and NP-hard but FPT and constant-factor approximable for_ \(c\geq 3\)__
* _If_ \(\Delta\) _is positive conjunctive but not connected, then_ \(\textsc{MinCSP}(\Gamma_{c}^{+})\) _is W[1]-hard but constant-factor approximable_
* _If_ \(c=2\) _and_ \(\Delta\) _is affine but not positive conjunctive, then_ \(\mathrm{CSP}(\Gamma_{2}^{+})\) _is in P but_ \(\textsc{MinCSP}(\Gamma_{c}^{+})\) _has a cost-preserving reduction from_ \(\textsc{Nearest Codeword}\)__
* _Otherwise_ \(c=2\) _and_ \(\mathrm{CSP}(\Gamma_{2}^{+})\) _is NP-hard_
This theorem in particular refines Theorem 7 from Section 1.
### 7.1 The first step
Let us begin by showing that if \(\Gamma\) implements \(=\) and \(\neq\) then the singleton expansion over \(\Gamma\) adds no additional power.
**Lemma 58**.: _Let \(\Gamma\) be an equality language that implements \(=\) and \(\neq\). Then there is a cost-preserving reduction from \(\textsc{MinCSP}(\Gamma^{+})\) to \(\textsc{MinCSP}(\Gamma)\)._
Proof.: Let \(I\) be an instance of \(\textsc{MinCSP}(\Gamma^{+})\) and let \(C\subset\mathbb{N}\) be the set of constants \(i\) used in assignment constraints \(v=i\) in \(I\). Create a new set of variables \(T=\{t_{i}\mid i\in C\}\) and add a crisp constraint \(t\neq t^{\prime}\) for all distinct pairs \(t,t^{\prime}\in T\). Replace every constraint \((x=i)\) in \(I\) by an implementation of \((x=t_{i})\), and keep all other constraints unchanged. Let \(I^{\prime}\) be the output instance produced. We show that this reduction is cost-preserving. On the one hand, let \(\alpha\) be an assignment to \(V(I)\). Define an assignment \(\alpha^{\prime}\) to \(V(I^{\prime})\) by extending \(\alpha\) by \(\alpha^{\prime}(t_{i})=i\) for every \(i\in C\). Then \(\alpha\) and \(\alpha^{\prime}\) have the same cost. On the other hand, let \(\alpha^{\prime}\) be an assignment to \(V(I^{\prime})\) with finite cost. Since \(I^{\prime}\) consists of only equality-language constraints, we may apply any bijection over \(\mathbb{N}\) to \(\alpha^{\prime}\) and retain an assignment that satisfies the same set of constraints. In particular, since \(\alpha^{\prime}\) has finite cost it must hold that \(\alpha^{\prime}(t)\neq\alpha^{\prime}(t^{\prime})\) for every distinct pair \(t,t^{\prime}\in T\). We may then apply a bijection to \(\alpha^{\prime}\) such that \(\alpha^{\prime}(t_{i})=i\) for every \(i\in C\). Now letting \(\alpha\) be the restriction of \(\alpha^{\prime}\) to the variables \(V(I)\) produces an assignment for \(I\) of the same cost as \(\alpha^{\prime}\). Finally, assume that \(I^{\prime}\) has no finite-cost solutions. Then by the above, the same holds for \(I\), as otherwise a finite-cost solution to \(I\) could be transformed to a finite-cost solution to \(I^{\prime}\).
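As an illustration of this rewriting, the following sketch (ours) performs the replacement on a list of constraints; the tuple encoding and the helper name are hypothetical, and the implementation of \((x=t_{i})\) is abbreviated to a plain equality constraint.

```python
# Sketch of the proof of Lemma 58: assignment constraints (x = i) become equalities
# (x = t_i) to fresh variables, and the fresh variables are kept pairwise distinct.
def eliminate_singletons(constraints):
    """constraints: list of (relation, args, is_crisp); 'assign' has args (x, i)."""
    constants = sorted({args[1] for rel, args, _ in constraints if rel == "assign"})
    fresh = {i: ("t", i) for i in constants}           # one fresh variable per used constant
    out = []
    for rel, args, crisp in constraints:
        if rel == "assign":
            x, i = args
            out.append(("eq", (x, fresh[i]), crisp))   # (x = t_i), same softness as before
        else:
            out.append((rel, args, crisp))
    for a in range(len(constants)):                    # crisp (t_i != t_j) for all pairs
        for b in range(a + 1, len(constants)):
            out.append(("neq", (fresh[constants[a]], fresh[constants[b]]), True))
    return out
```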
We may thus focus on languages that are either constant or strictly negative: By Theorem 14, if \(\Gamma\) is not constant and not Horn, then \(\mathrm{CSP}(\Gamma)\) is NP-hard, and by Lemma 16, if \(\Gamma\) is not constant and not strictly negative but is Horn, then Lemma 58 applies. Recall in particular that every strictly negative language is Horn.
Our main focus will be on languages that are constant, as they allow a richer range of interesting behaviour, but even strictly negative equality languages yield non-trivial problems under singleton expansion.
Let us formalize the following simple consequence.
**Corollary 59**.: _If \(\Gamma\) is Horn, then \(\mathrm{CSP}(\Gamma_{c}^{+})\) and \(\mathrm{CSP}(\Gamma^{+})\) are in P for every \(c\)._
Proof.: Apply Lemma 58 to \(\Gamma^{\prime}=\Gamma\cup\{=,\neq\}\). The cost-preserving reduction in particular preserves satisfiability.
This finishes the proof of Lemma 55.
### 7.2 Strictly negative languages
Assume that \(\Gamma\) is strictly negative. We note that in this case, adding a single unary singleton relation makes the optimization problem hard.
**Lemma 60**.: MinCSP\((\neq,x=1)\) _is NP-hard._
Proof.: There is a cost-preserving reduction from Vertex Cover. Let \(G\) be a graph. Create an instance \(I\) of MinCSP\((\neq,x=1)\) as follows. The variable set is \(V(G)\). For every edge \(uv\in E(G)\), create a crisp constraint \((u\neq v)\). For every vertex \(v\in V(G)\), create a soft constraint \((v=1)\). On the one hand, let \(S\) be a vertex cover for \(G\). Then we may assign the variables of \(S\) distinct values from \(2,3,\ldots\) and value \(1\) to all remaining variables, thereby breaking the soft constraint \((v=1)\) for every \(v\in S\), but satisfying every crisp disequality constraint. Conversely, let \(\alpha\) be an assignment to \(I\) of finite cost. Let \(S\) be the set of variables \(v\in V(I)\) such that \(\alpha(v)\neq 1\). Then \(S\) is a vertex cover of \(G\): for every edge \(uv\), the crisp constraint forces \(\alpha(u)\neq\alpha(v)\), so at least one endpoint receives a value other than \(1\) and hence lies in \(S\).
On the other hand, if \(\Gamma\) is strictly negative then its singleton expansion has both an FPT algorithm and constant-factor approximation.
**Lemma 61**.: _Let \(\Gamma\) be a strictly negative language. Then MinCSP\((\Gamma^{+})\) is FPT and has a constant-factor approximation._
Proof.: Let \(I\) be an instance of MinCSP\((\Gamma^{+})\) with parameter \(k\). We branch over obvious obstructions. First, assume there is a variable \(v\in V(I)\) that is subject to contradictory assignment constraints \((v=i)\) and \((v=j)\), \(i\neq j\). Then we recursively branch into the two cases of either setting \(\alpha(v)=i\), in which case \((v=j)\) is violated (possibly along with further assignment constraints on \(v\)), or concluding that \((v=i)\) is violated and deleting those constraints. In both branches, the parameter \(k\) decreases. Next, form a tentative assignment \(\alpha\) by setting \(\alpha(v)=i\) for every variable \(v\) subject to an assignment constraint \((v=i)\), and setting all other variables to mutually distinct values, distinct from all values occurring in assignments in \(I\). If \(\alpha\) satisfies \(I\), then we are done. Otherwise, let \(R(x_{1},\ldots,x_{r})\) be a constraint violated by \(\alpha\). Since \(R\) is strictly negative, it has a definition as a conjunction of strictly negative clauses. Let \(C\) be such a clause that is violated by \(\alpha\). Then \(C\) is a disjunction over a finite number of negative literals \((x_{i}\neq x_{j})\), and for every such literal \((x_{i}\neq x_{j})\) there are assignment constraints \((x_{i}=a)\) and \((x_{j}=a)\) in \(I\) for some shared value \(a\). Then we can branch on either \(R\) being violated or an assignment constraint \((x_{i}=a_{i})\), \(i\in[r]\) being violated. This yields \(r+1=O(1)\) branches, and since \(k\) decreases in each branch the result is an FPT branching algorithm.
For the approximation, similar arguments apply. Let \(v\) be a variable subject to multiple contradictory assignment constraints. Let \(i\in\mathbb{N}\) be the constant for which the number of copies of a constraint \((v=i)\) is maximized. Let \(m_{1}\) be the number of assignment constraints \((v=j)\) for \(j\neq i\), and let \(m_{2}\) be the number of constraints \((v=i)\). Delete all constraints \((v=j)\) for \(j\neq i\), and delete \(\min(m_{1},m_{2})\) constraints \((v=i)\). Let \(X_{1}\) be the set of assignment constraints deleted in total in this phase. Then any assignment to the instance will violate at least half of the constraints in \(X_{1}\), and in the remaining instance every variable \(v\) occurs in assignment constraints \((v=i)\) for at most one value \(i\). Let \(\alpha\) be as above, selecting an assignment that satisfies all assignment constraints and assigns unique distinct values to all variables not occurring in assignment constraints. Then, as above, if there is a constraint \(R(X)\) not satisfied by \(\alpha\) then there is an explicit contradiction between \(R(X)\) and at most \(r(R)=O(1)\) assignment constraints. Since at least one of these constraints must be violated in any assignment, we get an \(O(1)\)-approximation by simply deleting both \(R(X)\) and one copy of every assignment constraint with scope intersecting \(X\). Repeating this until no contradictions remain gives an \(r(\Gamma)\)-approximation, where \(r(\Gamma)\) is the largest arity of a relation in \(\Gamma\).
We summarise the properties of singleton expansions of strictly negative languages as follows.
**Lemma 62** (Lemma 56, repeated).: _Let \(\Gamma^{\prime}\) be a singleton expansion of a finite strictly negative equality language \(\Gamma\). Then CSP\((\Gamma^{\prime})\) is in P and MinCSP\((\Gamma^{\prime})\) is NP-hard but FPT, and has a constant-factor approximation._
Proof.: CSP\((\Gamma^{\prime})\) is in P by Cor. 59 (or indeed by Lemma 61). For NP-hardness, we invoke Lemma 60. In particular since no proper relation is both strictly negative and constant and \(\Gamma\) contains at least one relation by assumption, Lemma 16 implies that \(\Gamma\) implements \(\neq\). Then MinCSP\((\Gamma_{1}^{+})\) is already NP-hard by Lemma 60, and adding further unary singleton relations to the language clearly does not change this fact. Finally, the FPT algorithm and constant-factor approximation of Lemma 61 also applies to MinCSP\((\Gamma^{\prime})\).
### 7.3 Constant languages
We now assume that \(\Gamma\) is constant but not strictly negative. For studying these cases, we need to employ the algebraic machinery for studying CSPs.
Let \(R\subseteq D^{r}\) be a relation, and \(f\colon D^{c}\to D\) an operation on a domain \(D\). We say that \(f\)_preserves_\(R\) if, for any \(t_{1},\ldots,t_{c}\in R\) we have \(f(t_{1},\ldots,t_{c})\in R\), where \(f\) is applied component-wise. Let \(\Gamma\) be a constraint language
over \(D\). A _polymorphism_ of \(\Gamma\) is an operation over \(D\) that preserves every relation \(R\in\Gamma\). A polymorphism \(f\colon D^{c}\to D\) is _essentially unary_ if there exists an index \(i\in[c]\) and an operation \(g\colon D\to D\) such that \(f(x)=g(x_{i})\) for every \(x\in D^{c}\). A polymorphism is _essential_ if it is not essentially unary. Polymorphisms are a standard tool in studying finite constraint languages, since they characterize the expressive power of the language up to pp-definability. Bodirsky [3] shows that under mild assumptions they can be used for infinite languages and languages over infinite domains as well.
**Theorem 63** ([3, Theorem 5.2.3]).: _Let \(\Gamma\) be an \(\omega\)-categorical structure. A relation \(R\) has a pp-definition in \(\Gamma\) if and only if \(R\) is preserved by all polymorphisms of \(\Gamma\)._
In particular, every equality constraint language is \(\omega\)-categorical. Furthermore, for any equality constraint language \(\Gamma\) and any \(c\in\mathbb{N}\), the language \(\Gamma^{+}_{c}\) is \(\omega\)-categorical (see Section 3.1 of Bodirsky [3]). We will not need to consider any subtleties around the language \(\Gamma^{+}\) which has an infinite number of additional relations, since all hardness claims over languages \(\Gamma^{+}\) will follow from finite subsets \(\Gamma^{+}_{c}\) of \(\Gamma^{+}\).
We give two simple statements for reference.
**Lemma 64**.: _Let \(\Gamma\) be an equality constraint language and \(c\in\mathbb{N}\) a constant, \(c\geq 2\). If \(\Gamma^{+}_{c}\) has no essential polymorphisms then \(\operatorname{CSP}(\Gamma^{+}_{c})\) is NP-hard._
Proof.: Define \(Q(a,b,c)\) over \(\mathbb{N}\) as the relation \((a=b\lor b=c)\). As noted by Bodirsky [3, Lemma 5.3.2], \(Q\) is preserved by all essentially unary operations but has no essential polymorphisms. As noted above, \(\Gamma^{+}_{c}\) is \(\omega\)-categorical; since all of its polymorphisms are essentially unary by assumption and \(Q\) is preserved by every essentially unary operation, Theorem 63 implies that \(\Gamma^{+}_{c}\) pp-defines \(Q\). However, \(\operatorname{CSP}(Q,x=1,x=2)\) is NP-hard: Note that \(\exists y_{1},y_{2}:(y_{1}=1)\wedge(y_{2}=2)\wedge Q(y_{1},x,y_{2})\) pp-defines the unary relation \(x\in\{1,2\}\). Thus it is enough that \(\operatorname{CSP}(Q)\) is NP-hard as a Boolean language, which is standard (e.g., see Exercise 3.24 in [20]).
We also give a direct proof of the following implementation result. Recall that \(\mathsf{ODD}_{3}\subset\mathbb{N}^{3}\) accepts any tuple that takes either one or three distinct values.
**Lemma 65**.: _Let \(\Gamma\) be a constant equality language. Let \(f\colon\mathbb{N}\to\mathbb{N}\) be the retraction defined by \(f(1)=1\) and \(f(x)=2\) otherwise. If \(\Gamma\) is not preserved by \(f\), then \(\Gamma^{+}_{2}\) implements \(\mathsf{ODD}_{3}\)._
Proof.: Let \(R\in\Gamma\) be a relation not preserved by \(f\), of arity \(r\). For a tuple \(\mathbf{t}\in\mathbb{N}^{r}\), write \(f(\mathbf{t})=(f(t_{1}),\ldots,f(t_{r}))\) for the result of applying \(f\) to \(\mathbf{t}\). Since \(\Gamma\) is constant, \(\mathbf{a}:=(1,\ldots,1)\in R\), and since \(R\) is not preserved by \(f\) there is a tuple \(\mathbf{c}\in R\) such that \(f(\mathbf{c})\notin R\). Note that we have a refinement order, where \(\mathbf{c}\) strictly refines \(f(\mathbf{c})\) which strictly refines \(\mathbf{a}\). Let \(\mathbf{c}\) be a least refined tuple in \(R\) such that \(f(\mathbf{c})\notin R\), and let \(r^{\prime}\) be the number of distinct values used in \(\mathbf{c}\). Without loss of generality, assume that \(\mathbf{c}\) uses values \(1\) through \(r^{\prime}\). Note that \(r^{\prime}\geq 3\). Define a relation \(R^{\prime}\) as \(R^{\prime}(x_{1},\ldots,x_{r^{\prime}})=R(x_{c_{1}},\ldots,x_{c_{r}})\). Then \(R^{\prime}\) accepts \((1,\ldots,1)\) and \((1,2,\ldots,r^{\prime})\), but no tuple between these two in the refinement order. Thus \(R^{\prime\prime}(x,y,z)=\exists x_{4},\ldots,x_{r^{\prime}}\colon R^{\prime}(x,y,z,x_{4},\ldots,x_{r^{\prime}})\) is an implementation of \(\mathsf{ODD}_{3}\).
We show that \(\textsc{MinCSP}(\mathsf{ODD}_{3},\Gamma_{2})\) is Hitting Set-hard.
**Lemma 66**.: _There is a cost-preserving reduction from Hitting Set to \(\textsc{MinCSP}(\mathsf{ODD}_{3},\Gamma_{2})\)._
Proof.: We give a variant of the hardness proof for \(\textsc{MinCSP}(\mathsf{ODD}_{3},=,\neq)\) (Lemma 22).
**Claim 66.1**.: _The language \(\{\mathsf{ODD}_{3},\Gamma_{2}\}\) pp-defines, for every arity \(\ell\), the relation \(R(x_{1},\ldots,x_{\ell})\) that accepts every tuple except \((1,\ldots,1)\) and \((2,\ldots,2)\)._
Proof of claim: Create a pp-definition with local variables \(z_{1}\), \(z_{2}\) and \(Y=\{y_{2},\ldots,y_{\ell}\}\) using the constraints
\[(z_{1}=1)\wedge(z_{2}=2)\wedge\mathsf{ODD}_{3}(x_{1},x_{2},y_{2})\wedge\bigwedge _{i=2}^{\ell-1}\mathsf{ODD}_{3}(y_{i},x_{i+1},y_{i+1})\wedge\mathsf{ODD}_{3}(z _{1},z_{2},y_{\ell}).\]
We claim that this pp-defines the relation \(R\). First, assume that \(x_{1}=x_{2}=\ldots=c\) for some value \(c\). Then \(y_{\ell}=c\) by induction. If \(c\in\{1,2\}\), then the formula is unsatisfiable, otherwise the assignment where \(y=c\) for every \(y\in Y\) satisfies the formula. Next, assume that \(x_{i}\neq x_{i+1}\). Then we may set \(y_{i+1}\) to any free value, and by induction we can set all variables \(y_{j}\) for \(j>i\) to distinct values not equal to \(1\) or \(2\) and not used by any variable \(x_{i}\). Then also the constraint \(\mathsf{ODD}_{3}(z_{1},z_{2},y_{\ell})\) is satisfied.
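The claim can also be checked mechanically for small arities. The following brute-force sketch (ours) verifies it for \(\ell=3\) on tuples over \(\{1,2,3\}\), using a small finite pool of witness values for the existentially quantified variables; it is only a sanity check, not a proof.

```python
# Brute-force verification of Claim 66.1 for arity 3.
from itertools import product

def odd3(a, b, c):
    return len({a, b, c}) in (1, 3)

def gadget_accepts(xs, witness_pool):
    l = len(xs)
    for ys in product(witness_pool, repeat=l - 1):     # ys[j] plays the role of y_{j+2}
        ok = odd3(xs[0], xs[1], ys[0])                 # ODD3(x_1, x_2, y_2)
        for i in range(2, l):                          # ODD3(y_i, x_{i+1}, y_{i+1})
            ok = ok and odd3(ys[i - 2], xs[i], ys[i - 1])
        if ok and odd3(1, 2, ys[-1]):                  # z_1 = 1, z_2 = 2, ODD3(z_1, z_2, y_l)
            return True
    return False

l = 3
pool = range(1, l + 4)                                 # enough fresh witness values
for xs in product(range(1, 4), repeat=l):
    expected = xs not in {(1,) * l, (2,) * l}          # R rejects only the two constant tuples
    assert gadget_accepts(xs, pool) == expected
print("Claim 66.1 agrees with brute force for l = 3 on tuples over {1,2,3}")
```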
Now let \(I=(n,\mathcal{F},k)\) be an instance of Hitting Set (using the same encoding as in Lemma 22). We create an instance \(I^{\prime}\) of \(\textsc{MinCSP}(\mathsf{ODD}_{3},\Gamma_{2})\) on variable set \(V=\{v_{1},\ldots,v_{n}\}\) as follows. First, create a soft constraint (\(v_{i}=1\)) for every \(i\in[n]\). Next, for every set \(F\in\mathcal{F}\) we construct the relation of Claim 66.1 over the variable set \(\{v_{i}\mid i\in F\}\). We let every such construction consist of crisp constraints only. We claim that this defines a cost-preserving reduction. On the one hand, let \(S\subseteq[n]\) be a hitting set for \(\mathcal{F}\). Then we assign \(\alpha(v_{i})=3\) for \(i\in S\) and \(\alpha(v_{i})=1\) otherwise. This satisfies every constraint of \(I^{\prime}\) except the soft constraints (\(v_{i}=1\)) for \(i\in S\)
i.e., the cost of \(\alpha\) is precisely \(|S|\). On the other hand, let \(\alpha\) be an assignment to \(I^{\prime}\) of some finite cost \(c\), and let \(S=\{i\in[n]\mid\alpha(v_{i})\neq 1\}\) be the indices corresponding to violated soft constraints in \(I^{\prime}\). Then \(S\) is a hitting set of \(\mathcal{F}\).
With the preliminaries above in place, we can provide the main algebraic characterization we use from previous work.
The following is a direct rephrasing of a result of Bodirsky et al. [5]. We use the phrase \(f\)_takes (at most) \(k\) values_ to mean that the image of \(f\) has cardinality (at most) \(k\). For the definition of quasilinear operation, see [5]; we will only need that quasilinear operations take at most two values.
**Theorem 67** (Theorem 8 of [5]).: _Let \(\Gamma\) be an equality language with at least one unary polymorphism that is not constant or injective. If \(\Gamma\) has any essential polymorphism, then the essential polymorphisms of \(\Gamma\) are described by one of the following cases._
1. _All quasilinear operations_
2. _For some_ \(k\geq 2\)_, all operations which take at most_ \(k\) _different values_
The remaining case looks as follows.
**Lemma 68**.: _Let \(\Gamma\) be a constant equality language that does not have any unary polymorphism that is not injective or a constant. Then \(\Gamma_{2}^{+}\) implements \(\mathsf{ODD}_{3}\). Furthermore, if \(\Gamma\) has an essential polymorphism then \(\Gamma\) is Horn._
Proof.: The first statement follows from Lemma 65. The second statement is Theorem 15 of Bodirsky et al. [5].
#### 7.3.1 At most two singletons
We first treat the case of \(\Gamma_{c}^{+}\) for \(c\leq 2\), since it behaves somewhat differently from the rest. We begin with a trivial observation.
**Proposition 69**.: _If \(\Gamma\) is a constant equality constraint language, then \(\textsc{MinCSP}(\Gamma_{1}^{+})\) is in P._
Thus we consider the language \(\Gamma_{2}^{+}\). Our main strategy will be a reduction to a Boolean MinCSP problem, all of which have been fully characterized [33, 13].
**Lemma 70**.: _Let \(\Gamma\) be a finite, constant equality language. One of the following applies._
* \(\Gamma_{2}^{+}\) _has a retraction to domain_ \(\{1,2\}\) _and_ \(\textsc{MinCSP}(\Gamma_{2}^{+})\) _is equivalent under cost-preserving reductions to_ \(\textsc{MinCSP}(\Delta)\) _for some Boolean language_ \(\Delta\)__
* \(\Gamma\) _is Horn and_ \(\mathrm{CSP}(\Gamma_{2}^{+})\) _is in P, but_ \(\textsc{MinCSP}(\Gamma_{2}^{+})\) _is_ _Hitting Set-hard_
* \(\mathrm{CSP}(\Gamma_{2}^{+})\) _is NP-hard_
Proof.: First assume that \(\Gamma\) is preserved by the retraction \(f\) defined in Lemma 65. Define the language
\[\Gamma^{\prime}=\{R\cap\{1,2\}^{r(R)}\mid R\in\Gamma\}\]
as the slice of \(\Gamma\) that only uses two values, and interpret \(\Gamma^{\prime}\) as a language over domain \(\{1,2\}\). Let \(\Delta\) be the corresponding Boolean language, under the domain renaming \(1\mapsto 0\) and \(2\mapsto 1\), with the singleton relations (\(x=0\)) and (\(x=1\)) added to \(\Delta\). We claim that \(\textsc{MinCSP}(\Gamma_{2}^{+})\) and \(\textsc{MinCSP}(\Delta)\) are equivalent problems. Indeed, let \(I\) be an instance of \(\textsc{MinCSP}(\Gamma_{2}^{+})\) and let \(I^{\prime}\) be the instance of \(\textsc{MinCSP}(\Delta)\) resulting from replacing every constraint in \(I\) with the corresponding constraint over \(\Delta\). Let \(\varphi\) be an assignment to \(V(I)\). By applying the retraction \(f\) to \(\varphi\) followed by the domain renaming mapping, we get an assignment \(\varphi^{\prime}\colon V(I)\to\{0,1\}\) which violates at most as many constraints in \(I^{\prime}\) as \(\varphi\) violates in \(I\). Correspondingly, any assignment \(\varphi^{\prime}\colon V(I^{\prime})\to\{0,1\}\) can be used directly as an assignment to \(V(I)\), and violates precisely the same set of constraints in \(I\) as \(\varphi^{\prime}\) violates in \(I^{\prime}\). Clearly, the same reduction also works when interpreted as a reduction from \(\textsc{MinCSP}(\Delta)\) to \(\textsc{MinCSP}(\Gamma_{2}^{+})\).
Otherwise, if \(\Gamma\) is not preserved by \(f\) then by Lemma 65, \(\Gamma_{2}^{+}\) implements \(\mathsf{ODD}_{3}\) and \(\textsc{MinCSP}(\Gamma_{2}^{+})\) is at least Hitting Set-hard by Lemma 66, and the question is whether \(\mathrm{CSP}(\Gamma_{2}^{+})\) is in P. If \(\Gamma\) has no essential polymorphism, then \(\mathrm{CSP}(\Gamma_{2}^{+})\) is NP-hard by Lemma 64. Hence assume that \(\Gamma\) has some essential polymorphism. If \(\Gamma\) has a unary polymorphism that is not constant or an injection, then by Theorem 67, \(\Gamma\) is preserved by every quasilinear operation. However, the retraction \(f\) of Lemma 65 is quasilinear (see Bodirsky et al. [5]), so this case is impossible. In the remaining case, \(\Gamma\) is Horn by Lemma 68 and \(\mathrm{CSP}(\Gamma_{2}^{+})\) is in P by Cor. 59.
Finally, the possibilities for the last case here are quite limited.
**Lemma 71**.: _Let \(\Gamma\) be a constant equality language such that \(\Gamma_{2}^{+}\) has a retraction to domain \(\{1,2\}\) and let \(\Delta\) be the corresponding Boolean language. The following hold._
* _If_ \(\Delta\) _is positive conjunctive and connected, then_ \(\textsc{MinCSP}(\Gamma_{2}^{+})\) _is in P_
* _If_ \(\Delta\) _is positive conjunctive but not connected, then_ \(\mathrm{CSP}(\Gamma_{2}^{+})\) _is in P and_ \(\textsc{MinCSP}(\Gamma_{2}^{+})\) _is W[1]-hard but has a constant-factor approximation_
* _If_ \(\Delta\) _is not positive conjunctive but affine, then_ \(\mathrm{CSP}(\Gamma_{2}^{+})\) _is in P but_ \(\textsc{MinCSP}(\Gamma_{2}^{+})\) _is_ Nearest Codeword_-hard_
* _Otherwise_ \(\mathrm{CSP}(\Gamma_{2}^{+})\) _is NP-hard._
Proof.: Let \(\Gamma^{\prime}\) be the intersection of \(\Gamma\) with domain \(\{0,1\}\), interpreted as a Boolean language (as in Lemma 70). The possible cases for such languages follow from Bonnet et al. [13] and the structure of Post's lattice of co-clones. Specifically, \(\Gamma^{\prime}\) is 0-valid, 1-valid and preserved by negation. If \(\Gamma^{\prime}\) has no further polymorphism, then CSP(\(\Gamma_{2}^{+}\)) is NP-hard; if \(\Gamma^{\prime}\) is preserved by the 3-ary XOR operation, then CSP(\(\Gamma_{2}^{+}\)) is in P, but if \(\Gamma^{\prime}\) has no further polymorphism then MinCSP(\(\Gamma_{2}^{+}\)) is as hard to FPT-approximate as Nearest Codeword; and in every remaining case, \(\Gamma^{\prime}\) is positive conjunctive [13]. Finally, if a Boolean language \(\Gamma^{\prime}\) is positive conjunctive, then either every relation in \(\Gamma^{\prime}\) is connected, in which case the relation is submodular and MinCSP(\(\Gamma_{2}^{+}\)) is in P, or \(\Gamma^{\prime}\) implements \(R_{=,=}\) and MinCSP(\(\Gamma_{2}^{+}\)) is W[1]-hard. However, as in Lemma 61 we can split every relation \(R\in\Gamma^{\prime}\) into separate equality constraints, and reduce to st-Min Cut, up to a constant-factor approximation loss.
For completeness, let us provide example languages for each of the cases of Lemma 71. If \(\Gamma=\{Q\}\) where \(Q(a,b,c)\equiv(a=b\lor b=c)\) is from Lemma 64, then \(\Gamma\) is closed under the retraction to domain \(\{1,2\}\) but CSP(\(\Gamma_{2}^{+}\)) is NP-hard. Next, consider the relation \(R(a,b,c,d)\) which accepts any assignment where every block has even cardinality (i.e., \(R\) accepts tuples \((1,1,1,1)\), \((1,1,2,2)\), \((1,2,1,2)\) and \((1,2,2,1)\)). Then MinCSP(\(\Gamma_{2}^{+}\)) corresponds to MinCSP(\(\Delta\)) for the Boolean language \(\Delta=\{x=0,x=1,a+b+c+d=0\ (\text{mod}\ 2)\}\) which is Nearest Codeword-hard [13]. The final two cases are represented by the languages \(\Gamma=\{R_{=,=}\}\) and \(\Gamma=\{=\}\). As a final example, consider \(\Gamma=\{R\}\) where \(R(a,b,c,d)\) is the 4-ary relation that accepts any tuple \((a,b,c,d)\) where \(a=b\) and \(|\{a,b,c,d\}|\neq 3\). Then \(\Gamma\) is not itself positive conjunctive, but \(\Gamma\) is constant and closed under the retraction \(f\colon\mathbb{N}\to\{1,2\}\) and the Boolean slice \(\Gamma^{\prime}\) of \(\Gamma\) simply contains the relation \((a=b)\) with two additional irrelevant arguments.
#### 7.3.2 At least three singletons
The cases where \(c\geq 3\) (and the case of \(\Gamma^{+}\)) are more regular.
**Lemma 72**.: _Let \(c\geq 3\) and let \(\Gamma\) be a constant equality language. One of the following applies._
* \(\textsc{MinCSP}(\Gamma_{c}^{+})\) _is equivalent to_ \(\textsc{MinCSP}(\Delta_{c}^{+})\) _where_ \(\Delta\) _is the_ \(c\)_-slice of_ \(\Gamma\)_, and_ \(\Delta\) _is positive conjunctive_
* \(\Gamma\) _is Horn but the first case does not apply;_ \(\mathrm{CSP}(\Gamma_{c}^{+})\) _is in P, but_ \(\textsc{MinCSP}(\Gamma_{c}^{+})\) _is_ Hitting Set_-hard_
* _Neither case applies, and_ \(\mathrm{CSP}(\Gamma_{c}^{+})\) _is NP-hard._
_Similarly, one of the following applies._
* \(\Gamma\) _is positive conjunctive_
* \(\Gamma\) _is Horn but not positive conjunctive;_ \(\mathrm{CSP}(\Gamma^{+})\) _is in P, but_ \(\textsc{MinCSP}(\Gamma^{+})\) _is_ Hitting Set_-hard_
* _Neither case applies, and_ \(\mathrm{CSP}(\Gamma^{+})\) _is NP-hard._
Proof.: Consider the polymorphisms of \(\Gamma_{c}^{+}\). They are precisely the intersection of the polymorphisms of \(\Gamma\) and of \(\Gamma_{c}\), i.e., every polymorphism \(f\) of \(\Gamma\) such that \(f(i,\ldots,i)=i\) for every \(i\in[c]\). Then any such \(f\) is an operation that takes at least \(c\) values. First, assume that \(\Gamma\) has at least one unary polymorphism that is not constant or injective, so that Theorem 67 applies. In this case, either \(\Gamma_{c}^{+}\) has no essential polymorphisms or \(\Gamma\) is preserved by every operation that takes \(c\) values. In the former case, CSP(\(\Gamma_{c}^{+}\)) is NP-hard by Lemma 64. In the latter case, it first of all follows that \(\Gamma_{c}^{+}\) is preserved by a retraction to domain \([c]\). Furthermore, consider the resulting slice language
\[\Gamma^{\prime}=\{R\cap[c]^{r(R)}\mid R\in\Gamma\}\]
(where \(r(R)\) denotes the arity of \(R\)) as a language over domain \([c]\). Then every relation \(R\in\Gamma^{\prime}\) is preserved by every operation over \([c]\). The only such relations are \(=\) and relations pp-defined over \(=\). Thus, the slice
language \(\Gamma^{\prime}\) is positive conjunctive. Otherwise, by assumption Lemma 68 applies, so that \(\Gamma^{+}_{c}\) implements \(\mathsf{ODD}_{3}\) and \(\textsc{MinCSP}(\Gamma^{+}_{c})\) is Hitting Set-hard by Lemma 66. Furthermore, either \(\Gamma\) has no essential polymorphisms or \(\Gamma\) is Horn. In the former case \(\textsc{CSP}(\Gamma^{+}_{c})\) is NP-hard by Lemma 64; in the latter case \(\textsc{CSP}(\Gamma^{+}_{c})\) is in P by Cor. 59.
The case of \(\Gamma^{+}\) is similar. Clearly the hardness cases for \(\Gamma^{+}_{c}\), \(c\geq 3\) carry over to \(\Gamma^{+}\). Hence if \(\Gamma\) is not Horn then \(\textsc{CSP}(\Gamma^{+})\) is NP-hard. Furthermore, let \(R\in\Gamma\) be any relation not preserved by all operations. Then every tuple of \(R\) uses at most \(r(R)\) distinct values, and \(R\cap[r(R)]^{r(R)}\) is not preserved by all operations. Hence \(\textsc{MinCSP}(\Gamma^{+}_{r(R)})\), and thus \(\textsc{MinCSP}(\Gamma^{+})\), is already Hitting Set-hard by the above. The only remaining case is that \(\Gamma\) itself is pp-definable over \(\{=\}\), i.e., positive conjunctive.
We also observe the tractable cases for positive conjunctive languages.
**Lemma 73**.: _Let \(\Gamma\) be a finite, positive conjunctive equality language, possibly trivial, and let \(\Gamma^{\prime}\) be a singleton expansion of \(\Gamma\) with at least three singleton relations. Then \(\textsc{CSP}(\Gamma^{\prime})\) is in P and \(\textsc{MinCSP}(\Gamma^{\prime})\) has a constant-factor approximation. Furthermore the following hold._
1. _If_ \(\Gamma\) _is trivial, then_ \(\textsc{MinCSP}(\Gamma^{\prime})\) _is in P_
2. _If_ \(\Gamma\) _is connected but not trivial, then_ \(\textsc{MinCSP}(\Gamma^{\prime})\) _is NP-hard but FPT_
3. _If_ \(\Gamma\) _is not connected, then_ \(\textsc{MinCSP}(\Gamma^{\prime})\) _is W[1]-hard._
Proof.: Since every conjunctive language is Horn, \(\textsc{CSP}(\Gamma^{\prime})\) is in P by Cor. 59. If \(\Gamma\) is trivial, then instances of \(\textsc{MinCSP}(\Gamma^{\prime})\) consist of trivial constraints (always satisfied, or never satisfied) which can be discarded, and unary constraints (\(x=i\)). An optimal assignment to such an instance can be computed easily with a greedy algorithm. Otherwise, let \(R\in\Gamma\) be non-trivial. Since \(R\) is positive conjunctive, \(R\) is Horn and not strictly negative and \(R\) implements \(=\) by Lemma 16. Since \(\textsc{MinCSP}(=,\Gamma_{3})\) captures Edge Multiway Cut with three terminals, which is NP-hard, it follows that \(\textsc{MinCSP}(\Gamma^{\prime})\) is NP-hard. If \(\Gamma\) is positive conjunctive and connected, then \(\Gamma\cup\{=,\neq\}\) is split, hence \(\textsc{MinCSP}(\Gamma^{\prime},=,\neq)\) is FPT by Theorem 20 and Lemma 58. Finally, let \(R\in\Gamma\) be a relation that is positive conjunctive but not connected. Then \(R\) is defined by a conjunction of positive literals (\(x_{i}=y_{i}\)), and the graph induced over its arguments has at least two non-trivial connected components since otherwise \(R\) is split. Then \(R\) implements the relation \(R^{\prime}(x_{1},y_{1},x_{2},y_{2})\equiv(x_{1}=y_{1})\wedge(x_{2}=y_{2})\) using existential quantification over irrelevant variables and \(\textsc{MinCSP}(\Gamma^{\prime})\) is W[1]-hard by a simple reduction from Split Paired Cut (see Section 4.2).
For approximation, since \(\Gamma\) is finite there is a finite bound \(d\in\mathbb{N}\) such that every relation \(R\in\Gamma\) is a conjunction of at most \(d\) terms. Then we can split every constraint
\[R(x_{1},\ldots,x_{r})=(x_{i_{1}}=x_{j_{1}})\wedge\ldots\wedge(x_{i_{d}}=x_{j_{d}})\]
in an instance of \(\textsc{MinCSP}(\Gamma^{\prime})\) into the \(d\) separate constraints (\(x_{i_{p}}=x_{j_{p}}\)), \(p\in[d]\). This increases the cost of the instance at most by a factor \(d\). Now we have an instance with just equality and assignments, which reduces to Edge Multiway Cut which has a constant-factor approximation [18].
## 8 Discussion
We classify the parameterized complexity of \(\textsc{MinCSP}(\Gamma)\) for all finite equality constraint languages \(\Gamma\) and their singleton expansions. In particular, we show that for an equality language \(\Gamma\), \(\textsc{MinCSP}(\Gamma)\) is in FPT if \(\Gamma\) only contains split relations and \(\textsc{NEQ}_{3}\), it is W[1]-hard and fpt-approximable within a constant factor if \(\Gamma\) is negative but contains a relation that is neither split nor \(\textsc{NEQ}_{3}\), and is Hitting Set-hard otherwise. We also show that the complications introduced by singleton expansion can be handled by well-known fpt algorithms.
Note that singleton expansion is distinct from adding constants to the underlying structure (\(\mathbb{N},=\)), as the latter creates a much more powerful language. For example, just adding a single constant to the structure allows first-order reducts to implement arbitrary Boolean relations (using, e.g., \(v=0\) and \(v\neq 0\) as the two domain values). Hence, studying \(\textsc{MinCSP}(\Gamma)\) for reducts of the structure (\(\mathbb{N},=\)) with any finite number of constants added generalizes the task of producing a parameterized dichotomy for \(\textsc{MinCSP}(\Gamma)\) over all finite-domain languages \(\Gamma\), which is a highly challenging task. There is a CSP dichotomy for this setting [11], but it explicitly reduces back to the CSP dichotomy for finite-domain languages, and no such result is known for parameterized complexity. The complexity of \(\textsc{MinCSP}(\Gamma)\) for reducts of (\(\mathbb{N},=\)) extended with finitely many constants is therefore an interesting but challenging question. We note that it is open even for the case of (\(\mathbb{N},=\)) plus a single constant, as mentioned above.
The natural next step is to study _temporal constraint languages_ where the relations are first-order definable over (\(\mathbb{Q};<\)). Note that relations \(\leq\) and \(\neq\) belong to this class, and \(\textsc{MinCSP}(\leq,\neq)\) is equivalent to Symmetric
Directed Multicut, an important open problem on digraphs [27, 35]. One way forward would be to study _constant-factor fpt approximation_ instead since there the classes of tractable languages are preserved by (equality-free) pp-definitions, and thus one can employ powerful tools from the algebraic approach to CSP.
|
2306.07203 | $p$-adic Holography from the Hyperbolic Fracton Model | We reveal a low-temperature duality between the hyperbolic lattice model
featuring fractons and infinite decoupled copies of Zabrodin's $p$-adic model
of AdS/CFT. The core of the duality is the subsystem symmetries of the
hyperbolic fracton model, which always act on both the boundary and the bulk.
These subsystem symmetries are associated with fractal trees embedded in the
hyperbolic lattice, which have the same geometry as Zabrodin's model. The
fracton model, rewritten as electrostatics theory on these trees, matches the
equation of motion of Zabrodin's model. The duality extends from the action to
lattice defects as $p$-adic black holes. | Han Yan, Christian B. Jepsen, Yaron Oz | 2023-06-12T16:07:37Z | http://arxiv.org/abs/2306.07203v2 | # \(p\)-adic Holography from the Hyperbolic Fracton Model
###### Abstract
We reveal a low-temperature duality between the hyperbolic lattice model featuring fractons and infinite decoupled copies of Zabrodin's \(p\)-adic model of AdS/CFT. The core of the duality is the subsystem symmetries of the hyperbolic fracton model, which always act on both the boundary and the bulk. These subsystem symmetries are associated with fractal trees embedded in the hyperbolic lattice, which have the same geometry as Zabrodin's model. The fracton model, rewritten as electrostatics theory on these trees, matches the equation of motion of Zabrodin's model. The duality extends from the action to lattice defects as \(p\)-adic black holes.
## I Introduction
The contemporary landscape of theoretical physics is marked by an exhilarating fusion of quantum many-body systems, quantum gravity, and quantum information, guided by the holographic principle and the anti-de Sitter/conformal field theory (AdS/CFT) correspondence [1, 2]. This duality not only provides a profound insight into quantum gravity but also offers a robust tool for addressing condensed matter problems [3, 4]. Additionally, the development of numerous tensor-network holographic toy models reveals the quantum error-correcting features of holographic entanglement [5, 6, 7].
Fracton states of matter [8, 9, 10, 11, 12, 13, 14, 15, 16], closely tied with Lifshitz gravity [17, 18], have recently been incorporated into the AdS/CFT landscape. Prior investigations [19, 20, 21] have indeed revealed that various holographic information properties of fracton models in hyperbolic space are consistent with those of holographic states built via tensor networks [6, 7, 22]. However, despite these advancements, essential questions regarding the effective AdS bulk theory and corresponding boundary CFT for these states remain a mystery.
In this work, we help answer these questions by establishing a connection between the \(p\)-adic Zabrodin model [23], which is arguably the simplest toy model of AdS/CFT, and hyperbolic lattice models featuring fractons. Notably, the lattice model's subsystem symmetry is described by fractal trees identical to the \((p+1)\)-fractal tree in the Zabrodin model, forming the basis for the connection. The connection between the two models can be established from the action and information properties to \(p\)-adic black holes (BHs) as lattice defects. This discovery builds a valuable framework for understanding physics on both sides of the correspondence. Furthermore, it enriches the AdS/condensed matter theory (CMT) program [3, 4, 24] by presenting an example of condensed matter theory within the bulk, rather than conventionally on the boundary.
## II Zabrodin's model
Zabrodin's \(p\)-adic model [23], a simple AdS/CFT toy model, uses Gaussian fields on the vertices of an infinite \((p+1)\)-regular tree \(\mathcal{T}_{p}\) to discretize the AdS bulk. This tree is also known as the Bruhat-Tits tree or \((p+1)\)-degree fractal tree, as shown in Fig. 2(a) for a 3-regular tree. Its action is given by
\[S_{\text{bulk}}=\frac{\gamma}{2}\sum_{v\in\mathcal{T}_{p}}\sum_{v^{\prime} \sim v}(\phi_{v}-\phi_{v^{\prime}})^{2}\,, \tag{1}\]
where the inner sum runs over the neighbor vertices \(v^{\prime}\) of \(v\). The model's equation of motion is given in terms of a graph Laplacian:
\[\triangle\phi_{v}\equiv(p+1)\phi_{v}-\sum_{v^{\prime}\sim v}\phi_{v^{\prime} }=0. \tag{2}\]
Focusing on prime \(p\), Zabrodin established a duality between this discrete theory and a continuous boundary theory,
\[S_{\text{boundary}}=\frac{\gamma}{4}\frac{p(p-1)}{p+1}\int_{\mathbb{Q}_{p}} dx\,dy\,\frac{(\phi(x)-\phi(y))^{2}}{|x-y|_{p}^{2}}\,, \tag{3}\]
where \(\mathbb{Q}_{p}\) is the field of \(p\)-adic numbers and \(|\cdot|_{p}\) is the \(p\)-adic norm [25, 26]. In the limit where source terms in (1) approach the tree boundary, the partition functions of (1) and (3) become identical. The theory (3) had previously been introduced as an effective theory for the endpoints of \(p\)-adic strings, from which the \(p\)-adic Veneziano amplitude [27] can be derived [28, 29]. With his model (3), Zabrodin had discovered the worldsheet theory of \(p\)-adic strings. With the advent of \(p\)-adic AdS/CFT [30, 31], Zabrodin's duality was reinterpreted as the first example of \(p\)-adic holography. From this new perspective, the tree \(\mathcal{T}_{p}\) is instead viewed as the \(p\)-adic version of the AdS bulk, and \(\mathbb{Q}_{p}\) is the boundary space in which \(p\)-adic CFTs live.
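As a concrete numerical illustration (ours, not taken from [23]), one can truncate the tree at finite depth, prescribe boundary values on the leaves, and solve the discrete equation of motion (2) in the interior by a linear solve; the sketch below assumes numpy and networkx, and the helper name is hypothetical.

```python
# Solve the discrete EOM (p+1) phi_v = sum of neighboring phi on a depth-truncated tree.
import numpy as np
import networkx as nx

def truncated_tree(p, depth):
    """Finite piece of the (p+1)-regular tree: the root has p+1 children, other vertices p."""
    G = nx.Graph()
    G.add_node(0)
    frontier, next_id = [0], 1
    for _ in range(depth):
        new_frontier = []
        for v in frontier:
            for _ in range(p + 1 if v == 0 else p):
                G.add_edge(v, next_id)
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    return G, frontier                     # frontier = leaves, playing the role of the boundary

p, depth = 3, 4
G, boundary = truncated_tree(p, depth)
rng = np.random.default_rng(0)
phi_boundary = {v: rng.normal() for v in boundary}

interior = [v for v in G if v not in phi_boundary]
idx = {v: i for i, v in enumerate(interior)}
A, b = np.zeros((len(interior), len(interior))), np.zeros(len(interior))
for v in interior:
    A[idx[v], idx[v]] = G.degree(v)        # (p+1) phi_v ...
    for w in G.neighbors(v):
        if w in idx:
            A[idx[v], idx[w]] -= 1.0       # ... minus interior neighbors
        else:
            b[idx[v]] += phi_boundary[w]   # ... and fixed boundary values
phi = np.linalg.solve(A, b)

residual = max(abs(G.degree(v) * phi[idx[v]]
                   - sum(phi[idx[w]] if w in idx else phi_boundary[w]
                         for w in G.neighbors(v))) for v in interior)
print(f"max violation of the equation of motion: {residual:.2e}")
```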
## III Hyperbolic fracton model with Ising perturbations
In this work, we discuss the Hyperbolic Fracton Model (HFM) with ferromagnetic Ising interactions (HFM+Ising). These models are placed on hyperbolic lattices, tessellations of a 2D plane with constant negative curvature by polygons. The lattices are characterized by Schläfli symbols, pairs of integers \((m,n)\), which represent tessellations of \(m\)-sided regular polygons with \(n\) polygons meeting at each vertex. This configuration necessitates the condition \(m^{-1}+n^{-1}<1/2\), permitting infinitely many different hyperbolic tessellations. An example with Schläfli symbol \((5,6)\) is shown in Fig. 1. Our study is primarily concerned with tessellations having an even \(n\), and uses the \((5,6)\) lattice as the concrete example.
In the HFM, the scalar degrees of freedom \(Z_{p_{i}}\) are located at the centers of polygons labeled by \(p_{i}\)'s. The total Hamiltonian is given by
\[\mathcal{H}_{\text{HFM-Ising}}=\mathcal{H}_{\text{HFM}}+\mathcal{H}_{\text{ Ising}}. \tag{4}\]
It includes the HFM part \(\mathcal{H}_{\text{HFM}}\) and the Ising part \(\mathcal{H}_{\text{Ising}}\). The Ising part is the familiar Ising model with ferromagnetic nearest-neighbor interactions,
\[\mathcal{H}_{\text{Ising}}=\alpha\sum_{\langle i,j\rangle}(Z_{p_{i}}-Z_{p_{j}} )^{2}. \tag{5}\]
We now elaborate on the HFM component, \(\mathcal{H}_{\text{HFM}}\). The HFM extends the model initially presented in Refs. [19, 20, 21], with excitations analogous to fractons in gapped fracton orders. Its Hamiltonian is:
\[\mathcal{H}_{\text{HFM}}=U\sum_{v\in\text{vertices}}\left(Z_{p_{1}}-Z_{p_{2}}+ \cdots-Z_{p_{n}}\right)^{2}, \tag{6}\]
where \(p_{1},\ldots,p_{n}\) indicate the \(n\) sites encircling vertex \(v\) in a clockwise sequence (refer to Fig. 1 for an example).
Alternatively, considering \(Z_{p_{i}}\) as \(\mathbb{Z}_{N}\) numbers, one can replace \((Z_{p_{1}}-Z_{p_{2}}+\cdots-Z_{p_{n}})^{2}\) with \((\frac{N}{\pi})^{2}\sin^{2}\left[\frac{\pi}{N}(Z_{p_{1}}-Z_{p_{2}}+\cdots-Z_{p_{n}})\right]\). In the case where \(Z_{p_{i}}\) adopts \(\mathbb{Z}_{2}\) values, the model has been examined for the \((5,4)\) tessellation in Refs. [19, 20], representing a simplified extreme of the general HFM.
A fracton is an excitation of the vertex term. Note that by changing the value of a single \(Z_{p_{i}}\), we create an \(m\)-multipole of fractons instead of a dipole, which leads to the immobility of a single fracton in the system. This study concentrates on the low-energy sector in the limit \(U\gg\alpha\) and \(U\gg T\) (the sector devoid of fracton excitations). We show that the HFM+Ising model is equivalent to an infinite set of \(p\)-adic Zabrodin models, and we further explore its holographic information properties.
## IV Emergent Zabrodin model from HFM
We first examine \(\mathcal{H}_{\text{HFM}}\) without Ising interactions, which will be shown to match the tensionless-string limit of Zabrodin's model. The primary result is the model's duality to multiple copies of the electrostatics theory on distinct \(n/2\)-degree fractal trees (\(n/2\)-regular trees). In the dual model, ground states on each tree satisfy the EOM of the Zabrodin model (Eq. (2)). Additionally, we can verify the holographic information characteristics of the model within this limit.
The HFM, described by Eq. (6), exhibits a large ground state degeneracy, a feature dictated by its subsystem symmetries that commute with the Hamiltonian. The key to finding them is to notice that changing an even number of neighboring variables \(Z_{p_{i}}\) at a given vertex by the same constant \(C\) leaves the vertex term invariant. This results in subsystem symmetries defined by the boundaries of \(n/2\)-degree fractal trees, pivotal to the underlying physics. Two examples, represented by blue and red trees in thick lines, are depicted in Fig. 1.
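This invariance is elementary to check numerically: the sketch below (ours, assuming numpy) shifts even-length blocks of consecutive \(Z\)'s around a single vertex by a constant and confirms that the alternating vertex sum is unchanged.

```python
# Check: shifting an even number of consecutive Z's around a vertex by the same
# constant C leaves the alternating vertex term Z_1 - Z_2 + ... - Z_n unchanged.
import numpy as np

n, C = 6, 2.7                                          # n = 6 as in the (5,6) lattice
rng = np.random.default_rng(0)
Z = rng.normal(size=n)
signs = np.array([(-1) ** i for i in range(n)])        # + - + - + -
vertex_term = signs @ Z
for start in range(n):
    for length in range(2, n + 1, 2):                  # even-length consecutive blocks
        shifted = Z.copy()
        block = [(start + j) % n for j in range(length)]
        shifted[block] += C
        assert np.isclose(signs @ shifted, vertex_term)
print("even consecutive shifts leave the vertex term invariant")
```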
To construct these trees more formally, begin at an arbitrarily chosen vertex and select \(n/2\) nonadjacent edges from the total of \(n\) edges, keeping in mind that we are focusing on even-\(n\) tessellations. Extend the chosen edges to the neighboring vertices and repeat the selection process, making sure to include the previously chosen edges, thus making the choice unique. Continuing this process, we construct a degree-\(n/2\) fractal tree embedded within the hyperbolic lattice, extending the selected edges indefinitely. This method yields infinitely many trees on the lattice in the thermodynamic limit.
We then define the _fractal-tree wedge_ as the part of the lattice bounded by the edges of a tree, a pattern also seen in hyperbolic fracton orders [32]. In Fig. 1, the blue and red shaded regions serve as examples. Given a classical state, denoted as \(\prod_{p_{i}}|Z_{p_{i}}\rangle\), the associated subsystem symmetry for a particular wedge \(w\) can be defined via the following operation:
\[X_{\text{wedge }w}^{\dagger}(C)\ket{Z_{p_{i}}}=\ket{Z_{p_{i}}+C},\quad\forall p _{i}\in w, \tag{7}\]
while \(X_{\text{wedge }w}^{\dagger}(C)\) acts as the identity operator on \(\ket{Z_{p_{j}}}\) for all plaquettes \(p_{j}\) outside \(w\).
We will now formulate the dual model to the HFM, which aids our understanding of the subsystem symmetries and illuminates the connection to the Zabrodin \(p\)-adic model.
Figure 1: \((5,6)\) tessellation of the hyperbolic plane on the Poincaré disk: all polygons on the disk are identical, but look smaller when drawn farther from the center. The thick blue and red lines are fractal trees, and the shaded regions are associated with different subsystem symmetries.

We start by defining \(1d\) vector DOFs, denoted as \(\mathbf{E}_{e_{i}}\), which reside on the lattice's edges, \(e_{i}\), and point along them. Each edge, \(e\), can be described in two ways: it is sandwiched by the plaquettes \(p_{i},\ p_{j}\), denoted as \(e=p_{i,j}\), and it has vertices \(v_{k},\ v_{l}\) at its endpoints, denoted as \(e=v_{k,l}\). For an edge as depicted in Fig. 2a, we define:
\[\mathbf{E}_{v_{1,2}}\equiv(Z_{p_{1}}-Z_{p_{2}})\mathbf{\hat{r}}_{v_{1,2}}=(-Z_{p_{1}}+Z_ {p_{2}})\mathbf{\hat{r}}_{v_{2,1}}=\mathbf{E}_{v_{2,1}}. \tag{8}\]
Note that the order \(v_{1}\to p_{1}\to v_{2}\to p_{2}\) around the edge is in a clockwise direction (see Appendix for the counting of DOFs).
At a vertex, the term \((Z_{p_{1}}-Z_{p_{2}}+\cdots-Z_{p_{q}})^{2}\) can be recast in terms of \(\mathbf{E}\) as illustrated in Fig. 2b for the special case \(q=6\),
\[(Z_{p_{1}}-Z_{p_{2}}+\cdots-Z_{p_{6}})^{2} \tag{9}\] \[= (\mathbf{E}_{v_{0,1}}\cdot\mathbf{\hat{r}}_{v_{0,1}}+\mathbf{E}_{v_{0,3}}\cdot \mathbf{\hat{r}}_{v_{0,3}}+\mathbf{E}_{v_{0,5}}\cdot\mathbf{\hat{r}}_{v_{0,5}})^{2}\] \[= (\nabla\cdot\mathbf{E})^{2}\text{ on the three blue edges}\]
Notably, the \(\mathbf{E}\) fields associated with an \(n/2\)-degree fractal tree correlate, while remaining uncoupled from the DOFs on other trees, except for an exact inter-tree constraint at each vertex,
\[(\nabla\cdot\mathbf{E})^{2}\text{ on blue edges}=(\nabla\cdot\mathbf{E})^{2}\text{ on red edges}. \tag{10}\]
But in the low-energy sector where fracton excitations are absent, these constraints no longer couple the fractal trees to each other.
Let's delve into the physics on a single fractal tree. The Hamiltonian on the tree of interest is
\[\mathcal{H}_{\text{tree}}=U\sum_{v_{i}\text{ on }\mathcal{T}}(\nabla\cdot\mathbf{E})^{2}. \tag{11}\]
This Hamiltonian unveils that the model equates to an electrostatics theory on the tree, involving solely the electric field sector of electrodynamics. It imposes an energy cost on charge excitations, thus the low-energy sector corresponds to charge-free electric field configurations. The subsystem symmetry in the dual model is expressed as:
\[X^{\dagger}_{\text{wedge }w}(C)\left|\mathbf{E}_{e_{i}}\right>=\left|\mathbf{E}_{e_{i} }+C\mathbf{\hat{r}}_{e_{i}}\right>, \tag{12}\] \[\text{ for edges }e_{i}\text{ on the boundary of wedge }w.\]
In the context of electrostatics, this has a clear interpretation: the injection of an electric field string from one end on the boundary to the other. Given the absence of loops on a fractal tree, this is the sole method of flux injection without charge creation, as depicted in Fig. 3a. Thus we identify a set of multiple copies of electrostatics on \(n/2\)-degree fractal trees as a dual model to the HFM.
This identification aids in elucidating certain holographic properties such as the AdS-Rindler reconstruction. Using Fig. 3b as an example, an observer measuring boundary electric fields \(E_{1}\) to \(E_{4}\) can reconstruct bulk DOFs up to \(E_{7}\) (minimal covering surface) using the charge-free condition at every vertex, but not beyond (see Appendix for details).
Furthermore, we can recast the fractal-tree electrostatics using the electric field potential, whose low energy sector explicitly obeys the EOM of the Zabrodin \(p\)-adic model. We assign electric potential \(\phi_{v_{i}}\)'s to the vertices. The electric field between vertices \(v_{1},\ v_{2}\) is \(\mathbf{E}_{v_{1,2}}=(\phi_{v_{1}}-\phi_{v_{2}})\mathbf{\hat{r}}_{v_{1,2}}\). The Hamiltonian (11) becomes
\[\mathcal{H}_{\text{tree}} =U\sum_{v_{i}\text{ on }\mathcal{T}}\left((\frac{n}{2})\phi_{v_{i}}- \phi_{v_{i,1}}-\cdots-\phi_{v_{i,n/2}}\right)^{2} \tag{13}\] \[\equiv U\sum_{v_{i}\text{ on }\mathcal{T}}\left(\triangle\phi \right)^{2}.\]
Here, \(v_{i,j}\) denote the \(n/2\) vertices next to \(v_{i}\), and \(\triangle\) denotes the Laplacian operator on the tree. Hence, the ground states of the dual model to HFM are precisely the solutions to the EOM of the Zabrodin model (Eq. (2)), with the prime \(p\) of Zabrodin's model being related to the second integer \(n\) of the Schlafli symbol via the relation \(n=2p+2\). In the case when \((n-2)/2\) is not a prime, the bulk HFM does not change behaviour substantially, but the dual tree model no longer admits a convenient holographic description in terms of \(p\)-adic numbers; what is lost is the multiplicative property of the norm: \(|ab|_{p}=|a|_{p}|b|_{p}\).
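As a small numerical illustration of this correspondence (not taken from the paper), one can truncate a regular tree at finite depth, fix the potentials on its boundary, and minimize \(U\sum_{v}(\triangle\phi)^{2}\) over the interior; the minimizer is exactly the harmonic configuration \(\triangle\phi=0\) demanded by the Zabrodin EOM. The sketch below assumes a depth-4 tree of degree 3 and random boundary values, both arbitrary choices.

```python
import numpy as np

def build_tree(degree=3, depth=4):
    """Finite truncation of a regular tree: the root has `degree` children,
    every other interior vertex has degree-1 children (so full degree = `degree`)."""
    parent = {0: None}
    frontier, next_id = [0], 1
    for _ in range(depth):
        new_frontier = []
        for v in frontier:
            for _ in range(degree if v == 0 else degree - 1):
                parent[next_id] = v
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    return parent, frontier          # frontier = boundary (leaf) vertices

parent, leaves = build_tree(degree=3, depth=4)
N = len(parent)

# Graph Laplacian L = D - A; (L @ phi)[v] plays the role of (triangle phi)_v in Eq. (13).
L = np.zeros((N, N))
for v, p in parent.items():
    if p is not None:
        L[v, v] += 1.0; L[p, p] += 1.0
        L[v, p] -= 1.0; L[p, v] -= 1.0

leaf_set = set(leaves)
interior = np.array([v for v in range(N) if v not in leaf_set])
boundary = np.array(leaves)

# Fix arbitrary boundary potentials; the minimum of U * sum_{v interior} (triangle phi)^2
# is attained when (triangle phi)_v = 0 at every interior vertex (harmonic extension).
rng = np.random.default_rng(0)
phi = np.zeros(N)
phi[boundary] = rng.normal(size=boundary.size)
phi[interior] = np.linalg.solve(L[np.ix_(interior, interior)],
                                -L[np.ix_(interior, boundary)] @ phi[boundary])

print("max |triangle phi| in the bulk:", np.abs((L @ phi)[interior]).max())   # ~ 1e-15
```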
Figure 3: (a) Subsystem symmetry in the dual model as injecting a flux of electric field. (b) AdS-Rindler reconstruction: knowing boundary electric fields \(E_{1}\) to \(E_{4}\) allows reconstruction up to \(E_{7}\) (minimal covering surface), but not beyond.
Figure 2: Degree of freedom (DOF) \(Z\), \(\mathbf{E}\), and \(\phi\) defined on the plaquette centers, edges, and vertices.
Note that the degenerate ground states of \(\mathcal{H}_{\text{HFM}}\) contribute unequally to the action of the Zabrodin model, (Eq. (1))
\[S_{\text{bulk}}=\frac{\gamma}{2}\sum_{v\in\mathcal{T}_{n/2}}\sum_{v^{\prime} \sim v}(\phi_{v}-\phi_{v^{\prime}})^{2}=\gamma\sum_{e\in\mathcal{T}_{n/2}} \boldsymbol{E_{e}}^{2}, \tag{14}\]
which differentiates the charge-free electric field configurations by the familiar \(\gamma\boldsymbol{E}^{2}\) term, instilling tension to the strings on the fractal tree. Consequently, the pure HFM model realizes the Zabrodin model's tensionless-string limit.
To account for the omitted string tension contribution, we should enable the Ising sector \(\mathcal{H}_{\text{Ising}}\) (Eq. (5)) and match \(\alpha=\gamma\), so that
\[\alpha(Z_{p_{i}}-Z_{p_{j}})^{2}=\gamma\boldsymbol{E}_{e=p_{i,j}}^{2}. \tag{15}\]
Once the Ising Hamiltonian is turned on, the Hamiltonian associated to each of the trees exactly matches that of Zabrodin's model. An exact duality is not present generically, however, owing to the fact that two trees intersect at each vertex on the lattice, and the edge degrees of freedom of these trees are constrained to produce identical vertex terms. Only in the limit when \(U\) is much greater than \(kT=\beta^{-1}\), do the trees decouple, as the vertex terms are all required to assume their lowest possible value. This is a low-temperature limit according to the scale set by \(U\), but not according to the scale set by \(\alpha\): we do not assume \(\alpha>>kT\).
We see here that the large \(U\) limit of the HFM+Ising in its action on a single fractal tree, as an electrostatics theory, is equivalent to the Zabrodin model. Consequently, the HFM+Ising model on the full hyperbolic lattice, as infinitely many fractal trees defining the subsystem symmetries, is a model hosting infinitely many holographic \(p\)-adic Zabrodin models.
## III Lattice defects as \(p\)-adic black holes

Finally, we explain how lattice defects in the HFM correspond to BTZ black holes in \(p\)-adic AdS/CFT. We set \(\alpha=0\) and turn off the Ising interaction, because in this case the electrostatics picture of the BH offers a particularly simple intuitive understanding of the microstates and entropy of the BH.
Defined on a rigid lattice, the \(p\)-adic model has no notion of dynamical gravity. But Refs. [31; 33] proposed the notion of \(p\)-adic BHs as topological objects, constructed in analogy with the arithmetic interpretation of the BTZ BH [34] as a quotient of the isometry group of AdS by a discrete subgroup [35]. The \(p\)-adic BTZ BH boils down to the same action defined on a graph with a loop of \(L\) edges and \(L\) vertices, attached to \(L\) fractal trees of coordination number \((p+1)\), see Fig. 4. Subsequent papers [36; 37; 38; 39; 40] have corroborated the identification of this graph with a type of BTZ BH, with \(L\) as the size of the BH perimeter.
We now explain how a defect in the HFM can lead to the emergence of a \(p\)-adic BH. The defect sits at a vertex, as shown by the green disk in Fig. 5, and takes the form of a new plaquette with DOF \(Z_{p_{0}}\) there.
Initially, the vertex has two trees, \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\), whose respective wedges determine the subsystem symmetries of the HFM model. However, the inclusion of the defect merges \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) into a single tree \(\mathcal{T}^{\prime}\), with a loop at its center, as shown in Fig. 5. Every wedge bounded by the new tree \(\mathcal{T}^{\prime}\) defines a subsystem symmetry. The low-energy domain of the electrostatics theory living on \(\mathcal{T}^{\prime}\) abides by the EOM for the \(p\)-adic theory with the BTZ BH. With the Ising term included, it also matches the action.
The introduction of a defect and suppression of the vertex term leads to an increase in the ground state degeneracy, corresponding to the microstates of the \(p\)-adic BH and its resultant entropy. Consider the simplest BH from a single defect, depicted in Fig. 5. Initially, the electric fields on trees \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) comply with individual charge-free conditions, \(\rho_{1}=0\) and \(\rho_{2}=0\) if one measured the electric fields on the boundary of a region
Figure 5: A defect on the hyperbolic lattice and the merger of two fractal trees that it engenders.
Figure 4: Quotient construction of \(p\)-adic BTZ black hole. Top: Picking any doubly-infinite path in the Bruhat-Tits tree, the shifting of all vertices by \(L\) edges along this path is an isometry for any \(L\in\mathbb{N}\). Bottom: Identifying vertices related by such an isometry results in the \(p\)-adic BTZ black hole. In this case \(L=4\) and \(p=2\).
containing the BH. After merging these into a single tree \(\mathcal{T}^{\prime}\) (or equivalently \(\mathcal{T}_{1}\cup\mathcal{T}_{2}\) for observers outside), only a single charge-free condition applies to the whole tree: \(\rho^{\prime}=\rho_{1}+\rho_{2}=0\). Consequently, \(\rho_{1}(=-\rho_{2})\) can take non-zero values. These are the new states affiliated with the BH. Meanwhile, boundary observers measuring \(\mathbf{E}\) there will struggle to extract any BH information (\(\rho_{1}\)) unless they simultaneously measure a large boundary region covering all ends on either original tree, \(\mathcal{T}_{1}\) or \(\mathcal{T}_{2}\). This concept aligns with physics discussed in simpler cases where trees represent geodesics, as mentioned in Refs. [19; 20].
A defect encompassing a larger region merges \(N\) trees, with a longer loop connecting all branches outside the loop. Here, the original \(N\) conditions \(\rho_{i}=0\ \forall\,i=1,\dots,N\) on the ground state are reduced to a single one, \(\sum_{i}\rho_{i}=0\). The BH DOF are then labeled by a series of numbers \(\rho_{1},\ \rho_{2},\ \dots,\ \rho_{N-1}\). The number \(N\) corresponds to the count of trees partially overlapping with the defect region. In the Appendix, we show that for a region of large radius (number of layers) \(R\), the number of BH DOF is
\[N_{\text{BH}}\sim e^{R\log\lambda_{+}}\sim\sinh{(R\log\lambda_{+})}, \tag{16}\]
where \(\lambda_{+}\) is a constant depending on the lattice tessellation \((m,n)\). The number of DOF \(N_{\text{BH}}\), which measures the entropy of the BH, is proportional to its horizon size.
## IV Discussion
We've demonstrated that, at low temperatures, the simplest HFM coupled with weak Ising interactions is equivalent to the \(p\)-adic Zabrodin model - a simple toy model of AdS/CFT. This equivalence enriches our understanding of both models. In particular, the dual electrostatics model is instrumental in illustrating the holographic information properties inherent to the Zabrodin model, and the Zabrodin model's effective boundary action (Eq. (3)) can be leveraged to compute the 3- and 4-point correlation functions of the HFM. A challenge is presented, however, by the fact that the Ising-fracton hyperbolic model is dual to not one but a large set of copies of the Zabrodin model, and the boundary duals of these theories are all scrambled together.
Our findings invite future quantitative exploration of emerging new concepts. Going beyond the low-energy sector, the series of Zabrodin models that make up the dual of the HFM become interlinked. Each tree is described using \(p\)-adic geometry, yet when they're combined, conventional hyperbolic geometry is restored. This raises intriguing questions around whether this transition of geometry carries a deeper meaning.
Furthermore, when the fractal trees reduce to geodesics (the equivalent of "straight lines" in hyperbolic space) in the limit of 2-adic geometry [19; 20; 21], the associated electrostatics theory represents a classical analogue of a "bit thread" [41; 42; 43]. This notion of a "bit thread" is a string in the bulk with its endpoints entangled on the boundary. Our work proposes its extension as a "bit tree" in the bulk with multiple entangled endpoints. This opens up fascinating possibilities; for instance, could these "bit trees" offer fresh insights into the structure of holographic entanglement entropy and holographic code based on tensor networks? We hope that the equivalence demonstrated in our work can serve as a key node, concretely linking various facets of AdS/CFT.
## V Acknowledgment
H.Y. thanks Hao-Yu Sun and Andriy H. Nevidomskyy for helpful discussions. H.Y. was supported by the U.S. National Science Foundation Division of Materials Research under the Award DMR-1917511. Y.O. is supported by the ISF center of excellence.
|
2307.10456 | Determination of the critical points for systems of directed percolation
class using machine learning | Recently, machine learning algorithms have been used extensively to study equilibrium phase transitions; however, only a few works have applied this technique to nonequilibrium phase transitions. In this work, we use supervised learning with the convolutional neural network (CNN) algorithm and unsupervised learning with the density-based spatial clustering of applications with noise (DBSCAN) algorithm to study the nonequilibrium phase transition in two models. We use CNN and DBSCAN to determine the critical points for the directed bond percolation (bond DP) model and the Domany-Kinzel cellular automaton (DK) model. Both models have been proven to have a nonequilibrium phase transition belonging to the directed percolation (DP) universality class. In the case of supervised learning, we train the CNN using images generated from Monte Carlo simulations of directed bond percolation, and we use that trained CNN to study the phase transition of the two models. In the case of unsupervised learning, we train DBSCAN on the raw data of Monte Carlo simulations; in this case, we retrain DBSCAN each time we change the model or the lattice size. Our results from both algorithms show that, even for very small lattice sizes, the machine can predict the critical points accurately for both models. Finally, we note that the value of the critical point we find here for the bond DP model using CNN or DBSCAN is exactly the same value that has been found using transfer learning with a
domain adversarial neural network (DANN) algorithm. | M. Ali Saif, Bassam M. Mughalles | 2023-07-19T20:58:12Z | http://arxiv.org/abs/2307.10456v1 | Determination of the critical points for systems of directed percolation class using machine learning
###### Abstract
Recently, machine learning algorithms have been used extensively to study equilibrium phase transitions; however, only a few works have applied this technique to nonequilibrium phase transitions. In this work, we use supervised learning with the convolutional neural network (CNN) algorithm and unsupervised learning with the density-based spatial clustering of applications with noise (DBSCAN) algorithm to study the nonequilibrium phase transition in two models. We use CNN and DBSCAN to determine the critical points for the directed bond percolation (bond DP) model and the Domany-Kinzel cellular automaton (DK) model. Both models have been proven to have a nonequilibrium phase transition belonging to the directed percolation (DP) universality class. In the case of supervised learning, we train the CNN using images generated from Monte Carlo simulations of directed bond percolation, and we use that trained CNN to study the phase transition of the two models. In the case of unsupervised learning, we train DBSCAN on the raw data of Monte Carlo simulations; in this case, we retrain DBSCAN each time we change the model or the lattice size. Our results from both algorithms show that, even for very small lattice sizes, the machine can predict the critical points accurately for both models. Finally, we note that the value of the critical point we find here for the bond DP model using CNN or DBSCAN is exactly the same value that has been found using transfer learning with a domain adversarial neural network (DANN) algorithm [23].
## 1 Introduction
In the field of artificial intelligence, machine learning is a powerful tool which has attracted a lot of attention today [1, 2, 3, 4, 5, 6]. Machine learning has demonstrated its ability and capability in various fields of science and technology, ranging from image classification and speech recognition to natural language processing and autonomous vehicles [7, 8]. Recently, due to its capability to capture features and perform classification, machine learning has also been widely employed to study equilibrium phase transitions. Systems such as the Ising model [9, 10], the XY model [11], the Potts model [12, 13, 14], the Hubbard model [15], condensed matter systems [16, 17, 18] and quantum
phase transitions [19; 20; 21] have been studied using machine learning. More recently, machine learning has also been used to study nonequilibrium phase transitions. Wanzhou Zhang et al. applied machine learning with supervised and unsupervised learning to study the phase transitions of site and bond percolation [11]. Jianmin Shen et al. used supervised and unsupervised learning to study the phase transitions of directed percolation in both \((1+1)\) and \((2+1)\) dimensions [22]. Phase transitions of site percolation and directed percolation have also been studied using transfer learning [23]. In this work, we will use supervised and unsupervised learning to study the phase transition of two models: the directed bond percolation model and the Domany-Kinzel cellular automaton model.
Machine learning is a field of artificial intelligence that aims to develop computer algorithms and models which can automatically learn patterns from data, improve their performance on the task at hand, and make predictions or decisions without being explicitly programmed to do so [2; 3]. Machine learning attempts to make the computer simulate the function of the human brain. In machine learning, we train the computer on a large data set of examples with known outcomes, and the computer uses sophisticated algorithms to learn from this data and generalize its predictions to new, unseen examples. This process involves iteratively adjusting the model's parameters based on feedback from the training data, until the model is able to accurately predict outcomes on test data.
Supervised learning [4; 5; 22; 24] and unsupervised learning [22; 25; 26; 27] are the two main categories of machine learning algorithms. The difference between the two categories is the kind of training data they work with. Supervised learning involves training a model on a labeled dataset, where each example is associated with a known output or target variable. The goal of the model is to learn a mapping between the input features and the output variable, so that it can make accurate predictions about the output variable for new inputs. Examples of using supervised learning include image classification, language translation, and weather prediction. Unsupervised learning, on the other hand, involves training a model on an unlabeled dataset, where there are no known output variables. Instead, the model is tasked with finding patterns and relationships within the data and clustering similar data points together. It is like letting the model explore and learn autonomously from the data. Examples of using unsupervised learning include discovering topics in text data, identifying groups of customers with similar behavior, and detecting anomalies in network traffic. The third category of machine learning algorithms is called transfer learning, which comes from mixing both supervised learning and unsupervised learning.
In the case of phase transitions, supervised learning techniques are useful for classifying the phases of systems that undergo a phase transition and for determining the critical points, while unsupervised learning algorithms are more suitable for clustering and dimensionality reduction [9; 28]. There are many supervised learning algorithms, for example, linear regression, logistic regression, decision trees, random forests, support vector machines and neural networks. Unsupervised learning algorithms include hierarchical clustering, K-means clustering, density-based spatial clustering of applications with noise (DBSCAN), expectation-maximization (EM), principal component analysis (PCA), association rule mining and self-organizing maps (SOM). These are just a few examples of the many supervised and unsupervised learning algorithms which are available. The choice of algorithm depends on the specific problem we are trying to solve, the type of data we have and other factors such as computational resources and the need for interpretability. In this work, we use supervised learning with the neural network algorithm and unsupervised learning with the density-based spatial clustering of applications with noise algorithm to study the phase transition of systems which undergo a nonequilibrium phase transition belonging to the directed percolation class.
We will examine the ability of machine learning to identify the phases of those systems and to determine the critical points.
## 2 Algorithms Description
We introduce here a brief description of the supervised learning algorithm and the unsupervised learning algorithm which we use in this work.
### Supervised Learning Algorithm
A convolutional neural network (CNN) is a type of neural network which is inspired by the brain neurons [6, 29, 30, 31]. It is commonly used for processing data that has a grid pattern, as in image classification, segmentation, object detection and other computer vision tasks. It is particularly effective in handling large spatial data, such as images and video, and is widely used in industry and academia for a variety of applications. Unlike a fully connected neural network, where all input features are connected to all hidden layers, a CNN is designed to process input images by leveraging the spatial relationships between pixels. A CNN is typically composed of three kinds of layers: convolutional, pooling, and fully connected layers. In a CNN, the input image is convolved with a set of learnable filters or kernels that slide over the input image and extract spatial patterns or features. This process is known as convolution, and the output of a convolutional layer is a set of feature maps that highlight different aspects of the input image. The feature maps are then passed through a nonlinear activation function, such as ReLU (Rectified Linear Unit), to introduce nonlinearity into the model. A CNN can consist of multiple convolutional layers, followed by pooling layers that downsample the feature maps and reduce the spatial dimensions of the output. The pooled feature maps are then fed into one or more fully connected layers that learn to classify the image based on the extracted features. In Fig. 1 we show a schematic representation of our CNN architecture [32]. It consists of two CNN blocks, each composed of a convolutional layer, an activation layer (ReLU) and a pooling layer. The two blocks are followed by a flatten layer and two fully connected (FC) layers. To measure the performance of our CNN, we use the cross-entropy loss with a soft-max output. The ADAM optimizer is used to speed up the training of our neural network.
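A minimal sketch of such an architecture is given below in PyTorch. The two conv+ReLU+pool blocks, the flatten layer, the two FC layers, the cross-entropy/soft-max loss and the ADAM optimizer follow the description above; the channel counts, kernel sizes, hidden width and learning rate are illustrative assumptions, since they are not specified in the text.

```python
import torch
import torch.nn as nn

class PercolationCNN(nn.Module):
    """Two conv+ReLU+pool blocks, a flatten layer and two FC layers,
    for 100x100 grayscale space-time images with two output classes."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 100 -> 50
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 50 -> 25
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 25 * 25, 128), nn.ReLU(),
            nn.Linear(128, n_classes),      # logits; the soft-max is inside CrossEntropyLoss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PercolationCNN()
criterion = nn.CrossEntropyLoss()           # cross-entropy with soft-max
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# one illustrative training step on a dummy batch of eight 100x100 grayscale images
images = torch.rand(8, 1, 100, 100)
labels = torch.randint(0, 2, (8,))          # 0 = subcritical, 1 = supercritical (the paper's labels)
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```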
### Unsupervised Learning Algorithm
The density-based spatial clustering of applications with noise (DBSCAN) is a density-based clustering algorithm commonly used in unsupervised learning. It was proposed by Martin Ester et al. in 1996 [33]. DBSCAN is widely used in various applications such as spatial data analysis, image segmentation, anomaly detection and customer segmentation. The advantages of DBSCAN include its ability to discover clusters of arbitrary sizes and shapes, its ability to detect outliers as noise points, and the fact that it does not require specifying the number of clusters in advance. However, it may struggle with datasets of varying densities and suffers from parameter sensitivity. The main idea behind DBSCAN is to group together data points that are close to each other in regions of high density while separating regions with lower density. Unlike traditional clustering algorithms like K-means, DBSCAN does not require specifying the number of clusters in advance and can discover clusters of various shapes. The way the DBSCAN algorithm works can be described as follows [33]:
* Density-Based: DBSCAN defines clusters based on the density of data points. It considers a data point to be a core point if there are at least a minimum number of data points (minPts) within a specified distance \(\epsilon\) of that point; these points are called the \(\epsilon\)-neighborhood of the core point. Core points are the foundation of clusters.
* Directly Density-Reachable: A data point is directly density-reachable from a core point if it belongs to the \(\epsilon\)-neighborhood of that core point and the \(\epsilon\)-neighborhood of the core point contains at least minPts points.
* Density-Reachable: Two data points are density-reachable if they can be connected by a chain of points (each within distance \(\epsilon\) of the previous one) in which every point in the chain is directly density-reachable from the preceding one.
* Density-Connectivity: Two data points are density-connected if there is a common point such that both of them are density-reachable from that common point.
* Noise Points (Outliers): Data points which are not core points and cannot be reached by density connectivity are treated as noise points or outliers.
To find a cluster, the DBSCAN algorithm starts by randomly selecting a data point that has not been visited and retrieves all data points density-reachable from it. If the chosen point is a core point, this procedure yields a cluster; otherwise, DBSCAN visits the next point of the database.
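A minimal usage sketch with scikit-learn's DBSCAN implementation is shown below (the choice of library is ours, and the \(\epsilon\) and minPts values as well as the mock data are placeholders that would need tuning for real configurations).

```python
import numpy as np
from sklearn.cluster import DBSCAN

# X: M x L matrix of raw configurations (rows = configurations, columns = sites).
# Mocked here with random 0/1 entries; a real X would come from Monte Carlo runs.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 20)).astype(float)

# eps is the epsilon-neighborhood radius and min_samples is minPts from the list above;
# both values are placeholders that would need tuning for real data.
labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(X)

print("clusters found:", sorted(set(labels) - {-1}))   # label -1 marks noise/outlier points
```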
## 3 Directed Percolation (DP) and Model Description
Percolation occurs when the isolated clusters of networks change under the effect of system parameters into a fully connected structure whose size is of the order of the network size [34]. For
Figure 1: Schematic representation of our CNN architecture.
example, in two-dimensional bond percolation on a network, if we consider that the connection between any two neighbors is open with probability \(p\), then for small values of \(p\) there will be many small isolated clusters in the network. However, when \(p\) is increased gradually, these clusters start to merge, and at a certain value \(p=p_{c}\), called the percolation threshold, a giant connected cluster spans the lattice. The change in the cluster structure from isolated clusters to a spanning cluster marks a phase transition. There are two kinds of bond percolation: isotropic (undirected) bond percolation and anisotropic (directed) bond percolation. Anisotropic bond percolation, abbreviated as directed percolation (DP), involves flow in a specific direction in space. The bonds (channels) work as valves, in the sense that the flow in the network can only percolate in a given direction.
In dynamical systems, if we consider the given direction as time, DP may be interpreted as a \((d+1)\)-dimensional system describing the spreading of some non-conserved process. The phase transition in this class of systems occurs between two distinct phases: the phase where the spreading on the network dies out (absorbing phase) and the phase where the spreading survives (active phase). The DP phase transition is the most famous class of nonequilibrium phase transitions, and many systems have been found to belong to this class. What helps us to use machine learning to identify phase transitions of the DP class is that even simple numerical simulations of any system of DP kind show that the temporal evolution of such systems changes significantly at the phase transition. Hence, from a typical space-time snapshot of this class, we can simply distinguish between the absorbing phase and the active phase. In Fig. 2 we show the typical space-time evolution of a system of the DP class in \((1+1)\) dimension. The top panel of Fig. 2 shows the case where we start the simulation from a fully occupied lattice, and the bottom panel the case where we start with a single occupied site. It is clear from Fig. 2 that the behavior of DP on the two sides of \(p_{c}\) is completely different, and from the images we can clearly distinguish the absorbing phase from the active phase. Therefore, we will exploit this property to train the CNN so that it can tell us to which phase an image belongs when we feed it a new unseen image.
Using the CNN and DBSCAN algorithms, we aim to study the following two simple models in \((1+1)\) dimension. Both models have been proven to undergo a phase transition from the absorbing phase to the active phase which belongs to the DP class. In what follows, we give a brief description of these two models.
### Directed Bond Percolation
The directed bond percolation (bond DP) process in one dimension describes the time evolution of a binary variable \(s_{i}(t)\) on a one-dimensional lattice, where \(i\) denotes the spatial coordinate (horizontal axis) and \(t\) is a discrete time variable (vertical axis) [34]. A site is active (occupied) if \(s_{i}=1\) and inactive (empty) if \(s_{i}=0\). The time evolution of this model occurs according to the following rules:
\[s_{i}(t+1)=\left\{\begin{array}{llll}1&\mbox{if}&s_{i-1}=1&\mbox{and}&z^{-}< p\\ 1&\mbox{if}&s_{i+1}=1&\mbox{and}&z^{+}<p\\ 0&\mbox{otherwise}&\end{array}\right. \tag{1}\]
where \(p\) is the probability for the bond between any two connected sites to be open and \(z^{\pm}\in[0,1]\) are random numbers selected from a uniform distribution. This model has been found to show a DP phase transition with a critical point at \(p_{c}=0.6447\) [35].
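A minimal Monte Carlo sketch of the update rule in Eq. (1) is given below; periodic boundary conditions and the vectorized update are our own assumptions, not specified in the text.

```python
import numpy as np

def bond_dp(L=1000, T=1000, p=0.6447, full_lattice=True, seed=0):
    """(1+1)d directed bond percolation following Eq. (1).
    Returns a T x L array of 0/1 site values (one space-time configuration).
    Periodic boundary conditions are assumed."""
    rng = np.random.default_rng(seed)
    s = np.ones(L, dtype=np.int8) if full_lattice else np.zeros(L, dtype=np.int8)
    if not full_lattice:
        s[L // 2] = 1                                  # single active seed
    history = np.empty((T, L), dtype=np.int8)
    history[0] = s
    for t in range(1, T):
        from_left  = (np.roll(s, 1)  == 1) & (rng.random(L) < p)   # bond from the left neighbour, z^- < p
        from_right = (np.roll(s, -1) == 1) & (rng.random(L) < p)   # bond from the right neighbour, z^+ < p
        s = (from_left | from_right).astype(np.int8)
        history[t] = s
    return history

image = bond_dp(L=200, T=200, p=0.70)
print("active density at the final time step:", image[-1].mean())
```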
### The Domany-Kinzel Cellular Automaton
The second model which we intend to study using machine learning is the Domany-Kinzel cellular automaton (DK) in \((1+1)\) dimension [34]. Each site \(i\) of the DK model at any time takes a binary value \(s_{i}(t)\in\{0,1\}\). The time evolution of the DK model occurs according to the following rules:
\[s_{i}(t+1)=\left\{\begin{array}{ll}1&\mbox{if}\hskip 56.905512pts_{i-1}\neq s _{i+1}\hskip 28.452756pt\mbox{and}\hskip 14.226378ptz_{i}(t)<p_{1}\\ 1&\mbox{if}\hskip 56.905512pts_{i-1}=s_{i+1}=1\hskip 14.226378pt\mbox{and} \hskip 14.226378ptz_{i}(t)<p_{2}\\ 0&\mbox{otherwise}\end{array}\right. \tag{2}\]
In contrast to bond DP, the DK model depends on two percolation probabilities, \(p_{1}\) and \(p_{2}\). \(z_{i}(t)\in[0,1]\) is a random number selected from a uniform distribution. This model has been found to undergo a phase transition of the DP class at the critical point \(p_{1}=0.64770\) and \(p_{2}=0.87376\) [34].
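The DK update of Eq. (2) can be sketched in the same style as the bond DP simulation above; again, periodic boundaries are an assumption.

```python
import numpy as np

def dk_step(s, p1, p2, rng):
    """One synchronous update of the Domany-Kinzel rule, Eq. (2), with periodic boundaries."""
    left, right = np.roll(s, 1), np.roll(s, -1)
    z = rng.random(s.size)
    new = np.zeros_like(s)
    new[(left != right) & (z < p1)] = 1                 # exactly one of the two neighbours is active
    new[(left == 1) & (right == 1) & (z < p2)] = 1      # both neighbours are active
    return new

rng = np.random.default_rng(0)
s = np.ones(200, dtype=np.int8)
for _ in range(500):
    s = dk_step(s, p1=0.6447, p2=0.87376, rng=rng)
print("surviving density near the DK critical point:", s.mean())
```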
## 4 Learning Percolation by CNN Method
Our goal in this work is to train the CNN on a large number of images similar to those in Fig. 2. After the training, the CNN should be able to decide to which phase an image belongs
Figure 2: Typical DP bond clusters in \((1+1)\) dimensions grown initially from: (top panel) fully occupied lattice, (bottom panel) a single active seed.
when fed with a new unseen image. We have two choices at hand for training the machine. The first choice is to directly use the raw data of the Monte Carlo simulations to train the machine, whereas the second choice is to first convert the raw data of the Monte Carlo simulations into images and then use those images to train the machine. Although the second choice requires more effort, we adopt it here for the following reasons:
* We need to train the machine only on the images generated from one model, e.g. bond DP, and can use that trained machine to study any new model of the DP class even if we do not have prior knowledge of the value of the critical point for that model.
* We do not need to retrain the machine each time we change the lattice size or the simulation time.
For the purpose of using the CNN algorithm to determine the critical points of any system of the DP universality class, we start training it using images generated from Monte Carlo simulations of the bond DP model. Each image used in training comes from a Monte Carlo simulation of a lattice of size \(L=1000\) sites run for \(T=1000\) time steps. Hence, the dimension of each image is \(L\times T\). We have generated 150 training images in the subcritical region (\(p<p_{c}\)) and 150 images in the supercritical region (\(p>p_{c}\)). We give the label (0) to the images in the subcritical region and (1) to the images in the supercritical region. We have also generated 60 images for validation (testing images). Close to the critical point, we consider an image to be of label 1 if it survives for a time greater than the correlation time at that point, such as the image in Fig. 3 (a); otherwise, the image label is 0, such as the image in Fig. 3 (b). During the training, the machine encodes each image to \(100\times 100\) pixels in size with a grayscale color space. We find that this number of pixels is enough to obtain a good performance, whereas smaller numbers of pixels do not work properly. Fig. 4 shows an example of an original image and its encoded images. After we train the CNN using the images described above, we find that even with this small sample of images the CNN can predict the label of any image with an accuracy of 99% after 8 epochs of training. This value of accuracy does not mean that the CNN can determine the critical points with that accuracy; it means that the CNN can predict the label of the images with that accuracy. The ability of the CNN to determine the critical points depends on our selection of the labels for the images near the critical point which we use in training. As shown in Fig. 3, there is some difficulty in determining the label of the images as we approach the critical point.
Figure 3: Generated images close to the critical point.
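The encoding step can be sketched as below: a raw \(T\times L\) space-time configuration is rendered as a grayscale image and resampled to \(100\times 100\) pixels. The paper does not state the resampling method, so PIL's default resize is used here purely as an illustration, and the mock configuration stands in for an actual simulation output.

```python
import numpy as np
from PIL import Image

def to_training_image(history, size=(100, 100)):
    """Encode a T x L space-time configuration (0/1 entries) as a `size`
    grayscale image with values in [0, 1], as fed to the CNN."""
    img = Image.fromarray((history * 255).astype(np.uint8))   # occupied sites rendered white
    img = img.resize(size)                                    # resampling method is an assumption
    return np.asarray(img, dtype=np.float32) / 255.0

# mock configuration standing in for a bond DP simulation of a 1000 x 1000 lattice
rng = np.random.default_rng(0)
history = (rng.random((1000, 1000)) < 0.3).astype(np.uint8)
x, label = to_training_image(history), 1                      # label 1 = supercritical image
print(x.shape, float(x.min()), float(x.max()))
```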
We use the trained CNN to determine the critical points for both models, bond DP and DK, at various values of the lattice size (\(L=20,40,60,80,100\)). The dimension of each image which we generate is always \(L\times T\). To take the effect of the correlation time into account, we set \(T\) to be proportional to \(L^{z}\), where \(z=1.580(1)\) is the dynamical exponent [34]. During the testing, the machine encodes each image to \(100\times 100\) pixels in size. Fig. 5 (a) shows the average probability \(P_{0}\) that the label of an image is \(0\) as a function of \(p\) for a lattice of size \(L=20\) for the bond DP model. We consider the point \(p\) where the CNN predicts \(0\) and \(1\) with equal probability to be the critical point \(p_{c}\) of the model [14, 23]. The estimated value of the critical point in this case, as we can see from Fig. 5 (a), is \(p_{c}=0.619(2)\). For the DK model, we fix the value of the first probability \(p_{1}\) to be equal to the critical point of this model, \(p_{1c}=0.6447\) [34], and let the CNN determine the second probability \(p_{2c}\). Fig. 5 (b) shows \(P_{0}\) for the DK model as a function of \(p_{2}\) for a lattice of size \(L=20\). The critical point estimated in this case is \(p_{2c}=0.792(7)\), see Fig. 5 (b). For both models, we averaged over \(100\) images far from the critical point and \(500\) images near the critical point.
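Reading off \(p_{c}\) as the point where \(P_{0}\) crosses \(1/2\) can be done, for instance, by linear interpolation between the two neighbouring scan values, as in the sketch below (the scan grid and the mock \(P_{0}\) curve are illustrative only).

```python
import numpy as np

def crossing_point(p_values, P0):
    """Return the p where P0(p) crosses 1/2, by linear interpolation
    between the two neighbouring scan points."""
    p_values, P0 = np.asarray(p_values), np.asarray(P0)
    i = np.where(np.diff(np.sign(P0 - 0.5)) != 0)[0][0]       # first sign change of P0 - 1/2
    p1, p2, y1, y2 = p_values[i], p_values[i + 1], P0[i], P0[i + 1]
    return p1 + (0.5 - y1) * (p2 - p1) / (y2 - y1)

# illustrative scan: P0 averaged over many test images at each p (mock sigmoid data)
p_scan = np.linspace(0.55, 0.70, 16)
P0 = 1.0 / (1.0 + np.exp((p_scan - 0.619) / 0.01))
print("estimated p_c:", crossing_point(p_scan, P0))            # ~ 0.619, the quoted L = 20 value
```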
In order to determine the critical points for both models when the system size goes to infinity,
Figure 4: Original image and its encoded images.
Figure 5: CNN results of one-dimensional with image configuration as input to CNN at \(L=20\). (a) Bond DP, (b) DK model.
we extract the critical points using the CNN at various lattice sizes for both models. For the bond DP model, Fig. 6 (a) shows the critical points \(p_{c}\) predicted by the CNN as a function of \(1/L\) for \(L=20,40,60,80\) and \(100\). Extrapolation of the results to an infinite system size suggests the critical point for bond DP in \((1+1)\) dimension to be \(p_{c}=0.6454(5)\), which agrees very well with the standard value of \(0.6447\) [35] for this model. The estimated value of the critical point we find here is exactly the value that has been found for this model using DANN [23]. For the DK model, we again fix the value of \(p_{1c}\) to be \(0.6447\) and use the CNN to predict the value of \(p_{2c}\). In Fig. 6 (b) we plot the critical points \(p_{2c}\) predicted by the CNN as a function of \(1/L\) for lattices of sizes \(L=20,40,60,80\) and \(100\). The estimated critical point in this case is \(p_{2c}=0.8768(4)\) in the limit of an infinite system size. This value again coincides very well with the standard value of \(0.87376\) [34] for this model.
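The extrapolation step can be sketched as a straight-line fit of the finite-size estimates against \(1/L\), whose intercept gives the infinite-size value; the fit form is our assumption, and all \(p_{c}(L)\) values below except the quoted \(L=20\) point are placeholders.

```python
import numpy as np

L_values = np.array([20, 40, 60, 80, 100])
# finite-size estimates p_c(L); apart from the quoted L = 20 value these numbers are placeholders
pc_values = np.array([0.619, 0.632, 0.637, 0.640, 0.642])

slope, intercept = np.polyfit(1.0 / L_values, pc_values, deg=1)
print("extrapolated p_c for L -> infinity:", intercept)
```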
## 5 Learning Percolation by DBSCAN method
To train the machine using the DBSCAN algorithm for both models, bond DP and DK, we generate a sample of \(n\) uncorrelated configurations for each value of the probability, ranging from \(0.1\) to \(1\) with an interval of \(0.1\), using Monte Carlo simulation. Each configuration is taken after updating the model for a time \(t>L^{z}\), where \(z=1.580(1)\) is the dynamical exponent [34]. We collect the samples into an \(M\times L\) matrix,
\[X=\left(\begin{array}{ccccc}0&1&\cdots&1&0\\ \vdots&&\ddots&&\vdots\\ 1&1&\cdots&0&1\end{array}\right)_{M\times L} \tag{3}\]
where \(M=10n\) is the total number of configurations, and \(L\) is the number of lattice sites. Each element \(X_{ij}\) in the matrix \(X\) describes the state of site \(j\) on the configuration \(i\). Such a matrix of
Figure 6: Critical points as function of \(1/L\) for the values of \(L=20,40,60,80\) and \(100\) using CNN for \((1+1)\) dimension of: (a) Bond DP, (b) DK model.
raw data is the only data we feed to the DBSCAN algorithm. Unsupervised learning does not need prior knowledge of the values of the critical points, unlike supervised learning. Here we use the machine to extract prominent features of the data and then use this information to classify the samples into distinct phases.
Initially, before we use DBSCAN to study the phase transition, we would like to know how our configurations are distributed in a two-dimensional space. For that, we reduce the dimension of our configurations from \(L\) to two. There are many unsupervised learning algorithms that perform this task. Latent semantic analysis (LSA), t-distributed stochastic neighbor embedding (t-SNE), a trainable autoencoder, locally linear embedding (LLE), principal component analysis (PCA) and isometric mapping are examples of unsupervised learning algorithms which project high-dimensional data onto a lower-dimensional approximating manifold. Here, our purpose is mainly to show the distribution of the configurations, so the LSA algorithm is a suitable tool for that. Fig. 7 shows the cluster distributions for the bond DP model with a lattice of size \(L=20\) for \(M=4000\) configurations after we update the model for 10 time steps (left) and 40 time steps (right). In that figure we use LSA to reduce the dimension of our configurations from 20 to 2 and use the DBSCAN algorithm for classification. It is clear from the figure that, for small values of the updating time, DBSCAN classifies the system into three classes (left of Fig. 7), which means that the information used to train DBSCAN is not sufficient. However, as we update the system for a longer time, DBSCAN becomes able to predict the correct labeling (right of Fig. 7).
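A sketch of this kind of two-dimensional projection is given below, using truncated SVD (the usual implementation of LSA) followed by DBSCAN on the projected points; the mock configurations and the clustering hyper-parameters are assumptions made only to illustrate the pipeline.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import DBSCAN

# mock M x L configuration matrix: half nearly empty rows (absorbing-like)
# and half dense rows (active-like); a real X would come from the simulations
rng = np.random.default_rng(0)
absorbing = (rng.random((2000, 20)) < 0.05).astype(float)
active    = (rng.random((2000, 20)) < 0.60).astype(float)
X = np.vstack([absorbing, active])

X2 = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)   # LSA-style 2d projection
labels = DBSCAN(eps=0.3, min_samples=20).fit_predict(X2)             # cluster the projected points

print("clusters in the 2d projection:", sorted(set(labels) - {-1}))
```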
To determine the critical points accurately, we use Monte Carlo simulations to prepare a matrix \(X\) (Eq. 3) for both models, the bond DP model and the DK model. Our matrix \(X\) consists of \(M=1000\) configurations prepared as described at the beginning of this section. Training the DBSCAN algorithm with the matrix \(X\) reveals that DBSCAN automatically classifies the \(M\) configurations into two classes. We use the trained DBSCAN to study the phase transition for both models, bond DP and DK. The results of using DBSCAN to determine the critical points for lattice size \(L=20\) are shown in Fig. 8. In that figure, we plot the average probability \(P_{0}\) that the label of a configuration is 0 as a function of \(p\) for the bond DP model (Fig. 8 (a)) and of \(p_{2}\) for the DK model (Fig. 8 (b)). For the DK model we fix the value \(p_{1}=0.6447\) as in the CNN case.
To get the critical points of both models in the infinite-size limit, we use DBSCAN to extract further critical points for lattices of sizes \(L=20,40,60,80\) and \(120\). Fig. 9 shows the critical points obtained using DBSCAN at various values of the lattice size for both models. Extrapolating the critical points
Figure 7: Projection of the configurations onto the two dimensional plane for bond DP model on one dimension lattice of size \(L=20\) after 10 steps of iterations (left) and 40 iterations (right) for \(M=2000\) configurations.
to infinite lattice size gives \(p_{c}=0.6455(2)\) for the bond DP model and \(p_{2c}=0.8741(2)\) for the DK model. These values of the critical points are consistent with those we found using the CNN. Table 1 summarizes the critical points for the bond DP model and the DK model obtained using the CNN algorithm and the DBSCAN algorithm, compared with the results for the critical points of the same models found in previous studies. We note that the value of the critical point we find here for bond DP using CNN and DBSCAN is the same value that has been found using DANN [23]. Finally, the present results, together with the results obtained in preceding studies [11, 22, 23], confirm the ability of machine learning algorithms to help determine accurately the critical points of models which have a phase transition of the DP universality class.
## 6 Conclusion
Using machine learning, with supervised learning via the CNN algorithm and unsupervised learning via the DBSCAN algorithm, we have determined the critical points of the bond DP model and the DK model. The critical points we find using CNN and DBSCAN for both models are the same values of the critical points that have been obtained using Monte Carlo simulation methods. We assert that, unlike the Monte Carlo methods, which need to be applied on a large lattice size to obtain accurately the critical points of the models under study, machine learning can work even with very small lattice sizes and predicts the critical points accurately. Therefore, beyond mean-field theory and Monte Carlo simulations, machine learning algorithms can be considered as a new tool at hand to deal with phase transitions in equilibrium and nonequilibrium systems.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & CNN (this work) & DBSCAN (this work) & DANN [23] & Standard [34] \\ \hline Bond DP & 0.6454(5) & 0.6455(2) & 0.6453(5) & 0.64770 \\ \hline DK model & 0.8768(4) & 0.8741(2) & – & 0.87376 \\ \hline \end{tabular}
\end{table}
Table 1: Critical points for bond DP and DK models.
Figure 8: DBSCAN results of one-dimensional with raw data configurations as input to DBSCAN at \(L=20\). (a) Bond DP, (b) DK model. |
2305.01322 | An Autonomous Non-monolithic Agent with Multi-mode Exploration based on
Options Framework | Most exploration research on reinforcement learning (RL) has paid attention
to `the way of exploration', which is `how to explore'. The other exploration
research, `when to explore', has not been the main focus of RL exploration
research. The issue of `when' of a monolithic exploration in the usual RL
exploration behaviour binds an exploratory action to an exploitational action
of an agent. Recently, a non-monolithic exploration research has emerged to
examine the mode-switching exploration behaviour of humans and animals. The
ultimate purpose of our research is to enable an agent to decide when to
explore or exploit autonomously. We describe the initial research of an
autonomous multi-mode exploration of non-monolithic behaviour in an options
framework. The higher performance of our method is shown against the existing
non-monolithic exploration method through comparative experimental results. | JaeYoon Kim, Junyu Xuan, Christy Liang, Farookh Hussain | 2023-05-02T11:08:05Z | http://arxiv.org/abs/2305.01322v3 | # An Autonomous Non-monolithic Agent with Multi-mode Exploration based on Options Framework
###### Abstract
Most exploration research on reinforcement learning (RL) has paid attention to 'the way of exploration', which is 'how to explore'. The other exploration research, 'when to explore', has not been the main focus of RL exploration research. The issue of 'when' of a monolithic exploration in the usual RL exploration behaviour binds an exploratory action to an exploitational action of an agent. Recently, non-monolithic exploration research has emerged to examine the mode-switching exploration behaviour of humans and animals. The ultimate purpose of our research is to enable an agent to decide when to explore or exploit autonomously. We describe the initial research of an autonomous multi-mode exploration of non-monolithic behaviour in an options framework. The higher performance of our method is shown against the existing non-monolithic exploration method through comparative experimental results.
non-monolithic exploration, autonomous multi-mode exploration, options framework
## I Introduction
Exploration is a crucial part of RL algorithms because it gives an agent the chance to uncover unknown states. There have been many RL exploration research studies with various viewpoints, such as intrinsic reward [1, 2, 3, 4], skill discovery [5, 6, 7], memory-based [8, 9, 10, 11], and Q-value-based [12] methods. Although exploration research has evolved, it has concentrated on 'how to explore', which is how an agent selects an exploratory action. However, exploration research regarding 'when to explore' has not been pursued in earnest.
There are two types of methodology regarding 'when to explore': monolithic exploration and non-monolithic exploration. In noise-based monolithic exploration, a representative form of monolithic exploration, a noise, usually sampled from a random distribution, is added to the original action of a behaviour policy before the final action is applied to the environment. The original action of the policy and the added noise act as exploitation and exploration, respectively. Hence, a behaviour policy using monolithic exploration is a time-homogeneous behaviour policy. In contrast, in non-monolithic exploration, the original action of a behaviour policy is not combined with a noise; exploitation and exploration each act for their own purpose at separate steps. Therefore, a behaviour policy using non-monolithic exploration is a heterogeneous, mode-switching one (Fig. 1).
We have investigated the initial research [13] on non-monolithic exploration. As a tentative work, it still has several limitations. Firstly, there is only one exploration policy (we call it one-mode exploration); an agent may require a wider choice of exploration-mode entropy, i.e., more exploration modes than one-mode exploration. Secondly, the period of exploration to be controlled should not be fixed but variable. Thirdly, that research relies on a simple threshold hyper-parameter function, named 'homeostasis', for the variable scale of the trigger signals that switch between exploration and exploitation; however, there should be a natural switching mechanism based on the policy itself. It also proposes other informed triggers, namely action-mismatch-based triggers and variance-based triggers.
In this paper, we propose an autonomous non-monolithic agent with multi-mode exploration based on an options framework to resolve the above-mentioned considerations. Specifically, we adopt Hierarchical Reinforcement Learning (HRL) as an options framework chaining together a sequence of exploration modes and exploitation in order to achieve switching behaviours at intra-episodic time scales. Thus, we can achieve a multi-mode exploration with different entropies. In order to enable autonomous switching between exploration policies and an exploitation policy, where the switching is based on intrinsic signals, we adopt a guided exploration using a reward modification for each switching mode. A robust optimal policy is also investigated to maintain the potential performance.
Meanwhile, for this research the following 5 questions should be answered. How can an options framework be adopted in order to take advantage of the context of a HRL for exploration modes and exploitation? How does an agent have the flexibility of the exploration period? How does an agent get more entropy choice of exploration mode? How can an agent determine the switching of non-monolithic multi-mode exploration by itself without any subsidiary function
such as 'homeostasis'? How does an agent avoid the inherent disturbance of a policy but have a robust optimal policy?
It is worth mentioning that there are no similar works in the literature, so the reference methods are partially based on the method proposed in [13] even though their work is not based on an options framework. In the end, our exploration method shows a better performance.
The contributions are summarized as follows.
* _Development of an options framework model supporting an autonomous non-monolithic multi-mode exploration:_ We introduce a novel HRL model architecture to support an autonomous non-monolithic multi-mode exploration for the first 3 research questions.
* _Development of a switching method for a non-monolithic exploration by using an inherent characteristic of a policy:_ Our model use a guided exploration with a reward modification for the fourth research question.
* _Improved robustness of the policy:_ A robust optimal policy can be ensured by taking advantage of an evaluation process for the last research question.
The rest of this paper is organized as follows. Section II surveys the exploration and HRL research related to our work. Section III explains our proposed model. Section IV describes the experiments measuring the performance of our model compared with a non-monolithic model [13] as the reference model and a monolithic exploration method, HIRO. We discuss several issues identified in the experiments in Section V. Finally, we present the conclusion of the current research and suggestions for future work in Section VI.
## II Related work
### _Options framework_
An MDP equipped with a set of options, which are a generalized concept of actions, constitutes a semi-Markov decision process (SMDP). Semi-MDPs are defined to deal with different levels of abstract actions over variable periods. HRL is a representative generalization of reinforcement learning where the environment is modelled as a semi-MDP [14].
Each action in a non-monolithic exploration mode which adopts multi-mode exploration has a different effect over a different period. Thus, the sequence of actions is defined by taking advantage of an options framework for multi-mode exploration.
### _Exploration_
Various events have been considered as triggers, whether or not they are derived from uncertainty [15, 16, 17].
The experiment of [18] shows the efficacy of robot behaviour learning from self-exploration and from socially guided exploration supported by a human partner. [19] proposes a Bayesian framework which supports changing dynamics online and prevents conservativeness by using a variance bonus that uncovers the level of adversity of the transitions. [20] proposes Tactical Optimistic and Pessimistic (TOP) estimation
Fig. 1: An example of noise-based monolithic exploration (left) and non-monolithic exploration (right). The final action, which is a scalar in this example for the well understanding explanation, denotes the action of an agent represented with a solid circle at each step. The solid line denotes the exploitation, an original action of a behaviour policy. The solid circle in the noise-based monolithic exploration is a final action which combines the original action of a behaviour policy and a sampled bounded noise at each step. However, the solid circle in the non-monolithic exploration is defined according to the mode of each step, i.e. exploitation which is an an original action of a behaviour policy or exploration which is a random noise or a policy.
as a value estimation strategy that balances optimism and pessimism online by using a quantile approximation [21]. Hence, the belief distribution is constructed via the following quantile estimation:
\[q_{Z^{s}(s,a)}=q_{Z(s,a)}+\beta q_{\sigma(s,a)} \tag{1}\]
where \(q_{Z(s,a)}\) and \(q_{\sigma(s,a)}\) are the mean and the standard deviation of the quantile estimate, respectively, and \(Z(s,a)\) is a return random variable. The belief distribution is optimistic when \(\beta\geq 0\) and pessimistic when \(\beta<0\).
[13] argues for the importance of non-monolithic exploration over monolithic exploration. Its representative non-monolithic exploration method utilizes 'homeostasis' based on the difference of the value function over \(k\) steps, which is referred to as the 'value promise discrepancy':
\[D_{promise}(t-k,t):=\big{|}V(s_{t-k})-\sum_{i=0}^{k-1}\gamma^{i}R_{t-i}-\gamma^{k}V(s_{t})\big{|} \tag{2}\]
where \(V(s)\) is the agent's value estimate at state s, \(\gamma\) is a discount factor and \(R\) is the reward.
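For concreteness, the value promise discrepancy of Eq. (2) can be computed from stored per-step value estimates and rewards as in the sketch below (the numbers, \(k\) and \(\gamma\) are illustrative).

```python
import numpy as np

def value_promise_discrepancy(values, rewards, t, k, gamma=0.99):
    """D_promise(t-k, t) = | V(s_{t-k}) - sum_{i=0}^{k-1} gamma^i R_{t-i} - gamma^k V(s_t) |, Eq. (2)."""
    promised = values[t - k]
    realised = sum(gamma**i * rewards[t - i] for i in range(k)) + gamma**k * values[t]
    return abs(promised - realised)

values  = np.array([1.0, 1.1, 1.3, 1.2, 1.4, 1.5])   # per-step value estimates V(s_t)
rewards = np.array([0.0, 0.1, 0.2, 0.0, 0.1, 0.3])   # per-step rewards R_t
print(value_promise_discrepancy(values, rewards, t=5, k=3))
```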
The above-mentioned research on guided exploration, robust MDPs and adaptive optimism inspired our work, which builds on the basis of [13].
### _Hierarchical RL_
The semi-Markov decision process (SMDP) takes advantage of options that encode domain knowledge and can reuse the solutions to sub-goals [14]. [22] claims that an Adaptive Skills, Adaptive Partitions framework supports learning near-optimal skills which are composed automatically and concurrently with skill partitions, in which an initially misspecified model is corrected. [23] proposes an algorithm to solve tasks with sparse rewards, which accelerates exploration by constructing options that minimize the cover time. [24] also deals with sparse-reward environments; it formalizes the concept of fast and slow curiosity for the purpose of stimulating exploration over long time horizons. The option-critic architecture has an intra-option policy, which follows the option chosen by the policy over options until the option termination condition is met [25]. [26] claims that each policy in HRL, which utilizes a flow-based deep generative model to retain full expressivity, is trained through a latent variable with a bottom-up layer-wise method. HIRO proposes a method to synchronize adjacent levels of hierarchical reinforcement learning to efficiently train the higher-level policy.
Our model makes use of HIRO for our exploration research because it is a traditional goal-conditioned HRL.
## III Our model
Motivated by the issues with the value promise discrepancy used in [13], our research pays close attention to an autonomous multi-mode non-monolithic exploration model in which an agent decides by itself when an exploration mode starts and ends. In addition, the model takes advantage of its inherent characteristics to make this decision. For this purpose, our research adopts an options framework.
Fig. 2: The architecture of our suggested model (right) compared with that of the reference paper using a homeostasis [13] (left)
An options framework, especially in a goal-conditioned HRL, is an appropriate choice to control the multi-mode exploration through a fully state-dependent hierarchical policy. For the first research question proposed in Section I, our model has three levels of HRL, as shown in Fig. 2 together with the implemented model of [13]. Our model names each level according to its height: \(Top\), \(Middle\) and \(Low\). The policies at each level are: \(\pi_{T}^{PPO}\) for _Top_; \(\pi_{M}^{TD3}\), \(\pi_{M}^{PPO}\) and \(\pi_{M}^{RND}\) for _Middle_; and \(\pi_{L}^{TD3}\) for _Low_.
The hierarchical control process makes it easy to systematically construct the multi-mode exploration, \(g^{\text{expl-mode}}\), as an option, in contrast to a functional control such as homeostasis. The exploration mode policy, \(\pi_{T}^{PPO}\), can choose one of the three policies of the _Middle_ level as follows
\[g^{\text{expl-mode}}\sim\pi_{T}^{PPO}. \tag{3}\]
Therefore, the value of \(g^{\text{expl-mode}}\) denotes one of the two exploration modes, uniform random and PPO, or the exploitation mode, TD3. This also provides several control benefits for exploration owing to the inherent characteristics of the options framework.
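A schematic of this control flow, in the spirit of Fig. 2, is sketched below: the Top-level policy samples \(g^{\text{expl-mode}}\), and the selected Middle-level behaviour proposes goals for the Low-level TD3 policy until the option ends. The policy objects, the option horizon and the environment interface are placeholders, not the actual implementation.

```python
import numpy as np

MODES = ["uniform_random", "ppo_explore", "td3_exploit"]     # possible values of g^{expl-mode}

def top_policy(state, rng):
    """Stand-in for pi_T^PPO: sample g^{expl-mode} (here from a fixed categorical)."""
    return MODES[rng.choice(len(MODES), p=[0.2, 0.3, 0.5])]

def run_option(env, state, mode, middle_policies, low_policy, horizon=10):
    """Execute one Top-level option: the chosen Middle-level behaviour proposes
    goals for the Low-level TD3 policy until the option horizon ends."""
    total_reward, done = 0.0, False
    for _ in range(horizon):
        goal = middle_policies[mode](state)                  # sub-goal from the selected mode
        action = low_policy(state, goal)                     # primitive action from pi_L^TD3
        state, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return state, total_reward, done

rng = np.random.default_rng(0)
print("first sampled mode:", top_policy(state=None, rng=rng))
```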
In order to accomplish the purpose of our research, our options framework model comprises four elements: the inherent switching mode decision of the policy itself, empowering more entropy degrees for exploration, a guided exploration mode, and the use of an evaluation process for robustness.
### _The inherent switching mode decision of a policy itself_
Since the inherent training method of \(\pi_{T}^{PPO}\) is used in our model, one policy of the _Middle_ level is chosen according to the option, \(g^{\text{expl-mode}}\), of \(\pi_{T}^{PPO}\). \(\pi_{T}^{PPO}\) synthesizes the reward-maximization of the policy over all modes into its own policy without any subsidiary aid. As a result, the period of both exploration and exploitation is controlled by the inherent characteristic of the agent. In the end, all characteristics of the non-monolithic exploration mode policy can be integrated into the reward-maximization of the policy, addressing the second research question in Section I. We can verify the choice of a switching mode from the count of each exploration mode, as shown in Section IV.
### _Empowering more entropy choice for exploration_
Our model pursues multi-mode exploration for the exploration mode policy according to the degree of entropy of each exploration mode, as the degree of optimism. Our model has two exploration modes, \(\pi_{M}^{RND}\) (uniform random) and \(\pi_{M}^{PPO}\), and one exploitation policy, \(\pi_{M}^{TD3}\), at the _Middle_ level, addressing the third research question in Section I. Thus, while the agent is being trained, we hypothesize that the degree of entropy of the three policies is ordered as follows,
\[H\Big{(}\pi_{M}^{RND}\Big{)}>H\Big{(}\pi_{M}^{PPO}\Big{)}>H\Big{(}\pi_{M}^{TD3} \Big{)} \tag{4}\]
where \(H\Big{(}\pi_{M}^{\star}\Big{)}\) denotes the overall entropy of a policy \(\pi_{M}^{\star}\).
Our model uses PPO only as an exploration mode, so it is discarded at the end of training; only the off-policy TD3 is kept as the final target policy. Meanwhile, PPO and TD3 are trained together whenever data is generated by any of the three sub-policies of the _Middle_ level. Once PPO has been trained to some degree, we expect its exploration performance to be higher than that of the uniform random policy.
### _Guided exploration_
There are two phases of potential reward progress during the training of our agent. Our model therefore takes guided exploration into consideration in order to support the first phase. Since our model pursues an options framework in a goal-conditioned HRL, the exploration mode policy can follow a reward-maximizing policy, so the reward \(R\) from the environment is modified with a preset parameter \(\alpha_{g^{\text{expl-mode}}}\) as
\[R_{final}=R+\alpha_{g^{\text{expl-mode}}}*R \tag{5}\]
where \(R_{final}\) denotes a modified reward according to a preset parameter \(\alpha_{g^{\text{expl-mode}}}\) and an environment reward \(R\).
The value of \(\alpha_{g^{\text{expl-mode}}}\) is preset differently, or sometimes equally, according to the type of \(g^{\text{expl-mode}}\) as
\[\alpha_{\text{uniform random}}>\alpha_{\text{ppo}}\geq\alpha_{\text{td3}} \tag{6}\]
\begin{table}
\begin{tabular}{l l} \hline \hline Symbol & Meaning \\ \hline \(t\) & action step \\ \(state\) & current state \\ \(next\_state\) & next state \\ \(Top\) & The highest level \\ \(Middle\) & The higher level \\ \(Low\) & The lower level \\ \(Action\) & The action of _Low_ level (The action of \(\pi_{L}^{TD3}\)) \\ \(target\_pos\) & The context of _Top_ and _Middle_ \\ \(goal\) & The current lower-level context of three sub-policies of _Middle_ level (The current goal for \(\pi_{L}^{TD3}\)) \\ \(next\_goal\) & The next lower-level context of three sub-policies of _Middle_ level (The next goal for \(\pi_{L}^{TD3}\)) \\ \(R\) & The reward received from the environment (The sign of \(R\) is negative in the Ant domain of OpenAI Gym) \\ \(\pi_{T}^{PPO}\) & The policy (on-policy) of _Top_ level \\ \(\pi_{M}^{TD3}\) & The policy (off-policy) of _Middle_ level \\ \(\pi_{M}^{PPO}\) & The policy (on-policy) of _Middle_ level \\ \(\pi_{M}^{RND}\) & The policy (uniform random) of _Middle_ level \\ \(\pi_{L}^{TD3}\) & The policy (off-policy) of _Low_ level \\ \(g^{\text{expl-mode}}\) & The action of \(\pi_{T}^{PPO}\) of _Top_ level \\ \(\alpha_{g^{\text{expl-mode}}}\) & The preset value of \(\alpha\) according to \(g^{\text{expl-mode}}\) \\ \(S\_O\_g^{\text{expl-mode}}\) & The reference value of \(Success\_ratio\) according to \(g^{\text{expl-mode}}\) \\ \(loss\) & The loss of \(\pi_{T}^{PPO}\) of _Top_ level \\ \(S\_E\) & The \(Success\_rate\) of the evaluation function of \(\pi_{M}^{TD3}\) of _Middle_ level \\ \(Done\_m\) & The count of _Done_ during the horizon of _Top_ level \\ \(R\_m\) & The sum of \(R\) during the horizon of _Top_ level \\ \(Count\_m\) & The count during the horizon of _Top_ level \\ \(S\_O\_m\) & The ratio of success count regarding \(g^{\text{expl-mode}}\) of \(\pi_{T}^{PPO}\) during the horizon of _Top_ level \\ \(\rho\) & The preset value of the target rate, i.e. the average number of switches of the reference model \\ \hline \hline \end{tabular}
\end{table} TABLE I: Key Notations.
where \(\alpha_{\text{uniform random}}\), \(\alpha_{\text{ppo}}\), and \(\alpha_{\text{td3}}\) denote the preset parameter \(\alpha_{g^{\text{expl-mode}}}\) for \(\pi_{M}^{RND}\), \(\pi_{M}^{PPO}\), and \(\pi_{M}^{TD3}\) of the _Middle_ level, respectively.
Finally, since the value of \(R_{final}\) is utilized in the training of the exploration mode policy, a reward-maximized option, addressing the fourth research question in Section I, is preferred by the exploration mode policy depending on the value of \(\alpha_{g^{\text{expl-mode}}}\). Because \(R\) is negative in the Ant domain, as the value of \(\alpha_{g^{\text{expl-mode}}}\) gets bigger, \(R_{final}\) becomes more negative and the occurrence probability of the corresponding exploration mode gets smaller.
### _Evaluation for robustness_
For the second phase of the potential reward progress of our agent, our model adopts an online evaluation process to keep a robust optimal policy. The occurrence of the success rate in the online evaluation process shows that the performance of the agent has entered the second stage of reward progress. From the second stage, our agent is required to maintain a robust optimal policy by using the online evaluation process. The online process evaluating the off-policy, \(\pi_{M}^{TD3}\), operates every preset number of steps. It then outputs the success rate, \(S\_E\), according to the type of \(g^{\text{expl-mode}}\). Thus, \(loss_{final}\) of the exploration mode policy, \(\pi_{T}^{PPO}\), addressing the fifth research question in Section I, is calculated as
\[loss_{final}=loss+S\_E*loss \tag{7}\]
where \(loss_{final}\) is a modified loss according to \(S\_E\).
In our model, as the online value of \(S\_E\) increases, the loss of the exploration mode policy when it selects the uniform random or on-policy mode becomes larger than its original loss.
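The two modifications of Eqs. (5) and (7) can be summarized in a short sketch; the \(\alpha\) values below are hypothetical placeholders that merely respect the ordering of Eq. (6), not the constants used in our experiments.

```python
# Hypothetical preset values satisfying alpha_uniform_random > alpha_ppo >= alpha_td3.
ALPHA = {"uniform_random": 0.5, "ppo": 0.3, "td3": 0.3}

def modified_reward(R, g_expl_mode):
    """Eq. (5): R_final = R + alpha_{g^expl-mode} * R."""
    return R + ALPHA[g_expl_mode] * R

def modified_loss(loss, S_E):
    """Eq. (7): loss_final = loss + S_E * loss, with S_E from the online evaluation."""
    return loss + S_E * loss
```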
## IV Experiments
The control of multi-mode exploration of our model as an autonomous non-monolithic agent is shown by the counts of the exploration modes and exploitation, since these counts are critical for the analysis of our model. Each count describes the current situation of the reward-maximization of the policy over all modes. Through this analysis, we aim to answer the following crucial question: _Can our model show better performance than the representative model of the reference paper, [13], and a noise-based monolithic exploration policy?_ We evaluate our model against these baselines in two tasks, Ant Push and Ant Fall, of the Ant domain of OpenAI Gym. The reference models for the comparison are two models of [13], XU-intra(100, informed, \(p^{*}\), X) and XI-intra(100, informed, \(p^{*}\), X), which we call 'Ref:Uniform random' and 'Ref:PPO', respectively1. PPO is utilized for the intrinsic explore mode of the reference model. The noise-based monolithic exploration policy is HIRO, which is composed of TD3 at each level. Our model and the two reference models are also implemented based on HIRO. In order to evaluate the best performance among these models, we use the four analysis items as follows:
Footnote 1: Please read the section 3.1 of [13] for the experimental details
1. How many counts are assigned to each policy through whole training steps?
2. How does the transition of our model between exploration mode and exploitation occur compared with the forced exploration transition of reference model?
3. How much is the difference between uniform random and on-policy as the exploration policy of our model based on a guided exploration strategy?
4. How much does the evaluation process influence the performance of the second reward phase?
The results of Ant Push and Ant Fall are presented in Fig. 3 and Fig. 4, respectively. Moreover, our source code and implementation details are available online2. Algorithm 1 shows the main part of our algorithm, which is implemented on top of the reference code.
Footnote 2: [https://github.com/jangikim2/An-Autonomous-Non-monolithic-Agent-with-Multi-mode-Exploration-based-on-Options-Framework](https://github.com/jangikim2/An-Autonomous-Non-monolithic-Agent-with-Multi-mode-Exploration-based-on-Options-Framework)
### _Comparison with the reference paper and pure off-policy_
#### IV-A1 Ant Push
Our model outperforms all other models through almost all training steps. The exploitation of our model and the two reference models occurs during most of the training steps. The exploration modes of our model and the two reference models take place less often than exploitation. HIRO shows the best performance in the early period but quickly loses its advantage over the remaining training steps as the other models take advantage of the diverse exploration modes. The performance of 'Ref:Uniform random' is better than that of 'Ref:PPO'.
The Uniform random exploration mode of our model and of 'Ref:Uniform random' does not take place for a long time at once, but for short periods and gradually. Meanwhile, more exploration of 'Ref:PPO' occurs than of 'Ref:Uniform random' according to the preset target rate \(\rho\), where incessant exploration occurs after the starting mode.
After the starting mode in Algorithm 1, the PPO exploration mode of our model has about 3600 steps, which is more than the Uniform random exploration mode of our model, which is about 2100 steps. Most of the PPO exploration mode of our model occurs before 1M steps. The comparison of the total steps of the two exploration modes and exploitation of our model is
\[Total\_Step\Big{(}\pi_{M}^{TD3}\Big{)}>Total\_Step\Big{(}\pi_{M}^{PPO}\Big{)}> Total\_Step\Big{(}\pi_{M}^{RND}\Big{)} \tag{8}\]
where \(Total\_Step\Big{(}\pi_{M}^{\,\cdot\,}\Big{)}\) denotes the total number of steps assigned to the corresponding policy of the _Middle_ level.
#### IV-A2 Ant Fall
Our model shows a competitive performance against all other models after 3M steps. The preset of the reward modification in Ant Fall is different from that of Ant Push: \(\alpha_{\text{on-policy}}\) is equal to \(\alpha_{\text{off-policy}}\). The exploitation of our model occurs over 1M steps less than that of the two reference models, which is different from the situation of Ant Push. The performance of HIRO is stationary and decreases in the latter part. Unlike Ant Push, the performance of 'Ref:PPO' is better than that of 'Ref:Uniform random'.
The exploration mode of 'Ref:Uniform random' and of our model takes place for longer than in Ant Push. Meanwhile, the exploration mode of 'Ref:PPO' and that of 'Ref:Uniform random' are almost the same since the preset target rate \(\rho\) for 'Ref:PPO' and 'Ref:Uniform random' is the same.
Although \(\alpha_{\text{on-policy}}\) is equal to \(\alpha_{\text{off-policy}}\), since the ratio of the success count regarding each action of the second level is also modified, after the starting mode the total-step comparison of the two exploration modes and exploitation of our model throughout the whole training is
\[\textit{Total\_Step}\left(\pi_{M}^{TD3}\right)>\textit{Total\_Step}\left(\pi_ {M}^{PPO}\right)>>\textit{Total\_Step}\left(\pi_{M}^{RND}\right). \tag{9}\]
Unlike the Ant Push task, the second phase in the Ant Fall task suffers a drop of reward between 4M and 4.5M steps. The success rate of the evaluation process stays between 0.5 and 0.6 during this period. The reward recovers quickly thanks to the success rate, compared with the case in Section IV-B2.
Fig. 4: The count of exploration modes and exploitation and the reward and success rate of higher level policy for our model, Ref:Uniform random, Ref:PPO and HIRO in Ant Fall
Fig. 3: The count of exploration modes and exploitation and the reward and success rate of higher level policy for our model, Ref:Uniform random, Ref:PPO and HIRO in Ant Push
### _Ablation study_
We investigate our model without the reward modification, without the loss modification, and without both modifications. Fig. 5 shows the results of the experiment compared with our normal model in the Ant Push task. For each case, the corresponding component is removed from our normal model in Ant Push.
#### IV-B1 Without the reward modification
While exploitation takes fewer steps than in our normal model, the Uniform random and PPO exploration modes take more steps. The reward and success rate increase slowly.
#### IV-B2 Without the loss modification
Again, the two exploration modes take more steps and exploitation takes fewer steps than in our normal model. The model shows a drop of reward between 2.2M and 3M steps due to the increase in PPO exploration. Although the success rate is better than that of our normal model during this period, its overall performance is worse than that of our normal model.
#### IV-B3 Without both the reward modification and the loss modification
Too many explorations and less exploitation cause the worst performance.
## V Discussion
### _The effect of on-policy for exploration_
When the on-policy operates at the beginning of exploration, the performance of the on-policy, \(\pi_{M}^{PPO}\), is not competitive. However, after it has been trained to some extent by itself or by other policies, the on-policy shows better performance than the random policy. Meanwhile, in practice \(\pi_{T}^{PPO}\) is unlikely to get stuck in a local minimum thanks to the three policies of the _Middle_ level.
### _The effect of reward modification_
In the Ant Fall task, the performance of our model in the early steps, up to about 2M steps, lags behind all other models. The reason is that the on-policy operates for a long time up to then, since \(\alpha_{\text{on-policy}}\) is equal to \(\alpha_{\text{off-policy}}\). The reward modification for the guided exploration relies on the fixed value of \(\alpha_{g^{\text{expl-mode}}}\), which is not an adaptive strategy.
### _The effect of loss modification_
The occurrence of on-policy and random policy in the Ant Fall task between 4M and 4.5M steps gives rise to a drop in performance of the agent. In particular, the modeling of uncertainty reflecting the success rate, \(S\_E\), can be considered. The higher \(S\_E\) is, the lower the uncertainty is. Thus, \(S\_E\) is related to the uncertainty.
## VI Conclusion
In order to overcome the issues of non-monolithic exploration, this paper introduces an autonomous non-monolithic agent with multi-mode exploration based on an options framework. We reveal the potential of our model to follow the behavioural traits of humans and animals. Our model takes advantage of the difference in the degree of entropy of each exploration policy within a guidance-exploration framework. A robust optimal policy can be expected due to the evaluation process. Research on an adaptive guided-exploration strategy for the multi-mode exploration of an autonomous non-monolithic agent is required. Further research on the modeling of \(S\_E\) in the agent is also required for a robust optimal policy.
|
2307.11990 | On integer linear combinations of terms of rational cycles for the
generalized 3x+1 problem | In the paper, some special linear combinations of the terms of rational
cycles of generalized Collatz sequences are studied. It is proved that if the
coefficients of the linear combinations satisfy some conditions then these
linear combinations are integers. The discussed results are demonstrated on
some examples. In some particular cases the obtained results can be used to
explain some patterns of digits in $p$-adic representations of the terms of the
rational cycles. | Yagub N. Aliyev | 2023-07-22T06:10:29Z | http://arxiv.org/abs/2307.11990v5 | # On integer linear combinations of terms of rational cycles for the generalized 3x+1 problem
###### Abstract
In the paper, some special linear combinations of the terms of rational cycles of generalized Collatz sequences are studied. It is proved that if the coefficients of the linear combinations satisfy some conditions then these linear combinations are integers. The discussed results are demonstrated on some examples. In some particular cases the obtained results can be used to explain some patterns of digits in \(p\)-adic representations of the terms of the rational cycles.
+
Footnote †: _Key words and phrases_: \(3x+1\) problem, Collatz conjecture, rational cycles, integer linear combinations.
## 1 Introduction
Collatz conjecture or \(3x+1\) problem claims that for any positive integer \(x_{0}\), the recursive sequence defined for \(n\geq 0\) by \(x_{n+1}=S(x_{n})=\frac{3x_{n}+1}{2}\) if \(x_{n}\) is odd and \(x_{n+1}=T(x_{n})=\frac{x_{n}}{2}\) if \(x_{n}\) is even, there is a positive integer \(N\) such that \(x_{N}=1\) (see [6]). It is known that this holds true for almost all \(x_{0}\) in the sense of some density. See [14], [3], [4], [13], and the references therein for the works in this direction. Another approach is to find all cycles of this recursive sequence. It is conjectured that there are only finitely many such cycles. The only known cycle is the one generated by \(x_{0}=1\). There are more cycles if
\(x_{0}\) is allowed to be zero or a negative integer but it is conjectured that their number is also finite (The Finite Cycles Conjecture, see [6], [7]). The only known non-positive cycles are the ones generated by \(x_{0}=0\), \(x_{0}=-1\), \(x_{0}=-5\), and \(x_{0}=-17\). If a sequence of functions consisting of several \(S\) and \(T\) is given, then one can speak about rational cycles (see [1]). There is a rational number \(x_{0}\) such that if the operations \(S\) and \(T\) are applied in the given order, then the final result is again \(x_{0}\). Rational cycles generated by such \(x_{0}\) have some interesting properties (see [7]). Many generalizations of the Collatz conjecture were considered by replacing the \(S\) and \(T\) operations by more general \(S_{k}(x)=\frac{p_{i}x+k}{q}\). One can find such generalizations in [10], [11], [9]. See [2], [8], where similar generalizations are used to prove some results related to undecidability properties. See [5], [12], where these generalizations are considered in the context of 2-adic numbers and \(q\)-based numeral systems.
In this paper we focus on properties of the terms of rational cycles \(x_{i}\), which show that these rational numbers are "integer like". A linear combination with integer coefficients of two integers is again an integer. In the paper it is proved that under some conditions over the coefficients of the linear combination, the rational numbers \(x_{i}\) and \(x_{i+b}\) also form integer linear combinations. Two worked out examples demonstrating the results on particular cases are given. Some applications of these results explaining peculiar patterns of digits in \(p\)-adic representations of these rational numbers \(x_{i}\) are also discussed.
## 2 Notations and lemmas
Consider composition \(P=B_{0}\circ B_{1}\circ\ldots\circ B_{n-1}\) of functions \(B_{i}(x)=\frac{p_{i}x+k_{i}}{q}\), where \(n>1\), \(k_{i}\) are integers, \(p_{i},q\) are non-zero integers such that \((p_{i},q)=1\) for \(i=0,1,\ldots,n-1\). When it is necessary to extend the index \(i\), beyond the interval \([0,n-1]\), we suppose that \(B_{i}=B_{j}\) if \(i\equiv j\pmod{n}\). Consider equation \(B_{0}\circ B_{1}\circ\ldots\circ B_{n-1}(x)=x\), which can also be written as
\[\frac{p_{0}\,\frac{p_{1}\,\frac{\cdots\,\frac{p_{n-2}\,\frac{p_{n-1}x+k_{n-1}}{q}+k_{n-2}}{q}\,\cdots}{q}+k_{1}}{q}+k_{0}}{q}=x.\]
Note that its solution \(x_{0}\) is a rational number (cf. formula 1.2 and 1.3, [7]):
\[x_{0}=\frac{p_{0}p_{1}\ldots p_{n-2}k_{n-1}+p_{0}p_{1}\ldots p_{n-3}k_{n-2}q+ \ldots+p_{0}k_{1}q^{n-2}+k_{0}q^{n-1}}{q^{n}-p_{0}p_{1}\ldots p_{n-1}}.\]
Similarly, consider equations \(B_{i}\circ B_{i+1}\circ\ldots\circ B_{i+n-1}(x)=x\) for \(i=0,1,\ldots,n-1.\) Note that their solutions \(x_{i}\) are also rational numbers:
\[x_{i}=\frac{p_{i}p_{i+1}\ldots p_{i+n-2}k_{i-1}+p_{i}p_{i+1}\ldots p_{i+n-3}k_{i -2}q+\ldots+p_{i}k_{i+1}q^{n-2}+k_{i}q^{n-1}}{q^{n}-p_{0}p_{1}\ldots p_{n-1}},\]
where all the indices are taken modulo \(n.\) In the following, it is assumed that \(x_{i}=x_{j}\) if \(i\equiv j(\text{mod}n).\) Consider also numbers \(U_{i}=\frac{q^{i}}{q^{n}-p_{0}p_{1}\ldots p_{n-1}},\) for \(i=0,1,\ldots,n.\) Note that \(U_{n}=p_{0}p_{1}\ldots p_{n-1}U_{0}+1.\) Note also that
\[x_{i}=p_{i}p_{i+1}\ldots p_{i+n-2}k_{i-1}U_{0}+p_{i}p_{i+1}\ldots p_{i+n-3}k_{ i-2}U_{1}+\ldots+p_{i}k_{i+1}U_{n-2}+k_{i}U_{n-1}.\]
Note that there are infinitely many pairs of non-zero integers \(\alpha,\beta\) and integers \(b,\) such that \(0<b<n\) and \(\alpha U_{0}+\beta U_{b}\) is an integer or equivalently, \(q^{n}-p_{0}p_{1}\ldots p_{n-1}|\alpha+\beta q^{b}.\) Indeed, one can take, for example, \(\alpha=k,\)\(\beta=-kq^{\phi(|q^{n}-p_{0}p_{1}\ldots p_{n-1}|)-1},\) and \(b=1,\) where \(k=1,2,\ldots\) and \(\phi\) is Euler's totient function. If \(\alpha\) and \(\beta\) are fixed then one can ask if such \(b\) exists. The answer to this question depends on the choice of \(\alpha\) and \(\beta\). For example, if \(q=2,\)\(n=4,\)\(p_{0}=3,\)\(p_{1}=p_{2}=p_{3}=1,\) then \(q^{n}-p_{0}p_{1}p_{2}p_{3}=13.\) If \(\alpha=1,\beta=1\) then such \(b\) (\(0<b<4\)) does not exist. But if \(\alpha=9,\beta=1\) then \(b=2\) satisfies the condition.
**Lemma 2.1**.: _If \(\alpha U_{0}+\beta U_{b}\) is an integer, then \(p_{0}p_{1}\ldots p_{n-1}\beta U_{0}+\alpha U_{n-b}\) is also an integer._
Proof.: Our claim is equivalent to prove that if \(q^{n}-p_{0}p_{1}\ldots p_{n-1}|\alpha+\beta q^{b}\) then
\[q^{n}-p_{0}p_{1}\ldots p_{n-1}|p_{0}p_{1}\ldots p_{n-1}\beta+\alpha q^{n-b}.\]
Indeed, since \((q,q^{n}-p_{0}p_{1}\ldots p_{n-1})=1\) and
\[q^{b}(p_{0}p_{1}\ldots p_{n-1}\beta+\alpha q^{n-b})=p_{0}p_{1}\ldots p_{n-1} \beta q^{b}+\alpha q^{n},\]
it is sufficient to show that
\[q^{n}-p_{0}p_{1}\ldots p_{n-1}|p_{0}p_{1}\ldots p_{n-1}\beta q^{b}+\alpha q^{n},\]
which follows from
\[p_{0}p_{1}\ldots p_{n-1}\beta q^{b}+\alpha q^{n}=p_{0}p_{1}\ldots p_{n-1}( \alpha+\beta q^{b})+\alpha(q^{n}-p_{0}p_{1}\ldots p_{n-1}).\]
**Lemma 2.2**.: _If \(\alpha U_{0}+\beta U_{b}\) is an integer, then for \(i=0,1,2,\ldots\) the numbers \(\alpha U_{i}+\beta U_{i+b}\) and \(p_{0}p_{1}\ldots p_{n-1}\beta U_{i}+\alpha U_{n+i-b}\) are also integers._
Proof.: By multiplying \(\alpha U_{0}+\beta U_{b}\) and \(p_{0}p_{1}\ldots p_{n-1}\beta U_{0}+\alpha U_{n-b},\) which are both integers, by the integer \(q^{i},\) we obtain that both of the numbers \(\alpha U_{i}+\beta U_{i+b}\) and \(p_{0}p_{1}\ldots p_{n-1}\beta U_{i}+\alpha U_{n+i-b}\) are integers.
## 3 Main results
**Theorem 3.1**.: _If \(\alpha U_{0}+\beta U_{b}\) is an integer, then for any \(i\), satisfying \(0\leq i<i+b<n\), the number \(\alpha x_{i}+\beta p_{i}p_{i+1}\ldots p_{i+b-1}x_{i+b}\) is also an integer._
Proof.: Note that we can write
\[\alpha x_{i}+\beta p_{i}p_{i+1}\ldots p_{i+b-1}x_{i+b}\]
\[=\alpha(p_{i}p_{i+1}\ldots p_{i+n-2}k_{i-1}U_{0}+p_{i}p_{i+1}\ldots p_{i+n-3}k _{i-2}U_{1}+\ldots+p_{i}k_{i+1}U_{n-2}+k_{i}U_{n-1})\]
\[+\beta p_{i}p_{i+1}\ldots p_{i+b-1}(p_{i+b}p_{i+b+1}\ldots p_{i+b+n-2}k_{i+b-1} U_{0}\]
\[+p_{i+b}p_{i+b+1}\ldots p_{i+b+n-3}k_{i+b-2}U_{1}+\ldots+p_{i+b}k_{i+b+1}U_{n-2 }+k_{i+b}U_{n-1})\]
\[=k_{0}M_{0}+k_{1}M_{1}+\ldots+k_{n-1}M_{n-1},\]
where
\[M_{j}=\begin{cases}p_{i}p_{i+1}\ldots p_{j-1}(\alpha U_{n+i-1-j}+\beta p_{0}p _{1}\ldots p_{n-1}U_{i+b-1-j})&\text{if $i\leq j<i+b$},\\ p_{i}p_{i+1}\ldots p_{n+j-1}(\alpha U_{i-1-j}+\beta U_{i+b-1-j})&\text{if $0\leq j<i$},\\ p_{i}p_{i+1}\ldots p_{j-1}(\alpha U_{n+i-1-j}+\beta U_{n+i+b-1-j})&\text{if $i+b \leq j<n$}.\end{cases}\]
By Lemma 2.1 and Lemma 2.2, all \(M_{j}\), for \(j=0,1,\ldots n-1\), are integers and therefore the claim is true.
**Corollary 3.2**.: _If \(\alpha U_{0}+\beta U_{b}\) is an integer, then for any \(i\), the number \(\alpha x_{i}+\beta p_{i}p_{i+1}\ldots p_{i+b-1}x_{i+b}\) is also an integer._
Proof.: Since it was assumed that \(x_{i}=x_{j}\) if \(i\equiv j(\mathrm{mod}n)\), without loss of generality, we can suppose that \(0\leq i<n\). The case \(i+b<n\) was considered in Theorem 2.1. So, we can suppose that \(i+b\geq n\). Since \(0<b<n\), we have \(0\leq i+b-n<i\), and therefore \(x_{i+b}=x_{i+b-n}\). Consequently,
\[\alpha x_{i}+\beta p_{i}p_{i+1}\ldots p_{i+b-1}x_{i+b}=\alpha x_{i}+\beta p_{i }p_{i+1}\ldots p_{i+b-1}x_{i+b-n}.\]
Multiply this number by \(p_{i+b-n}p_{i+b-n+1}\ldots p_{i-1}\), which is relatively prime to \(q^{n}-p_{0}p_{1}\ldots p_{n-1}\), and therefore can not change the property of being or not being an integer for the number \(\alpha x_{i}+\beta p_{i}p_{i+1}\ldots p_{i+b-1}x_{i+b}\), we obtain
\[\beta p_{0}p_{1}\ldots p_{n-1}x_{i+b-n}+\alpha p_{i+b-n}p_{i+b-n+1}\ldots p_{i -1}x_{i},\]
which is an integer by Lemma 2.1 and Theorem 3.1.
**Remark 3.3**.: The above results are trivially true for the cases when \(b=0\) and \(b=n\). Indeed, for the case when \(b=0\), if \((\alpha+\beta)U_{0}\) is an integer, then \((q^{n}-p_{0}p_{1}\ldots p_{n-1})|(\alpha+\beta)\), and therefore \((\alpha+\beta)x_{i}\) is also an integer for \(i=0,1,\ldots,n-1\). For the case when \(b=n\), if \(\alpha U_{0}+\beta U_{n}=\alpha U_{0}+\beta p_{0}p_{1}\ldots p_{n-1}U_{0}+\beta\) is an integer, then \((\alpha+\beta p_{0}p_{1}\ldots p_{n-1})U_{0}\) is also an integer (\(\beta\) is an integer), and therefore \((q^{n}-p_{0}p_{1}\ldots p_{n-1})|(\alpha+p_{0}p_{1}\ldots p_{n-1}\beta)\). Consequently \((\alpha+\beta p_{0}p_{1}\ldots p_{n-1})x_{i}\) is an integer for \(i=0,1,\ldots,n-1\).
**Corollary 3.4**.: _If one of the numbers \(x_{j}\)\((j\in\{0,1,\ldots,n-1\})\) is a fraction with denominator \(d\) in its simplest form then all of \(x_{i}\) for \(i=0,1,\ldots,n-1\), are like fractions with the same denominator \(d\)._
Proof.: It was mentioned earlier that one can always take \(\alpha=1\), \(\beta=-q^{\phi(|q^{n}-p_{0}p_{1}\ldots p_{n-1}|)-1}\), and \(b=1\). Then by the main result, the number \(x_{i}+\beta p_{i}x_{i+1}\) is an integer for \(i=0,1,\ldots,n-1\). Note that \(d\) is a divisor of \(q^{n}-p_{0}p_{1}\ldots p_{n-1}\), and therefore \(d\) is relatively prime to \(\beta\) and all \(p_{i}\) for \(i=0,1,\ldots,n-1\). Since \(x_{j}\) is a fraction with denominator \(d\), the other numbers \(x_{j-1}\), \(x_{j-2}\), \(x_{j-3},...\) are all like fractions with the same denominator \(d\) in their simplest forms.
**Corollary 3.5**.: _If one of the numbers \(x_{j}\)\((j\in\{0,1,\ldots,n-1\})\) is an integer then all of \(x_{i}\) for \(i=0,1,\ldots,n-1\), are integers._
## 4 Examples
Let us take \(q=3\). Consider composition of functions \(P=B_{0}\circ B_{1}\circ B_{2}\circ B_{3}\), where \(B_{0}(x)=\frac{-5x-2}{3}\), \(B_{1}(x)=\frac{2x+1}{3}\), \(B_{2}(x)=\frac{7x+6}{3}\), \(B_{3}(x)=\frac{-x+3}{3}\). Here \(n=4\), \(p_{0}=-5\), \(k_{0}=-2\), \(p_{1}=2\), \(k_{1}=1\), \(p_{2}=7\), \(k_{2}=6\), \(p_{3}=-1\), \(k_{3}=3\), and \(q^{n}-p_{0}p_{1}p_{2}p_{3}=3^{4}-(-5)\cdot 2\cdot 7\cdot(-1)=11\).
The solution of equation \(B_{0}\circ B_{1}\circ B_{2}\circ B_{3}(x)=x\) is the number \(x_{0}=-69/11\). Note that \(x_{0}=x_{4}\). We can also find the other numbers \(x_{1}=x_{5}=37/11\), \(x_{2}=50/11\), \(x_{3}=12/11\), by solving the equations \(B_{1}\circ B_{2}\circ B_{3}\circ B_{0}(x)=x\), \(B_{2}\circ B_{3}\circ B_{0}\circ B_{1}(x)=x\), \(B_{3}\circ B_{0}\circ B_{1}\circ B_{2}(x)=x\), respectively. We also find the numbers \(U_{i}=3^{i}/11\)\((i=0,1,2,3,4)\). Note that \(4U_{0}+2U_{2}=2\) is an integer, which is equivalent to saying that \(11|(4+2\cdot 3^{2})\). So, we can take \(\alpha=4\), \(\beta=2\), and \(b=2\). We observe that \(4x_{i}+2p_{i}p_{i+1}x_{i+2}\) is an integer for each of \(i=0,1,2,3\). Indeed,
\[\begin{array}{ll}4x_{0}+2p_{0}p_{1}x_{2}&=4\cdot(-69/11)+2\cdot(-5)\cdot 2 \cdot(50/11)&=-116,\\ 4x_{1}+2p_{1}p_{2}x_{3}&=4\cdot(37/11)+2\cdot 2\cdot 7\cdot(12/11)&=44,\\ 4x_{2}+2p_{2}p_{3}x_{4}&=4\cdot(50/11)+2\cdot 7\cdot(-1)\cdot(-69/11)&=106,\\ 4x_{3}+2p_{3}p_{4}x_{5}&=4\cdot(12/11)+2\cdot(-1)\cdot(-5)\cdot(37/11)&=38,\\ \end{array}\]
are all integers.
Now, note that \(-5U_{0}-13U_{1}=-4\) is also an integer, which is equivalent to say that \(11|-44=-5-13\cdot 3^{1}\). This means that we can take \(\alpha=-5\), \(\beta=-13\), and \(b=1\). So, \(-5x_{i}-13p_{i}x_{i+1}\) should be an integer for each of \(i=0,1,2,3\). Indeed,
\[\begin{array}{ll}-5x_{0}-13p_{0}x_{1}&=-5\cdot(-69/11)-13\cdot(-5)\cdot(37/1 1)&=250,\\ -5x_{1}-13p_{1}x_{2}&=-5\cdot(37/11)-13\cdot 2\cdot(50/11)&=-135,\\ -5x_{2}-13p_{2}x_{3}&=-5\cdot(50/11)-13\cdot 7\cdot(12/11)&=-122,\\ -5x_{3}-13p_{3}x_{4}&=-5\cdot(12/11)-13\cdot(-1)\cdot(-69/11)&=-87,\end{array}\]
are all integers. These observations are in perfect agreement with the main results of the current paper.
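These checks can also be reproduced mechanically with exact rational arithmetic; the following sketch uses the values of \(p_{i}\) and \(x_{i}\) stated above.

```python
from fractions import Fraction as F

p = [-5, 2, 7, -1]                                   # p_0, ..., p_3
x = [F(-69, 11), F(37, 11), F(50, 11), F(12, 11)]    # x_0, ..., x_3 (x_{i+4} = x_i)

def combo(alpha, beta, b, i):
    """alpha * x_i + beta * (p_i ... p_{i+b-1}) * x_{i+b}, indices taken mod 4."""
    prod = 1
    for j in range(i, i + b):
        prod *= p[j % 4]
    return alpha * x[i % 4] + beta * prod * x[(i + b) % 4]

# alpha=4, beta=2, b=2 and alpha=-5, beta=-13, b=1 both satisfy 11 | alpha + beta*3^b,
# so every combination below must be an integer.
vals1 = [combo(4, 2, 2, i) for i in range(4)]
vals2 = [combo(-5, -13, 1, i) for i in range(4)]
assert all(v.denominator == 1 for v in vals1 + vals2)
print([int(v) for v in vals1])   # [-116, 44, 106, 38]
print([int(v) for v in vals2])   # [250, -135, -122, -87]
```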
## 5 Applications
If \(p_{i}\in\{1,p\}\), where \(p\) is a nonzero integer and \((p,q)=1\), then there are two types of functions \(S_{k}(x)=\frac{px+k}{q}\) and \(T_{k}(x)=\frac{x+k}{q}\). Let us denote by \(m\) the number of \(S\) functions in \(P\). Then \(U_{i}=\frac{q^{i}}{q^{n}-p^{m}}\), for \(i=0,1,\ldots,n\). Denote by \(\sigma(i,j)\) the number of \(S\) functions in the fragment \(B_{i}B_{i+1}\ldots B_{j-1}\) of \(P\). In particular, \(\sigma(i,i)=0\), because it corresponds to the empty fragment of \(P\). Let \(x_{i}\) be the solution of the equation \(B_{i}\circ B_{i+1}\circ B_{i+2}\circ\ldots\circ B_{i+n-1}(x)=x\), where all the indices are taken modulo \(n\). Take \(\alpha=p^{l}\) for some non-negative integer \(l\), and \(\beta=-1\). For this special case the main result of the current paper can be written as \(p^{l}x_{i}-p^{\sigma(i,i+b)}x_{i+b}\in\mathbb{Z}\). This can be visualized by writing \(x_{i}\) as \(p\)-adic numbers in a table and noting that the \(p\)-adic digits at the corresponding place values of \(p^{l}x_{i}\) and \(p^{\sigma(i,i+b)}x_{i+b}\) are identical, except for finitely many digits at lower place values. Let us demonstrate this on an example. Let \(q=2\), \(p=11\), \(P=B_{0}\circ B_{1}\circ B_{2}\circ B_{3}\circ B_{4}\circ B_{5}\circ B_{6}\), where \(B_{0}(x)=B_{1}(x)=B_{2}(x)=B_{3}(x)=B_{5}(x)=T_{0}(x)\), \(B_{4}(x)=S_{5}(x)\), and, \(B_{6}(x)=S_{3}(x)\). Here \(n=7\), \(m=2\), \(q^{n}-p^{m}=2^{7}-11^{2}=7\) and \(U_{i}=2^{i}/7\)\((i=0,1,2,\ldots)\). Note that
\[U_{0}-U_{3}=-1,\ \ 11U_{0}-U_{2}=1,\ \ 11^{2}U_{0}-U_{1}=17,\ \ 11^{3}U_{0}-U_{0}=190,\]
are all integers, which is equivalent to say that
\[7|(1-2^{3}),\ \ 7|(11-2^{2}),\ \ 7|(11^{2}-2^{1}),\ \ 7|(11^{3}-2^{0}),\]
respectively. By the main result of the current paper we can say that
\[x_{i}-11^{\sigma(i,i+3)}x_{i+3},\ \ 11x_{i}-11^{\sigma(i,i+2)}x_{i+2},\ \ 11^{2}x_{i}-11^{\sigma(i,i+1)}x_{i+1},\ \ 11^{3}x_{i}-x_{i},\]
are also integers for \(i=0,1,2,\ldots\). The functions \(B_{i}\), the numbers \(x_{i}\), their 11-adic representations (the letter A means digit 10), and the patterns formed by the digits can be seen in the following table:
\[\begin{array}{lll}x_{0}=53/7&=\ldots\,7\,9\,4\,7\,9\,4\,8\,6&S_{3}=B_{6}\\ x_{6}=302/7&=\ldots\,9\,4\,7\,9\,4\,7\,9\,8\,7&T_{0}=B_{5}\\ x_{5}=151/7&=\ldots\,4\,7\,9\,4\,7\,9\,4\,9\,9&S_{5}=B_{4}\\ x_{4}=848/7&=\ldots\,7\,9\,4\,7\,9\,4\,7\,\mathrm{A}\,4\,8&T_{0}=B_{3}\\ x_{3}=424/7&=\ldots\,9\,4\,7\,9\,4\,7\,9\,5\,2\,4&T_{0}=B_{2}\\ x_{2}=212/7&=\ldots\,4\,7\,9\,4\,7\,9\,4\,8\,1\,2&T_{0}=B_{1}\\ x_{1}=106/7&=\ldots\,7\,9\,4\,7\,9\,4\,7\,9\,6\,1&T_{0}=B_{0}\\ x_{0}=53/7&=\ldots\,9\,4\,7\,9\,4\,7\,9\,4\,8\,6&\\ \end{array}\]
Here the function listed to the right of each row maps that row to the row below.
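These digit patterns can be reproduced with a few lines of exact arithmetic: for a rational \(x\) whose denominator is coprime to \(p\), the least significant \(p\)-adic digit is \(x\bmod p\), and the remaining digits follow by subtracting the digit and dividing by \(p\). A short sketch (digits listed least-significant first):

```python
from fractions import Fraction as F

def p_adic_digits(x, p, ndigits):
    """First ndigits p-adic digits of a rational x with denominator coprime to p."""
    digits = []
    for _ in range(ndigits):
        d = (x.numerator * pow(x.denominator, -1, p)) % p   # x mod p
        digits.append(d)
        x = (x - d) / p
    return digits

# The cycle of this section: x_i = 53/7, 106/7, 212/7, 424/7, 848/7, 151/7, 302/7.
for num in [53, 106, 212, 424, 848, 151, 302]:
    print(num, p_adic_digits(F(num, 7), 11, 10))
# e.g. 53/7 -> [6, 8, 4, 9, 7, 4, 9, 7, 4, 9], i.e. ...9 4 7 9 4 7 9 4 8 6
```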
More examples of such patterns and applications to the original \(3x+1\) problem are given in [1] and the references therein.
The Finite Cycles Conjecture mentioned at the beginning of this paper claims that the only integer cycles for the \(3x+1\) problem are the ones generated by \(x_{0}=0\), \(x_{0}=-1\), \(x_{0}=1\), \(x_{0}=-5\), and \(x_{0}=-17\). These numbers correspond to compositions \(P_{1}=T\), \(P_{2}=S\), \(P_{3}=T\circ S\), \(P_{4}=T\circ S\circ S\), and \(P_{5}=T\circ T\circ T\circ S\circ S\circ S\circ T\circ S\circ S\circ S\circ S\). For these compositions \(q=2\), \(p_{i}=1\) or \(3\), and the numbers in the following table are calculated.
\[\begin{array}{|l|l|l|l|}\hline x_{0}&P&n&q^{n}-p_{0}p_{1}\ldots p_{n-1}\\ \hline 0&P_{1}&1&2^{1}-1=1\\ -1&P_{2}&1&2^{1}-3=-1\\ 1&P_{3}&2&2^{2}-3=1\\ -5&P_{4}&3&2^{3}-3\cdot 3=-1\\ -17&P_{5}&11&2^{11}-3^{7}=-139\\ \hline\end{array}\]
These compositions with integer \(x_{0}\) also show that the main results of the current paper, namely Theorem 3.1 and Corollary 3.2 can not be written as "iff" statements. Indeed, if the numbers \(x_{i}\) (\(i\in\{0,1,\ldots,n-1\}\)) are integers then \(\alpha x_{i}+\beta p_{i}p_{i+1}\ldots p_{i+b-1}x_{i+b}\) is an integer for any choice of integers \(\alpha,\beta,b\), which is not the case for \(\alpha U_{0}+\beta U_{b}\). Nevertheless, Lemma 2.1 can be written as an "iff" statement.
The composition \(P_{5}\) is different from the others in the sense that \(q^{n}-p_{0}p_{1}\ldots p_{n-1}\neq\pm 1\) but \(x_{0}\) is still an integer. The compositions \(P_{i}^{k}\) (\(i=1,2,\ldots,5\); \(k=1,2,\ldots\)), defined recursively by \(P_{i}^{1}=P_{i}\) and \(P_{i}^{k+1}=P_{i}^{k}\circ P_{i}\) (\(i=1,2,\ldots,5\); \(k=1,2,\ldots\)) also have integer \(x_{0}\). Determination of all such compositions with integer \(x_{0}\) or proving that all other compositions correspond to non-integer rational \(x_{0}\) will be helpful for the solution of \(3x+1\) problem. The results of the current paper shed some light on the structure of rational cycles and therefore they might be useful for this purpose.
## 6 Conclusion
In the paper some generalizations of Collatz conjecture or \(3x+1\) problem are studied. Some results are obtained proving that special linear combinations of the terms of rational cycles are integers. Demonstrations of these results on some concrete examples are given. These results are then used to explain some patterns of digits in \(p\)-adic representations of the rational cycles.
|
2307.00645 | Electromagnetic characterization of the LISA verification binary ZTF
J0526$+$5934 | We present an analysis of new and archival data to the 20.506-minute LISA
verification binary J052610.42$+$593445.32 (J0526$+$5934). Our joint
spectroscopic and photometric analysis finds that the binary contains an unseen
$M_1=0.89\pm0.11~{\rm M_\odot}$ CO-core white dwarf primary with an
$M_2=0.38\pm0.07~{\rm M_\odot}$ post-core-burning subdwarf, or low-mass white
dwarf, companion. Given the short orbital period and relatively large total
binary mass, we find that LISA will detect this binary with signal-to-noise
ratio $44$ after 4 years of observations. J0526$+$5934 is expected to merge
within $1.8\pm0.3~{\rm Myr}$ and likely result in a ${\rm D}^6$ scenario Type
Ia supernova or form a He-rich star which will evolve into a massive single
white dwarf. | Alekzander Kosakowski, Thomas Kupfer, P. Bergeron, Tyson B. Littenberg | 2023-07-02T19:47:15Z | http://arxiv.org/abs/2307.00645v2 | # Electromagnetic characterization of the LISA verification binary ZTF J0526\(+\)5934
###### Abstract
We present an analysis of new and archival data to the 20.506-minute LISA verification binary J052610.42\(+\)593445.32 (J0526\(+\)5934). Our joint spectroscopic and photometric analysis finds that the binary contains an unseen \(M_{1}=0.87\pm 0.11\) M\({}_{\odot}\) CO-core white dwarf primary with an \(M_{2}=0.38\pm 0.07\) M\({}_{\odot}\) post-core-burning subdwarf, or low-mass white dwarf, companion. Given the short orbital period and relatively large total binary mass, we find that LISA will detect this binary with signal-to-noise ratio \(2.7\pm 0.6\) after 3 months of observations.
We used archival photometry from ZTF DR16 and ATLAS, together with our new high-speed McDonald light curve, to place constraints on the observed orbital decay of J0526\(+\)5934 and find \(\dot{P}_{\rm obs}=-(1.2\pm 0.2)\times 10^{-11}\), in agreement to within \(1\sigma\) of the expected decay rate based on our photometric and spectroscopic analysis. J0526\(+\)5934 will merge within \(1.9\pm 0.3\) Myr and likely result in a D\({}^{6}\) scenario Type Ia supernova or form a He-rich star which will evolve into a massive single white dwarf.
Compact binary stars (283) -- Gravitational wave sources (677)
## 1 Introduction
White dwarfs represent a relatively simple final evolutionary stage for most single-star stellar evolution. Interactions in a binary system complicate this evolution and can result in a wide range of astrophysically interesting systems. For binary evolution, the more massive star will evolve first, potentially leading to a phase of common-envelope evolution as it evolves onto the asymptotic giant branch. This process strips the primary of its outer layers and leaves behind a CO-core white dwarf in a compact binary with orbital period ranging from hours to days. Depending on the mass ratio of the resulting compact binary, a second common-envelope phase may occur as the companion fills its Roche lobe near the base of the red giant branch. This double-common-envelope evolutionary process results in a double-degenerate binary with orbital period ranging from less than an hour to a only a few hours (Li et al., 2019). Compact post-common-envelope binaries are excellent systems for studying binary evolution. Recent work by Scherbak & Fuller (2023) used compact eclipsing white dwarf binaries to place constraints on the common envelope ejection efficiency.
Compact binaries with periods less than about 6 h are considered to be merging binaries since the rate of their orbital angular momentum loss caused by gravitational wave emission is sufficient to result in a binary merger within a Hubble time. The merging binaries observed today therefore represent a population of progenitor binaries to merger products, such as AM CVn binaries (Kilic et al., 2016), He-rich stars (Zhang et al., 2014), massive single white dwarfs (Cheng et al., 2020; Kilic et al., 2023), and Type Ia supernovae (Woosley et al., 1986; Fink et al., 2007; Liu et al., 2018; Shen et al., 2018). Characterization of merging white dwarf binaries provides constraints on the formation rates and potential formation channels of these merger products. Many compact white dwarf binaries have been discovered through targeted spectroscopic surveys, such as the ELM Survey (Brown et al., 2010, 2022; Kosakowski et al., 2023), and through large-scale systematic searches for photometric variability in time-domain surveys (Burdge et al., 2020; van Roestel et al., 2022; Ren et al., 2023).
White dwarf binaries are expected to be the dominant source of gravitational wave signal for the Laser Interferometer Space Antenna (LISA; Amaro-Seoane et al., 2017). The shortest period binaries, with \(P\lesssim 1\) h, emit gravitational waves at mHz frequencies that may be detected by LISA. LISA is expected to detect \(\mathcal{O}(10^{4})\) of these ultra-compact binaries, but only \(\mathcal{O}(10^{2})\) are expected to also be detectable through their electromagnetic radiation, allowing for a multi-messenger approach to studying binary evolution (Nelemans et al., 2001; Korol et al., 2017; Li et al., 2020; Amaro-Seoane et al., 2023). The strongest gravitational wave emitters will act as "verification binaries," which can be used to calibrate the LISA data set in the first few months of operation. So far, about 40 LISA detectable binaries have been characterized through their electromagnetic radiation (see Finch et al., 2023; Kupfer et al., 2023, and references therein).
Here we present an independent discovery and analysis of a new LISA verification binary with orbital period \(P=20.506\) min, J052610.42+593445.32 (J0526+5934). J0526+5934 was originally reported as a candidate ultra-compact binary by Ren et al. (2023) based on periodic photometric variability seen in the Zwicky Transient Facility (ZTF; Bellm et al., 2019; Graham et al., 2019; Masci et al., 2019) data archive. The authors find that J0526+5934 will be detected by LISA with an expected signal-to-noise ratio \(\mathrm{S/N}_{4}=35.788\) after 4 years of observation.
Throughout this work, we adopt the convention that the unseen massive star, which evolved first, is the primary star, while the relatively low mass companion is the secondary, such that \(M_{1}>M_{2}\). In Section 2 we describe our target selection criteria. In Sections 3 and 4 we provide the details of our spectroscopic and photometric analysis. In Sections 5 and 6 we discuss the expected and measured rate of orbital decay of J0526+5934, prospects for LISA detection, and its potential merger outcomes. Finally, we summarize our results in Section 7.
## 2 Target Selection
We selected all targets from the Gaia eDR3 (Gaia Collaboration et al., 2021) white dwarf catalog (Gentile Fusillo et al., 2021) and performed a generalized period search on their associated ZTF DR10 archival light curves using the astropy (Astropy Collaboration et al., 2022) implementation of the Lomb-Scargle periodogram (Lomb, 1976; Scargle, 1982; VanderPlas, 2018). We include trial frequencies corresponding to periods as short as \(P_{\rm min}=3\) min to identify the periodic photometric variability typically seen in compact binaries, caused by tidal distortions in an approaching-Roche-filling orbit. Our search made use of the Texas Tech University High Performance Computing Center to efficiently process each light curve.
To increase temporal sampling of the ZTF light curves with multiple measurements in different filters, we median-combined the light curves across each filter by artificially shifting the \(r\)- and \(i\)-band data such that their median magnitude values matched the median \(g\)-band magnitude.
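A minimal sketch of this search step is shown below; it assumes per-epoch arrays of times (in days), magnitudes, uncertainties, and filter labels have already been loaded, and it differs in detail from the pipeline actually run on the cluster.

```python
import numpy as np
from astropy.timeseries import LombScargle

def combined_periodogram(t, mag, magerr, band, p_min_minutes=3.0):
    """Median-combine multi-band data and compute a Lomb-Scargle periodogram."""
    mag, band = np.asarray(mag, dtype=float).copy(), np.asarray(band)
    g_median = np.median(mag[band == "g"])
    for b in np.unique(band):
        # Shift each band so its median matches the g-band median magnitude.
        mag[band == b] += g_median - np.median(mag[band == b])

    # Trial frequencies down to a 3-minute period (frequencies in cycles/day).
    frequency, power = LombScargle(t, mag, magerr).autopower(
        minimum_frequency=0.1, maximum_frequency=1440.0 / p_min_minutes)
    best_period_min = 1440.0 / frequency[np.argmax(power)]
    return frequency, power, best_period_min
```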
J0526+5934 (Gaia DR3 282679289838317184) was identified in our search as an ultra-compact binary, with dominant frequency \(f_{\mathrm{peak}}\approx 140.445\) cycles d\({}^{-1}\) (\(P_{\mathrm{peak}}\approx 10.253\) min) and amplitude \(A\approx 0.05\) mag, suggesting ellipsoidal modulation at true orbital period \(P_{\mathrm{true}}\approx 20.506\) min. Figure 1 presents the ZTF light curve (left:top), its Lomb-Scargle power spectrum (left:middle), and the ZTF light curve phase-folded at the most-probable frequency (left:bottom). We mark the location of J0526+5934 on the Gaia DR3 (Gaia Collaboration et al., 2023) color-magnitude diagram (right) as a red star.
Figure 1: Left: ZTF DR16 light curve of J0526+5934 (top), its Lomb-Scargle power spectrum (middle), and phase-folded ZTF DR16 light curve (bottom). Data points are colored based on the filter used. Green data points represent ZTF_\(g\), red data points represent ZTF_\(r\). Right: Gaia DR3 color-magnitude diagram. The location of J0526+5934 is marked with a red symbol.
## 3 Spectroscopic analysis
### Keck Archival Spectroscopy
J0526+5934 was originally observed on UT 2020 September 16 with the Keck 10-meter telescope on Maunakea as part of the program ID 2020B-C282. The observations used LRIS (Oke et al., 1995) with the blue-channel 600/4000 grism (600 lines mm\({}^{-1}\); \(\lambda_{0}=4000\) A), 1.0'' slit, and \(2\times 2\) CCD binning, providing a spectral resolution of \(\approx 4.0\) A over the wavelength range \(3250\sim 5550\) A. These observations include ten consecutive spectra with 2-minute exposures over approximately one full binary orbit.
We downloaded the blue optical spectra and their associated calibration data from the Keck Observatory Archive and reduced the data using standard iraf(Tody, 1993) procedures including image correction, spectral extraction, dispersion correction using HgNeArCdZn arc-lamps, and wavelength calibration using BD28\({}^{\circ}\)4211 standard star observations taken with the same setup.
The optical spectrum of J0526+5934 is dominated by hydrogen absorption features and has relatively shallow He I absorption features at 4912 A, 4471 A, and 4026 A, giving it the DAB classification. We see no evidence of the companion in the Keck spectroscopy.
### Radial Velocity and Kinematics
We estimated the radial velocity for each of the ten blue optical spectra against a zero-velocity low-mass DA white dwarf template spectrum using the cross-correlation package rvsao.xcsao(Kurtz & Mink, 1998) within iraf. We then shifted each of the ten component spectra of J0526+5934 to zero-velocity and co-added them into a single high-quality, zero-velocity spectrum, which we later use to estimate atmospheric parameters. Finally, we obtained precise radial velocity estimates for each component spectrum by using the co-added spectrum as a template for another round of cross-correlation. Our individual radial velocity measurements are presented in Table 1.
We fit a circular orbit to the radial velocity measurements to estimate the orbital period (\(P\)), velocity semi-amplitude (\(K\)), and systemic velocity (\(\gamma\)) of the binary. We find best-fitting parameters \(P_{\rm RV}=20.54\pm 0.12\) min, roughly twice the most-probable period identified through our Lomb-Scargle analysis of the ZTF light curve, with \(K=549.7\pm 4.7\) km s\({}^{-1}\) and \(\gamma=-40.7\pm 4.1\) km s\({}^{-1}\). Our best-fitting orbital solution is presented in Figure 2.
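For illustration, a circular-orbit fit of this form can be written in a few lines with scipy; the parametrization and initial guesses below are generic and are not the exact fitting code we used.

```python
import numpy as np
from scipy.optimize import curve_fit

def circular_orbit(t, period_min, K, gamma, t0):
    """Circular-orbit radial velocity: v(t) = gamma + K sin(2 pi (t - t0) / P)."""
    return gamma + K * np.sin(2.0 * np.pi * (t - t0) / (period_min / 1440.0))

def fit_orbit(mjd, rv, rv_err):
    """Fit P (minutes), K, gamma (km/s), and t0 (days) to the Table 1 velocities."""
    p0 = [20.5, 550.0, -40.0, mjd[0]]
    popt, pcov = curve_fit(circular_orbit, mjd, rv, p0=p0,
                           sigma=rv_err, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))
```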
We estimated Galactic space velocities for J0526+5934 by using our best-fitting systemic velocity and the Gaia DR3 astrometry measurements. We find \(U=47.6\pm 1.9\) km s\({}^{-1}\) (\(U\) positive toward the Galactic center), \(V=-7.3\pm 1.7\) km s\({}^{-1}\), and \(W=3.8\pm 1.1\) km s\({}^{-1}\), corrected for the motion of the local standard of rest (Schonrich et al., 2010), consistent with a Galactic disk population based on the average velocity and dispersion distributions for the Galactic disk and halo from Chiba & Beers (2000).
### Atmospheric Parameters
We estimated the atmospheric parameters of J0526+5934 by fitting a grid of hot subdwarf model atmospheres (Saffer et al., 1994) to the co-added blue optical spectrum and find best-fitting parameters \(T_{\rm eff}=27300\pm 260\) K, \(\log g=6.37\pm 0.03\), and \(\log\frac{N(He)}{N(H)}=-2.45\pm 0.06\), which suggest that J0526+5934 is a post-core-burning subdwarf, or an inflated He-core low-mass white dwarf. We summarize our best-fitting parameters in Table 2. Our best-fitting
\begin{table}
\begin{tabular}{c c} \hline \hline \multicolumn{1}{c}{MJD} & \multicolumn{1}{c}{\(v_{\rm r}\)} \\ \multicolumn{1}{c}{(d)} & \multicolumn{1}{c}{(km s\({}^{-1}\))} \\ \hline
59108.569310 & \(497.88\pm 2.99\) \\
59108.571125 & \(289.56\pm 9.29\) \\
59108.572941 & \(-131.02\pm 7.99\) \\
59108.574759 & \(-494.01\pm 11.18\) \\
59108.576571 & \(-593.67\pm 7.84\) \\
59108.578389 & \(-331.64\pm 6.56\) \\
59108.580201 & \(94.91\pm 8.08\) \\
59108.582019 & \(442.35\pm 15.86\) \\
59108.583831 & \(504.10\pm 6.96\) \\
59108.585649 & \(233.46\pm 10.03\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Radial velocity measurements for J0526+5934 based on our cross-correlation fit.
Figure 2: Best circular-orbit fit (red dashed line) to the Keck LRIS radial velocity measurements of ZTF J0526+5934 (black points).
model is over-plotted onto the Keck blue optical spectrum in Figure 3.
## 4 Photometric Analysis
### Spectral Energy Distribution
A spectral energy distribution (SED) fit was performed to measure the radius and mass of J0526+5934. The angular diameter of the star is measured and, in combination with the Gaia DR3 parallax, we derive the radius of the visible component in J0526+5934. The luminosity and mass are calculated using the atmospheric parameters measured from spectroscopy. This method is described in detail by Heber et al. (2018). Because J0526+5934 is missing archival GALEX UV and SDSS \(u\)-band photometry, we fixed the effective temperature and surface gravity to our spectroscopic values.
Using the functions of Fitzpatrick et al. (2019), we account for interstellar reddening. The color excess \(E\,(44-55)\) is treated as a free parameter and the extinction parameter \(R(55)\) was fixed to the standard value of 3.02. To estimate the radius we apply \(R=\Theta/(2\varpi)\), where \(\Theta\) is the angular diameter derived from the SED fit and \(\varpi\) is the parallax extracted from Gaia DR3. The mass follows from \(M=gR^{2}/G\), where \(g\) is the surface gravity and \(G\) is the gravitational constant. Our fit to the available SED, including Gaia \(G\), \(G_{\rm BP}\), and \(G_{\rm RP}\), PanSTARRS \(grizy\)(Chambers et al., 2016), and (un)WISE \(W1\)(Schlafly et al., 2019), finds \(R_{2}=0.061^{+0.006}_{-0.005}\) R\({}_{\odot}\), corresponding to mass \(M_{2}=0.32^{+0.06}_{-0.05}\) M\({}_{\odot}\).
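The geometric relations used here reduce to a few lines; as a rough cross-check with the quoted central values (no uncertainty propagation):

```python
G = 6.674e-8                          # cgs
R_SUN, M_SUN = 6.957e10, 1.989e33     # cgs

def mass_from_logg_radius(logg, radius_rsun):
    """M = g R^2 / G, with g from the spectroscopic log g (cgs)."""
    return 10.0**logg * (radius_rsun * R_SUN)**2 / G / M_SUN

# Radius from the angular diameter and parallax: R = Theta / (2 * parallax), both in radians.
# With the fitted R_2 = 0.061 R_sun and log g = 6.37:
print(mass_from_logg_radius(6.37, 0.061))   # ~0.32 M_sun, matching the SED solution
```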
### Light Curve Modeling
We obtained high-speed \(g^{\prime}\), \(r^{\prime}\), and \(i^{\prime}\)-band follow-up light curves of J0526+5934 using the McDonald 2.1-meter telescope on 2022 September 30, 2022 October 01, and 2022 October 02, respectively.
We used lcurve(Copperwheat et al., 2010) to perform simultaneous \(g^{\prime}\)-, \(r^{\prime}\)-, and \(i^{\prime}\)-band modeling to our McDonald light curves. We fit for the mass ratio (\(q=M_{2}/M_{1}<1.0\)), orbital inclination (\(i\)), scaled companion radius (\(r_{2}=R_{2}/a\)), and time of primary conjunction (\(t_{0}\)). We included Gaussian priors on the surface gravity, effective temperature, velocity semi-amplitude, and radius of the low-mass companion based on the values obtained from our fits to the optical spectroscopy and available SED. We used gravity and quadratic limb-darkening coefficients from Claret et al. (2020) for DA white dwarfs with atmospheric parameters \(T_{\rm eff,2}=27,500\) K, \(\log g_{2}=6.37\) and \(T_{\rm eff,1}=10,000\) K, \(\log g_{1}=8.00\).
We find most-probable model parameters \(q=0.438^{+0.055}_{-0.049}\), \(i=56.8^{+4.7}_{-4.0}\), and \(R_{\rm 2,vol}=0.070\pm 0.005\) R\({}_{\odot}\), where \(R_{\rm vol}\) is the volumetric radius.
\begin{table}
\begin{tabular}{l r} \hline \hline Source ID (Gaia DR3) & 282679289838317184 \\ R.A. (2016.0) & 05:26:10.420 \\ Decl. (2016.0) & +59:34:45.318 \\ Gaia G (mag) & \(17.563\pm 0.003\) \\ Parallax (mas) & \(1.18\pm 0.09\) \\ \hline \(P_{\rm ZTF}\) (s) & \(1230.375241\) \\ \(\dot{P}_{\rm expected}\) (s s\({}^{-1}\)) & \(-(8.56\pm 2.99)\times 10^{-12}\) \\ \(\dot{P}_{\rm measured}\) (s s\({}^{-1}\)) & \(-(1.16\pm 0.23)\times 10^{-11}\) \\ \hline \(T_{\rm eff}\) (K) & \(27,300\pm 260\) \\ \(\log g\) (cm s\({}^{-2}\)) & \(6.37\pm 0.03\) \\ \(\log\frac{N(He)}{N(H)}\) & \(-2.45\pm 0.06\) \\ \(K\) (km s\({}^{-1}\)) & \(549.7\pm 4.7\) \\ \(\gamma\) (km s\({}^{-1}\)) & \(-40.7\pm 4.1\) \\ \hline \(q=\frac{M_{2}}{M_{1}}\) & \(0.438^{+0.055}_{-0.049}\) \\ \(r_{2}=\frac{R_{2}}{a}\) & \(0.292\pm 0.011\) \\ \(r_{2,\rm vol}\) & \(0.266\pm 0.009\) \\ \(i\) (\({}^{\circ}\)) & \(56.8^{+4.7}_{-4.0}\) \\ \hline \(M_{2}\) (M\({}_{\odot}\)) & \(0.380^{+0.067}_{-0.060}\) \\ \(M_{1}\) (M\({}_{\odot}\)) & \(0.868^{+0.114}_{-0.103}\) \\ \(a\) (R\({}_{\odot}\)) & \(0.265\pm 0.011\) \\ \(R_{2,\rm vol}\) (R\({}_{\odot}\)) & \(0.070\pm 0.005\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Fitted and archival parameter values for J0526+5934.
Figure 3: Best-fitting model atmosphere to the co-added optical spectrum of ZTF J0526+5934.
These parameters correspond to stellar masses \(M_{2}=0.380^{+0.067}_{-0.060}\) M\({}_{\odot}\) and \(M_{1}=0.868^{+0.114}_{-0.103}\) M\({}_{\odot}\), in agreement to within 1\(\sigma\) of the mass and radius estimates from our SED fitting. We adopt the light curve modeling solution as the true mass and radius and summarize these parameters in Table 2. Figure 4 presents a corner-plot of our parameter distributions with the most-probable model over-plotted onto our McDonald light curves.
## 5 Orbital Decay
### Expected Orbital Decay
The orbit of compact binaries decays due to the loss of orbital angular momentum through the emission of gravitational waves. We estimated the magnitude of this effect for J0526+5934 using Equation 1(Landau & Lifshitz, 1975; Piro, 2019)
\[\dot{P}_{\rm GW}=-\frac{96}{5}\frac{G^{3}}{c^{5}}\frac{M_{1}M_{2}(M_{1}+M_{2}) }{a^{4}}P_{\rm orb} \tag{1}\]
where \(a\) is the binary separation. We find that the expected rate of orbital decay in J0526+5934 due to the emission of gravitational waves is \(\dot{P}_{\rm GW}=-(8.14\pm 2.84)\times 10^{-12}\) s s\({}^{-1}\), where the large uncertainties are dominated by our uncertainty in the component masses.
Tidal interactions contribute to the total orbital decay in ultra-compact binaries as orbital energy is used to spin up the stars in the binary. We ignored the effects of tidal heating, assumed that the stars are tidally locked, and estimated the contribution from tidal interactions to the orbital decay of J0526+5934 using Equation 2 (see Equation 6 in Piro, 2019)
\[\dot{P}_{\rm total}=\dot{P}_{\rm GW}\left[1-3\frac{(I_{1}+I_{2})}{a^{2}}\frac{ (M_{1}+M_{2})}{M_{1}M_{2}}\right]^{-1} \tag{2}\]
where \(I_{i}=k_{i}M_{i}R_{i}^{2}\) is the moment of inertia of each star. Burdge et al. (2019) finds \(k_{2}=0.066\) and \(k_{1}=0.14\) based on white dwarf models for less massive stellar components in an ultra-compact binary, while Marsh et al. (2004) finds that \(k\approx 0.2\) is an appropriate estimate for white dwarfs based on the Eggleton zero-temperature mass-radius relation. We used \(k_{1}=k_{2}=0.15\pm 0.05\) and find total orbital decay \(\dot{P}_{\rm GW}+\dot{P}_{\rm tides}=-(8.56\pm 2.99)\times 10^{-12}\) s s\({}^{-1}\), corresponding to tidal contribution \(\frac{\dot{P}_{\rm tides}}{\dot{P}_{\rm total}}=4.9\pm 1.9\%\).
### Orbital Decay Measurements
The orbital decay of compact binaries can be directly measured as an observable through timing offsets in periodic photometric variability, such as with precise eclipse timing measurements over multi-year baselines (see Hermes et al., 2012; Burdge et al., 2019, 2020, 2023). The precise \(t_{0}\) measurement from the ellipsoidal variability in our McDonald light curves presents an opportunity to measure the orbital decay of J0526+5934 when combined with archival light curve data from photometric surveys.
We searched the Palomar Transient Factory (PTF; Law et al., 2009) data archive but find only five epochs of data, which is insufficient for timing measurements. However, both ZTF DR16 and the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al., 2018; Heinze et al., 2018) provide consistent multi-year baselines for orbital decay measurements of J0526+5934.
We downloaded the ZTF DR16 photometry from the online data archive and split the data into five bins with roughly equal timing baseline, based on the gaps in the data between data releases. We median-combined data across filters for each subset of data to increase temporal sampling. Our ZTF data bins contained \(N=\)[156, 242, 168, 89, 80] data epochs.
For the ATLAS data, we used the ATLAS forced-photometry service (Shingles et al., 2021) to collect the reduced image photometric measurements of J0526+5934. For each filter, we cleaned the ATLAS data by removing data with \(\sigma_{\rm Flux}>30\)\(\mu\)Jy and performing three iterations of 3\(\sigma\) clipping about the mean flux value. We then median-combined data across filters and split the cleaned data set into four bins with
Figure 4: Left: Best-fitting lccurve models over-plotted onto the phase-folded McDonald 2.1-meter telescope \(g^{\prime}\)-band (top), \(r^{\prime}\)-band (middle), and \(i^{\prime}\)-band (bottom) light curves. Right: Parameter distributions for the mass ratio (\(q=M_{2}/M_{1}<1.0\)), orbital inclination (\(i\)), volumetric scaled stellar radius (\(r_{2,\rm vol}=R_{2,\rm vol}/a\)), and stellar mass \(M_{2}\). We mark the upper and lower 1\(\sigma\) error values as the 84.13 and 15.97 percentiles to each parameter distribution, respectively.
roughly equal timing baseline, based on the large gaps in data between data releases, resulting in bins containing \(N=\)[133, 531, 483, 447] data epochs.
We converted each of the ZTF and ATLAS timing measurements to mid-exposure BJD\({}_{\rm TDB}\) to match our McDonald observations and used lcurve to fit for the time of primary conjunction (\(t_{0}\)) for each subset of ZTF and ATLAS data, with the other model parameters fixed at the most-probable values obtained from our McDonald light curve modeling.
We combined our McDonald \(t_{0}\) measurement with the best-fitting ZTF and ATLAS \(t_{0}\) measurements and fit for the observed \(\dot{P}\) using the orthogonal distance regression package within scipy(Virtanen et al., 2020). We included the \(1\sigma\) uncertainties in our \(t_{0}\) measurements and the total timing baseline for each subset of data as uncertainties in our fit.
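A minimal sketch of such a fit with scipy's orthogonal distance regression is given below; the timing arrays are placeholders rather than our measured values, and the quadratic model assumes a constant \(\dot{P}\).

```python
# Minimal sketch of the quadratic (O-C) fit with scipy's orthogonal distance
# regression.  The timing arrays below are placeholders, not our measurements.
import numpy as np
from scipy import odr

P = 20.506 * 60.0                                   # orbital period in seconds
t = np.array([-2200.0, -1050.0, -420.0, 0.0])       # days relative to t0
oc = np.array([-205.0, -47.0, -8.0, 0.0])           # (O-C) in seconds
t_err = np.array([280.0, 140.0, 135.0, 1.0])        # effective baselines (days)
oc_err = np.array([16.0, 7.0, 6.0, 1.3])            # timing errors (seconds)

def quadratic(beta, x):
    # (O-C)(t) ~ dT0 + (dP/P) t + 0.5 (Pdot/P) t^2, with t converted to seconds
    dT0, dP_over_P, pdot = beta
    ts = x * 86400.0
    return dT0 + dP_over_P * ts + 0.5 * (pdot / P) * ts**2

fit = odr.ODR(odr.RealData(t, oc, sx=t_err, sy=oc_err),
              odr.Model(quadratic), beta0=[0.0, 0.0, -1e-11])
out = fit.run()
print(out.beta[2], out.sd_beta[2])                  # fitted Pdot and its error
```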
We find a best-fitting observed orbital decay of \(\dot{P}_{\rm obs}=-(1.4\pm 0.2)\times 10^{-11}\ {\rm s\ s^{-1}}\). We present our \((O-C)\) diagram with respect to our McDonald \(t_{0}\) measurement in Figure 5. ATLAS measurements are shown as triangular symbols and ZTF DR16 measurements are shown as filled circles. We plot the expected orbital decay curve in blue and the best-fitting observed orbital decay curve in red. We include the \(1\sigma\) uncertainties for each curve as shaded regions. Our individual \(t_{0}\) measurements for each epoch of data, with their respective timing baselines, are presented in Table 3.
## 6 Discussion
### LISA Detection
We used legwork (Wagg et al., 2022) v0.4.6 to estimate the gravitational wave strain and LISA signal-to-noise ratio for J0526+5934 by creating a population of binaries based on the parameter distributions obtained from our photometric and spectroscopic analysis, including the orbital inclination, Gaia distance, and sky position. We find that J0526+5934 has 4-year LISA gravitational wave strain \(h_{0,4}\approx(1.6\pm 0.3)\times 10^{-19}\) and 4-year LISA signal-to-noise S/N \(\approx 28\pm 6\). LISA will detect J0526+5934 after 3 months of observations with signal-to-noise S/N\({}_{0.25}\approx 2.7\pm 0.6\), making J0526+5934 a LISA verification binary.
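The snippet below is a minimal sketch of this type of estimate, assuming legwork's Source/get_snr interface; the masses and distance are placeholders, and the inclination and sky-position sampling described above are ignored.

```python
# Minimal sketch of the LISA SNR estimate with legwork, assuming its
# Source/get_snr interface.  Masses and distance are placeholders, and the
# inclination / sky-position sampling used in the text is ignored here.
import numpy as np
import astropy.units as u
from legwork import source

P_orb = 20.506 * u.min
binary = source.Source(m_1=np.array([0.87]) * u.Msun,   # placeholder mass
                       m_2=np.array([0.38]) * u.Msun,
                       ecc=np.array([0.0]),
                       dist=np.array([1.0]) * u.kpc,    # placeholder distance
                       f_orb=np.array([1.0]) / P_orb)

snr_4yr = binary.get_snr(t_obs=4 * u.yr)     # nominal 4-year mission
snr_3mo = binary.get_snr(t_obs=0.25 * u.yr)  # first 3 months
print(snr_4yr, snr_3mo)
```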
### Merger Outcome
We find that J0526+5934 will merge within \(\tau_{\rm GW}=1.9\pm 0.3\) Myr due to loss of orbital angular momentum
Figure 5: \((O-C)\) diagram for J0526+5934 including data from ZTF DR16 (filled circles) and ATLAS (triangles), measured against our McDonald \(t_{0}\) (star symbol). We plot the expected (blue) and best-fit (red) \((O-C)\) curves as dashed lines with their \(1\sigma\) uncertainties as shaded regions. The horizontal error bars represent the effective observing baseline for each set of data.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Source & \(t_{0}\) (d\({}^{+\rm s}_{-\rm s}\)) & \(t_{\rm min}\) (d) & \(t_{\rm max}\) (d) \\ \hline ATLAS & 2457628.843052\({}^{+16.5}_{-16.4}\) & 2457308.035550 & 2457870.767799 \\ ATLAS & 2458804.937155\({}^{+7.4}_{-7.2}\) & 2458005.112014 & 2458604.771709 \\ ZTF DR16 & 2458431.933491\({}^{+5.8}_{-5.9}\) & 2458207.747501 & 2458608.676285 \\ ZTF DR16 & 2458794.011302\({}^{+5.0}_{-5.2}\) & 2458698.971495 & 2458967.693652 \\ ATLAS & 2459011.946910\({}^{+6.2}_{-6.1}\) & 2458729.106084 & 2459324.752035 \\ ZTF DR16 & 2459195.876625\({}^{+6.0}_{-6.3}\) & 2459055.973840 & 2459325.714903 \\ ZTF DR16 & 2459489.941731\({}^{+7.6}_{-7.4}\) & 2459422.972369 & 2459688.679649 \\ ATLAS & 2459752.920198\({}^{+7.5}_{-7.5}\) & 2459449.080140 & 2460056.759440 \\ ZTF DR16 & 2459824.962623\({}^{+7.8}_{-8.3}\) & 2459787.972915 & 2459937.822458 \\ McDonald & 2459854.910239\({}^{+1.2}_{-1.3}\) & 2459853.834626 & 2459855.934370 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Best-fitting times of primary conjunction (\(t_{0}\)) based on our light curve modeling to the ZTF DR16, ATLAS forced photometry, and McDonald light curves of J0526+5934, presented in mid-exposure BJD\({}_{\rm TDB}\) timing. We provide the minimum and maximum epoch included for each data set considered in our analysis.
from a combination of gravitational wave emission and tidal interaction. However, given our large mass uncertainties, the merger outcome of J0526+5934 is uncertain.
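For orientation, the GW-only merger timescale for a circular orbit can be sketched as follows; it neglects the small tidal contribution and reuses the illustrative masses from the earlier sketches.

```python
# Sketch of the GW-only merger timescale for a circular orbit (Peters 1964),
# using the same illustrative masses as above; tidal effects are neglected.
import numpy as np
from astropy import units as u
from astropy.constants import G, c

P = 20.506 * u.min
m1, m2 = 0.87 * u.Msun, 0.38 * u.Msun        # assumed component masses
a = ((G * (m1 + m2) * P**2) / (4 * np.pi**2))**(1 / 3)

tau_gw = (5 / 256 * c**5 * a**4 / (G**3 * m1 * m2 * (m1 + m2))).to(u.Myr)
print(tau_gw)   # ~2 Myr, consistent with the quoted merger time
```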
On the median and upper-end of our mass estimates, we find that the most likely merger outcome of J0526+5934 is a "dynamically driven double-degenerate double-detonation" (D\({}^{6}\)) scenario in which unstable mass transfer ignites a Helium detonation near the surface of the accretor, which triggers a CO-core detonation and results in a sub-Chandrasekhar Type Ia supernova explosion of the accretor (Dan et al., 2012, 2015; Shen et al., 2018; Wong and Bildsten, 2023). In this double-detonation scenario, the low-mass donor may survive its companion's explosion as a hyper-velocity star, retaining its orbital speed from before the explosion (see Shen et al., 2018; Bauer et al., 2021; El-Badry et al., 2023).
On the lower-end of our mass estimates, we find that the merger of J0526+5934 is likely to result in a stable He-rich star (Zhang et al., 2014), such as an R Coronae Borealis type star (Webbink, 1984). This would naturally evolve into a massive CO white dwarf over time, contributing to the large fraction of merger products in the population of massive single white dwarfs (see Cheng et al., 2020; Kilic et al., 2023).
## 7 Summary & Conclusions
In this work, we have presented our spectroscopic and photometric analysis of a new \(P=20.506\) min ultra-compact LISA verification binary, independently discovered in the ZTF data archive and first reported in Ren et al. (2023).
We used archival Keck LRIS spectroscopy to estimate the atmospheric parameters of the visible component and find that, with \(\log g_{2}=6.37\pm 0.03\), the low-mass visible star is a post-core-burning hot subdwarf or an inflated low-mass He-core white dwarf. We performed light curve modeling to new multi-band high-speed photometry from the McDonald Observatory and find mass ratio \(q=0.438^{+0.055}_{-0.049}\), mass \(M_{2}=0.380^{+0.067}_{-0.060}\) M\({}_{\odot}\), and volumetric radius \(R_{2,\rm vol}=0.070\pm 0.005\) R\({}_{\odot}\), consistent with the estimates from our best-fitting SED model, \(M_{2,\rm SED}=0.32^{+0.06}_{-0.05}\) M\({}_{\odot}\) and \(R_{2,\rm SED}=0.061^{+0.006}_{-0.005}\) R\({}_{\odot}\).
We estimated the rate of orbital decay based on our most-probable system parameters and find that J0526+5934 will merge within \(1.9\pm 0.3\) Myr and most likely result in a D\({}^{6}\) scenario supernova explosion or form a He-rich star that eventually evolves into a massive single white dwarf. Our mass estimates remain uncertain, which results in large uncertainties in the potential merger outcome; future high-speed photometric observations are important for more precisely characterizing the chirp mass, orbital inclination, and observed orbital decay of J0526+5934, which will help characterize the expected LISA gravitational wave signal and provide a precise picture of the eventual fate of J0526+5934.
AK acknowledges support from NASA through grant 80NSSC22K0338. TK acknowledges support from the National Science Foundation through grant AST #2107982, from NASA through grant 80NSSC22K0338 and from STScI through grant HST-GO-16659.002-A. We thank Andreas Irrgang for the development of the spectrum and SED-fitting tools and his contributions to the model atmosphere grids.
This work was supported in part by NSERC Canada and by the Fund FRQ-NT (Quebec).
The authors acknowledge the High Performance Computing Center at Texas Tech University for providing computational resources that have contributed to the research results reported within this paper.
Based on observations obtained with the Samuel Oschin 48-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grant No. AST-1440341 and a collaboration including Caltech, IPAC, the Weizmann Institute for Science, the Oskar Klein Center at Stockholm University, the University of Maryland, the University of Washington, Deutsches Elektronen-Synchrotron and Humboldt University, Los Alamos National Laboratories, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. The Asteroid Terrestrial-impact Last Alert System (ATLAS) project is primarily funded to search for near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. This work was partially funded by Kepler/K2 grant J1944/80NSSC19K0112 and HST GO-15889, and STFC grants ST/T000198/1 and ST/S006109/1. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen's University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile.
This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck
Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration.
Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration.
Facilities: Struve (ProEM), Keck:I (LRIS). Software: astropy (Astropy Collaboration et al., 2013, 2018, 2022), iraf (Tody, 1986, 1993), legwork (Wagg et al., 2022), lcurve (Copperwheat et al., 2010), scipy (Virtanen et al., 2020).
|
2306.10019 | Ethical Considerations Towards Protestware | A key drawback to using an Open Source third-party library is the risk of
introducing malicious attacks. In recent times, these threats have taken a
new form, when maintainers turn their Open Source libraries into protestware.
This is defined as software containing political messages delivered through
these libraries, which can either be malicious or benign. Since developers are
willing to freely open up their software to these libraries, much trust and
responsibility are placed on the maintainers to ensure that the library does
what it promises to do. Using different frameworks commonly used in AI ethics,
we illustrate how an open-source maintainer's decision to protest is influenced
by different stakeholders (viz., their membership in the OSS community, their
personal views, financial motivations, social status, and moral viewpoints),
making protestware a multifaceted and intricate matter. | Marc Cheong, Raula Gaikovina Kula, Christoph Treude | 2023-05-27T10:59:48Z | http://arxiv.org/abs/2306.10019v2 | # Ethical Considerations Towards Protestware
###### Abstract
A key drawback to using an Open Source third-party library is the risk of introducing malicious attacks. In recent times, these threats have taken a new form, when maintainers turn their Open Source libraries into protestware. This is defined as software containing political messages delivered through these libraries, which can either be malicious or benign. Since developers are willing to freely open up their software to these libraries, much trust and responsibility are placed on the maintainers to ensure that the library does what it promises to do. This paper takes a look into the possible scenarios where developers might consider turning their Open Source Software into protestware, using an ethico-philosophical lens. Using different frameworks commonly used in AI ethics, we explore the different dilemmas that may result in protestware. Additionally, we illustrate how an open-source maintainer's decision to protest is influenced by different stakeholders (_viz._, their membership in the OSS community, their personal views, financial motivations, social status, and moral viewpoints), making protestware a multifaceted and intricate matter.
"When people feel they are not being heard, they may resort to different measures to get their message across. In the case of programmers, they have the unique ability to protest through their code."
Kula & Treude (2022)[1]
"Consequently, he found himself confronted by two very different modes of action; the one concrete, immediate, but directed towards only one individual; and the other an action addressed to an and infinitely greater... but for that very reason ambiguous... He had to choose between those two. What could help him to choose? "
Sartre (1946)
_Existentialism Is a Humanism_ (trans.: Mairet) [2]
## I Introduction
In this article, we articulate the motivations behind maintainers who turn their Open Source Software (OSS) into protestware. Although ethics in computing is not new, the phenomenon of protestware is unique in that the power of responsibility is placed on individuals (i.e., sometimes a single library maintainer), as opposed to the often-diffused responsibilities behind the deployment of AI and other technologies, where more than one person participates. We then explore the dilemmas that a library maintainer may face. We also discuss potential guidelines and larger ethical implications for the open source, industry, research, and education sectors.
## II Background
### _Context_
In March 2022, the maintainer of node-ipc [3], a widely used software library, intentionally introduced a vulnerability into their code. If the code was run within Russia or Belarus, it would attempt to replace all files on the user's device with a heart emoji.1 This critical security flaw (i.e., CVE-2022-23812 [4]) highlights the trend of programmers intentionally sabotaging their code for political purposes, a practice known as "_protestware_" [1].
Footnote 1: [https://techcrunch.com/2022/07/27/protestware-code-sabotage/](https://techcrunch.com/2022/07/27/protestware-code-sabotage/)
The malicious code was intended to overwrite arbitrary files depending upon the geolocation of the user's IP address: in essence, attacking software in specific locations. Specifically, the affected versions 10.1.1 and 10.1.2 of the library check whether the host machine has an IP address in Russia or Belarus, and if so, overwrites every file it could with a heart symbol. Version 10.1.3 was released soon after without this destructive functionality, while Versions 10.1.1 and 10.1.2 were removed from the NPM registry.
Responses from the community varied, including frustrations that led to insightful discussions. One example from a contributor on the GitHub Discussions channel is shown below [5]:
_I'm very happy to see that the principles and character of many in tech (FOSS especially) remain clear enough to recognize how completely wrong this was. Of course, if the marketplace of current things keeps hammering away at this, it will benefit a small number of corporate giants (misplaced trust/safety). I hope we all start seeing these patterns as we grapple with a general blurring of lines between tools for marketing and weapony. It's essential to ask: what's the outcome and who benefits? I like to ask the faux ideologues "who agrees with you?" "Isn't it strange how well aligned you are with a small number of very visible, influential, and powerful organizations?" "What's the fight and who is on which side, again?" It's about competency, not power. Power feeds and is fueled by egocentrism (plainly, weak vanity). Competery comes from discovering your natural gifts and applying them._ (sic.)
Another user from that GitHub Discussion quoted how this affected the Open Source Community [5]:
_The trust factor of open source, which was based on goodwill of the developers is now practically gone, and now, more and more people are realizing that
one day, their library/application can possibly be exploited to do/say whatever some random dev on the internet thought was 'the right thing to do'_. (sic.)
The maintainer in question defended his module on GitHub, saying that _"this is all public, documented, licensed and open source"_[1]. Earlier, there were more than 20 issues flagged against node-ipc about its behavior. Some of the comments referred to the creation as _"protestware, while others might call it malware"_[1].
We present another case where the protestware does not have malicious intent, but aims at increasing awareness. The same maintainer of the node-ipc library then created the peacenotwar library [6]. As explained by the maintainer, it serves as a non-violent protest against Russia's aggression. Instead of maliciously deleting files, the module adds a message of peace on users' desktops [7]. The maintainer was quoted in the README file2:
Footnote 2: [https://github.com/RIAEvangelis/pencontwar](https://github.com/RIAEvangelis/pencontwar)
_I pledge that this module, to the best of my knowledge and skills, does not do any damage to anyone's data. If you do not like what this module does, please just lock your dependencies to any of my work or other's which includes this module, to a version you have code reviewed and deemed acceptable for your needs. Also, please code-review your other modules for vulnerabilities._
### _Characterization_
Protestware can take three forms [1]:
**malignant protestware**: which intentionally damages or takes control of a user's device without their knowledge or consent;
**benign protestware**: which raises awareness about a social or political issue without causing harm (e.g., changes to license files3); and
**developer sanctions**: where programmers' accounts are suspended by internet hosting services (e.g., GitHub suspending Russian accounts4).
Footnote 3: [https://github.com/RIAEvangelis/pencontwar](https://github.com/RIAEvangelis/pencontwar)
Protestware can manifest in various ways, such as project documentation (e.g., README banner), communication (e.g., log messages), environment (e.g., injected code on target machines), or output (e.g., file deletions). The latter two forms are often considered security vulnerabilities,5 while protest through documentation or log messages is not typically classified as requiring security advisories. Protestware can have a wide-ranging impact on numerous stakeholders, including other contributors to the same project, direct and indirect users of the project, and even the entire open-source community and its newcomers.
Footnote 4: [https://github.com/RIAEvangelis/pencontwar](https://github.com/RIAEvangelis/pencontwar)
The rise of protestware raises the question of whether it is ethical to intentionally worsen something in order to make a point. This issue is particularly significant in software ecosystems where code is frequently reused [8], as these ecosystems rely on the trust and reliability of the code being used. If a programmer introduces protestware into their code, it can create a ripple effect: compromising the security and stability of the software for all users who reuse that code, potentially affecting individuals and businesses that rely on the software, as well as further dependencies.
### _Protestware: Beyond Computing Ethics?_
The literature for computing and AI ethics is rapidly growing, predominantly on sociotechnical systems and "artificial moral agents" [9] such as algorithmic recommender systems, self-driving cars, algorithmic policing and law enforcement, and AI-based person recognition.
What sets our proposed analysis of protestware apart from existing computing ethics studies is that the stakeholders involved have a more personal, direct, and explicit involvement. To illustrate: consider a company (_BigCorp_) whose self-driving car has injured a pedestrian: it is hard to see which individual programmer or engineer is responsible for the injury, given the 'diffusion' of responsibility through _BigCorp_'s organisational structure. Contrast this with an individual volunteer maintainer (_Violet_) of an open source software library, who exhibits a form of activism by adding a few lines of code to prevent her software library from working in certain regions.
Protestware ethics is an extension to 'traditional' AI ethics: the power of responsibility is placed on - or diffused amongst - individuals, as opposed to large AI corporations. In order to think about new guidelines for deciding ethical actions and consequences for _Violet_, we will need to briefly explore the landscape of applied ethics.
## III Ethics: A Primer
There exist various ethical theories, each with their own pros and cons [10], which sometimes even conflict with each other in terms of their application and evaluation. Ethics, simply put, is about doing the "_right_" things. However, there is no clear answer to what makes things "right" or "moral". Enter _applied ethics_ - the application of "one moral theory or other... upon the applied ethics problem at hand, in the hopes of producing a resolution" [11].
The field of medicine was one of the forerunners of this (e.g., by asking, _Using common principles of medical ethics, how do we do no harm to patients under our care?_). This idea was quickly adapted into -- and gained traction within -- research in AI and computing in recent years (e.g., by similarly asking, _How do self-driving cars do no harm to pedestrians and drivers we are responsible for, if those same principles of medical ethics are adopted and adapted?_). Philosophers from Immanuel Kant and Jeremy Bentham in the 18th century, to Tom Beauchamp and James Childress in the 20th century, have proposed different approaches to this question. Here, we discuss three popular approaches, taking a leaf from the current state of computing ethics [9].
### _Duty Ethics_
The ethical theory of _deontology_ - or 'duty ethics' - basically posits that a moral agent6 has a "sense of duty" or requirement to do the right thing, which guides their actions. Immanuel Kant's 'Categorical Imperative' offers a beautiful application of this, as summarised succinctly by Rachels: "When you are thinking about doing something, ask what rule you would be following if you actually did it... Then ask whether you would be willing for your [rule]... to become a universal law" [10]. In our example, _Violet_ ought to think about what would happen if _all_ programmers acted the way she did, with no exception: would she be able to live with the consequences on a universal level? We can quickly see that there are exceptions to this: what happens if, say, by following her hypothetical universal law, one programmer's actions end up disabling life support machines in hospitals?
Footnote 6: Simply put, a ‘moral agent’ is usually a person who has agency to decide how to act.
Following this maxim, hypothetically, if all 2,215,398 packages in the npmjs ecosystem decided to inject malicious code, it would break every library in the ecosystem, as well as all the applications that adopt those libraries. Since JavaScript is the top-ranked language on GitHub, this is a significant portion of GitHub projects as well. Although her intent was to protest against specific persons or groups, such behavior would disqualify the code from being openly distributed. Hence, beyond _moral_ harm, it would also run afoul of its open source license, which has _legal_ implications for open source software usage. According to a market report7 and a 2019 Red Hat report, OSS is used by 53% of organisations for IT infrastructure, by 43% for integrations, and by 42% for digital transformation. Permissive licenses are used by 76% of OSS and only 24% are copyleft.
Footnote 7: [https://www.fortunebusinessinsights.com/open-source-services-market-106469](https://www.fortunebusinessinsights.com/open-source-services-market-106469)
**Duty Ethics:** An example of applying the Kantian Categorical Imperative to the dilemma would be, e.g., _What if all programmers regard malicious code injection as ethically imperative?_ One can argue that the library will ultimately violate the tenets of OSS, leading to a ban on the library and the user. Hence, by implication, malicious code injection is not ethical.
### _Consequentialist ethics_
Next, another ethical theory which appeals to many computing professionals, due to its 'mathematical' nature, is _utilitarianism_ - under the broad banner of _consequentialist_ ethics. Simply put, it is the consequences that are to be the yardstick by which to measure one's initial actions; and it is one's responsibility to maximise the overall happiness (or 'utils' if you wish, mathematically-speaking) across all stakeholders, and to minimise any unhappiness [10]. First proposed by philosophers such as Jeremy Bentham, it is thus easy to understand the attractiveness of this theory, as it boils down to an optimisation problem of 'utils'. However, translating it into practice is much harder than it looks on the surface: for starters, how could Violet measure the overall nett gain of her activism? When the 'utils' are divided across the entire population, does an individual merely get an infinitesimal \(+0.0001\)? Also, who gets to be the arbiter of the quantity of the individual utils? Most crucially, how would she quantify the overall nett loss of unintended consequences? (Again with our life support machine example from before: is the unintended death of a person worth \(-10,000\)? \(-100,000\)? Some might decide that it is simply _unquantifiable_!)
In terms of the consequences, the risks of losing users of the library, the potential community of contributors, and the maintainer's standing in the OSS community are all net losses that need to be quantified. Also, since the library would violate the OSS definition, it would no longer be listed on the registry, leading to a community-wide loss of happiness. Is this worth the risk?
**Consequentialist Ethics:** An example of a utilitarian's reasoning would take the form of, e.g., _Potentially risking the ban of the library might be a bigger consequence (higher nett negative 'utils') than trying to target a subset of users to send a message (smaller nett positive 'utils'), leading to an overall negative in the balance of probabilities._ To put this reasoning in plain language: although the damage might yield short-term benefits, the long-term effects and consequences are much larger.
### _Principlism_
A more pragmatic approach will be to follow a set of fixed principles, in the namesake philosophical framework of _principlism_. Here, we draw inspiration from Beauchamp and Childress' landmark _Principles of Biomedical Ethics_, which posits four key principles: respect for autonomy; nonmaleficence (doing no harm); beneficence (doing good); and justice [12]. All these principles have been in use in biomedical ethics, and have seen promise in evaluating issues in technological ethics.
**Principle 1 -- Respect for autonomy:** this principle involves respecting the freedom and autonomy of users, maintainers, and the broader OSS community. This lies at the heart of OSS, per e.g., the Free Software Foundation [13].
**Respect for autonomy:** In considering this principle, we have identified at least four parties involved: the users of a library; the maintainer; the contributor; and finally the ecosystem as a community of OSS developers. Since OSS is driven by volunteer contributions, would curtailing any party's autonomy - from denying them use of the software, to, say, causing them to work on weekends in order to revert changes to repositories or fix production code - impact their freedom, or worse, change any of the parties' motivations?
Overall considerations will involve respecting the freedom of choice for all parties involved.
**Principle 2 -- Nonmaleficence:** this principle requires that harm _should not_ be caused to stakeholders. This includes indirect harm caused to others in the OSS community based on their "assumed belief, group membership, or behavior" [14].
**Nonmaleficence:** This, we reason, would only apply to the benign form of protestware. To recap, these forms do not cause undue impact when compared to their malignant counterparts, which may lead to security vulnerabilities. A critical question when evaluating this principle is, say, _How does a protester manage to send a message while not causing harm to any of the stakeholders?_ There are examples of README placements of protestware8: while delicate, the strategy is to use documentation and communication channels, as opposed to modification of the actual code.
Footnote 8: [https://github.com/svshmanskyy/StandWithUkraine](https://github.com/svshmanskyy/StandWithUkraine)
**Principle 3 -- Beneficence:** this principle stipulates that the consequences of an action should result in good or benefit. Note that this does not just consider the provision of benefits, but also "balancing benefits against risks and costs" [12].
**Beneficence:** As we have seen in our treatise on utilitarianism, the benefits of protestware could be difficult to quantify at this stage. Nonetheless, an argument can be made for the benefits of fund-raising efforts such as sponsorships or ad campaigns to create awareness of the political situation. This could be in the form of incentives to users, and to contributors of the source code. For historical context, however, a common incentive for sabotaging one's own code was to help sponsor struggling maintainers who wanted to protest against their software being used by large corporations9.
Footnote 9: [https://www.independent.co.uk/tech/developer-sabotages-code-protest-github-colors-faker-b1990161.html](https://www.independent.co.uk/tech/developer-sabotages-code-protest-github-colors-faker-b1990161.html)
**Principle 4 -- Justice:** The effects of actions must be just and fair in terms of the distribution of "benefits, risks, and costs" [12].
**Justice:** Finally, the argument for justice requires not just fairness of outcomes, but also fairness of their _distribution_, which includes the risks and costs [12]. In a historical context, it is prudent to consider the protest against unfair usage of OSS by industry (which precedes protestware), raised in _Beneficence_ above. The original _raison d'etre_ involves seeking justice against the actual corporations that use the library. However, at the heart of this principle, other users and stakeholders may be disproportionately affected by this manoeuvre, which calls it into question.
Think of these as valuable _heuristics_ with which we can evaluate the fitness of an action we are taking. Again, despite our best intentions, we might run into trouble fairly quickly. For example, what do we do when we cannot satisfy all the listed heuristics? Violet might (indirectly) cause harm and deny autonomy to innocent users who are affected by her code destruction, while still advocating for perceived justice and beneficence, based on the cause of her activism.
## IV Promoting Ethical Responsibility
Assessing the ethical implications of an open source maintainer's decision to convert their software into protestware is a multifaceted and intricate matter. For starters, the ethical frameworks may not agree with each other: as we have seen, the Kantian Categorical Imperative (duty ethics) might be a straightforward "don't do it".
Drawing upon principlism, however, reveals more nuance. From the standpoint of autonomy, maintainers possess the right to make choices regarding _their own_ creations. However, this decision may conflict with _end users'_ autonomy, as it could restrict their ability to utilize the software without unforeseen -- or disastrous -- consequences. The principles of beneficence and non-maleficence are also crucial to consider: although protestware might advance a greater good by raising awareness or advocating change, it could simultaneously inflict harm upon users who depend on the software for essential functions (again, with our life support machine example raised several times before). The justice principle underlines the importance of examining the fairness and equitability of deploying protestware as a protest method. Hence it is vital to evaluate whether such actions disproportionately impact specific groups (including, e.g., the time and effort needed to rectify deleterious code behavior) or inadvertently create disparities in software resource access. Bearing these ethical principles in mind, determining the suitability of transforming OSS into protestware necessitates a thorough analysis of the potential benefits, drawbacks, and broader societal ramifications of such a decision.
In this section, we present various initiatives which can be implemented to promote a more balanced perspective on protestware -- with an emphasis on its potential risks -- and to encourage maintainers to consider alternative methods for expressing their concerns. We also suggest directions for future work, including examining the role of OSS governance, policy, and the 'social license to operate', before finally identifying ways to protect users from protestware threats.
**Responsibility in the OSS Community.** To minimize the moral dilemmas (and concrete implications) associated with protestware, maintainers should be encouraged to prioritize the needs of their users and contribute to the greater good of the community. Cultivating a sense of community and fostering strong relationships with stakeholders -- from end users to fellow developers -- can help maintainers understand the potential consequences of their actions and work together to develop ethical guidelines. Establishing communication channels and fora for discussion allows maintainers to express their concerns without resorting to protestware in the first instance, ensuring the well-being of both the community and its users. Future work should investigate how the community
can cultivate a culture of care10 and moral responsibility and explore existing channels that allow maintainers to voice their protests in a non-jeopardizing way.
Footnote 10: Other frameworks of ethics, including care ethics, are also considered in current research into technology.
**Safeguards against Protestware.** Future work could also explore the use of machine learning techniques, particularly natural language processing and code analysis algorithms, to detect early indicators of potential protestware in software repositories. Such techniques, in the same vein as, e.g., automated auditing for privacy and security issues, could better equip end users or developers to protect themselves from potential threats, ultimately enhancing the overall security and trustworthiness of the open-source ecosystem.
**Enabling Healthy Channels for Protest.** It is important for maintainers to feel that their concerns are being addressed through appropriate channels. Good governance and a fair system of representation can alleviate the need for protestware by offering maintainers alternative avenues to voice their opinions and effect change within the OSS community. Future research should explore how existing channels for maintainers to voice their protests or grievances can be improved or expanded.
**Education of Ethical Responsibility.** While this article is a good first step to foster awareness, ethics education programs, focusing on ethical responsibility and social impact, can help maintainers better understand the potential consequences of using protestware and foster a more nuanced perspective on this issue. By providing guidance on ethical decision-making and highlighting alternative methods for expressing concerns, ethics education can play a vital role in promoting a more balanced and responsible approach to protestware within the OSS community. Future work should examine why some developers might feel far removed from ethical considerations and how educational programs can effectively address this issue.
## V Navigating the Ethical Landscape of Protestware in OSS
The OSS ecosystem, particularly with the emergence of protestware, poses diverse ethical challenges for various stakeholders. In the following, we discuss the implications for different stakeholders, emphasizing the importance of awareness and providing examples of ethical frameworks that could be applied in these situations.
Maintainers, when confronted with protestware, play a pivotal role in shaping the OSS landscape. It is crucial that they clearly communicate the intended use and restrictions of their software in the documentation and/or terms of service while adhering to applicable laws and regulations. As an example, duty ethics highlights the importance of maintainers' moral obligations, such as transparency, honesty, and legal compliance.
Contributors, as creators of public goods, must exercise responsibility, consideration, and ethics in their actions, particularly when engaging with OSS projects. It is essential for contributors to be aware that any project they contribute to could potentially transform into protestware, and they may have limited control over the project's direction despite their contributions. Consequentialist ethics, for instance, emphasize the need to assess the potential consequences of one's actions, urging contributors to be mindful of the projects they engage with, maximizing positive outcomes, and minimizing potential harm. Awareness of protestware's implications and the potential risks of contributing to projects that may adopt such a stance is key to making informed decisions.
Newcomers to the OSS community should familiarize themselves with its principles, values, and the ethical implications of protestware. OSS relies on community-driven efforts, making it essential for newcomers to be respectful, collaborative, and helpful. Awareness of the presence of protestware and potential consequences is critical. As an example, the principle of autonomy in biomedical ethics underscores the importance of respecting the choices and values of other members of the community.
End users and industry must be vigilant about the potential risks and benefits associated with using OSS tools, acknowledging that any project could potentially transform into protestware. A key ethical implication for end users and industry is understanding that an OSS project's maintainer might have different ethical priorities, which in extreme cases could lead to the creation of protestware. The risk is especially pronounced if end users or industry already rely on the software, and then it turns into protestware. Industry, in particular, must be cautious when using OSS in a professional setting, ensuring compliance with company policies and regulations. By assessing risks, addressing potential security vulnerabilities, and contributing back to projects when possible, the industry can apply ethical principles like beneficence and nonmaleficence from biomedical ethics, as an example, to promote positive outcomes and minimize harm to stakeholders. Awareness of the potential impact of protestware and the varying ethical priorities of project maintainers is crucial for responsible decision-making within the industry.
Educators have a responsibility to discuss the implications and risks of OSS, including protestware and its legal and ethical consequences. Duty ethics, as an example, encourages educators to impart knowledge of moral obligations and duties as software creators and users. Guided by consequentialist ethics and biomedical principles, students should be encouraged to consider the long-term impacts and ethical implications of their work, particularly with regard to protestware. The promotion of awareness among students is a key educational goal.
Researchers play a vital role in identifying and mitigating protestware risks by developing methods to analyze code for vulnerabilities or detect patterns in malicious software behavior. By investigating the pros and cons of protestware as a political protest tool, researchers can apply ethical frameworks such as consequentialist ethics and biomedical principles to balance potential benefits and harms while considering justice and fairness within the OSS community. |
2310.15687 | Reducing residential emissions: carbon pricing vs. subsidizing retrofits | In this paper, we compare different mitigation policies when housing
investments are irreversible. We use a general equilibrium model with
non-homothetic preferences and an elaborate setup of the residential housing
and energy production sector. In the first-best transition, the energy demand
plays only a secondary role. However, this changes when optimal carbon taxes
are not available. While providing subsidies for retrofits results in the
lowest direct costs for households, it ultimately leads to the highest
aggregate costs and proves to be an ineffective way to decarbonize the economy.
In the second-best context, a phased-in carbon price outperforms the
subsidy-based transition. | Alkis Blanz, Beatriz Gaitan | 2023-10-24T09:59:53Z | http://arxiv.org/abs/2310.15687v1 | # Reducing residential emissions: carbon pricing vs. subsidizing retrofits
###### Abstract
In this paper, we compare different mitigation policies when housing investments are irreversible. We use a general equilibrium model with non-homothetic preferences and an elaborate setup of the residential housing and energy production sector. In the first-best transition, the energy demand plays only a secondary role. However, this changes when optimal carbon taxes are not available. While providing subsidies for retrofits results in the lowest direct costs for households, it ultimately leads to the highest aggregate costs and proves to be an ineffective way to decarbonize the economy. In the second-best context, a phased-in carbon price outperforms the subsidy-based transition.
We are very grateful to Matthias Kalkuhl, Kai Lessmann, Ottmar Edenhofer for useful discussion and comments. This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101056891 - CAPABLE - ClimAte Policy AcceptaBiLity Economic framework. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
Introduction
Reducing the carbon emissions in the housing sector requires large upfront investments. Investment needs include retrofitting the existing housing stock and constructing new, more energy-efficient buildings to meet housing demand (IEA, 2021). However, vast evidence indicates that current investment levels are insufficient, increasing the possibility of a carbon lock-in (Cabeza et al., 2022). In this situation, investments are inconsistent with climate targets, leaving only expensive reductions through stranded assets as an option for emission reductions. Preventing a carbon lock-in in residential housing calls for ambitious mitigation policy that can stimulate and coordinate various types of investment.
In the context of housing investments, the optimal instrument choice for mitigation is unclear. Housing-related investments are different. Investments in housing are irreversible (Miles, 2009) and therefore cannot be converted after construction. Furthermore, climate policy in housing is challenging from a political economy perspective, as shelter is a basic need for humans. Implementing ambitious climate policy can be difficult to implement. Imposing high carbon taxes on households increase housing costs and utility bills, which may increase poverty. Furthermore, if the direct costs for households are too high, households may not be able to invest sufficiently in energy efficiency and the expansion of renewable energy.
This paper addresses this gap and compares different mitigation policies when housing investments are irreversible. We build a multisector general equilibrium model with a special focus on the different investment decisions affecting the decarbonization of the residential housing sector. The model combines an elaborate setup of the residential housing market alongside energy markets. We separate investments to reduce the energy intensity of the existing housing stock from investments in the construction of new, more energy-efficient buildings. Furthermore, we allow for investments in fossil and non-polluting capital used in energy production. We compare different carbon pricing scenarios with transitions that rely on housing-related investment subsidies and calibrate the model to the case of Germany. Thereby, we isolate the impact of policy constraints for the green transition.
When policymakers face no constraints on the availability of mitigation policies, the conventional wisdom holds and a uniform carbon price is the cost-effective instrument for the green transition (Pigou, 1932; Goulder and Parry, 2008; Fischer and Newell, 2008). In the first-best scenario, energy demand plays only a secondary role.
However, this changes when optimal carbon taxes are not available. When it is not possible to directly increase fossil resource prices, climate policy has to rely on reductions in energy demand to reduce fossil fuel consumption in housing. Due to the irreversibility constraint, the only option is to excessively subsidize investments in energy efficiency, making the subsidy-based transition very costly from a social perspective.
This paper builds on several strands of the literature. First, we add to the literature that analyzes optimal timing and allocation of abatement investments across sectors. Vogt-Schilb et al. (2018) find that it is optimal to start with significant short-term abatement investment because we cannot switch overnight to carbon-free technologies. It matters in which sectors the short-term effort abatement investment happens (Vogt-Schilb and Hallegatte, 2014; Lecocq et al., 1998; Vogt-Schilb and Hallegatte, 2014). We extend this literature by explicitly considering housing investments related to the energy demand of the housing stock. We are able to show that housing investments differ from investments in industrial capital. In contrast to industrial capital, investing in housing capital directly increases utility at the cost of having a higher energy demand. The role of energy efficiency investments is to solely reduce these direct costs of housing investments. One contribution is to show how climate policy optimally affects the different investment incentives over time.
Furthermore, we relate to the large literature that focuses on the energy efficiency gap (Jaffe and Stavins, 1994; Allcott and Greenstone, 2012; Gerarden et al., 2017), the difference between optimal and actual energy use. While the existing literature considers both behavioral failures and market failures as potential causes for inefficient energy use, it focuses largely on static frameworks. We complement this literature by analyzing the intersection within a dynamic, general-equilibrium setting. Thereby, we are able to consider the interaction of energy demand in housing with climate policy and macroeconomic activity. We contribute to this literature by analyzing the energy efficiency gap in a dynamic setting within the context of reducing residential carbon emissions.
The closest paper to ours is Rozenberg et al. (2020). The authors compare the impact of different mitigation policies in a multisector general equilibrium model with irreversible investment and a climate constraint. We consider our paper to be complementary to their analysis. Rozenberg et al. (2020) discuss a potential trade-off between the political feasibility and cost-effectiveness of mitigation policies
based on the premature retirement of fossil capital. Instead, we consider housing costs, including energy bills, as the main source of transition costs for households. Thereby, we expand their insights for energy production by focusing on the case of residential housing.
The paper is structured as follows. Section 2 describes the model environment. Section 3 summarizes the calibration, while section 4 presents the main analysis. Finally, section 5 concludes.
Model Environment
This section presents the dynamic multi-sector general equilibrium model used for analyzing the transition towards a low carbon economy. The model combines an elaborate setup of the housing and energy sectors, as well as an optimizing government as in Kalkuhl et al. (2012). The dynamic setting allows for studying the interaction between sectors and actors. The decisions of households determine how energy-intensive their housing demand is. Energy producers decide on how carbon-intensive energy production is, and the government sets policy instruments optimally to achieve a climate target.
### Households
The economy is populated by \(n\) households who have preferences of type:
\[\max\sum_{t=0}^{T}\frac{1}{\left(1+\rho\right)^{t}}\frac{\left(c_{t}^{\phi} \left(h_{t}-\bar{h}\right)^{1-\phi}\right)^{1-\eta}}{1-\eta}, \tag{1}\]
where \(\rho\) is the pure rate of time preference, \(\eta^{-1}>0\) is the elasticity of intertemporal substitution and \(\phi\) is a share parameter.1 Households derive utility from final consumption \(c_{t}\) and from housing services \(h_{t}\). Housing services are subject to a subsistence level \(\bar{h}\) that captures the minimal need for shelter for humans. Thus, preferences are non-homothetic, as in (Geary, 1950; Stone, 1954).
Footnote 1: Depending on the application, we either use the pure time preference rate \(\rho\) or the discount factor \(\beta\). Recall that they are directly related: \(\rho=(1-\beta)/\beta\).
Households have access to a wide portfolio of assets. These assets include industry capital used in the production of the final good and electricity, as well as different assets related to the production of housing services. As the focus lies on aggregate dynamics, we assume that all households are property owners and produce their own housing services. Households produce housing services according to:
\[h_{t}=land_{t}^{a_{h}}(\bar{k}+k_{t}^{H})^{1-a_{h}} \tag{2}\]
The production of housing services requires \(land_{t}\) and housing capital \(\bar{k}+k_{t}^{H}\). We distinguish between old housing capital (\(\bar{k}\)), which has a high energy intensity, and newly built housing capital \(k_{t}^{H}\), which has a low energy intensity. The total amount of land is assumed to be fixed.
Thus, by expanding the housing stock, households can consume more housing services and thereby increase their utility. However, the construction of new buildings affects the energy demand of housing \(ene_{t}\). The energy demand of a house covers both its appliances and heating. Housing-related energy demand is described by:
\[ene_{t}=\kappa_{o}(k^{E})\bar{k}+\kappa_{N}k_{t}^{H} \tag{3}\]
Depending on the energy efficiency of new buildings \(\kappa_{N}>0\), expanding the housing stock will increase the total energy demand of the housing stock. Additionally, the total energy demand depends on the energy demand of the old housing stock \(\bar{k}\). Especially in urban areas, old buildings are maintained so that they remain in use. The parameter \(\kappa_{N}>0\) and the function \(\kappa_{o}\), explained below, describe the energy efficiency of each housing stock. We assume that households pay maintenance costs on the existing housing stock to overcome its depreciation so that the old stock remains constant. The existence of maintenance costs is a common feature of housing models (Piazzesi and Schneider, 2016; Henderson and Ioannides, 1983; Li and Yao, 2007). Essentially, this simplification captures the fact that part of the housing stock is of such high value to society that it will not be demolished even if the energy intensity of these buildings is high. Instead, we allow for investments in energy efficiency capital \(k^{E}\). The stock of efficiency capital captures all kinds of retrofits, such as improvements to the insulation of walls, windows, and the roof. Thus, investments in efficiency capital reduce the energy intensity \(\kappa_{o}\) of the existing stock, which is:
\[\kappa_{o}(k^{E})=\frac{\bar{\kappa}}{k_{t}^{E}}+\kappa_{N} \tag{4}\]
where \(\bar{\kappa}\) is a positive constant. Thus, there are limits to the improvements in the energy efficiency of the existing housing stock. In detail, the functional form implies that, in the limit, the energy intensity of old buildings approaches, but cannot fall below, that of new buildings, \(\kappa_{N}\). Investments in housing capital and efficiency capital determine the total energy demand of the housing stock. Increasing the production of housing services by expanding the housing stock decreases the overall energy intensity, as new buildings are considered to be more energy efficient, but it increases the overall energy demand. In contrast, efficiency capital does not increase the production of housing services but decreases the energy demand of the housing stock.
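To illustrate the diminishing returns to retrofitting, the short sketch below evaluates Equations (3) and (4) on a grid of efficiency-capital levels; all parameter values are illustrative assumptions rather than calibrated quantities.

```python
# Sketch of Equations (3)-(4): total housing energy demand as a function of
# efficiency capital k_E.  All parameter values are illustrative assumptions.
import numpy as np

kappa_bar, kappa_N = 1.0, 0.4      # energy-intensity parameters (illustrative)
k_old, k_H = 1.0, 0.5              # old and new housing capital (illustrative)

k_E = np.linspace(0.5, 10.0, 5)
kappa_old = kappa_bar / k_E + kappa_N        # Equation (4)
ene = kappa_old * k_old + kappa_N * k_H      # Equation (3)
print(ene)   # falls in k_E but never below kappa_N * (k_old + k_H)
```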
The energy demand for housing determines utility bills and, thus, housing costs. Depending on their investment choices into housing and efficiency capital, utility bills will be higher or lower. The energy demand related to housing can be satisfied by using electricity and a fossil resource. We assume that electricity \(e_{t}\) and the fossil resource \(res_{t}\) are imperfect substitutes for households.
\[ene_{t}=\left(a_{ene}e_{t}^{\frac{\sigma_{ene}-1}{\sigma_{ene}}}+(1-a_{ene})res_ {t}^{\frac{\sigma_{ene}-1}{\sigma_{ene}}}\right)^{\frac{\sigma_{ene}}{\sigma_{ ne}-1}} \tag{5}\]
The elasticity of substitution between electricity and the fossil resource is denoted \(\sigma_{ene}>0\), and \(a_{ene}\) and \((1-a_{ene})\) are their respective share parameters. Households pay for electricity and for the use of the fossil resource. The use of the fossil resource may be subject to a carbon price \(p_{HC,t}\). Therefore, the expenditure side of the households consists of consumption expenditures, investment \(inv_{t}^{i}\) in the portfolio of industry capital, land, and the two stocks of housing-related capital, and the energy costs attached to them.
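To make the role of energy prices concrete, the sketch below evaluates the cost-minimizing electricity-to-fossil ratio implied by the CES aggregator in Equation (5); prices, shares, and the elasticity are illustrative assumptions.

```python
# Sketch of the household's cost-minimizing split between electricity and the
# fossil resource implied by the CES aggregator in Equation (5).  Prices,
# shares, and the elasticity are illustrative assumptions.
a_ene, sigma_ene = 0.6, 1.5
p_e = 1.0                          # electricity price (illustrative)
p_res, p_c = 0.5, 0.3              # fossil resource price and carbon price

# First-order conditions of expenditure minimization give e/res as:
ratio_e_res = (a_ene / (1 - a_ene) * (p_res + p_c) / p_e) ** sigma_ene
print(ratio_e_res)   # a higher carbon price p_c tilts demand towards electricity
```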
Land, industry capital (final good capital (\(k_{t}^{Y}\)) and fossil and renewable energy capital (\(k_{t}^{F}\) and \(k_{t}^{N}\))), housing, and efficiency capital evolve according to
\[land_{t+1}=i_{t}^{Land}+land_{t} \tag{6a}\]
\[k_{t+1}^{Y}=i_{t}^{Y}+(1-\delta_{Y})\,k_{t}^{Y} \tag{6b}\]
\[k_{t+1}^{F}=i_{t}^{F}+(1-\delta_{F})\,k_{t}^{F} \tag{6c}\]
\[k_{t+1}^{N}=i_{t}^{N}+(1-\delta_{N})\,k_{t}^{N} \tag{6d}\]
\[k_{t+1}^{H}=i_{t}^{H}+(1-\delta_{H})\,k_{t}^{H} \tag{6e}\]
\[k_{t+1}^{E}=i_{t}^{E}+(1-\delta_{E})\,k_{t}^{E} \tag{6f}\]
where \(i_{t}^{j}\) denotes the respective investment, and \(\delta_{j}\) is the rate of capital depreciation. In a dynamic general equilibrium model, agents must be indifferent between investing in industry capital, housing capital, and energy efficiency capital. Thus, households will consider not only energy and land prices but also the interest rate when making investment decisions. This is reflected in the no-arbitrage conditions, which can be found in the appendix.
Housing-related investments differ from investments in industry capital. Investments in housing capital and energy efficiency capital are irreversible, as in Arrow and Kurz (1970). Once households invest in housing-related capital, it cannot be transformed back into consumption or industry capital. The irreversibility is summarized by:
\[inv_{t}^{H}\geq 0 \tag{7}\]
\[inv_{t}^{E}\geq 0 \tag{8}\]
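As a sketch, the capital transition equations (6a)–(6f) together with the irreversibility constraints (7)–(8) can be written as simple update rules. The depreciation rates follow Table 1, while the starting stocks and investment levels below are placeholders.

```python
# One-period update of the household's capital portfolio, eqs. (6a)-(6f),
# with non-negative housing-related investment, eqs. (7)-(8).
delta = {"Y": 0.10, "F": 0.10, "N": 0.10, "H": 0.02, "E": 0.03}  # Table 1

def update_stocks(stocks: dict, invest: dict) -> dict:
    """Return next-period stocks; land does not depreciate."""
    # Irreversibility: housing and efficiency investment cannot be negative.
    assert invest["H"] >= 0.0 and invest["E"] >= 0.0, "housing-related investment is irreversible"
    nxt = {"land": stocks["land"] + invest["land"]}
    for j in ["Y", "F", "N", "H", "E"]:
        nxt[j] = invest[j] + (1.0 - delta[j]) * stocks[j]
    return nxt

# Placeholder numbers for illustration.
stocks = {"land": 1.0, "Y": 10.0, "F": 1.0, "N": 1.0, "H": 5.0, "E": 0.5}
invest = {"land": 0.0, "Y": 1.0, "F": 0.1, "N": 0.2, "H": 0.1, "E": 0.05}
print(update_stocks(stocks, invest))
```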
### Production
On the production side of the economy, we assume the production of a final good and energy. The final good is used for consumption and for investments. The production technology of the final good producer is described by:
\[Y_{t}=\left(aZ_{t}^{\frac{\sigma-1}{\sigma}}+\left(1-a\right)E_{Y,t}^{\frac{\sigma-1}{\sigma}}\right)^{\frac{\sigma}{\sigma-1}} \tag{9}\]
For production, the final good producer relies on two intermediate inputs. The final good producer combines final electricity \(E_{Y,t}\) from the electricity producer and a capital-labor composite. The capital-labor composite equals:
\[Z_{t}=K_{Y,t}^{a_{Z}}\left(A_{Y,t}L_{t}\right)^{1-a_{Z}} \tag{10}\]
where \(K_{Y,t}\) and \(L_{t}\), respectively, denote the capital and labor employed in the production of \(Y_{t}\), and \(A_{Y,t}\) is exogenous labor augmenting technological change. We follow Barrage and Nordhaus (2023) so that \(A_{Y,t}\) evolves according to
\[A_{Y,t}=A_{Y,t-1}/\left(1-a_{Y}e^{-b_{Y}(t-1)}\right) \tag{11}\]
where \(a_{Y}\) and \(b_{Y}\) are positive constants. The production side of the economy allows for an elaborate description of the electricity sector that resembles in many aspects the approach employed in Kalkuhl et al. (2012). The final electricity producer demands electricity from fossil (\(E_{F,t}^{d}\)) and non-polluting renewable sources (\(E_{N,t}^{d}\)). The
production function of the final energy firm is:
\[E_{t}=\left(a_{E}\left(E_{F,t}^{d}\right)^{\frac{\sigma_{E}-1}{\sigma_{E}}}+\left(1-a_{E}\right)\left(E_{N,t}^{d}\right)^{\frac{\sigma_{E}-1}{\sigma_{E}}}\right)^{\frac{\sigma_{E}}{\sigma_{E}-1}} \tag{12}\]
where \(a_{E}\) is a share parameter and \(\sigma_{E}\) is the elasticity of substitution between \(E_{F,t}^{d}\) and \(E_{N,t}^{d}\). The role of the final energy firm in our setting is that of an electricity provider. The firm does not produce the electricity itself by operating wind farms and power plants, but buys the electricity from fossil and renewable electricity producers. Thus, the profits of the final electricity firm are described by:
\[\pi_{E,t}=p_{E,t}E_{t}-p_{F,t}E_{F,t}^{d}-p_{N,t}E_{N,t}^{d} \tag{13}\]
where \(p_{E,t}\) denotes the final price of energy and \(p_{F,t}\) and \(p_{N,t}\), respectively, denote the prices of fossil and renewable energy.
The fossil energy producer uses capital and fossil resources to produce electricity. The production function of the polluting electricity firm is:
\[E_{F,t}=\left(a_{F}K_{F,t}^{\frac{\sigma_{F}-1}{\sigma_{F}}}+\left(1-a_{F} \right)Res_{F,t}^{\frac{\sigma_{F}-1}{\sigma_{F}}}\right)^{\frac{\sigma_{F}}{ \sigma_{F}-1}} \tag{14}\]
where \(a_{F}\) is a share parameter, and \(\sigma_{F}\) is the elasticity of substitution between \(K_{F,t}\) and \(Res_{F,t}\). Based on the setup in Kalkuhl et al. (2012), we assume that the use of a fossil resource is the source of carbon emissions, not the use of a dirty capital stock. We abstract from fossil resource production within the economy. Instead, we assume a small open economy that imports the fossil resource at a price \(p_{R,t}\). Climate policy can affect the full price of the fossil resource by imposing a carbon price \(p_{C,t}\). Thus, the profits of the polluting fossil electricity firm are described by:
\[\pi_{F,t}=p_{F,t}E_{F,t}-R_{t}^{F}K_{F,t}-(p_{R,t}+p_{C,t})Res_{F,t} \tag{15}\]
In contrast to the polluting firm, the renewable energy producer does not rely on fossil resources to produce electricity. Producing non-polluting electricity is a function of capital. The production technology and profits of the non-polluting electricity firm are described by:
\[E_{N,t}=A_{N,t}K_{N,t} \tag{16}\]
\[\pi_{N,t}=p_{N,t}E_{N,t}-R_{t}^{N}K_{N,t} \tag{17}\]
\(A_{N,t}\) is exogenous technological change that evolves according to
\[A_{N,t+1}=A_{N,t}/\left(1-g_{N}\right). \tag{18}\]
Given a rental rate of capital \(r_{t}\), the cost of producing renewable energy declines at the exogenous rate \(g_{N}\). This specification resembles the decrease in cost of the backstop technology employed in Barrage and Nordhaus (2023).
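The two exogenous technology processes, equations (11) and (18), can be iterated directly; the parameter values \(a_{Y}=0.0859\), \(b_{Y}=0.0072\), and \(g_{N}=0.01\) are taken from Table 1, and the ten-period horizon is arbitrary.

```python
# Exogenous technological change: labor-augmenting productivity A_Y (eq. 11)
# and renewable-capital productivity A_N (eq. 18). Parameters from Table 1.
import math

a_Y, b_Y, g_N = 0.0859, 0.0072, 0.01
A_Y, A_N = 1.0, 0.69  # initial levels (Table 1)

for t in range(1, 11):  # ten illustrative periods
    A_Y = A_Y / (1.0 - a_Y * math.exp(-b_Y * (t - 1)))
    A_N = A_N / (1.0 - g_N)
    print(f"t={t:2d}  A_Y={A_Y:.3f}  A_N={A_N:.3f}")
```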
### Government
The primary focus of the government in our setup is climate policy. As in Kalkuhl et al. (2012), the government is an agent that, acting as a Stackelberg leader of its own economy, chooses policy instruments optimally under a given set of constraints. We assume that the government has a fixed emission target in the form of a carbon budget and a set of policy tools to achieve the target. For expositional simplicity, we assume that carbon emissions are directly associated with the use of the fossil resource both in housing and energy production. By assuming a common emission target, we fix the level of ambition across scenarios and allow for a comparison of the effectiveness of different policy instruments. The climate target is given by:
\[\overline{M}=\sum_{t=0}^{T}(Res_{F,t}+nres_{t}) \tag{19}\]
The main climate policy tool is a carbon price that taxes the use of the fossil resource. However, we allow for differentiated carbon prices in housing and industry. Carbon pricing is the sole revenue source for the government in our setting. The budget constraint of the government in the baseline analysis is:
\[\Gamma_{t}+n\tau_{INVE,t}inv_{t}^{E}=p_{C,t}Res_{F,t}+np_{HC,t}res_{t} \tag{20}\]
where \(\tau_{INVE,t}\) is a subsidy to energy efficiency capital investment. Depending on the set of policies considered, the revenues from carbon pricing can be used either to pay
transfers to households \(\Gamma_{t}\) or to subsidize investments into efficiency capital.2
Footnote 2: Theoretically, it could also be an option to subsidize investments into housing capital, but as it is never optimal in our setting, we omit it from the budget constraint for expositional simplicity.
### Market clearing
This section briefly presents the market clearing conditions in our setup. Capital market clearing equals
\[K_{Y,t}=nk_{t}^{Y},\,K_{F,t}=nk_{t}^{F},\,K_{N,t}=nk_{t}^{N}; \tag{21}\]
Market clearing in electricity equals
\[E_{Y,t}+ne_{t}=E_{t}; \tag{22}\]
The market clearing in fossil energy is:
\[E_{F,t}^{d}=E_{F,t}; \tag{23}\]
Similarly, the market clearing in renewable energy equals
\[E_{N,t}^{d}=E_{N,t}; \tag{24}\]
Finally, the balance of payments is balanced if
\[Y_{t}= n\left(c_{t}+i_{t}^{Y}+i_{t}^{F}+i_{t}^{N}+i_{t}^{H}+i_{t}^{E} +\delta_{H}\overline{k}\right) \tag{25}\] \[+p_{R,t}\left(n\times res_{t}+Res_{F,t}\right)\]
## 3 Calibration
We calibrate the model to the German economy and indicate the parameter values used throughout the simulations in Table 1.
### Consumers
For the share \(\phi\) of final good consumption, we use German input-output data for the year 2014 from the World Input-Output Database (WIOD, refer to Timmer
\begin{table}
\begin{tabular}{l l r} \hline \multicolumn{1}{c}{Parameter} & description & value \\ \hline \hline \(a\) & share of capital-labor composite \(Z\) & 0.95 \\ \(\sigma\) & elasticity of substitution \(Z-E_{Y}\) & 0.40 \\ \hline \(a_{Z}\) & share of capital \(K_{Y}\) in \(Z\) & 0.35 \\ \(A_{Y,0}\) & labor productivity at \(t=0\) & 1.00 \\ \hline \(a_{E}\) & share of fossil energy \(E_{F}\) & 0.57 \\ \(\sigma_{E}\) & elasticity of substitution \(E_{F}-E_{N}\) & 4.70 \\ \hline \(a_{F}\) & share of capital \(K_{F}\) & 0.80 \\ \(\sigma_{F}\) & elasticity of substitution \(K_{F}-Res_{F}\) & 0.20 \\ \hline \(A_{N,0}\) & capital \(K_{N}\) productivity at \(t=0\) & 0.69 \\ \hline \(\phi\) & share of final good consumption & 0.80 \\ \(\bar{h}\) & housing services minimum consumption & 1.62 \\ \(a_{H}\) & share of land in housing services & 0.35 \\ \hline \(a_{ene}\) & share of electricity \(e\) in housing energy & 0.14 \\ \(\sigma_{ene}\) & elasticity of substitution \(e-res\) & 0.89 \\ \(\kappa_{N}\) & new housing capital energy intensity & 0.02 \\ \(\bar{\kappa}\) & parameter in \(\kappa_{o}\left(k_{t}^{E}\right)\) function & 0.005 \\ \hline \(\eta^{-1}\) & elasticity of intertemporal substitution & 1.00 \\ \(\rho\) & consumers rate of time preference & 0.02 \\ \hline \(\delta_{Y}=\delta_{F}=\delta_{N}\) & industry depreciation rate of capital & 0.10 \\ \(\delta_{H}\) & depreciation rate of housing capital & 0.02 \\ \(\delta_{E}\) & depreciation rate of energy efficiency capital & 0.03 \\ \(Land=n*land\) & Aggregate land endowment & 501.23 \\ \(k_{0}^{E}/\overline{k}\) & Ratio \(k_{0}^{E}\) to \(\overline{k}\) & 0.08 \\ \hline \(a_{Y}\) & constant in labor augmenting technological change & 0.0859 \\ \(b_{Y}\) & growth parameter in labor augmenting technological change & 0.0072 \\ \(g_{N}\) & growth parameter of technological change & 0.01 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Calibration
et al. (2015)). Let \(x_{c}\) denote aggregate household expenditures except for real estate activities and energy-related expenditures (mining and quarrying, manufacture of coke and refined petroleum products, and electricity, gas, steam, and air conditioning supply) from the WIOD. Let \(x_{h}\) denote the expenditure on real estate activities, and \(x_{ene}\) denote energy-related expenditures from the WIOD. Aggregate expenditure thus equals \(x\equiv x_{c}+x_{h}+x_{ene}\). The share \(x_{c}/\left(x_{c}+x_{h}\right)\) equals \(0.81\); we round this number and set \(\phi=0.80\). For the housing services composite, we follow Combes et al. (2021), who find a share parameter for capital equal to \(0.65\) using French data. For lack of a German estimate, we set \(\left(1-a_{h}\right)\) equal to \(0.65\) so that the share of land \(a_{h}\) equals \(0.35\).
For the housing energy parameters \(a_{ene}\) and \(\sigma_{ene}\), we focus on heating space and hot water, which together account for about \(80\%\) of total residential energy use in Germany (refer to IEA (2020)). For the housing fossil energy share parameter \(\left(1-a_{ene}\right)\), we employ the estimates of direct residential fossil energy (coal, gas, and oil) from the Institute for Housing and Environment (IHE, refer to Loga et al. (2012)). To account for the large share of fossil energy in district heating, we add \(70\%\) of residential district heating energy (as reported by the IHE) to the direct fossil energy use and generate a total fossil energy use.3 The ratio of total fossil energy use to total energy use equals \(0.854\). Since more than \(70\%\) of district heating is produced using fossil resources, we round up \(0.854\) and set \(\left(1-a_{ene}\right)\) equal to \(0.86\), so that the share of electricity (\(a_{ene}\)) equals \(0.14\).
Footnote 3: More than \(70\%\) of district heating is produced using fossil resources according to the Federal Ministry for Economic Affairs and Climate Action of Germany (2021).
Let \(s_{e}\) denote the household expenditure share of electricity out of total household energy expenditure. Let \(\varepsilon_{res}\) denote the households' fossil resource partial own price elasticity of demand. With a CES function \(\varepsilon_{res}=-s_{e}\sigma_{ene}\) (cf. Allen (1938) p. 373) where \(\sigma_{ene}\) is the elasticity of substitution between electricity and fossil resources by households. Since natural gas is the largest source of energy for space and water heating in Germany, we use Nilsen et al. (2012) estimate of the short-run price elasticity of demand for natural gas of German households. Their estimate equals \(-0.131\). We set \(s_{e}=\left(1-0.854\right)\), setting \(\varepsilon_{res}=-0.131\) and using \(\varepsilon_{res}=-0.131=-s_{e}\sigma_{ene}=-\left(1-0.854\right)\sigma_{ene}\) implies \(\sigma_{ene}=0.89\), which is the value we use for the elasticity of substitution between electricity and fossil resources used by households.4
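The back-of-the-envelope calculation behind \(\sigma_{ene}\) can be reproduced as follows; the small discrepancy with the reported 0.89 presumably reflects rounding of the inputs.

```python
# Calibrating the household elasticity of substitution sigma_ene from
# epsilon_res = -s_e * sigma_ene (Allen, 1938), with epsilon_res = -0.131
# (Nilsen et al., 2012) and s_e = 1 - 0.854 (electricity expenditure share).
eps_res = -0.131
s_e = 1.0 - 0.854
sigma_ene = -eps_res / s_e
print(round(sigma_ene, 3))  # approx. 0.897, close to the calibrated value of 0.89
```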
We set the rate of time preference \(\rho\) equal to 0.02, a value often used elsewhere (see Barro and Sala-i Martin (2003)). In view of Guvenen (2006) findings, we set the inverse of the elasticity of intertemporal substitution \(\eta\) equal to 1.
### Depreciation and production
Davis and Van Nieuwerburgh (2015) survey the macroeconomic housing literature and find that estimates of the depreciation rate of housing structures are in the range of 0.01 to 0.03. Based on this, we set the depreciation rate of housing capital (\(\delta_{H}\)) equal to 0.02. For the depreciation rate of energy efficiency capital (\(\delta_{E}\)), we use the depreciation rate of electricity, oil, and wood heaters of 0.03 used by Nesbakken (2001), who analyzes energy consumption for space heating in Norway. For the rate of industry capital depreciation (\(\delta_{Y}=\delta_{F}=\delta_{N}\)), we employ the value of 0.10 used by Kiyotaki et al. (2011), a value that is also consistent with the long-run rate of capital depreciation in manufacturing found in Albonico et al. (2014). The separation of housing capital leads to larger estimates for the depreciation of manufacturing capital than the values typically employed for aggregate capital.
Regarding the elasticity of substitution between the capital-labor composite and electricity in the final good (\(\sigma\)), we employ estimates of Koesler and Schymura (2015). Their estimate across all industries equals 0.38; we round this number and set \(\sigma=0.4\). The share of the capital-labor composite of the final good is proxied by a sectoral aggregation of the input-output matrix of the WIOD. We aggregate the input-output matrix into four sectors, namely i) fossil resources, ii) electricity, gas, steam, and air conditioning supply, iii) real estate activities, and iv) all the remaining sectors, which we think of as the final good. We set the share of the capital-labor composite in the production of the final good (\(a\)) equal to the ratio of value added to the sum of energy expenditures (sectors i and ii) and value added. This leads to \(a=0.95\).
Regarding the capital share of the capital-labor composite (\(a_{Z}\)), we use Valentinyi and Herrendorf (2008) estimate. They find that the omission of intermediate inputs leads to biased values if capital and labor shares are directly estimated from input-output tables. They solve this by calculating the amount of capital and labor embodied in intermediate inputs and impute this to sectoral capital and labor shares. No estimates for Germany are available, but the US and German economies
are sufficiently similar such that we use their estimate. Valentinyi and Herrendorf estimate capital shares of agriculture, manufactured consumption, services, equipment investment, and construction, and consider various aggregations of these five sectors. Their aggregation of agriculture, manufactured consumption, and services leads to a capital share of \(0.35\). We set the capital share equal to this value (\(a_{Z}=0.35\)).
For the elasticity of substitution between fossil and clean energy (\(\sigma_{E}\)) and the respective share parameters of the energy composite, we use the German estimates of Stoeckl and Zerrahn (2020). We use the average of their current and future elasticity of substitution estimates (the latter takes into account capacity constraints); the average equals \(4.7\), so we set \(\sigma_{E}=4.7\). We take a similar average from Stoeckl and Zerrahn for the share of clean energy (\(1-a_{E}\)), leading to \(0.43\), and thus set the share of fossil energy (\(a_{E}\)) equal to \(0.57\).
In the case of the elasticity of substitution between capital and the fossil resource (\(\sigma_{F}\)) and the capital share parameters (\(a_{F}\)) in the production of fossil energy, we use the values used by Kalkuhl et al. (2012) and thus set \(\sigma_{F}=0.15\), and \(a_{F}=0.8\).
The parameters \(a_{Y}\) and \(b_{Y}\) in Table 1 reproduce the values employed by Barrage and Nordhaus (2023), and \(g_{N}\) decreases the cost of clean energy by 1 percent per period.
The remaining parameters \(\bar{h}\), \(\bar{\kappa}\), \(\kappa_{N}\), \(A_{N,0}\), the aggregate land endowment \(Land\), and the ratio \(k^{E}/\bar{k}\) of the respective capital endowments are set so that a steady-state solution, absent policy and technological progress, satisfies the following properties.
According to the German Federal Statistical Office (Statistisches Bundesamt (2022)), housing costs (including energy) were on average \(23.3\%\) of households' disposable income. We solve the steady state of the model without policy so that the share of expenditure on housing services and energy in disposable income equals \(0.233\).
In Germany during the year 2019, a \(70m^{2}\) well-insulated apartment building with gas heating had an average heating cost of \(485\) euros, whereas the average heating costs of a \(70m^{2}\) apartment in a badly insulated building amounted to \(1,030\) euros (see CLEW (2020)). We use this information and solve the steady state of the model without policy so that the ratio of the energy efficiency of old housing capital to that of new housing capital, that is \(\left(\bar{\kappa}/k_{E}+\kappa_{N}\right)/\kappa_{N}\), equals \(1,030/485\).
According to Federal Ministry for Economic Affairs and Climate Action of Germany (2022), the share of renewables in the German electricity sector was \(41\%\) in the year
2021. We use this value and solve the steady state of the model without policy so that the ratio of clean energy to clean and fossil energy \(E_{N}/(E_{N}+E_{F})\) equals 0.41.
The emissions generated in the German electricity and building sectors (residential, commercial, and military), respectively, amounted to 247 and 115 million tonnes of CO\({}_{2}\) equivalents in 2021 (refer to Statista (2022a)). Of the building emissions, 76% were residential emissions; in other words, residential emissions amounted to about \(115*0.76=87.4\) million tonnes of \(CO_{2}\) equivalents (see Statista (2022b)). We solve the steady state of the model without policy so that the ratio of residential fossil consumption to residential plus electricity fossil consumption \(n*res/(n*res+Res_{F})\) equals 87.4/(87.4+247) =0.26
The Institute for Housing and Environment (see Loga et al. (2012)) provides the area of single and multifamily housing units in Germany. They divide housing into different groups depending on when they were built. We aggregate the single and multifamily houses that were constructed until 1978, the year in which the Ordinance on Thermal Insulation went into effect. We consider those houses constructed until 1978 to be the old housing capital stock with a higher energy intensity and those constructed after to be of low energy intensity. The share of structures constructed before 1978 equals 66%. We solve the steady state without policy so that the ratio \(\bar{k}/\left(\bar{k}+k^{H}\right)=0.66\).
In the steady-state solution without policy, we set the price of land equal to one so that the aggregate land endowment can be determined. The values of \(\bar{h}\), \(\bar{\kappa}\), \(\kappa_{N}\), \(A_{N,0}\), and the \(Land\), and the ratio \(k^{E}/\bar{k}\) are indicated in Table 1.
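For reference, the steady-state targets described above can be collected in one place; this is simply a restatement of the moments in the text, not part of the model code.

```python
# Steady-state calibration targets (no-policy, no technological progress).
targets = {
    "housing_and_energy_expenditure_share_of_income": 0.233,   # Statistisches Bundesamt (2022)
    "energy_intensity_ratio_old_to_new": 1030 / 485,           # heating costs, CLEW (2020)
    "renewables_share_in_electricity": 0.41,                    # BMWK (2022)
    "residential_share_of_fossil_use": 87.4 / (87.4 + 247),     # Statista (2022a,b)
    "old_buildings_share_of_housing_stock": 0.66,               # Loga et al. (2012), pre-1978
    "land_price": 1.0,                                          # normalization
}
for name, value in targets.items():
    print(f"{name:48s} {value:.3f}")
```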
## 4 Results
The transition to a carbon-free economy depends on ambitious climate policy. In the absence of climate policy, the economy continuously relies on the use of fossil resources for both housing and energy production. The aim of this section is to study various transitions towards a carbon-free economy. All transitions are optimal given the respective constraints, as all agents, including the government, behave optimally. However, in all scenarios, the government has access to distinct sets of policy instruments. Specifically, we analyze the first-best transition without limitations on the availability of different policy instruments and compare it to several second-best
settings. The main restriction we consider is on carbon pricing in the housing sector. In detail, we compare the first-best transitions to transitions where carbon pricing in housing is phased-in or is entirely unavailable.
When analyzing the first-best transition, we rely additionally on analytical derivations from the model setup. Thereby, we are able to isolate how climate policy can incentivize the investment shifts necessary for decarbonizing the economy. Additionally, we can confirm that the conventional wisdom holds in our setup. The optimal instrument is a uniform carbon price in all sectors of the economy. Interestingly, the irreversibility constraint does not affect optimal instrument choice. Instead, the constraint matters for the housing-related energy demand across the transition path. To analyze the optimal instrument choice and the implications of the irreversibility constraint, we rely on numerical simulations of the model.
### Optimal Transition
The analysis begins by comparing the optimal transition to the no-policy benchmark. In all scenarios, the economy grows due to exogenous technological progress. The increase in economic output and household prosperity lead to a higher demand for energy and housing. Thereby, the demand for the fossil resource increases as well. The fossil resource enters both the supply, as well as the demand side of the economy. Fossil energy producers use the resource alongside capital to produce energy. Additionally, households use the fossil resource alongside electricity to satisfy the energy demand of their housing demand. In the laissez faire, these decisions are not affected by climate policy.
The transition requires changing the price of using the fossil resources, as the externality linked to the use of the resource is not accounted for. However, apart from the resource use, internalizing these external costs may also influence investment behavior of households. In our setup, households invest in both industry and housing capital. Industry capital covers capital that relates to energy and final good production. Housing and efficiency capital are relevant for producing housing services. These two types of investment exhibit fundamental differences and react differently to climate policy. Through analytical derivations of the model, we can explore this nexus in more detail.
At the core of our general equilibrium model is a rich investment portfolio. As these investments are interdependent, households choose investments such that they are
indifferent between the different investment options. Conceptually, investing in capital for energy production, whether it be fossil or non-polluting, provides an alternative to investing in physical capital. All three are forms of industrial capital. Households choose consumption and investment in the laissez-faire equilibrium such that the values of physical capital, as well as clean and fossil capital, are equalized:
\[\psi_{t}=\psi_{t}^{C}=\psi_{t}^{F}=\lambda_{t} \tag{26}\]
where \(\psi_{t}\) is the shadow value of investments in physical capital, \(\psi_{t}^{C}\) and \(\psi_{t}^{F}\) are the shadow values of investments in clean (non-polluting) and fossil capital, and \(\lambda_{t}\) is the shadow value of income. In the end, fossil and clean capital are simply another way to invest in physical capital. As perfect capital mobility exists, the return of these assets is determined essentially by the Euler equation:
\[R_{t+1}= \frac{1}{\lambda_{t+1}}\left[\frac{\psi_{t}}{\beta}-\left(1- \delta_{Y}\right)\psi_{t+1}\right]=\frac{u_{c_{t}}}{\beta u_{c_{t+1}}}-\left(1- \delta_{Y}\right) \tag{27}\]
Without climate policy, the marginal productivity of fossil and clean capital, as well as their returns, are equal:5
Footnote 5: The only possible differences in returns are differences in depreciation rates.
\[R_{t}=R_{t}^{F}=R_{t}^{N} \tag{28}\]
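As a sketch of the return logic in equations (27)–(28): in a steady state with constant consumption, the Euler equation pins the gross rental rate down to \(1/\beta-(1-\delta_{Y})\). The numbers below use \(\rho=0.02\) and \(\delta_{Y}=0.10\) from Table 1 and assume the standard mapping \(\beta=1/(1+\rho)\).

```python
# Steady-state implication of the Euler equation (27): with u_c constant,
# R = 1/beta - (1 - delta_Y), which is common to all industry capital stocks (eq. 28).
rho, delta_Y = 0.02, 0.10     # Table 1
beta = 1.0 / (1.0 + rho)      # discount factor implied by the rate of time preference (assumption)
R = 1.0 / beta - (1.0 - delta_Y)
print(f"steady-state rental rate R = {R:.3f}")  # = rho + delta_Y = 0.12
```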
Climate policy affects investment in energy-related capital stocks indirectly through its effect on incentives for energy producers. Without a carbon price, the fossil energy producer chooses fossil resource use by setting its value marginal productivity equal to the exogenous price \(p_{R,t}\). However, the price of fossil energy is too low from a societal perspective since it is not high enough to satisfy the emissions budget. With a carbon price, the demand for non-polluting capital increases. Through a relative price change, electricity producers rely less on fossil energy, which increases the incentive to move capital to renewable energy production. The change in marginal productivity of fossil and non-polluting capital affects the return on investments and, thus, the investment behavior of households. Optimal climate policy shifts investments from fossil capital to non-polluting capital.
Housing investments differ from investments in industrial capital. An increase in
housing capital leads to a higher level of housing services, making housing investments a direct source of utility rather than a source of future income. The implicit return on investing in housing is utility. Non-homothetic preferences determine the increase in utility from additional housing consumption and, thereby, the return from investing in housing capital. Housing services are subject to a subsistence level, so an increase in housing capital raises utility by different amounts at different income levels. An additional unit of housing consumption increases utility more strongly at low income levels, where households spend a relatively larger share of their income on housing, which makes investing in housing capital more attractive. Thus, investing in housing capital becomes less attractive over time as households become wealthier.
However, investments in housing capital have direct costs. Increasing the stock of housing capital also increases the aggregate energy demand of the housing stock. This energy demand is met by households using electricity and the fossil resources. Consequently, an increase in housing capital results in higher energy expenditures for households. This increase in energy expenditures lowers the return of an increase in housing capital (cf. non-arbitrage conditions in the appendix). The size of this increase depends on the energy efficiency of newly constructed buildings. With a higher energy efficiency of new buildings, the impact of these direct costs decreases.
The rich investment portfolio in our setup allows for an alternative that lowers the direct costs of housing capital. By retrofitting the existing housing stock, households can directly lower the housing-related energy demand. This decrease in energy demand can offset the direct costs of housing investments. Increasing the energy efficiency capital stock lowers the direct costs of housing capital by lowering the overall energy expenditures of households. Because the housing stock becomes less energy-intensive, households need less electricity and fewer fossil resources to meet the energy demand. Expanding the housing stock with new, energy-efficient buildings then has a smaller effect on households' energy expenditures.
Another difference from industrial capital, in our setting, is that housing investments are irreversible. Irreversibility refers to the fact that once capital has been invested in a certain activity, it cannot be transformed into consumption goods (Arrow and Kurz, 1970). Housing is reversible only at very high costs, which is equivalent to irreversible investment (Miles, 2009). Therefore, households need to anticipate that
investing in housing-related capital means that it cannot be converted back into consumption or industry capital. The irreversibility constraints have a direct effect on the shadow value of housing and efficiency capital. The shadow values of housing capital and efficiency capital can be expressed as:
\[\psi_{t}^{H}=\lambda_{t}+\phi_{t}^{H}, \tag{29}\]
and
\[\psi_{t}^{E}=\lambda_{t}+\phi_{t}^{E}, \tag{30}\]
where \(\phi_{t}^{i}\), \(i\in\{H,E\}\) is the multiplier of the respective irreversibility constraint of housing and efficiency capital (for more details, refer to the appendix).
If the irreversibility constraint is binding, then \(\phi_{t}^{i}>0\) which creates a wedge between the shadow value of housing-related capital and industry capital. However, when the irreversibility constraints are not binding, the shadow values of all types of capital are equal. To determine whether these constraints are binding, it is useful to analyze how climate policy affects the incentives to invest in both types of housing capital.
In contrast to industry capital, climate policy directly affects the incentives for investing in housing-related capital. In the absence of climate policy, the price of fossil resources for heating does not account for external costs associated with the burning of fossil fuels. This implies that the private and social value of energy demand in housing differ (in Appendix A, we provide expressions for the shadow value of energy demand \(\nu_{t}^{ene}\) in both cases). Without climate policy, the shadow value of energy demand is lower. Thereby, a marginal increase in housing capital is less costly in terms of increased energy demand:
\[\frac{\partial h^{ene}}{\partial k_{t+1}^{H}}= \frac{1}{\nu_{t+1}^{ene}}\left[\chi_{t+1}\frac{\partial h^{k}}{ \partial k_{t+1}^{H}}+\left(1-\delta_{H}\right)\psi_{t+1}^{H}-\frac{\psi_{t}^ {H}}{\beta}\right], \tag{31}\]
where \(\chi_{t}\) equals the marginal utility of housing services and \(\delta_{H}\) is the depreciation rate of housing capital. The direct costs of investing in housing capital are too low from a social perspective. In this situation, households reasonably invest more in housing capital, as this increases utility. The increase in energy costs due to the higher energy demand is less significant because of the low price of both electricity and fossil fuels. Therefore, compared to the scenario with optimal climate policy, households over-invest in housing capital.
From a societal perspective, the low direct costs of housing investments make it less attractive to invest in reducing the energy demand. Investing in efficiency capital solely lowers energy expenditures by lowering the energy demand of the existing housing stock. Without climate policy, the marginal benefit of decreasing the energy demand is smaller:
\[\frac{\partial h^{ene}}{\partial k_{t+1}^{E}}= \frac{1}{\nu_{t+1}^{ene}}\left[\left(1-\delta_{E}\right)\psi_{t+1} ^{E}-\frac{\psi_{t}^{E}}{\beta}\right], \tag{32}\]
where \(\delta_{E}\) is the depreciation rate of efficiency capital. In the laissez faire, the return to investing in retrofitting the existing stock is too low. Therefore, the stock of efficiency capital is lower than in the optimal policy case. Climate policy affects housing investments by increasing the shadow value of energy demand. From this, we can derive first insights for the irreversibility constraints. First, as the stock of efficiency capital is always lower in the no-policy scenario, the irreversibility constraint on efficiency capital will not be a barrier to the transition to a carbon-free economy. It will always be optimal to expand the stock of efficiency capital. Therefore, in the subsequent analysis, we only focus on the irreversibility constraint on housing capital investments.
The irreversibility constraint on housing investments may be binding, as the stock of housing capital is higher without climate policy. In principle, it may be optimal to have a lower energy demand once the carbon budget is introduced, as energy is more expensive. Decreasing the housing capital stock partially mitigates this problem by reducing the energy demand and, thus, energy expenditures. However, housing capital differs from industrial capital. Lowering the housing capital stock decreases housing services and, consequently, the level of utility. The extent of this loss of utility depends on non-homothetic preferences. Furthermore, there is an alternative available in this situation. By investing in efficiency capital, households effectively use a backstop technology. When the irreversibility constraint binds, households can invest in efficiency capital and directly decrease the pressure of the constraint. In our setting, households only want to decrease housing capital sharply if the energy demand is too high.
Essentially, the question is what households do when they realize that the housing stock is too large. The first option is to decrease the energy demand by decreasing the housing capital at the expense of utility. If this is the cheapest option, then the irreversibility constraint is a barrier. The second option is to forego present
consumption in favor of investing in efficiency capital. Investing in retrofitting the existing stock may be a suitable alternative to running down the housing capital stock. Theoretically, investing in industrial capital through the expansion of capital in non-polluting electricity lowers the costs of electricity. In our general equilibrium model, these options interact. It is challenging to determine analytically if the constraint is binding. Thus, we explore whether the constraint is binding through the numerical simulation of the model.
The last part of the analytical derivations focuses on optimal climate policy. Given the differentiated impact on investments, it may be optimal to use differentiated carbon prices for households and the industry. Furthermore, decarbonizing electricity production may facilitate the decarbonization of the housing sector if electricity becomes cheaper. This leads to the question of whether it is optimal to implement different carbon prices for housing and energy production.
We find that the conventional wisdom, namely the least-cost theorem (Baumol and Oates, 1971) holds: the optimal carbon price is equal for households and industry. The intuition is as follows. In our setting, emissions are associated with the use of the fossil resource. During the transition, the substitution possibilities vary on the production and the household side. While in the industry, the expansion of renewable energy is the main option, for housing, electricity is the primary alternative. At the same time, reducing the energy demand in final goods production and housing may be necessary. In order to incentivize these changes, a shift in relative prices is necessary. A uniform carbon price optimally achieves this shift. The optimal carbon tax rate is given by:
\[\tau_{t}=\tau_{t}^{Y}=\frac{\mu_{t}^{R}}{u_{c_{t}}p_{R,t}}=\tau_{t}^{h} \tag{33}\]
In turn, the optimal carbon price equals \(p_{C,t}=\tau_{t}p_{R,t}\). The level of the carbon tax is determined by the shadow value of the carbon budget \(\mu_{t}^{R}\) divided by the marginal utility of consumption \(u_{c_{t}}\). This is standard in the literature (i.e., Barrage (2018); Kalsbach and Rausch (2021)). The only difference is that in our setting, it additionally depends on the (exogenous) price of the resource \(p_{R,t}\). The transition to a carbon-free economy features a gradual decline in the use of the fossil resource both in energy production and in housing. A gradual increase in carbon prices ensures that the decline in fossil resources is optimal.6 The transition relies on a shift in investment behavior, which we analyze in more depth in the numerical simulation.
Footnote 6: In the appendix, we show that this increase is constant and equal to the pure rate of time preference.
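A sketch of the optimal carbon-price path implied by equation (33) and footnote 6: the tax grows at the pure rate of time preference \(\rho\). The initial tax level below is a placeholder, since the true level depends on the shadow value of the carbon budget, and the resource price is held constant purely for illustration.

```python
# Optimal carbon price p_C,t = tau_t * p_R,t (eq. 33); by footnote 6 the tax grows
# at the pure rate of time preference rho. tau_0 and p_R are illustrative placeholders.
rho = 0.02          # rate of time preference (Table 1)
p_R = 1.0           # exogenous fossil resource price, held constant for illustration
tau_0 = 0.25        # placeholder initial ad-valorem carbon tax

for t in range(0, 31, 10):
    tau_t = tau_0 * (1.0 + rho) ** t
    print(f"t={t:2d}  tau={tau_t:.3f}  p_C={tau_t * p_R:.3f}")
```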
Figure 1: Housing capital stock relative to the no-policy benchmark
Figure 2: Housing capital investments in the optimal policy scenario and the no-policy baseline. Investments are expressed as a ratio over industry capital investments.
Within the numerical simulation, the climate target is to reduce carbon emissions by 50 percent within three decades. In our setting, this is equivalent to reducing fossil resource use by the same amount. Figure 4 describes capital accumulation dynamics throughout the transition for various forms of industry capital and for efficiency capital. The appendix includes graphs with further variables such as resource use, carbon prices, and renewable energy. The variables are presented as deviations from the no-policy path. Since there is exogenous technological progress, we compare the first-best transition to the outcome where there is no emission budget constraint but only technological progress during the same period.
During the first-best transition, energy production steadily decreases the fossil energy share and relies increasingly on renewable energy. By imposing a carbon tax on the fossil resource, climate policy increases the costs of the fossil energy producers and subsequently the price of fossil energy. As a result, the final energy producer substitutes fossil energy with renewable energy. This is reflected by a strong increase in non-polluting capital, accompanied by a decline in fossil capital. After a decade, the fossil capital stock is already half of what it is in the no-policy scenario. At the same time, the stock of non-polluting capital is 60 percent greater than the respective stock in the laissez-faire path.
The higher energy costs also influence the production of the final good. As the green transition requires taxing a previously cheaper good, energy, it results in a loss in output. Higher energy prices alter energy demand in final good production, which affects output. While there is a decline in output during the transition, the decline is not driven by overall lower investments into industry capital. Interestingly, industry capital investments do not change significantly. In fact, during the optimal transition, industry capital investments closely resemble those from the no-policy path. Next, we turn to the investment dynamics in the housing sector. Retrofitting becomes more attractive through climate policy, resulting in an increase in the energy efficiency capital stock (\(k_{t}^{E}\)). After two decades, the efficiency capital stock is 50 percent higher than in the no-policy case. The increase in efficiency capital occurs in two phases. Initially, it is optimal to rapidly increase the stock of efficiency capital, while the carbon price remains low. Afterwards, the stock of efficiency capital increases only gradually until it reaches a plateau. Overall, the energy demand of the housing stock decreases by 30 percent over this period. However, energy demand does not only depend on changes in the stock of efficiency capital. Additionally, investments in housing capital affect the aggregate energy demand of the housing stock. Figure 1
shows the evolution of the housing capital stock relative to the no policy benchmark. In the optimal climate policy scenario, the housing stock is 20 percent lower in the long run. To analyze the role of the irreversibility constraint in more detail, it is useful to look at investment rather than the stock of housing capital.
Figure 2 describes the dynamics of housing investments for both the no-policy and optimal-policy paths. Due to the irreversibility constraint, we focus on investments themselves rather than the housing capital stock. Housing capital investments are expressed as a ratio over industry capital investments. Without climate policy, housing investments steadily increase until they reach their peak. Afterwards, households do not invest further into housing capital. With advancements in technology, households become richer and eventually reach a point where additional housing services do not provide sufficient utility to outweigh increased energy expenditures. The total expansion of housing capital is significant without climate policy. At their peak, housing investments are one-tenth of physical capital investments.
In contrast, with optimal carbon pricing in place, the total expansion of the housing stock is limited. When considering the external costs of fossil fuel use, the direct costs of expanding the housing stock make it less attractive to invest in housing capital. Consequently, housing investments peak earlier and at a lower level. Since the increase in energy demand is more costly, households reach the point more quickly at which the utility gain from additional housing services is no longer worth it. Thus, it is optimal to stop investing in housing capital earlier.
Additionally, housing investments start later than in the no-policy case and only after a few periods. This indicates that the irreversibility constraint is binding. It is optimal to sharply reduce residential energy demand in the first period. Households achieve this in two ways. First, households initially run down part of the housing stock. Since the irreversibility constraint prevents negative investments, the only option is depreciation. Furthermore, the presence of the irreversibility constraint triggers the large investments in efficiency capital. Thus, in the first-best transition, it is optimal to reduce energy demand in the short term by both reducing the housing capital stock and sharply increasing the amount of retrofits.
### Second-best Transition
Implementing optimal carbon pricing schemes may not be feasible. Imposing high carbon prices on households may cause resistance and thus may lack political support.
Households may have misperceptions about the effectiveness or fairness of carbon taxes (Douenne and Fabre, 2022). In addition, depending on their political background, policymakers may prefer subsidies or regulatory policies to the introduction of taxes. Given that high carbon prices are needed to decarbonize the housing sector, we compare the first-best transition with transitions where carbon prices for households can only be introduced slowly or are not available at all. Thus, we consider a phased-in carbon price scenario and a scenario where only subsidies for efficiency capital are used. Note that these transitions are optimal in the sense that all actors, including the government, behave optimally. Thus, the government sets taxes and subsidies optimally given the constraints it faces.
In all scenarios, we do not affect climate policy on the production side. The aim is to consider different policy scenarios for the housing sector while leaving energy production unaffected. We achieve this by separating the carbon budget. The carbon budget for the industry is set to the optimal burden in the first best. Since we do not
Figure 4: Evolution of the different capital stocks in the optimal climate policy scenario in deviation from the no-policy benchmark.
impose any restrictions on climate policy in energy production, the resulting carbon price path for the industry is identical to the first best. This allows us to isolate the effects of different restrictions on climate policy on the household side. This approach is inspired by climate policy on the European level, where separate emission trading systems are being established for firms and households. In the present analysis, our focus is on climate policy on the household side. In detail, we consider the scenario of phased-in carbon prices as in Rozenberg et al. (2020), as well as investment subsidies for efficiency capital.7
Footnote 7: In Appendix B, we include a graphical comparison of investment patterns for housing capital and efficiency capital for all scenarios. The scenarios include the no-policy benchmark, first-best climate policy, and the described second-best settings.
The phased-in carbon price scenario focuses on short-term restrictions on the level of carbon prices. Figure 5 compares the phased-in carbon price for households to the optimal carbon price. The underlying intuition is that a slower implementation of carbon pricing will lead to lower immediate costs for households and thus increase political support compared to a higher carbon price regime. This approach is common for the introduction of carbon price schemes. Phased-in carbon
Figure 5: Optimal versus phased-in carbon prices. Variables are expressed as a ratio over fossil resource price.
prices reduce emissions more slowly in the short run, because we restrict them in the first decade. During this period, the carbon price is fixed at a lower level than the first-best carbon price. In detail, due to the phased-in carbon price, the price of fossil resource use increases by 25 percent. Afterwards, the government sets the carbon price optimally.
Additionally, we consider the case where carbon pricing is not available at all for the household side. In this case, the alternative is investment subsidies. In principle, the government can subsidize both the construction of new energy-efficient buildings and the retrofit of the existing stock. However, subsidizing investments in housing capital is never optimal in our setting. While new construction increases the energy efficiency of the overall housing stock, it also increases the energy demand. Since a higher level of energy demand makes the transition more costly, it is not useful from a climate policy perspective to subsidize housing capital investments. Thus, we solely consider subsidies to energy efficiency capital in the case where carbon pricing for housing is not available.
From the first-best transition, we understand that the irreversibility constraint on housing capital interacts with the necessity to reduce energy demand. One solution is to lower the energy demand of the existing stock. To measure the difference in energy efficiency between old and new buildings, we use the energy efficiency gap.8 When the efficiency gap reaches zero, the stock of old buildings has the same energy intensity as new buildings. Thereby, the energy efficiency gap determines the limits to investments in efficiency capital. Particularly in subsidy-based transitions, the energy efficiency gap may act as a barrier for climate policy.
Footnote 8: The energy efficiency gap is defined as the energy efficiency ratio of old to new buildings \(EEG=\frac{\kappa_{N}+\bar{\kappa}/k_{t}^{E}}{\kappa_{N}}\). We normalize it by the initial level of the ratio.
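The energy efficiency gap of footnote 8 can be computed directly from equation (4); the efficiency-capital values below are illustrative, and the gap is normalized by its initial level as in the text.

```python
# Energy efficiency gap: EEG_t = (kappa_N + kappa_bar / kE_t) / kappa_N,
# normalized by its initial level. Parameters from Table 1; the kE path is illustrative.
kappa_N, kappa_bar = 0.02, 0.005

def eeg(kE: float) -> float:
    return (kappa_N + kappa_bar / kE) / kappa_N

kE_path = [0.1, 0.2, 0.4, 0.8]   # illustrative efficiency-capital levels over time
eeg0 = eeg(kE_path[0])           # initial gap used for normalization
for t, kE in enumerate(kE_path):
    print(f"t={t}  EEG (normalized) = {eeg(kE) / eeg0:.3f}")
```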
Figure 6 describes the evolution of the energy efficiency gap in housing for both the first-best and second-best transitions, as well as the no-policy benchmark. Without climate policy, the energy efficiency gap grows over time. Energy costs decrease over time, reducing the return to investing in energy efficiency and, thereby, the level of efficiency capital. In contrast, all scenarios with climate policy result in a significantly smaller energy efficiency gap. However, energy demand plays a fundamentally different role in carbon price and subsidy-lead transitions. In transitions with carbon pricing, energy demand plays a secondary role. While energy demand is relevant for the irreversibility constraint, the reduction of fossil resource use is
managed by affecting the price directly. If carbon pricing is not available, lowering the energy demand becomes the primary tool to reduce fossil fuel consumption in housing. Thus, investments in efficiency capital are essential. Notably, even in this scenario, it is never optimal to close the efficiency gap completely. We will analyze next the second-best transitions in greater depth.
We start with the case of phased-in carbon prices. As the carbon budget is identical to the first best, carbon prices must be higher once the policy constraint on the level of carbon taxes is no longer binding. Thus, the phased-in carbon price scenario focuses on a shift in the level of ambition during the transition. In exchange for a lower level of ambition in earlier periods, the level of ambition increases more afterwards to meet the climate target. The lower carbon price in the initial period leads to relatively lower energy expenditures for households. While the direct burden on households is lower, the lower carbon price weakens the price signal to increase energy efficiency investments.
When carbon prices are phased in, the investment pattern for efficiency capital differs from the first best. While in the first best, it is optimal to increase investments in energy efficiency sharply in the initial period, the initial expansion in the phased-in scenario is much smaller. In contrast to the first best, there is a second wave of
Figure 6: Evolution of the Energy Efficiency Gap in the first-best, second-best scenarios and the no-policy benchmark
investment in energy efficiency in the last period of the constrained carbon prices. Specifically, households invest a second time in efficiency capital. Notably, the magnitude of the increase in investment is almost identical to the initial expansion. The underlying intuition is that it is optimal for households to split investments into efficiency capital, as the initial increase addresses the comparatively lower carbon price. The second increase takes advantage of the fact that the burden is still low from the initial carbon price scheme, but due to technological progress, households are richer than in the initial period.
In contrast, when the government uses subsidies for efficiency capital, investment dynamics differ both qualitatively and quantitatively. First, the price of the fossil resource cannot be directly influenced. Reducing the use of the fossil resource requires a reduction in energy demand. By reducing the energy demand of the housing stock, households need less electricity and less of the fossil resource to meet their energy demand. Consequently, very large investments in efficiency capital are needed to decrease the energy demand of the housing stock sufficiently. It is optimal to invest early and strongly in efficiency capital. In detail, the investments are five times higher than in the first-best scenario. After the large investments in the early periods, energy efficiency investments drop to zero.
Next, we turn to investments in housing capital. Compared to the first-best investment pattern, investments in housing capital are moved forward. While similar in magnitude to the first best, investment in housing capital occurs after the investment hike in efficiency capital, but while carbon prices are still constrained. Due to the lower carbon price, the irreversibility constraint on the housing stock is less binding in the early periods. In contrast, housing capital investment is larger when climate policy relies on efficiency capital subsidies. Intuitively, when the energy demand of the existing stock decreases due to the large expansion of efficiency capital, the direct cost of investing in housing capital is smaller. Hence, it is more attractive to expand the stock of housing capital to receive additional utility.
The irreversibility constraint is directly related to investments in housing capital. In the first best, households want to decrease housing capital stock during the initial period once they realize that the stock is excessively large, resulting in inefficiently high energy expenditures. Ideally, households immediately want to reduce the size of the housing stock, but the irreversibility constraint is preventing this. In the phased-in carbon price scenario, the irreversibility constraint is less problematic. Due to the
lower carbon prices in the earlier periods, the need to reduce the energy demand in the first period is lower. In contrast, the irreversibility constraint is a key barrier in the transition that relies on subsidies. The underlying reason is that this transition has to use energy demand reduction as the primary tool to reduce fossil fuel use. Due to the irreversibility constraint, reducing the housing stock is not an option, making investments in efficiency capital necessary. However, the large amounts of efficiency-related investments crowd out other types of investments, such as investments in industry capital.
Apart from the investment dynamics, the different policies affect welfare. Table 2 summarizes the cumulative welfare levels relative to the no-policy benchmark. In our setting, all transitions result in a welfare loss compared to the no-policy baseline. Achieving net zero emissions requires factoring in previously ignored external costs. This increases the relative price, directly or indirectly, of using the fossil resource both in housing and in energy production. The cost-effective way to achieve this is to impose a uniform carbon price on both housing and energy production. When comparing different constrained policy scenarios to the first best, the scenarios can be ranked according to their welfare impact. While the welfare is on lower levels in both second-best transitions, the phased-in carbon price transition leads to a higher welfare level than the subsidy scenario.
From a political economy perspective, transition costs are relevant. The transition creates direct costs for households by increasing both housing and energy costs,
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Uniform & Phased-in & Subsidy \\ \hline Welfare & -0.0443 & -0.0449 & -0.0565 \\ Energy Costs & 0.597 & 0.616 & -0.243 \\ Housing Costs (net) & 0.022 & 0.023 & 0.01 \\ Housing Costs & 0.0553 & 0.0587 & -0.001 \\ Disp. Income & 0.0057 & 0.006 & -0.018 \\ Transfers & 0.015 & 0.015 & -0.003 \\ Output & -0.03 & -0.03 & -0.034 \\ Energy Prod. & 0.058 & 0.058 & 0.044 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of uniform carbon price, phased-in carbon price and the investment subsidy for efficiency capital. Changes in cumulative welfare, energy costs, net housing costs, housing costs (incl. energy), disposable income, output and energy production relative to the laissez-faire scenario. Household transfers are expressed as percent of income (net of transfers). All variables are discounted at the internal discount rate.
making energy demand more costly. We compare the cumulative housing and energy costs during the transition. These costs do not correspond to the ranking in welfare. Although a uniform carbon price is the most cost-effective option, it creates substantial direct costs for households. Total housing expenses rise by 5 percent compared to the no-policy case. For the phased-in carbon price, the rise in housing costs is of a similar magnitude. In contrast, if the transition relies on subsidies for efficiency capital, the direct costs for households are lower than in the no-policy case. The underlying driver is the difference in cumulative energy costs. While in both carbon pricing scenarios utility bills are 60 percent higher than in the no-policy benchmark, they are 25 percent lower in the subsidy-based transition. Hence, the least preferable option from a welfare perspective leads to the smallest direct costs of all scenarios. The underlying reason is that the energy demand is drastically reduced, which has a significant impact on housing costs.
Redistributing carbon pricing revenues is an effective tool to lower the burden on households. The source of transfers is revenue from carbon pricing in both industry and housing. As the revenues from carbon pricing in the industry are identical in all scenarios, we focus only on the revenues from carbon pricing in housing. Thereby, we can measure the fiscal burden of the subsidy scenario. Subsidizing efficiency capital is costly, and the government has to use lump-sum taxes to finance the subsidies. In the carbon price scenarios, households receive transfers that are equal to 1.5 percent of their cumulative income. At the same time, in the subsidy scenario, households have to pay taxes equal to 0.3 percent of their income. Consequently, households have a lower disposable income in the subsidy-based transition. This indicates how costly it is to decarbonize the economy through investment subsidies when the irreversibility constraint is binding. The need to invest in efficiency capital is largest in the first period, when revenues from carbon pricing in energy production are at their lowest level. Consequently, the government has to impose high taxes on households to finance the subsidies that are necessary to incentivize the large increase in efficiency capital due to the binding irreversibility constraint.
Relying on investment subsidies has important implications for housing demand. In detail, we find a housing-related rebound effect in the subsidy-based transition.9 While the housing stock declines compared to the no policy benchmark, it is significantly higher than in the carbon price scenarios. This indicates that due to higher aggregate energy efficiency, households increase their housing consumption relative
to the social optimum. Thereby, instrument choice not only matters for climate policy but also potentially affects aggregate housing demand.
## 5 Conclusion
Due to a lack of investments, the building sector could become a barrier to the transition to a carbon-free economy. Investment needs include the construction of new, energy-efficient buildings and the retrofitting of the existing housing stock. In addition to housing-related investments, the green transition requires investments in renewable energy. Climate policy must not only incentivize these various investments but also coordinate between them, while recognizing that climate policy in housing can be difficult to implement because it creates direct costs for households. In this nexus, we compared different transitions based on the availability of specific instruments in residential housing, relying on a general equilibrium model with an elaborate setup of housing, energy production, and an optimizing government that chooses policy instruments optimally, given a set of constraints.
If there are no constraints on mitigation policies, the conventional wisdom holds. The first-best policy is to impose a uniform carbon price on both households and the industry. Despite differences in housing investments, optimal climate policy remains unchanged. Energy demand only plays a secondary role in the optimal transition, as renewable energy expansion is not restricted in our setting. When comparing second-best transitions, the situation changes. While energy demand is not problematic with phased-in carbon prices, this is no longer the case in the subsidy-led transition. Due to the binding irreversibility constraint, the only option is to heavily subsidize investments in energy efficiency to reduce fossil fuel consumption. While providing subsidies for retrofits results in the lowest direct costs for households, it ultimately leads to the highest aggregate costs. Therefore, subsidies to efficiency investments prove to be an ineffective way to decarbonize the economy.
We leave several important aspects for future research. This paper focuses on the role of energy demand, but other investments can additionally affect the substitution possibilities, for example, expanding infrastructure that enables district heating, especially in urban areas. Furthermore, the housing sector is highly heterogeneous and is characterized by many barriers and additional market failures, including principal-agent problems and behavioral failures. This could be a promising aspect to consider, since the present analysis relies on perfect commitment and perfect foresight, and thus on anticipation of climate policy. In addition, market failures may limit the expansion of renewable energy and the amount of retrofits per year. Although we have an understanding of these obstacles in isolation, studying them in a general equilibrium context is essential for effective climate policy. A better understanding of their interactions will enhance climate policy's ability to navigate the complex task of decarbonizing the housing sector. |
2306.08656 | Augment then Smooth: Reconciling Differential Privacy with Certified
Robustness | Machine learning models are susceptible to a variety of attacks that can
erode trust, including attacks against the privacy of training data, and
adversarial examples that jeopardize model accuracy. Differential privacy and
certified robustness are effective frameworks for combating these two threats
respectively, as they each provide future-proof guarantees. However, we show
that standard differentially private model training is insufficient for
providing strong certified robustness guarantees. Indeed, combining
differential privacy and certified robustness in a single system is
non-trivial, leading previous works to introduce complex training schemes that
lack flexibility. In this work, we present DP-CERT, a simple and effective
method that achieves both privacy and robustness guarantees simultaneously by
integrating randomized smoothing into standard differentially private model
training. Compared to the leading prior work, DP-CERT gives up to a 2.5%
increase in certified accuracy for the same differential privacy guarantee on
CIFAR10. Through in-depth per-sample metric analysis, we find that larger
certifiable radii correlate with smaller local Lipschitz constants, and show
that DP-CERT effectively reduces Lipschitz constants compared to other
differentially private training methods. The code is available at
github.com/layer6ailabs/dp-cert. | Jiapeng Wu, Atiyeh Ashari Ghomi, David Glukhov, Jesse C. Cresswell, Franziska Boenisch, Nicolas Papernot | 2023-06-14T17:52:02Z | http://arxiv.org/abs/2306.08656v2 | # Augment then Smooth: Reconciling Differential Privacy with Certified Robustness
###### Abstract
Machine learning models are susceptible to a variety of attacks that can erode trust in their deployment. These threats include attacks against the privacy of training data and adversarial examples that jeopardize model accuracy. _Differential privacy_ and _randomized smoothing_ are effective defenses that provide certifiable guarantees for each of these threats; however, it is not well understood how implementing either defense impacts the other. In this work, we argue that it is possible to achieve both privacy guarantees and certified robustness simultaneously. We provide a framework called DP-CERT for integrating certified robustness through randomized smoothing into differentially private model training. For instance, compared to differentially private stochastic gradient descent on CIFAR10, DP-CERT leads to a 12-fold increase in certified accuracy and a 10-fold increase in the average certified radius at the expense of a drop in accuracy of 1.2%. Through in-depth per-sample metric analysis, we show that the certified radius correlates with the local Lipschitz constant and smoothness of the loss surface. This provides a new way to diagnose when private models will fail to be robust.
## 1 Introduction
Machine learning (ML) models are becoming increasingly trusted in critical settings despite an incomplete understanding of their properties. This raises questions about the _trustworthiness_ of those models, encompassing aspects such as privacy, robustness, and more. Society at large might expect _all_ of these properties to hold simultaneously as ML's influence on everyday life expands, but each aspect is challenging enough that scientists and practitioners still mostly grapple with them individually. Relatively little research has been done on the intersectionality of trustworthy ML requirements, since each aspect seems to push us in orthogonal research directions.
We aim to reconcile two key objectives of trustworthy ML, namely _privacy_ and _robustness_. Privacy in the context of ML manifests as the requirement that a model does not leak information about the data it was trained on [34], such as revealing whether or not certain data points were included in the training dataset [43] or what characteristics they exhibit [17]. In our study, robustness refers to the requirement that a model's prediction should not change when its test inputs are perturbed, even in the worst case when perturbations are chosen adversarially [2; 45; 19].
The current gold standard for providing privacy guarantees is differential privacy (DP) [13]. In ML, DP produces mathematically rigorous privacy guarantees by limiting the impact of each individual training data point on the final model. This is achieved by clipping per-sample gradients, and adding a well-calibrated amount of noise to all model updates. Clipping serves to bound the _sensitivity_ of the training algorithm, while the addition of noise ensures that training will be more likely to output similar models whether any of the individual data points are added to or removed from the training dataset. However, clipping and adding noise can impede the convergence of models [47] and yield decision boundaries that are less smooth [20], negatively impacting robustness [16].
These findings call for integrating robustness measures into private training, yet this remains challenging because most methods to increase robustness use random or adversarial augmentations of training data points, which both conceptually and practically do not align well with DP training. Conceptually, augmenting an input increases the sensitivity of private training to it, and thereby provides additional avenues for information leakage. From a practical viewpoint, since gradients are computed on a per-example basis for DP, augmentations drastically increase the time and memory costs of training.
To bridge the gap between robust and private ML model training, we evaluate the certified robustness (CR) of private models and improve it by integrating state-of-the-art techniques [51; 38] with DP training. CR provides probabilistic guarantees that perturbations of a certain magnitude will not change a model's prediction, regardless of what attack strategy (known or yet unknown) is used to modify the test inputs, and thereby provides future-proof robustness guarantees. A common approach for certifying robustness is _randomized smoothing_, where a classifier's outputs are averaged over a distribution surrounding the test point [28; 29; 10].
While DP and CR are the most promising standards for providing future-proof privacy and robustness guarantees respectively, their intersection has seen little attention. Recent works [37; 36; 46] propose adding noise or adversarial examples while training to improve CR guarantees, but lack the flexibility to incorporate state-of-the-art methods for non-private training [51; 38], and usually rely on training additional network components. We aim to provide CR guarantees within the standard DP training framework, overcoming several challenges in doing so.
Present Work.We study the possible pitfalls of combining DP and CR in a systematic manner. Through our analysis and ablation studies combining randomized smoothing techniques with DP training, we show that standard DP training of ML models is insufficient to provide strong CR results. We propose DP-CERT, an adaptable framework for integrating CR into standard DP training which effectively incorporates augmentations while managing the additional privacy risks. Compared to private training without augmentations, DP-CERT achieves better robustness on MNIST, Fashion-MNIST, and CIFAR10, and even surpasses the state-of-the-art for robustness on the latter dataset under the same privacy guarantee. Finally, we analyze CR on a per data point basis rather than averaged across test datasets. Using the gradient norm, Hessian spectral norm, and local Lipschitz constant, we find that the certifiable radius has a negative log-linear correlation with these quantities, and compare their distributions across training methods. We conclude with concrete recommendations of best practices for the community to achieve CR and DP simultaneously.
## 2 Preliminaries
Problem Setup.Consider a classification task with \(Y\) classes from a dataset \(D=\{(x_{i},y_{i})\}_{i=1}^{n}\), where \(x_{i}\in\mathbb{R}^{d}\) and \(y_{i}\in\{1,...,Y\}\) denote the \(i\)-th input and label. Let \(f_{\theta}:\mathbb{R}^{d}\rightarrow\{1,...,Y\}\) be a neural network with parameters \(\theta\), and \(F_{\theta}\) denote the soft classifier which outputs the probability distribution, such that \(f_{\theta}(x)=\operatorname*{arg\,max}_{y\in\{1,...,Y\}}F_{\theta}(x)_{y}\), where \(F_{\theta}(x)_{y}\) denotes the model probability of \(x\) being a member of class \(y\).
Differential Privacy and DPSGD. We rely on the rigorous framework of differential privacy (DP) [14] to obtain models with privacy guarantees. DP ensures that a model's weights at the end of training will be similar in distribution whether or not a particular data point was included in the training set. More formally, let \(D\) and \(D^{\prime}\) be two potential training datasets for a model \(f_{\theta}\) that differ in only one data point. The training mechanism \(M\) guarantees \((\varepsilon,\delta)\)-DP if for all possible sets of outcomes \(S\) of the training process, it holds that \(\Pr\left[M(D)\in S\right]\leq e^{\varepsilon}\Pr\left[M(D^{\prime})\in S\right]+\delta\). The parameter \(\varepsilon\) specifies the privacy level, with smaller \(\varepsilon\) yielding higher privacy, while \(\delta\) quantifies the probability of the algorithm violating the \(\varepsilon\) privacy guarantee.
To obtain a differentially private variant of stochastic gradient descent (SGD), two modifications need to be made [1]. First, the individual gradients of each data point are clipped to a norm \(C\) to limit the sensitivity of the model update caused by each data point. Second, choosing a noise level \(\rho\), noise from \(\mathcal{N}(0,\rho^{2}C^{2}\mathbf{I})\) is added to the aggregated gradients to prevent the changes to the model from revealing too much information about individual data points. We detail the resulting algorithm, DPSGD (Algorithm 1), and give a more thorough introduction to DP in Appendix A.1.
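To make these two modifications concrete, the following minimal NumPy sketch performs one DPSGD update on a stack of flattened per-sample gradients. It is an illustrative stand-in, not the Opacus-based implementation used later in the paper, and the function and variable names are ours.

```python
import numpy as np

def dpsgd_step(per_sample_grads, theta, lr, C, rho, rng):
    """One DPSGD update from a stack of flattened per-sample gradients."""
    B, d = per_sample_grads.shape
    # Clip each per-sample gradient to l2 norm at most C.
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    # Sum, add Gaussian noise calibrated to the sensitivity C, and average.
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, rho * C, size=d)
    return theta - lr * noisy_sum / B

# Toy usage with random per-sample gradients.
rng = np.random.default_rng(0)
theta = np.zeros(10)
grads = rng.normal(size=(32, 10))
theta = dpsgd_step(grads, theta, lr=0.1, C=1.0, rho=1.0, rng=rng)
print(theta[:3])
```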
Certified Robustness.Adversarial examples are a well-studied phenomenon in ML, in which an input to a model is perturbed in ways that do not alter its semantics yet cause the model to misclassify the perturbed input [2; 45; 19]. Formally, for a given labeled datapoint \((x,y)\) and classifier \(f\), an \((\ell_{p},\zeta)\)-adversary aims to create an adversarial example \(x^{\prime}\) such that \(\|x^{\prime}-x\|_{p}<\zeta\) and \(f(x^{\prime})\neq y\). Despite much research, the most common defense against adversarial examples remains adversarial training [19; 58; 39]. While adversarial training improves robustness to known algorithms for finding adversarial examples, it does not guarantee that a model will be robust to all adversarial examples (e.g., those crafted with other attack algorithms). This motivates the development of techniques that can provide certifiable guarantees of robustness to adversarial examples by providing a lower bound \(r\) on the distance between a correctly classified input and any adversarial example that may be misclassified [51; 38]. This lower bound is also known as the certification radius.
Randomized Smoothing.One popular approach for establishing certified robustness (CR) guarantees is through probabilistic robustness verification which, with high probability, verifies that no adversarial examples exist within a certain radius of the original input [30]. The most commonly studied method for providing a probabilistic robustness verification is through smoothing a classifier [28; 29; 10] by averaging the class predictions of \(f\) using a smoothing distribution \(\mu\),
\[\hat{g}(x)=\operatorname*{arg\,max}_{c\in\{1,...,Y\}}\int_{\zeta\in\text{supp}(\mu)}\mathbb{I}[f(x+\zeta),c]\mu(\zeta)d\zeta, \tag{1}\]
where \(\mathbb{I}[a,b]=1\iff a=b\) and \(0\) otherwise [30]. As computing the integral in Equation (1) is intractable, Monte Carlo sampling is used. We denote the approximation of \(\hat{g}\) given by Monte Carlo sampling as \(g\). One can certify at different radii through the choice of smoothing distribution \(\mu\). Smoothed classifiers are evaluated in terms of their certified accuracy--the fraction of samples correctly classified when certifying robustness at a given radius \(r\).
A tight \(\ell_{2}\) radius was obtained by Cohen et al. [10] when using isotropic Gaussian noise \(\mu=\mathcal{N}(x,\sigma^{2}\mathbf{I})\), where \(\sigma\) is a hyperparameter that controls a robustness/accuracy tradeoff. In particular, Cohen et al. [10] proved that for any base classifier \(f\), the Gaussian smoothed classifier \(g\) is robust around an input \(x\) with radius \(r=\frac{\sigma}{2}(\Phi^{-1}(p_{A})-\Phi^{-1}(p_{B}))\) where \(p_{A}\) and \(p_{B}\) denote the probabilities of \(c_{A}\) and \(c_{B}\), the most and second-most probable classes returned by \(g(x)\), and \(\Phi^{-1}\) is the inverse of the standard Gaussian CDF. In fact the exact probabilities \(p_{A}\) and \(p_{B}\) are not needed and one can use lower \(\underline{p_{A}}\leq p_{A}\) and upper \(\overline{p_{B}}\geq p_{B}\) bounds instead, approximated by Monte Carlo sampling.
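As an illustration of this radius formula, the sketch below computes \(r\) from Monte Carlo confidence bounds on \(p_{A}\) and \(p_{B}\); obtaining those bounds (e.g., with Clopper-Pearson intervals as in CERTIFY) is assumed to happen elsewhere, and the function name is ours.

```python
from scipy.stats import norm

def certified_radius(p_a_lower, p_b_upper, sigma):
    """l2 radius certified by Gaussian randomized smoothing (Cohen et al.).

    p_a_lower: lower confidence bound on the top-class probability p_A.
    p_b_upper: upper confidence bound on the runner-up probability p_B
               (in practice often taken as 1 - p_a_lower).
    """
    if p_a_lower <= p_b_upper:
        return 0.0  # cannot certify; CERTIFY would abstain here
    return 0.5 * sigma * (norm.ppf(p_a_lower) - norm.ppf(p_b_upper))

# Example: sigma = 0.5, p_A >= 0.85 and p_B <= 0.15 certify a radius of about 0.52.
print(certified_radius(0.85, 0.15, sigma=0.5))
```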
The output of the smoothed classifier \(g(x)\) is approximated by aggregating the predictions of a base classifier \(f(x+\eta)\) for \(\eta\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})\). As a high dimensional standard Gaussian assigns almost no mass near its mean \(0\), ensuring that \(g(x)\) is accurate at large certification radii _requires the base classifier \(f\) to be accurate on Gaussian perturbed data_[18].
## 3 Method
Training machine learning models to be both differentially private and certifiably robust poses several challenges. The gradient clipping and noise addition used in DPSGD harms the convergence rate of training [9; 47; 4], while restrictive privacy budgets may further require stopping training prior to convergence. Robustness on the other hand suffers for models that are not converged, as having large gradients at test points makes finding adversarial examples easier [16].
Another challenge surfaces around the use of adversarial training [19] or augmentations of datapoints [10] along with DPSGD. As shown by Cohen et al. [10], data augmentations used for training can enhance a model's CR, however, it is crucial to ensure that augmented data points do not leak private information about the original. Previous works on the combination of DP and CR have proposed adding noise or adversarial examples during training, but deviate from the standard DPSGD
template to address the privacy risks [37; 36; 46]. These approaches add trainable model components increasing the overall complexity [37; 46], or lack the flexibility to incorporate the latest advancements in adversarial training methods [36]. For a more detailed description of these related works and comparison to our method, please see Appendix B.
We aim to make CR feasible within the standard training procedure of DPSGD, with state-of-the-art convergence and proper accounting for additional privacy risks by introducing the DP-CERT framework. In this section, we describe DP-CERT, how it effectively manages training with augmented samples while preserving privacy, and how it enables the integration of recent advancements in adversarial training and regularizers to enhance certifiable robustness [40; 29; 56]. Our training framework consists of three stages, summarized in Figure 1: augmentation multiplicity as the foundational stage, plus regularization and adversarial training as two optional stages. After the model is trained, randomized smoothing is used at inference time. We present four instantiations of the framework: DP-Gaussian, DP-SmoothAdv, DP-Stability, and DP-MACER, employing different techniques at each stage.
Augmentation Multiplicity.For each data point \((x_{i},y_{i})\), we obtain \(K\) augmented data points \((x_{i}^{j},y_{i})\), where \(j\in\{1,...,K\}\) and \(x_{i}^{j}\) is the \(j\)-th augmented data point. For notational convenience, we use \(x_{i}^{0}\) to denote the original data point \(x_{i}\). As shown by Cohen et al. [10], training with Gaussian data augmentation can enhance a model's certified robustness. When not using adversarial training, we define \(x_{i}^{j}=x_{i}+\eta_{j}\), \(\eta_{j}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})\) for \(j\neq 0\).
An important component of our DP-CERT is how we handle training with augmented data points. We adopt augmentation multiplicity, introduced in [11] and previously unused in studies of CR for DP, which involves averaging the gradients of multiple augmentations of the same training sample before clipping. Since all downstream impact to the model weights from sample \(x_{i}\) is contained in this averaged gradient, clipping it provides a finite sensitivity as required for the Sampled Gaussian Mechanism used in DPSGD [32], and no additional privacy cost is incurred. The model updates can be expressed as follows
\[\theta^{t+1}=\theta^{t}-\lambda_{t}\Bigg{[}\frac{1}{B}\sum_{i\in B_{t}}\text{ clip}_{C}\Bigg{(}\frac{1}{K+1}\sum_{j=0}^{K}\nabla_{\theta^{t}}L_{\text{CE}}(x_{i}^ {j},y_{i})\Bigg{)}+\frac{\rho C}{B}\xi\Bigg{]}. \tag{2}\]
\(\theta^{t}\) denotes the model parameters at iteration \(t\), \(\lambda_{t}\) is the learning rate, \(B\) is the batch size, \(C\) is the clipping bound, \(K\) is the number of augmentations, \(\rho\) is the noise multiplier, \(\xi\sim\mathcal{N}(0,\mathbf{I})\), and \(\nabla_{\theta^{t}}L_{\text{CE}}(x_{i}^{j},y_{i})\) is the gradient with respect to data point \((x_{i}^{j},y_{i})\). Note that \(j\) starts from 0, which means we include the _original samples_ along with the augmented ones in model training.
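The ordering in Equation (2) — average over the \(K+1\) views of an example first, then clip once — is what keeps the per-example sensitivity bounded by \(C\). The NumPy sketch below illustrates this for a generic per-example gradient function; it is a schematic stand-in for the paper's Opacus-based training loop, and all names are ours.

```python
import numpy as np

def private_update_with_augmentations(grad_fn, batch, theta, lr,
                                      C, rho, K, sigma_aug, rng):
    """One update in the spirit of Eq. (2): gradients of the K+1 views of each
    example are averaged *before* clipping, so the per-example sensitivity is
    still bounded by C and the augmentations add no extra privacy cost."""
    clipped_sum = np.zeros_like(theta)
    for x, y in batch:
        views = [x] + [x + rng.normal(0.0, sigma_aug, size=x.shape)
                       for _ in range(K)]
        g = np.mean([grad_fn(theta, v, y) for v in views], axis=0)  # average views
        g = g * min(1.0, C / max(np.linalg.norm(g), 1e-12))         # clip once
        clipped_sum += g
    noise = rng.normal(0.0, rho * C, size=theta.shape)
    return theta - lr * (clipped_sum + noise) / len(batch)

# Toy usage: linear-regression gradient for a single example.
rng = np.random.default_rng(0)
theta = np.zeros(5)
grad_fn = lambda th, x, y: (th @ x - y) * x
batch = [(rng.normal(size=5), 1.0) for _ in range(8)]
theta = private_update_with_augmentations(grad_fn, batch, theta, lr=0.1,
                                          C=1.0, rho=1.0, K=2,
                                          sigma_aug=0.5, rng=rng)
print(theta)
```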
Regularization.We propose adapting stability and consistency regularization to private training in order to minimize the distance between the output probability of the original and augmented examples, hereby improving the robustness to input noise. Stability training [29] adds a smoothed cross-entropy loss as regularization. Inspired by TRADES [57], we instead use the Kullback-Leibler (KL) divergence with a hyperparameter \(\gamma\) controlling the strength of the regularization as:
\[L_{\text{stability}}(x_{i},y_{i})=\sum_{j}\Big[L_{\text{CE}}(x_{i}^{j},y_{i})+\gamma D_{\text{KL}}\big(F_{\theta}(x_{i})\,||\,F_{\theta}(x_{i}^{j})\big)\Big]. \tag{3}\]
Figure 1: The DP-CERT training framework for providing strong CR guarantees within DPSGD.
Consistency regularization [25] is a similar technique that instead minimizes the KL divergence between \(\hat{F}_{\theta}(x_{i})\) and each \(F_{\theta}(x_{i}^{j})\), where \(\hat{F}_{\theta}(x_{i})=\frac{1}{K}\sum_{j}F_{\theta}(x_{i}^{j})\) is the average output probability over all smoothed samples. The loss can be expressed as
\[L_{\text{consistency}}(x_{i},y_{i})=\sum_{j}\Big[L_{\text{CE}}(x_{i}^{j},y_{i})+\gamma D_{\text{KL}}\big(\hat{F}_{\theta}(x_{i})\,||\,F_{\theta}(x_{i}^{j})\big)\Big]. \tag{4}\]
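To show how these regularized losses plug into training, here is a minimal per-example sketch of the stability loss in Equation (3), with a toy softmax model standing in for \(F_{\theta}\); the consistency variant in Equation (4) differs only in the first KL argument. All function names and the toy model are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))                      # toy 3-class linear classifier

def probs(z):
    """Softmax output of the toy model (stands in for F_theta)."""
    logits = W @ z
    e = np.exp(logits - logits.max())
    return e / e.sum()

def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def stability_loss(x, y, views, gamma):
    """Per-example loss of Eq. (3): CE on each view plus gamma * KL(F(x) || F(x^j)).
    The consistency variant of Eq. (4) would use the average prediction over the
    views as the first KL argument instead of probs(x)."""
    p_clean = probs(x)
    total = 0.0
    for v in views:
        p_v = probs(v)
        total += -np.log(max(p_v[y], 1e-12)) + gamma * kl(p_clean, p_v)
    return total

x = np.ones(4)
views = [x + rng.normal(0.0, 0.5, size=4) for _ in range(2)]
print(stability_loss(x, y=0, views=views, gamma=1.0))
```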
Additionally, we propose integrating MACER [56], an alternative training modification to directly optimize the certified accuracy at larger robustness radii without requiring the costly process of adversarial training. MACER achieves this by decomposing the error of a smoothed classifier into a classification error term and a robustness error term, the latter reflecting whether or not the smoothed classifier was able to certify robustness for a given radius.
Adversarial Training.To achieve better certified accuracy, we incorporate adversarial training by deploying existing attacks to create adversarial examples. Specifically, we integrate SmoothAdv [40] into private training, which, given original data \((x,y)\), optimizes
\[\operatorname*{arg\,max}_{\|x^{\prime}-x\|_{2}\leq\epsilon}\Big{(}-\log \operatorname*{\mathbb{E}}_{\eta\sim N(0,\sigma^{2}I)}[F_{\theta}(x^{\prime}+ \eta)_{y}]\Big{)}, \tag{5}\]
to find an \(x^{\prime}\) that is \(\epsilon\)-close to \(x\) and maximizes the cross entropy between \(g_{\theta}(x^{\prime})\) and label \(y\). Using Monte Carlo sampling, Objective (5) can be optimized by iteratively computing the approximate gradient
\[\nabla_{x^{\prime}}\Big{(}-\log\Big{(}\frac{1}{K}\sum_{j=1}^{K}F_{\theta}(x^{ \prime}+\eta_{j})_{y}\Big{)}\Big{)}. \tag{6}\]
where \(\eta_{1},...,\eta_{K}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})\). The approximate gradient is then used to update \(x^{\prime}\), with the final \(x^{\prime}\) used as examples within augmentation multiplicity.
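As a rough sketch of how the SmoothAdv attack of Equations (5)-(6) could be realized, the code below runs projected gradient ascent on the smoothed cross-entropy of a toy linear classifier. The central-difference gradient stands in for back-propagation through a real network, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))                      # toy 3-class linear classifier

def smoothed_ce(x, y, noises):
    """-log of the average class-y probability over fixed noise draws (Eq. 6)."""
    avg = 0.0
    for eta in noises:
        logits = W @ (x + eta)
        e = np.exp(logits - logits.max())
        avg += (e / e.sum())[y] / len(noises)
    return -np.log(max(avg, 1e-12))

def smoothadv_attack(x, y, eps, sigma, K=4, steps=10, lr=0.1):
    """Projected gradient ascent on the smoothed cross-entropy around x."""
    noises = [rng.normal(0.0, sigma, size=x.shape) for _ in range(K)]
    x_adv = x.copy()
    for _ in range(steps):
        # Central-difference gradient; a real implementation back-propagates
        # through the model instead of differentiating numerically.
        g = np.zeros_like(x_adv)
        h = 1e-4
        for d in range(x_adv.size):
            e_d = np.zeros_like(x_adv)
            e_d[d] = h
            g[d] = (smoothed_ce(x_adv + e_d, y, noises)
                    - smoothed_ce(x_adv - e_d, y, noises)) / (2 * h)
        x_adv = x_adv + lr * g
        delta = x_adv - x                        # project onto the l2 ball around x
        n = np.linalg.norm(delta)
        if n > eps:
            x_adv = x + delta * (eps / n)
    return x_adv

x = rng.normal(size=4)
x_adv = smoothadv_attack(x, y=0, eps=0.5, sigma=0.5)
print(np.linalg.norm(x_adv - x))
```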
### Metrics for Interpreting Robustness
To elicit some insights into why certain training methods may produce better-performing models than others, we investigate several per-data point metrics associated with robustness, the _input gradient norm_, _input Hessian spectral norm_, and _local-Lipschitz constant_, and study their relationships with CR. The first two metrics measure the local smoothness of the loss landscape with respect to the input space. Taylor's approximation can be used to show a direct link between these two metrics and the worst-case change in loss from small input perturbations. Due to this connection, prior works directly regularized them in order to train more robust models [22; 24; 33].
Gradients and Hessians are highly local quantities that are only connected to robustness through Taylor's approximation at small radii around the input data point. Consequently, they may not be informative at larger radii used to certify robustness. Thus, we also compare models using an empirical estimate of the average local Lipschitz constant of the model's penultimate layer. By viewing the network as a feature extractor composed with a linear classifier, using the penultimate layer captures the worst-case sensitivity of the feature extractor to perturbations of the data. This metric was initially proposed by Yang et al. [54] to investigate adversarial robustness and is given by
\[\frac{1}{n}\sum_{i=1}^{n}\max_{x^{\prime}_{i}\in B_{\infty}(x_{i},\zeta)}\frac{\|f(x_{i})-f(x^{\prime}_{i})\|_{1}}{\|x_{i}-x^{\prime}_{i}\|_{\infty}}, \tag{7}\]
where the maximum is approximated in the same manner as is used for adversarial example generation, typically projected gradient descent (PGD) [31].
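As an illustration of how the estimate in Equation (7) can be approximated, the sketch below uses random search inside the \(\ell_{\infty}\) ball around each input in place of the PGD-based search used by Yang et al.; the toy feature extractor and function names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))                      # toy feature extractor
features = lambda x: np.tanh(W @ x)              # stands in for the penultimate layer

def local_lipschitz(x, zeta, trials=200):
    """Empirical estimate of the inner max of Eq. (7) for one input, using
    random search in the l-infinity ball (a PGD-based search is used in practice)."""
    best = 0.0
    for _ in range(trials):
        delta = rng.uniform(-zeta, zeta, size=x.shape)
        denom = np.max(np.abs(delta))
        if denom < 1e-12:
            continue
        ratio = np.sum(np.abs(features(x) - features(x + delta))) / denom
        best = max(best, ratio)
    return best

xs = [rng.normal(size=4) for _ in range(5)]
print(np.mean([local_lipschitz(x, zeta=0.1) for x in xs]))  # average over inputs
```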
## 4 Experiment Setup
We evaluate the effectiveness of DP-CERT on multiple image classification datasets, including MNIST [27], Fashion-MNIST [53], and CIFAR10 [26]. More detailed data statistics can be found in Appendix C.1. We demonstrate that DP-CERT consistently outperforms the undefended differentially private baselines, and establishes the state of the art for certified \(l_{2}\) defense under a DP guarantee, via randomized smoothing on CIFAR-10.
### Baselines Methods
Our comparison methods include non-private training (_Regular_) for an accuracy baseline, _DPSGD_ to observe the CR properties of standard DP training, and per-sample adaptive clipping (_PSAC_) [52], described in Appendix A.2, which achieves better convergence than DPSGD, all else held equal. For DPSGD and PSAC, we adopt the same settings as Xia et al. [52], who exhibited state-of-the-art performance for DP optimization on various tasks. We additionally compare against prior approaches to integrate CR with DP guarantees, namely TransDenoiser [46], SecureSGD [37] and StoBatch [36] on CIFAR10. We refer to Tang et al. [46] for details of their experimental setting.
### Evaluation Metrics
First, we report the natural accuracy (_Acc_) on the test dataset without randomized smoothing for inference as a measure of convergence. Following previous works, we report the _approximate certified accuracy_, which is the fraction of the test set that can be certified to be robust at radius \(r\) using the CERTIFY procedure introduced by [10]. We also include the average certified radius (_ACR_) [56], i.e., the mean certified radius returned by CERTIFY, which serves as an additional metric for comparing CR between two models [49; 58]. ACR is calculated as
\[\text{ACR}=\frac{1}{|D_{\text{test}}|}\sum_{(x,y)\in D_{\text{test}}}\text{CR}(f,\sigma,x)\cdot\mathbb{I}[g(x),y], \tag{8}\]
where \(D_{\text{test}}\) is the test dataset, and CR denotes the certified radius provided by CERTIFY.
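For completeness, here is a minimal sketch of the ACR computation in Equation (8), assuming CERTIFY has already produced a radius (zero on abstention) and a smoothed prediction for each test point; the function name is ours.

```python
import numpy as np

def average_certified_radius(radii, predictions, labels):
    """ACR of Eq. (8): a point's certified radius counts only when the smoothed
    prediction is correct; abstentions and mistakes contribute zero."""
    radii = np.asarray(radii, dtype=float)
    correct = np.asarray(predictions) == np.asarray(labels)
    return float(np.mean(radii * correct))

# Toy usage: three test points, the second one misclassified.
print(average_certified_radius([0.6, 0.3, 0.9], [1, 0, 2], [1, 1, 2]))  # -> 0.5
```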
### Implementation and Hyperparameters
For all experiments on MNIST and Fashion-MNIST, we train a four-layer CNN model with the settings used by Tramer and Boneh [47]. On CIFAR10, we fine-tune a CrossViT-Tiny [8], pretrained on ImageNet1k [12]. We set the learning rate to 0.001 and train the models for 10 epochs. The rest of the hyperparameters are the same as used by [6]. For evaluation, we use CERTIFY with parameters \(n=10,000\), \(n_{0}=100\), and \(\alpha=0.001\), following previous work [10; 40]. Our implementations of DP-SmoothAdv, DP-Stability and DP-MACER are adapted from the original codebases [41; 29; 56], and we use the default hyperparameters reported in the original papers. We set the number of augmentations \(K\) to 2 for MNIST and Fashion-MNIST, and 1 for CIFAR10, as these values give a better trade-off between certified accuracy and efficiency (see Section 5.2). By default, we compare models using the same privacy guarantee: \((\varepsilon=3.0,\delta=10^{-5})\).
For each model configuration, we consider three models trained with different noise levels \(\sigma\in\{0.25,0.5,1.0\}\) for smoothing at training time, and during inference we apply randomized smoothing with the same \(\sigma\) as used in training. The randomized smoothing results for TransDenoiser, SecureSGD and StoBatch are directly copied from Tang et al. [46]. For CIFAR10, TransDenoiser takes a VGG16 model [44] pretrained on a public dataset, then fine-tunes the classifier and the denoisers jointly. For a fair comparison, we use a much smaller network, CrossViT-Tiny [8], in DP-CERT, achieving a similar accuracy on the CIFAR10 test set. For more experimental details, please refer to Appendix C.2. All experiments were conducted on a cluster of 8 Nvidia V100 GPUs. All the training and inference procedures are implemented based on Pytorch v1.13.0 [35] and Opacus v1.3.0 [55], and our code is provided as supplementary material.
## 5 Experimental Evaluation
### Comparative Study
In Table 1, we compare our baseline methods and DP-CERT instantiations by their natural accuracy, ACR, and certified accuracy for radii greater than 0.25. The best ACR and certified accuracy for each \(\sigma\) are displayed in **bold**, while close runner-ups are underlined. In Figure 2 we plot the certified accuracy as the certified radius is increased on CIFAR10. Similar results for MNIST and Fashion-MNIST are displayed in Figure 7 in Appendix D.1, where we also compare DP-CERT to TransDenoiser, SecureSGD, and StoBatch on CIFAR10 in Figure 8.
Discussion. Table 1 and Figure 2 show that all instantiations of DP-CERT significantly outperform the baseline methods in terms of the approximate certified accuracy and ACR. Generally, DP-CERT's natural accuracy is marginally lower than the PSAC baseline, but its ACR and certified accuracy do not fall off drastically as \(\sigma\) is increased. Hence, there is still a tradeoff between natural accuracy and CR. Our well-converged baselines show that DPSGD training does not always lead to worse CR compared to non-private training, which should be contrasted with previous studies on the adversarial robustness of DPSGD [50; 3; 60]. Still, DPSGD alone, even when well-converged, does not provide strong CR, demonstrating the need for DP-CERT's improvements.
Figure 8 in Appendix D.1 shows that all variants of DP-CERT achieve the state-of-the-art certified accuracy on CIFAR10 with a much smaller pre-trained model compared to [46; 36]. Since we do not rely on an additional denoiser, the inference is much faster with DP-CERT.
Practical recommendation.Contrary to the previous findings in non-private training [40; 56], all variants of DP-CERT have close performance. Because DP-SmoothAdv incurs significantly larger training overhead, we recommend _not_ using it in private training. For _training from scratch_, _DP-Gaussian_ is recommended since it offers competitive results while being the most straightforward to implement and fastest to train as it does not rely on adversarial examples. For fine-tuning a pre-trained
\begin{table}
\begin{tabular}{l l r r r r r r r r r} \hline \hline \multirow{2}{*}{\(\sigma\)} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{MNIST} & \multicolumn{2}{c}{Fashion-MNIST} & \multicolumn{2}{c}{CIFAR10} \\ & & Acc & ACR & \(r=0.25\) & Acc & ACR & \(r=0.25\) & Acc & ACR & \(r=0.25\) \\ \hline \multirow{8}{*}{0.25} & Regular & 99.14 & 0.581 & 83.6 & 89.28 & 0.359 & 55.5 & 94.79 & 0.055 & 9.5 \\ & DPSGD & 98.13 & 0.606 & 88.3 & 85.87 & 0.343 & 53.2 & 89.74 & 0.023 & 3.3 \\ & PNAC & **98.25** & 0.608 & 88.5 & **86.34** & 0.320 & 49.0 & **89.81** & 0.020 & 2.8 \\ \cline{2-10} & DP-Gaussian & 98.13 & 0.735 & 95.7 & 84.76 & 0.545 & 75.8 & 87.61 & 0.246 & 41.8 \\ & DP-SmoothAdv & 98.08 & **0.742** & **96.0** & 83.97 & **0.554** & **75.9** & 87.89 & **0.275** & **44.3** \\ & DP-Stability & 97.86 & 0.738 & 95.9 & 84.19 & 0.551 & 75.7 & 88.53 & 0.246 & 41.6 \\ & DP-MACER & 98.13 & 0.736 & 95.6 & 84.79 & 0.545 & 75.8 & 87.52 & 0.246 & 41.7 \\ \hline \multirow{8}{*}{0.5} & Regular & 99.14 & 0.308 & 31.8 & 89.28 & 0.331 & 34.9 & 94.79 & 0.092 & 9.7 \\ & DPSGD & 98.13 & 0.344 & 50.0 & 85.87 & 0.309 & 29.8 & 89.74 & 0.057 & 9.8 \\ & PSAC & **98.25** & 0.383 & 55.9 & **86.34** & 0.298 & 27.5 & **89.81** & 0.056 & 9.8 \\ \cline{1-1} \cline{2-10} & DP-Gaussian & 97.74 & 1.246 & 94.7 & 82.42 & 0.879 & 73.0 & 87.48 & **0.288** & **35.5** \\ & DP-SmoothAdv & 97.66 & **1.258** & **94.8** & 82.65 & **0.894** & 73.0 & 87.54 & 0.263 & 31.9 \\ & DP-Stability & 97.62 & 1.248 & 94.6 & 82.25 & 0.876 & 72.7 & 88.56 & 0.282 & 35.4 \\ & DP-MACER & 97.75 & 1.246 & 94.7 & 82.50 & 0.880 & **73.1** & 87.36 & 0.287 & 35.2 \\ \hline \multirow{8}{*}{1.0} & Regular & 99.14 & 0.257 & 10.7 & 89.28 & 0.342 & 21.2 & 94.79 & 0.079 & 9.7 \\ & DPSGD & 98.13 & 0.260 & 10.4 & 85.87 & 0.338 & 13.5 & 89.74 & 0.029 & 5.9 \\ & PSAC & **98.25** & 0.213 & 20.0 & **86.34** & 0.328 & 11.5 & **89.81** & 0.023 & 4.2 \\ \cline{1-1} \cline{2-10} & DP-Gaussian & 96.33 & **1.262** & **85.6** & 80.96 & 1.101 & **65.4** & 88.55 & **0.299** & **25.4** \\ \cline{1-1} & DP-SmoothAdv & 96.54 & 1.249 & 85.0 & 80.93 & 1.096 & 64.7 & 87.37 & 0.237 & 21.0 \\ \cline{1-1} & DP-Stability & 96.48 & **1.262** & 84.9 & 80.66 & 1.084 & 65.1 & 89.09 & 0.294 & 25.0 \\ \cline{1-1} & DP-MACER & 96.31 & **1.262** & 85.5 & 80.83 & **1.102** & 65.3 & 88.40 & 0.255 & 22.3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of accuracy, ACR and the certified accuracy at radius 0.25 between baselines and instances of the DP-CERT framework on MNIST, Fashion-MNIST and CIFAR10.
Figure 2: Approximate certified accuracy comparison on CIFAR10.
model, _DP-Stability_ is recommended since it has the highest natural accuracy in all variants while offering competitive certified accuracy.
### Studying the Impact of Model Architectures and Augmentations
In this section, as an ablation, we examine the effect of different model variants and hyperparameters. All experiments are run on Fashion-MNIST with \(\sigma=0.5\); results for MNIST and other values of \(\sigma\) are given in Appendix D.2. We combine consistency regularization and PSAC with DP-Gaussian and DP-SmoothAdv to study their effect on certified accuracy and radius. Figure 3 shows that neither of these techniques improves CR. We also train models with different numbers of augmentations and compare their CR. Figure 3 shows that the certified test accuracy is unchanged as the number of augmentations increases, consistent with the observations made by Salman et al. [40]. We emphasize that using no augmentations at all, i.e. plain DPSGD, performs much worse (Table 1). Since fewer augmentations better preserve the natural accuracy and incur less training overhead, we recommend using a minimal number of augmentations.
### Fine-Grained Metric Analysis
Certified accuracy and ACR are both metrics averaged over the test set. However, robustness is inherently sample based, since it examines how resilient the model is against perturbations tailored to individual samples. Therefore, in this section we conduct an in-depth analysis of the distributions of the per-sample metrics introduced in Section 3.1 to provide a deeper understanding of the connections between model properties and CR. We pose two research questions below and answer them using data visualization.
**RQ1: _How are the training dynamics of different methods reflected in their metric distributions?_** We calculate the three metrics for each test set data point, visualize their distribution in histograms, and compare across baselines and our proposed methods. For a detailed analysis we focus on a single setting here - MNIST and \(\sigma=0.5\) in Figure 4. For comparison, we visualize the histograms for different datasets and \(\sigma\)'s in Figures 11 - 15 in Appendix D.3. In Figure 3(a), Regular training results in an approximately log-normal distribution for input gradient norms with a mode far greater than for DPSGD variants. Meanwhile, DPSGD is bimodal with some inputs having very large gradient norms which are potentially vulnerable to adversarial examples. This likely arises as a consequence of the clipping employed in the DPSGD training algorithm which effectively down-weights the contributions of hard examples and up-weights the contribution of easy examples [42]. Rarer samples, which would dominate a minibatch gradient for Regular training, are not learned and still have large input gradients at the end of training. PSAC mitigates this issue slightly by explicitly up-weighting hard examples, resulting in a distribution closer to Regular training. DP-Gaussian, on the other hand, shifts the distribution towards lower norm values. Comparing variants of DP-CERT in Figure 3(b), DP-Stability has a significantly higher input gradient norm, input Hessian spectral norm and lower local Lipschitz constant than the other three variants. This echoes the observation that TRADES [57] style training results in significantly lower local Lipschitz constants [54].
**RQ2: _How do the metrics correlate with the certified radius on a per-sample basis?_** We perform two analyses to visualize the correlation. In Figure 5, we first group examples by their certified radii, then for each group we compute their average metric values and take the logarithm. Across training methods, we see a clear negative correlation between the log metric values and certified radius, which means that examples robust to input noise tend to have lower metric values. However, different methods exhibit different levels of correlation, which is closely related to their average metric value. For example, DP stability on average has a much higher input gradient norm and a much lower local Lipschitz constant than other methods at the same certified radii.
Figure 3: Ablation study for consistency regularization, PSAC, and augmentations.
In Figure 6 we select three certified radius thresholds \(\tau\in\{0.5,1.0,1.5\}\), and separately plot the log of local Lipschitz constants of examples above and below \(\tau\) on the top and bottom rows of the subfigures respectively. Corresponding plots for the other two metrics are visualized in Figure 18 and 19 in Appendix D.3. We can draw similar conclusions to those from Figure 5. First, the examples with certified radii _below_ the threshold have _higher_ average local Lipschitz constant. Second, as we increase the threshold \(\tau\), more examples with higher local Lipschitz constant end up below the certified radius threshold. Since the local Lipschitz constant is derived using a PGD attack, we anticipate that it naturally correlates with the robustness to adversarial examples. For further comparison, we present the FGSM accuracy on MNIST and Fashion-MNIST under the attack strength in \(\{0.0005,0.01,0.1,0.5,1\}\) in Figure 17 in Appendix D.3. Consistent with the ranking of the average local Lipschitz constant, DP-Stability consistently outperforms other approaches, while DP-Gaussian, DP-SmoothAdv and DP-MACER achieve similar adversarial accuracy over different attack margins.
Figure 4: Per-sample metric comparisons on MNIST with \(\sigma=0.5\)
Figure 5: The input Hessian spectral norm (left), input gradient norm (middle), and local Lipschitz constants (right) calculated at different certified radii, on MNIST with \(\sigma=0.5\).
**Summary.** In summary, we find that for models trained with DP guarantees, the local Lipschitz constant of test examples has a closer connection to CR than the gradient norm or the Hessian spectral norm.
## 6 Conclusion
We achieve better certified robustness with DPSGD training through augmentations and randomized smoothing, reconciling two crucial objectives for trustworthy ML, namely privacy and robustness. To overcome the theoretical and practical challenges that arise from the combination of both approaches, we rely on state-of-the-art DP training with augmentations that does not incur additional privacy costs. We employ various regularizations, and adversarial training methods to enhance robustness. Our resulting DP-CERT framework is modular and supports multiple combinations of these methods. Through our extensive experimental study, we confirm that DPSGD training alone, even with state-of-the-art convergence, does not provide satisfactory certified robustness. However, introducing a small number of computationally inexpensive augmentations into training, such as adding Gaussian noise, suffices to yield strong privacy protection and certified robustness. By thoroughly analyzing per-sample metrics, we show that the certified radius correlates with the local Lipschitz constant and smoothness of the loss surface; this opens a new path to diagnosing when private models will
Figure 6: Comparing the distribution of local Lipschitz constants among baselines and proposed methods on MNIST under \(\sigma=0.5\). In each subfigure, the examples are classified into ones with certified radius above the threshold \(\tau\) (top row), and below the threshold (bottom row). We display three thresholds, \(\tau\in\{0.5,1.0,1.5\}\), and show the logarithmic metric values for all methods.
fail to be robust. To conclude, our findings yield concrete recommendations for the community to simultaneously achieve CR and DP, providing a valuable contribution towards more trustworthy ML. When training from scratch, Gaussian augmentations (not adversarial) should be used with DPSGD, and randomized smoothing applied at inference time. For fine-tuning pretrained models, adding stability regularization also helps accuracy, and leads to much lower local Lipschitz constants.
Acknowledgments.DG, FB, and NP would like to acknowledge sponsors who support their research with financial and in-kind contributions: CIFAR through the Canada CIFAR AI Chair, NSERC through a Discovery Grant, the Ontario Early Researcher Award, and the Sloan Foundation. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
|
2306.07565 | deController: A Web3 Native Cyberspace Infrastructure Perspective | Web3 brings an emerging outlook for the value of decentralization, boosting
the decentralized infrastructure. People can benefit from Web3, facilitated by
the advances in distributed ledger technology, to read, write and own web
content, services and applications more freely without revealing their real
identities. Although the features and merits of Web3 have been widely
discussed, the network architecture of Web3 and how to achieve complete
decentralization considering law compliance in Web3 are still unclear. Here, we
propose a perspective of Web3 architecture, deController, consisting of
underlay and overlay network as Web3 infrastructures to underpin services and
applications. The functions of underlay and overlay and their interactions are
illustrated. Meanwhile, the security and privacy of Web3 are analyzed based on
a novel design of three-tier identities cooperating with deController.
Furthermore, the impacts of laws on privacy and cyber sovereignty to achieve
Web3 are discussed. | Hao Xu, Yunqing Sun, Zihao Li, Yao Sun, Lei Zhang, Xiaoshuai Zhang | 2023-06-13T06:28:40Z | http://arxiv.org/abs/2306.07565v1 | # deController: A Web3 Native Cyberspace Infrastructure Perspective
###### Abstract
Web3 brings an emerging outlook for the value of decentralization, boosting the decentralized infrastructure. People can benefit from Web3, facilitated by the advances in distributed ledger technology, to read, write and own web content, services and applications more freely without revealing their real identities. Although the features and merits of Web3 have been widely discussed, the network architecture of Web3 and how to achieve complete decentralization considering law compliance in Web3 are still unclear. Here, we propose a perspective of Web3 architecture, deController, consisting of underlay and overlay network as Web3 infrastructures to underpin services and applications. The functions of underlay and overlay and their interactions are illustrated. Meanwhile, the security and privacy of Web3 are analyzed based on a novel design of three-tier identities cooperating with deController. Furthermore, the impacts of laws on privacy and cyber sovereignty to achieve Web3 are discussed.
Web3 architecture, overlay and underlay, decentralized infrastructure, blockchain, DAO
## I Introduction
Web3, an emerging term for the decentralized world-wide-web (WWW) based on distributed ledger technology (DLT) and the crypto economy, has been foreseen as a driving factor for the next generation of the Internet. Web3 is seen as a catalyst for the future Internet to provide content, services, and applications for users without centralized servers. Since the introduction of blockchain by Bitcoin in 2008, decentralized networks have been on an unprecedented journey and have been thriving for more than a decade. With the advances of blockchain, cryptocurrencies and decentralized autonomous organizations (DAO) have pushed the world to embrace the value of decentralization and to deconstruct the well-established centralized WWW ecosystems with decentralized governance and underlay and overlay network infrastructures, as shown in Fig. 1 and detailed in the following sections.
Currently, Web3 is approaching the moment for inclusive top-down solutions and fertile ground for its growth in industrial, commercial and public networks without the involvement of any centralized party, solidifying the lifeline of Web3 value and consensus. However, such a top-down Web3 architecture has not yet been sculpted with comprehensive consideration of its challenges or of the interactions among network infrastructure, DLT, security and privacy, judicature, etc.
### _Challenge and opportunity_
While Web3 promises a great boost to the security, privacy and cyber sovereignty of user data (cyber sovereignty refers to the cyber boundary established by a country or region for exercising national control and implementing specific legislation), the challenges faced in achieving Web3, as well as the opportunities, are significant.
#### I-A1 Web3 is running on centralized things!
"Read, write and own" endorses the fundamental value in Web3; however, if the access to the space of Web3 is denied, ownership means little or nothing to the owner who is blocked from accessing the WWW. Meanwhile, the value of privacy offered by Web3 becomes void if the user can be tracked at the beginning and the end of Internet access. It is necessary to ensure the user will never be unplugged from the network or illegally tracked due to centralization causes. Most importantly, Web3 shall secure itself from running the whole network on the infrastructure offered by centralized resource controllers.
Another distinct challenge is the authentication in access control of Web3 because all identities in the decentralized network are anonymous, i.e., the authentication should not
Fig. 1: Our contribution to the Web3 network architecture.
reveal any personal information of Web3 users. However, the authentication information of users is known by the central controller in the centralized model. Therefore, the architecture to achieve anonymous authentication for decentralized Web3 should be further investigated.
#### I-A2 Opportunities
Since the current web structure is highly centralized, Web3 could facilitate the shift from a centralized Internet to a decentralized Internet based on DLT, distributed networks, NEAT (Network Encrypted Address Translation, detailed later), etc. Such a self-governance evolution may enable people to access and own Internet resources more freely and equally, boosting investments in Web3 network infrastructure and ownership of the actual Web3 network. Privacy is also an opportunity, as anonymity may challenge legislation and jurisdiction. By the nature of Web3, anonymous identities can protect users' real identities and help them avoid censorship when they are involved in various activities and applications. It is inspiring to enable a fully private and connected universe for all via encrypted addresses, a.k.a. BCADDs, running on top of a decentralized and encrypted infrastructure. In this case, anyone who onboards the Web3 network can have permissionless access to the Web3 infrastructure.
Apart from technological innovations, Web3 has the potential to provide new opportunities for the legal governance of cyberspace due to its privacy-driven design. The core privacy issue in Web 1.0/2.0 is centralized services, since service providers may exploit the surplus of online content creators without permission, infringe users' privacy and data protection rights, and even act as unsupervised police. Web3, with native decentralization and encryption embedded, offers users more control over their personal data and privacy in Internet access, where they retain the autonomy to make their own choices; this is essentially aligned with the objectives of the GDPR (General Data Protection Regulation) in the EU.
### _Motivation and contribution_
There will be emerging scenarios that rely on decentralization as their core value. Hence, it is necessary to prepare the existing network, security and privacy infrastructure to embrace the world of decentralization, meaning the infrastructure as a whole needs to stand on the value of decentralization rather than retain an unavoidable connection with centralization. Therefore, we propose deController, an enabling decentralized infrastructure controller for Web3-native infrastructure.
This paper contributes to Web3 in three aspects: (a). the Web3 network architecture with the detailed description of deController consisting of the overlay and underlay network; (b). the security, privacy and identity in a fully decentralized manner; (c). the operational principles regarding law and governance for Web3 infrastructure as shown in Fig. 1.
## II Web3 Outlook in network and services
Compared with the centralized network, such a decentralized network structure brings different considerations in Web3 such as where the data are stored, how to ensure the data validity, etc. On the other hand, existing peer-to-peer routing and network protocols, such as Chord and Distributed Hash Table (DHT) can enable overlay connectivity.
### _Web3 architecture overview_
The network architecture of Web3 is depicted in Fig. 2. Compared with the network architecture of Web 1.0/2.0 using a centralized web server to provide web services as shown in the left of Fig. 2, the Web3 server runs in a more decentralized manner. Specifically, the Web3 server only provides frontends of services while data storage and backends of applications are provided in a distributed manner. Users can access an application via the blockchain address of the corresponding smart contracts, in which the application backend is contained. Blockchain addresses can be routed by the Web3 network in accessing the application. The data content of users and applications (images, voice, videos, etc.) may be stored in a distributed storage to avoid data corruption or loss. Lawful agreements on access control policies can be applied to user data stored by service providers.
To protect the real identities of users, a 3-tier identity architecture is proposed in Section II-B to avoid personal information leakage and identity tracing. In addition, the data of identity mapping to the network and transactions between users and applications, such as payments and records of purchased items/services, can be recorded by DLT in public ledgers as they are small data compared to the content data. Such records can only be written into public ledgers after being verified by consensus mechanisms in the Web3 network, so the records are transparent, undeniable and immutable. Therefore, users and service providers cannot forge records or distort
Fig. 2: Architecture overviews of Web3 and Web 1.0/2.0.
the existing records. Even if applications are shut down by service providers, users' assets in applications are kept in public ledgers, where users can access their assets seamlessly at their own discretion. Such a feature is difficult to be natively supported by applications in Web 1.0/2.0 since user data is fully controlled by service providers in centralized servers.
### _Overlay and underlay decentralization of Web3 network_
Decentralizing the network has never been easier with the help of blockchain. Regardless of the consensus type, each blockchain full node operates a full stack of networking and servicing protocols, making such nodes a perfect nexus for the decentralized network. In fact, the existing blockchain delivery network is a suitable basis for the Web3 overlay network, as shown in Fig. 1. The underlay can be regarded as the common 5-layer computer network that provides physical network connections for the overlay. NEAT is used to resolve the association of a BCADD with any network device identifiers, network ports and domain names. By linking the BCADD to specific identifiers, deController is able to look up the BCADD globally and establish the overlay network on top of any given underlay network. As the underlay network is used by the blockchain delivery network, it will grow in the interest of decentralization, hence becoming a decentralized underlay network operated under the principle of fully decentralized infrastructure, which is illustrated in detail later in Section III.
In the Web3 context, the role of underlay network nodes overlaps with the blockchain nodes in the overlay owned by different stakeholders such as companies and organizations. These nodes also play the pivotal role of supplying computing power and networking capacity of the blockchain network. In fact, blockchain nodes can also provide the necessary overlay tunneling and routing capabilities, hence becoming the pillar of the Web3 overlay network.
#### Ii-B1 Decentralized Applications and Services
Web3 features the owner economy, which boosts decentralized applications (dApps). A dApp is smart-contract-powered autonomous code running on decentralized networks. Once the code is deployed on the blockchain, it becomes a public asset for any entity within the network. However, a dApp only works as an agent passing value between users; it cannot by itself offer demanding services, e.g., video streaming, chat rooms or online gaming. To enrich the Web3 ecosystem, service providers can use dApps to securely provide services to users using encrypted identities and exchange tokens, hence becoming decentralized service providers.
#### Ii-B2 Decentralized Network Infrastructure
In the scope of network infrastructure, the aforementioned Web3 network architecture is logically divided into two layers, the underlay and the overlay, as shown in Fig. 1. Similar to the traditional network, the underlay in Web3 architecture can be divided into multiple segments, which are later tagged by the overlay blockchain node with the optimal topological resolution. The entities within each segment perform particular network functions in a decentralized way, which is critically different from traditional networks. As mentioned, two decentralization manners, P2P (Peer-to-Peer) and DAO2DAO [1] (federated), can be exploited in underlay depending on the function performed. For example, multiple computing servers organized by a DAO in the edge network segment can provide route optimization service for the Web3 overlay network, collaborating with different DAOs in a decentralized manner, while the entities' data flow can be organized in a P2P manner that matches an optimal route offered by the Web3 overlay network, in order to manage the packet delivery for users.
The overlay is built above the underlay to control and manage this decentralized network in the Web3 architecture [2]. Generally, the main entity in the overlay is the controller in charge of all the network management functions including authentication (identity and access), data packet routing, computing resource allocation, etc. These functions will be elaborated in Section IV.
#### Ii-B3 Integration: An identity prospect of view
Since Web3 aims for a decentralized network where users can control their data and the revocation or reservation of their identities, most user identities are self-sovereign identities (self-sovereign refers to empowering users to control their own identity information in cyberspace) rather than centralized or federated identities. However, a hierarchical and decentralized identity management infrastructure is necessary to construct a uniform identity authentication scheme that crosses different worlds and domains.
A 3-tier identity management scheme is proposed in Fig. 3 to bridge real identities to virtual identities from the perspective of users and services. A real user identity can be linked to several virtual user identities to represent the user in Web3 networks. Meanwhile, a virtual user identity can derive the identities of multiple applications and services, since a user may operate different applications and services. Therefore, a user's identity in the real world is regarded as the first-level identity, named RealID. RealID is confidential and never revealed in Web3. The second-level identity is the address of the user's wallet, called BCADD [2]. BCADD is derived from RealID locally by a one-way function and used by network operators. The third-level identity is regarded as the application ID, which is used as the identity in different services and named APPID. APPID is derived from the current BCADD together with properties of the service by a one-way function or a verifiable random function (VRF) [3]. The APPIDs are the self-sovereign identities in the applications, end-to-end routing, and services of Web3, but both BCADDs and APPIDs can hardly be traced without the parameters of the one-way function. Since the overlay network hosted by blockchain networks can look up every BCADD in a global view and route all traffic between them, direct connections between two encrypted identities can be established. Hence, the user can use a unified address authentication based on the public-key identity.
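To make the derivation chain concrete, the following is a minimal sketch of the RealID-to-BCADD-to-APPID hierarchy, assuming SHA-256 as a stand-in one-way function; the salt, the example identifiers and the service strings are illustrative assumptions for this sketch, not part of the proposed scheme.

```python
import hashlib
import os

def one_way(data: bytes) -> str:
    # Illustrative one-way function; the scheme does not fix the exact primitive.
    return hashlib.sha256(data).hexdigest()

def derive_bcadd(real_id: str, salt: bytes) -> str:
    # BCADD is derived locally from RealID, so RealID itself never appears in Web3.
    return one_way(real_id.encode() + salt)

def derive_appid(bcadd: str, service_props: str) -> str:
    # APPID is derived from the current BCADD together with properties of the service.
    return one_way((bcadd + "|" + service_props).encode())

salt = os.urandom(16)  # kept by the user; without it the BCADD cannot be traced back to RealID
bcadd = derive_bcadd("alice-national-id-0001", salt)            # hypothetical RealID
appid_stream = derive_appid(bcadd, "video-streaming-service")   # hypothetical services
appid_chat = derive_appid(bcadd, "chat-service")
print(bcadd[:12], appid_stream[:12], appid_chat[:12])
```

Updating the salt (or the wallet) regenerates the BCADD, and every APPID derived from it changes accordingly, which matches the periodic identity refresh described below.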
As decentralization, anonymity and privacy are prominent factors of Web3, where public keys are used as identities, CAs (certificate authorities) are not required for the authenticity endorsement of identities and ownership of public keys. However, the use of public-key-based identities poses a security threat to the regulatory management of citizen networks, as they are fully anonymous and self-issued. It is a challenge to obtain the real identity of users without knowing a prior association between the identity and the public-key-based address. Therefore, the regulator should require mandatory registration of active public-key-based identities to comply with regulation. On the other hand, lawful interception can also be implemented in the deController through steering and duplication of traffic.
Here, we first define the visibility of the three layers' identities, followed by their security levels. RealID is only held by the user and registered at the regulatory body where necessary. BCADD is public to the cryptocurrency system and other authorized infrastructure, including the Mobile Network Operator (MNO) and Internet Service Providers (ISPs). When the BCADD is derived from the RealID, the user can decide to involve more information in the BCADD using VRF and zero-knowledge proof (ZKP) [4] via the regulatory body. As required in Web3, personal information should not be revealed in the network. However, when any information is needed by the network or applications, the regulatory body can apply a zero-knowledge proof to the user's registered information and publish a proof to the network. In this way, the user can prove to the Web3 application that it has the required information or attribute. For example, a user can state that its age is over 18 without revealing the actual age, using ZKP and VRF in statements linked to the BCADD. ZKP and VRF enable service providers to verify the authenticity of the statement using the BCADD. In addition, APPID is visible to any service provider on Web3. To resist tracking attacks and protect users' contextual privacy, the identity information should be updated within a user-defined privacy time slot. To be consistent with regular updates of wallet addresses, once the BCADD is updated, the APPID should also be updated at the service provider. In this case, the APPID of the same user may be linked to different wallets, which could impair service consistency and cause interruptions.
The 3-tier identity hierarchy in Fig. 3 ensures that different identities are used in different domains with different security levels. BCADD should be known to the operator to determine whether a subscription for network access is valid. The APPID is used as both the network interface indicator and the service authentication account. Since APPID is derived from BCADD, the authentication of APPID by the AAA (Authentication, Authorization, and Accounting) server can be done over the distributed network provider once the routers and RAN (Radio Access Network) authenticate and trust the BCADD. The detailed architecture of BCADD-based network access and APPID-based application service with security and privacy authentication is shown in Fig. 4 and described as follows.
### _Security and privacy_
In the Web3 infrastructure, all blockchain nodes and user identities should be registered and updated on the blockchain platform by sending a bootstrapped transaction. The transaction should include the blockchain node's BCADD to support network services, or the APPID together with access control information to support application services. The access control information may include a Non-Fungible Token (NFT) or other legacy server addresses. NFTs have been recognized as unique identifiers of key digital assets, so the possession of certain assets represents an access privilege in the form of ownership, in the manner of attribute-based access control. This adoption of the ownership concept can be migrated into access control, where ownership represents the access privilege. After registration, once a blockchain node requires a service from a third-party application server (AS), it will initiate a decentralized mutual authentication [2] of APPID between them. To be compatible with the protocol in [2], we let all routers check the APPIDs directly and pass the packets transparently to continue the authentication between users and the AS. After the authentication procedures are finished, the AS will check the authenticated APPID's corresponding authority by searching the access control information on the blockchain platform. When the APPIDs are renewed together with the BCADD, users can decide to keep service consistency by notifying the new APPID in the previous session or the old APPID in the new session. The checking procedure executed by the first router is as follows. Authentication between the user and the communications network is first performed to authenticate the BCADD and support communications. By the derivation relation between BCADD and APPID, the APPID
Fig. 4: Security and Privacy architecture.
Fig. 3: Three tiers of identity.
can be verified and authenticated given the BCADD. Once the user can prove to the router that the APPID's holder has a valid BCADD and has initiated a valid transaction for this session, without revealing any further information, the first router will forward the message to the destination.
The traditional AAA server still exists in Web3 in case decentralized authentication is incompatible with a third-party AS or user. A legacy registered AAA server can run the Generic Bootstrapping Architecture (GBA) protocol [5] to generate a secure channel between the user and the AS. The legacy AAA server should also register on the blockchain platform to be compatible with the blockchain network infrastructure, as shown in Fig. 4. The blockchain in the Web3 network can be regarded as a random oracle executing computation under public supervision. Meanwhile, new privacy-preserving techniques, such as publicly verifiable ZKP, can be introduced and implemented on the blockchain platform to provide Web3 with transparent and regulated privacy protection.
## III DAO for decentralized communication infrastructure: Reshaping the underlay network
As aforementioned, decentralization is identified as the core interest of Web3, led by blockchain (DLT), dApps, DeFi, and DAOs. Decentralization communities have become the beneficiaries of decentralized networks, despite the fact that the whole current network is built on top of centralized communication infrastructures. However, dApps cannot be considered fully decentralized with their roots in centralized infrastructures. Therefore, there is a requirement that the underlay of the whole decentralized network, namely the communication infrastructure operators, become decentralized.
### _Motivation of DAO-based infrastructure operator_
With the requirement of full decentralization, DAOs have the potential to fully decentralize the infrastructure operator. Unlike traditional telecommunication business entities (e.g., state-owned, private-owned, public-limited, and limited liability companies), a DAO-based infrastructure operator has a decentralized structure in essence. Firstly, the DAO-based infrastructure operator can flatten out the entire corporate management structure. There is no centralized management role that really controls the organization. Instead, vital decisions can be proposed and made by every member of the organization, namely, the DAO stakeholders.
Secondly, the organization's rules are encoded using smart contract technology in a permissionless blockchain. Unlike traditional organizations, DAOs do not have to maintain complex and costly administrative departments. DAOs also make it virtually impossible to commit fraud, since every transaction is open to public and consortium scrutiny. Another feature of a DAO is that decisions are executed automatically via votes on the blockchain using smart contracts, which are transparent and non-repudiable. Once a proposal has been successfully voted upon, the change occurs automatically without the need for further human involvement.
DAOs represent a radical rethink of how infrastructure can be structured and operated, including changes in ownership, governance, decision-making and profit distribution. Decentralized infrastructure operators can not only inspire investment in Web3 infrastructures, but also reshape legal consortia through the use of smart contracts, as shown in Fig. 1. With the demand for full decentralization, DAOs could extend to telecommunication infrastructure operators [6], operating all underlay and overlay network nodes with their own natural resources, such as spectrum, computing resources and energy. Furthermore, DAOs are always motivated to add more value to the content and services they create in Web3. However, the value based on decentralization and consensus cannot be secured if the underlay network and storage are built upon centralized infrastructure. Therefore, another major motivation for DAOs to invest in decentralized infrastructures is to protect their key assets in Web3, while making communal profits from serving Web3 users in the future. Although DAO-based infrastructure operators have many advantages, one of the biggest challenges is the risk of legal non-compliance with cyber sovereignty and data protection law when infrastructure operators become decentralized and multinational.
### _A legal view on decentralized infrastructure of Web3_
As illustrated in Fig. 4, the decentralized underlay of Web3 significantly impacts law, privacy and cyber sovereignty. Although the decentralized infrastructure has the potential to address the cybersecurity and sovereignty risks associated with cross-border data flows, there are still some potential frictions between the decentralized underlay and the current legal system.
Firstly, full anonymization is still difficult to achieve, since operators or governments may retrieve personal data by combining data from network activities even when the 3-tier identity is applied. However, the possibility of recovery is also necessitated by the government's legitimate surveillance requirements, and it can be a tool for cyberspace regulation. In such a scenario, the government should define anonymization clearly in data protection laws [7] and obey the purpose limitation principle through complementary legislation to mitigate risks.
The second potential friction is that users can actually own part of the Web3 network and contribute to it under the decentralized infrastructure. However, they may thereby make themselves "network operators" or "data processors" within the meaning of cybersecurity or data protection law (e.g., the GDPR). Thus, they theoretically have to bear the corresponding legal responsibilities for data protection. Such a design does not fully consider the challenges posed by decentralization and the decentralized infrastructure. Therefore, it leads to a critical reflection on regulatory philosophy in this decentralized, privacy-friendly architecture, which requires a new legal paradigm of cyberspace regulation. Government-led regulatory impact sandboxes could act as stabilizers to calibrate law and technology for industry compliance, maximizing the compatibility of Web3 with existing legal systems and regulatory regimes.
Moreover, the decentralized infrastructure may result in data flowing to different jurisdictions, as indicated in Fig. 4, creating jurisdictional conflicts and jeopardizing national cybersecurity and sovereignty. The legal consortium introduced in Fig. 1 may enable different jurisdictions to reach a consensus, via smart contracts, on issues of judicial jurisdiction. Therefore, it may eliminate unauthorized cross-border movements of data and ensure national cybersecurity and sovereignty.
## IV The Web3 deController for Network infrastructure: Reshaping the overlay network
When a user accesses a Web3 application, as shown in Fig. 5, the overlay of deController routes the encrypted application address to the corresponding smart contract deployed by the service provider. Then, a link from the user to the application can be established via the underlay of deController. After that, the user can authenticate the application and then use the smart contract to access the application via the BCADD. The overlay network offers ultimate connectivity to decentralized users and services. However, it is still a challenge to bridge decentralized services to the underlay physical network in a decentralized manner. In the following, we present our deController, the nexus between the decentralized overlay and the universal network underlay.
### _Identity and access with decentralized identity manager_
One fundamental function of decentralized controllers is authentication, including identity and service access. As discussed in Fig. 1 and Fig. 3, we introduce a hierarchical and decentralized identity management infrastructure, where RealID, BCADD and APPID are used to achieve authentication in a privacy-preserving way.
With these BCADDs, one critical innovation of the Web3 architecture is to enable access with BCADDs, which should be achieved in the overlay network with the help of an embedded deController, shown in Fig. 5. Users access decentralized services starting from the bottom to the top. The left part contains network functions. Meanwhile, the smart contracts are shown on the right for user mobility and identity association updates, with minor status changes recorded on the blockchain. Therefore, the blockchain is intended for small data such as identity associations and topological updates. Specifically, the controller acts as the agent for entities to interact with the blockchain and relays the information to entities that may not support blockchain access, thus building the encrypted tunnel between two entities. Hence, the native interpretation of encrypted identities can significantly improve the security, integrity and scalability of Web3 services while pushing the boundary of decentralization towards communication infrastructures.
### _Network and application integration: entity discovery_
Another key function that the deController in the overlay network should perform is the network segment routing for data delivery. In the decentralized architecture, there is no central controller to determine and update the routing table for the whole network. Therefore, deControllers determine the routing for users without using conventional network addresses. In our proposed overlay network, shown in Fig. 5, deControllers can rely on the blockchain network to perform routing optimization. Specifically, each access node is identified by its BCADD or APPID, and the serving blockchain access points can be bound to addresses with the topological information. Hence, finding a data transmission path for two users is equivalent to finding a path between the two associated blockchain nodes. Thereby, a logical tunnel between two users is established with users' BCADD or APPID, and further encrypted by the keys exchanged between two blockchain access points. Furthermore, the blockchain network can be mapped into multiple segments of the network, and each blockchain segment represents the overlay access point of the nearby network. The global routing topology will be collected from all blockchain routing nodes to find the routing path among different segments.
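As a toy illustration of this step, assume each BCADD is bound to a serving overlay segment and that segments form a known graph; the breadth-first search below stands in for whatever route optimization the DAO-operated nodes actually run, and all identifiers are hypothetical.

```python
from collections import deque

# Hypothetical topology: overlay segments and the segment serving each BCADD.
segment_links = {"segA": ["segB"], "segB": ["segA", "segC"], "segC": ["segB"]}
serving_segment = {"bcadd_u1": "segA", "bcadd_u2": "segC"}

def segment_path(src_bcadd, dst_bcadd):
    # BFS over segments: a path between users is a path between their serving blockchain nodes.
    start, goal = serving_segment[src_bcadd], serving_segment[dst_bcadd]
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in segment_links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(segment_path("bcadd_u1", "bcadd_u2"))  # ['segA', 'segB', 'segC']
```

The point of the sketch is only that, once encrypted identities resolve to serving blockchain nodes, routing reduces to an ordinary graph search over segments.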
### _Identity association with encrypted address translation_
Ledger records contain the information needed for routing and switching, which essentially consists of the self-claimed identities from clients and the bindings of their current addresses. Together they make up the identity registry and association services offered by deController in Fig. 5. In the case of switching, the local record utilizes the network interface bound to the entity's BCADD. With the BCADD as the pointer, the endpoint router can perform NEAT (an address lookup protocol based upon a hash table and a Bloom filter) to steer the traffic between any entities tagged with the BCADD and the connected interfaces.
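The text specifies NEAT only as a lookup built on a hash table and a Bloom filter; under that assumption, a minimal sketch of such an encrypted-address-to-interface translation (with illustrative identifiers, not the actual protocol) could look as follows.

```python
import hashlib

class NeatTable:
    """Toy BCADD-to-interface lookup: a Bloom filter screens misses cheaply,
    and a hash table stores the actual identity/interface bindings."""

    def __init__(self, m=1024, k=3):
        self.bits = [False] * m
        self.m, self.k = m, k
        self.bindings = {}  # BCADD -> network interface

    def _positions(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def bind(self, bcadd, interface):
        for pos in self._positions(bcadd):
            self.bits[pos] = True
        self.bindings[bcadd] = interface

    def lookup(self, bcadd):
        if not all(self.bits[pos] for pos in self._positions(bcadd)):
            return None  # Bloom filter says: definitely not bound here
        return self.bindings.get(bcadd)

table = NeatTable()
table.bind("bcadd_u1", "eth0")
print(table.lookup("bcadd_u1"), table.lookup("bcadd_unknown"))
```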
### _Decentralized services sessions_
As each entity can be identified by a BCADD, per-session routing for each service entity can also be considered, while the traffic can be steered using the BCADDs, as indicated at the top of Fig. 5. During per-session routing, mutual authentications are performed in every handshake between two encrypted identities via the required secure socket layers. Meanwhile, the subsequent service status is updated by the identity manager, which keeps track of the service quality, liveness and, most importantly, the service identity.
Fig. 5: deController architecture.
## V Conclusion
In this paper, we propose deController, a perspective on the Web3 architecture for future decentralized Web3 infrastructures, consisting of an overlay and an underlay to catalyze more free and fair web access for people. The functions of deController are illustrated in a top-down view of the Web3 architecture, with considerations of concealed identity, security and privacy, and law. Web3 shall enable not only the decentralization of giant Internet companies, but also decentralization away from the de-facto centralized infrastructure controllers. The solution proposed in this paradigm can be a potential starting point for real Web3 infrastructure investment, which allows true ownership of Web3 beyond the content.
|
2306.09548 | Online Heavy-tailed Change-point detection | We study algorithms for online change-point detection (OCPD), where samples
that are potentially heavy-tailed, are presented one at a time and a change in
the underlying mean must be detected as early as possible. We present an
algorithm based on clipped Stochastic Gradient Descent (SGD), that works even
if we only assume that the second moment of the data generating process is
bounded. We derive guarantees on worst-case, finite-sample false-positive rate
(FPR) over the family of all distributions with bounded second moment. Thus,
our method is the first OCPD algorithm that guarantees finite-sample FPR, even
if the data is high dimensional and the underlying distributions are
heavy-tailed. The technical contribution of our paper is to show that
clipped-SGD can estimate the mean of a random vector and simultaneously provide
confidence bounds at all confidence values. We combine this robust estimate
with a union bound argument and construct a sequential change-point algorithm
with finite-sample FPR guarantees. We show empirically that our algorithm works
well in a variety of situations, whether the underlying data are heavy-tailed,
light-tailed, high dimensional or discrete. No other algorithm achieves bounded
FPR theoretically or empirically, over all settings we study simultaneously. | Abishek Sankararaman, Balakrishnan, Narayanaswamy | 2023-06-15T23:39:05Z | http://arxiv.org/abs/2306.09548v2 | # Online Heavy-tailed Change-point detection
###### Abstract
We study algorithms for online change-point detection (OCPD), where samples that are potentially heavy-tailed, are presented one at a time and a change in the underlying mean must be detected as early as possible. We present an algorithm based on clipped Stochastic Gradient Descent (SGD), that works even if we only assume that the second moment of the data generating process is bounded. We derive guarantees on worst-case, finite-sample false-positive rate (FPR) over the family of all distributions with bounded second moment. Thus, our method is the first OCPD algorithm that guarantees finite-sample FPR, even if the data is high dimensional and the underlying distributions are heavy-tailed. The technical contribution of our paper is to show that clipped-SGD can estimate the mean of a random vector and simultaneously provide confidence bounds at all confidence values. We combine this robust estimate with a union bound argument and construct a sequential change-point algorithm with finite-sample FPR guarantees. We show empirically that our algorithm works well in a variety of situations, whether the underlying data are heavy-tailed, light-tailed, high dimensional or discrete. No other algorithm achieves bounded FPR theoretically or empirically, over all settings we study simultaneously.
## 1 Introduction
Online change-point detection (OCPD) is a fundamental problem in statistics where instantiations of a random variable are presented one after another and we want to detect if some parameter or statistic corresponding to the underlying data generating distribution has changed. This problem has been widely studied in machine learning, mathematical statistics and information theory over the past century. In part, this is due to the wide-ranging applications of OCPD to computational biology (Muggeo and Adelfio, 2011), online advertising (Zhang et al., 2017), cyber-security (Osanaiye et al., 2016; Kurt et al., 2018; Polunchenko et al., 2012), cloud-computing (Maghakian et al., 2019), finance (Lavielle and Teyssiere, 2007), medical diagnostics (Yang et al., 2006; Gao et al., 2018) and robotics (Konidaris et al., 2010). We refer interested readers to the recent surveys of (Aminikhanghahi and Cook, 2017) and (Xie et al., 2021) for details of applications of OCPD. These surveys build upon the classical texts in change-point detection obtained over the last decade (Basseville et al., 1993; Tartakovsky, 1991; Krichevsky and Trofimov, 1981).
Classical results for OCPD have focused on algorithms that assume known distributions for either one or both of the pre- and post-change data (Wald, 1992; Page, 1954; Shiryaev, 2007; Lorden, 1971; Pollak, 1985; Ritov, 1990; Moustakides, 1986; Tartakovsky, 1991). In recent years, algorithms have been developed for cases when the pre- and post- change distributions are unknown, but belong to a parametric class such as the exponential family (Lai and Xing, 2010; Fryzlewicz, 2014; Frick et al., 2014; Cho, 2016). Nonparametric algorithms have been developed in (Padilla et al., 2021; Madrid Padilla et al., 2021) and the references therein, but they only give asymptotic guarantees. The algorithms of (Adams and MacKay, 2007; Lai and Xing, 2010; Maillard, 2019; Alami et al., 2020) have finite-sample guarantees, but either rely on parametric assumptions such as an exponential family, or on tail assumptions such as sub-gaussian distribution families. The works of (Bhatt et al., 2022) and (Li and Yu, 2021) build upon the work in (Niu and Zhang, 2012), and give algorithms for multiple change-points with possibly heavy-tailed data in the _offline_ case with all data available up-front. The works of (Wang and Ramdas, 2022; Shekhar and Ramdas, 2023; Wang and Ramdas, 2022) give OCPD algorithms
for heavy-tailed, but uni-variate data.
In many modern applications such as cloud-computing and monitoring, data is known to often be heavy-tailed [12, 13, 14] and too complex to model with any simple parametric family [1, 11, 10]. Given the velocity, variety and volume of modern data streams, performance of change-point detection is measured through false-positive rates in order to combat alert fatigue [13], and algorithms must work for streams that have multiple change points. Motivated by these requirements, we seek an OCPD algorithm that simultaneously meet the following desiderata : it _(i)_ detects multiple change-points, _(ii)_ makes no parametric assumptions on the distribution of data, _(iii)_ works with potentially heavy-tailed data, _(iv)_ works for high-dimensional data streams, and _(v)_ guarantees finite sample FPR.
### Main Contributions
Our paper is the first to give an online algorithm satisfying all the \(5\) desiderata listed above. Specifically, our algorithm gives finite sample guarantees for FPR and detection-delay without assuming that data comes from a specific parametric family or assuming strong tail conditions, such as that the data have sub-gaussian distributions. No previous algorithm for OCPD simultaneously achieves all desiderata. Our main technical contribution is to provide a clipped-SGD algorithm with finite sample confidence bounds for heavy-tailed mean estimation _that hold for all confidence values simultaneously_, a result of independent interest. We use these bounds to build an OCPD algorithm with finite sample FPR.
We further show good empirical performance across a variety of data streams with heavy-tailed, light-tailed, high dimensional or discrete distributions. However, while our algorithm is designed to work across different distributions, we observe theoretically and empirically that when the data has additional structure, such as being one-dimensional with sub-gaussian tails or binary, specialized OCPD algorithms for those cases yield better results than our method. Closing these gaps is an ongoing direction of research.
## 2 Problem Setup
At each time \(t\), a random vector \(X_{t}\in\mathbb{R}^{d}\) is revealed to an OCPD algorithm. \(X_{t}\) has a probability measure and expectation denoted by \(\mathbb{P}_{t}\) and \(\mathbb{E}_{t}\) respectively, and mean \(\mathbb{E}_{t}[X_{t}]\in\mathbb{R}^{d}\). Subsequently, using all the samples observed so far, \(X_{1},\cdots,X_{t}\), the algorithm outputs a binary decision denoting whether a change in mean has occurred since time \(t=1\) or since the last time a change was output by the algorithm, whichever is later. The goal of the OCPD algorithm is to identify the change points as quickly as possible after they occur, with bounded false-positive rate (FPR). The observed data \((X_{t})_{t\geq 1}\) are independent, although not identically distributed, with a piece-wise constant mean.
**Definition 2.1** (Piece-wise constant mean process).: Let \(T\) be the time horizon (stream-length) and let \(Q_{T}<T\) be the total number of change-points. A set of strictly increasing time-points \(1<\tau_{1}<\tau_{2}\cdots<\tau_{Q_{T}+1}:=T+1\) are called change-points, if for all \(c\in\{1,\cdots,Q_{T}\}\)
* \(\forall t\in[1,T]\), \(X_{t}\sim\mathbb{P}_{t}\) independently.
* \(\forall t\in[\tau_{c},\tau_{c+1})\), the mean \(\mathbb{E}_{t}[X_{t}]:=\theta_{c}\) of the observation is constant and does not depend on \(t\).
* \(\forall c\in[1,Q_{T}]\), \(\theta_{c}\neq\theta_{c+1}\).
Thus, a piece-wise constant mean process is identified by the quadruple \(\mathfrak{M}:=(T,Q_{T},(\tau_{c})_{c=1}^{Q_{T}},(\mathbb{P}_{t})_{t=1}^{T})\). Throughout, we use probability and expectation operators \(\mathbb{P}\) and \(\mathbb{E}\), to denote the joint product probability distribution \((\mathbb{P}_{t})_{t=1}^{T}\).
### Assumptions
Let \(\mathcal{P}\) be a family of probability measures on \(\mathbb{R}^{d}\) such that the probability distributions \(\mathbb{P}_{t}\), for all \(t\), are from this family, i.e., \(\mathbb{P}_{t}\in\mathcal{P}\), \(\forall t\in[1,T]\). Throughout this paper, we make the following non-parametric assumptions on the family \(\mathcal{P}\).
**Assumption 2.2**.: There exists a convex compact set \(\Theta\subset\mathbb{R}^{d}\) known to the algorithm, such that for all \(\mathbb{P}\in\mathcal{P}\), \(\mathbb{E}_{X\in\mathbb{P}}[X]\in\Theta\). In words, the mean of all the distributions in the family belong to a known bounded set \(\Theta\) such that \(\max_{\theta_{1},\theta_{2}\in\Theta}\|\theta_{1}-\theta_{2}\|:=G\).
**Assumption 2.3**.: There exists \(\sigma>0\) known to the algorithm, such that for all \(\mathbb{P}\in\mathcal{P}\) and \(\theta\in\Theta\), \(\mathbb{E}_{X\sim\mathbb{P}}[\|X-\mathbb{E}_{X\sim\mathbb{P}}[X]\|_{2}^{2}] \leq\sigma^{2}\). In words, the second moment is uniformly bounded for all distributions in \(\mathcal{P}\).
These assumptions are very general and encompass a wide range of families such as any bounded distribution, the set of sub-Gaussian distributions and heavy-tailed distributions that do not have finite higher moments. We seek algorithms that work without knowing the length of the data stream, the number of change-points and that do not make any assumptions on the underlying distributions generating the samples, beyond Assumptions 2.2 and 2.3.
### Performance Measures
Any OCPD algorithm is measured by two performance metrics - _(i)_ False-positive rate and _(ii)_ Detection delay. We set notation to define these measures.
_Notation 2.4_.: For every \(1\leq r\leq s<T\), we denote by \(X_{r:s}:=(X_{r},X_{r+1},\cdots,X_{s})\) to be the set of observed vectors from time \(r\) to time \(s\), with both end-points \(r\) and \(s\) inclusive.
**Definition 2.5** (OCPD algorithm).: A sequence of measurable functions \(\mathcal{A}:=(\mathcal{A}_{t})_{t\geq 1}\) is called an OCPD algorithm if for every time \(t\geq 1\), \(\mathcal{A}_{t}\in\{0,1\}\) and is measurable with respect to the sigma algebra generated by \(X_{1:t}\). The interpretation is that if \(\mathcal{A}_{t}=1\) for some \(t\), then the algorithm has detected a change at time \(t\) and if \(\mathcal{A}_{t}=0\), no change is detected at time \(t\).
_Notation 2.6_.: For an OCPD algorithm \(\mathcal{A}\) and for all \(t\in[T]\), denote by \(R^{(\mathcal{A})}(t)\in\mathbb{N}\) to be the random variable denoting the number of detections made till time \(t\), i.e., \(R^{(\mathcal{A})}(t)=\sum_{s=1}^{t}\mathcal{A}_{s}\).
_Notation 2.7_.: For an OCPD algorithm \(\mathcal{A}\) and every \(r\in\mathbb{N}\), denote by \(t_{r}^{(\mathcal{A})}\) the stopping time
\[t_{r}^{(\mathcal{A})}:=\min(\inf\{t\in[0,T]\text{ s.t. }R^{(\mathcal{A})}(t)\geq r\},T+1),\]
where the \(\inf\) of an empty set is defined to be \(\infty\). In words, \(t_{r}^{(\mathcal{A})}\) is the stopping time at which the OCPD algorithm detects a change for the \(r\)th time, or \(T+1\), whichever is smaller.
**Definition 2.8** (False Positive Detection).: The \(r\)th detection of an OCPD algorithm \(\mathcal{A}\) is said to be a False Positive if there exists no change-point between the \(r-1\)th and the \(r\)th detection. Formally, the indicator (random) variable \(\chi_{r}^{(\mathcal{A})}=\mathbf{1}(\nexists c\in[1,Q_{T}]\text{ s.t. }\tau_{c}\in(t_{r-1}^{(\mathcal{A})},t_{r}^{(\mathcal{A})}])\) denotes whether the \(r\)th detection of \(\mathcal{A}\) is a false-positive. Note that, by definition, on the event that \(R^{(\mathcal{A})}(T)<r\), we have \(\chi_{r}^{(\mathcal{A})}=0\).
**Definition 2.9** (False Positive Rate (FPR)).: An OCPD algorithm \(\mathcal{A}\) is said to have false-positive rate bounded by \(\delta\in(0,1)\) if
\[\sup_{\mathfrak{M}}\mathbb{E}\left[\frac{\sum_{r=1}^{T}\chi_{r}^{(\mathcal{A} )}}{R^{(\mathcal{A})}(T)}\mathbf{1}(R^{(\mathcal{A})}(T)>0)\right]\leq\delta. \tag{1}\]
In words, an OCPD algorithm \(\mathcal{A}\) has bounded false positive rate, if for every piece-wise constant mean process \(\mathfrak{M}\), the expected fraction of false-positives made by the algorithm \(\mathcal{A}\) is bounded by \(\delta\). In Equation (1), we take the sum till \(T\) because that is the maximum number of possible change points detected. If an algorithm only detects \(s<T\) change points, then by definition \(\chi_{r}^{(\mathcal{A})}=0\) for all \(r>s\).
**Definition 2.10** (Worst-case Detection Delay).: For \(n\in\mathbb{N}\) and \(\Delta>0\), let \(X_{1},X_{2},\cdots,X_{n},X_{n+1},\cdots\) be an infinite stream with the following distribution. For every \(t<n\), \(X_{t}\overset{\text{ind}}{\sim}\mathbb{P}_{t}\) with \(\mathbb{E}_{X\sim\mathbb{P}_{t}}[X]=\theta_{1}\in\Theta\), and for every \(t\geq n\), \(X_{t}\overset{\text{ind}}{\sim}\mathbb{P}_{t}\) with \(\mathbb{E}_{X\sim\mathbb{P}_{t}}[X]=\theta_{2}\in\Theta\) with \(\|\theta_{1}-\theta_{2}\|=\Delta\). Let \(\mathfrak{M}^{(n,\Delta)}\) denote the set of all such infinite piece-wise constant mean processes. An algorithm \(\mathcal{A}\) is said to have worst-case detection delay \(\mathcal{D}(\Delta,n,\delta^{\prime})\), if
\[\sup_{\mathfrak{M}^{(n,\Delta)}}\mathbb{P}\bigg{[}\inf\{t>n\colon\mathcal{A}_{ t}=1\}-n\geq\mathcal{D}(\Delta,n,\delta^{\prime})\bigg{]}\leq\delta^{\prime} \tag{2}\]
holds for all \(n\in\mathbb{N}\), \(\Delta>0\) and \(\delta^{\prime}\in(0,1)\).
In words, the detection delay function \(\mathcal{D}(\Delta,n,\delta^{\prime})\) is such that for every admissible process \(\mathfrak{M}^{(n,\Delta)}\) that has a single change-point at time \(n\) with jump magnitude \(\Delta\), algorithm \(\mathcal{A}\) detects the change-point before time \(n+\mathcal{D}(\Delta,n,\delta^{\prime})\), with probability at-least \(1-\delta^{\prime}\). Note that the delay metric is measured on data streams with exactly one change-point. Defining detection delay for streams with multiple change-points is ambiguous as there could be missed detections, with only a subset of the change-points being detected [1, 10]. The main question this paper studies is
_For each \(\delta\in(0,1)\), does there exists an OCPD algorithm with FPR bounded by \(\delta\) and having small worst-case detection-delay that only makes Assumptions 2.2 and 2.3?_.
Observe that it is trivial to achieve an FPR of \(0\): consider, for example, the constant function \(\mathcal{A}(\cdot)=0\), i.e., an algorithm that never detects a change-point at all. However, this algorithm has a worst-case detection-delay of \(\infty\), i.e., \(\mathcal{D}(\Delta,n,\delta^{\prime})=+\infty\) for all \(\Delta>0\), \(n\in\mathbb{N}\) and \(\delta^{\prime}\in(0,1)\). Thus, the challenge is to design an algorithm that satisfies the FPR constraint of \(\delta\) while having a small, finite worst-case detection delay, without making parametric assumptions on the underlying data generating distributions.
## 3 Online Robust Mean Estimation
The central workhorse of our change-point detection algorithm is heavy-tailed online mean estimation. Suppose \(X_{1},X_{2},\cdots\) are a sequence of independent random vectors whose means \(\mathbb{E}_{t}[X_{t}]=\theta^{*}\in\Theta\) are a constant independent of time \(t\). Let \((\widehat{\theta}_{t})_{t\geq 1}\) be a sequence of random variables such that \(\widehat{\theta}_{t}\) is an estimate of \(\theta^{*}\) based on the samples \(X_{1},\cdots,X_{t}\), defined through the clipped-SGD algorithm described as follows. For a given non-negative sequence \((\eta_{t})_{t\geq 1}\) and \(\lambda>0\), the initial estimate \(\widehat{\theta}_{0}\in\Theta\) is arbitrary, and \(\widehat{\theta}_{t}\), for each \(t\geq 1\), is given by
\[\widehat{\theta}_{t}:=\prod_{\Theta}\left(\widehat{\theta}_{t-1}-\eta_{t}\,\text{clip}(\widehat{\theta}_{t-1}-X_{t},\lambda)\right), \tag{3}\]
where \(\prod_{\Theta}\) is the projection operator onto the convex compact set \(\Theta\) and, for every \(x\in\mathbb{R}^{d}\) and \(\lambda>0\),
\[\text{clip}(x,\lambda)=x\min\left(1,\frac{\lambda}{\|x\|}\right). \tag{4}\]
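To make the estimator concrete, the following is a minimal NumPy sketch of the update in Equations (3)-(4). The projection onto \(\Theta\) is illustrated for a Euclidean ball, and the demo parameters (the ball radius, \(\gamma\), and the noise model) are illustrative choices rather than the tuned values of Theorem 3.1.

```python
import numpy as np

def clip_vec(x, lam):
    # clip(x, lam) = x * min(1, lam / ||x||), as in Eq. (4)
    norm = np.linalg.norm(x)
    return x if norm <= lam else x * (lam / norm)

class ClippedSGDMean:
    """Online mean estimate via the update (3); Theta is taken to be a Euclidean
    ball of radius G/2 around the origin purely for illustration."""

    def __init__(self, d, G, lam, gamma):
        self.theta = np.zeros(d)          # arbitrary initial estimate inside Theta
        self.G, self.lam, self.gamma = G, lam, gamma
        self.t = 0

    def _project(self, theta):
        radius, norm = self.G / 2.0, np.linalg.norm(theta)
        return theta if norm <= radius else theta * (radius / norm)

    def update(self, x):
        self.t += 1
        eta = 2.0 / (self.t + self.gamma)                     # step size eta_t = 2/(t+gamma)
        grad = clip_vec(self.theta - np.asarray(x, float), self.lam)
        self.theta = self._project(self.theta - eta * grad)   # Eq. (3)
        return self.theta

# Heavy-tailed demo: symmetrized Pareto noise around a true mean of 1.
rng = np.random.default_rng(0)
est = ClippedSGDMean(d=1, G=12.0, lam=24.0, gamma=50.0)
for _ in range(5000):
    est.update(1.0 + (rng.pareto(2.01) - rng.pareto(2.01)))
print(est.theta)   # near the true mean [1.0], up to heavy-tailed estimation error
```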
Our main result on the convergence of the estimator \(\widehat{\theta}_{t}\) to the true \(\theta^{*}\) with increasing number of samples \(t\) is the following.
**Theorem 3.1**.: _For all times \(t\geq 1\), when clipped SGD in Equation (3) is run with \(\lambda=2G\) and \(\eta_{t}=\frac{2}{(t+\gamma)}\) for \(\gamma=\max\left(120\lambda\sigma(\sigma+1),320\sigma^{2}+1\right)\), then for every \(t\geq 1\)
_and every \(\delta\in(0,1)\),_
\[\mathbb{P}\left[\|\widehat{\theta}_{t}-\theta^{*}\|_{2}^{2}\geq\mathcal{B}(t, \delta)\right]\leq\frac{\delta}{t(t+1)},\]
_where_
\[\mathcal{B}(t,\delta):=C_{t}\bigg{[}\frac{\gamma^{2}G^{2}}{(t+1)^{ 2}}+\left(\frac{16\sigma^{2}}{\lambda}+4\sigma^{2}\right)\frac{1}{2(t+1)}\\ +\frac{96\lambda^{2}\ln\left(\frac{2t^{2}(t+1)}{\delta}\right) \sigma(\sigma+1)}{(t+\gamma)\sqrt{t+1}}\bigg{]}, \tag{5}\]
_and \(C_{t}=\max(\frac{1024\sigma^{4}}{G^{2}\lambda^{2}},\frac{8\lambda\sqrt{\ln \left(\frac{2t^{2}(t+1)}{\delta}\right)}}{\gamma^{2}G})\)._
**Corollary 3.2**.: _There exists an universal constant \(A>0\) such that for all \(t\geq 1\), when clipped SGD in Equation (3) is run with parameters in Theorem 3.1_
\[\mathbb{P}\left[\|\widehat{\theta}_{t}-\theta^{*}\|\geq A\max\left(\frac{ \sigma^{3}}{\sqrt{t}},\frac{\sigma\sqrt{\ln\left(\frac{t^{3}}{\delta}\right)} }{\sqrt{t}}\right)\right]\leq\frac{\delta}{t(t+1)},\]
_holds for every \(\delta\in(0,1)\)._
Proof is in Appendix in Section B and uses tools from [1], [1, 10] and [20].
_Remark 3.3_.: Compared to [20], we do not need the failure probability \(\delta\) as an input, and we can give simultaneous confidence intervals for all failure probabilities \(\delta\). In contrast, the algorithm of [20] requires \(\delta\in(0,1)\) as an input and only guarantees that the estimated mean is close to the true mean up to an error probability of \(\delta\). However, the bound in Theorem 3.1 is off by logarithmic factors compared to [20]. Concretely, \(C_{t}=O(1)\) for the algorithm of [20], while it is \(O(\log(t/\delta))\) for us. This is the price paid for having confidence intervals that hold for all failure probabilities simultaneously, as opposed to a single failure probability.
_Remark 3.4_.: Compared to the setting of [20], our setting is _more restrictive_, as we assume that the domain \(\Theta\) is compact with finite diameter \(G\). This is what enables us to use an appropriately tuned learning rate and clipping parameter to make the algorithm any-time and obtain confidence intervals at all failure probabilities simultaneously. It is an open question whether the assumption that \(\Theta\) is compact can be relaxed while still guaranteeing confidence intervals that hold for all failure probabilities \(\delta\) and all \(t\) for heavy-tailed distributions.
_Remark 3.5_.: The constants in Theorem 3.1 are not optimal. In Section 5, we suggest an alternative set of constants that work well empirically across variety of settings.
_Remark 3.6_.: There have been significant recent advances in robust mean estimation [1, 16, 17, 18, 19, 20], that are known to provide near optimal error bounds. However, unlike our method, none of these algorithms can give confidence bounds for all confidence values simultaneously.
_Remark 3.7_.: Theorem \(4.3\) in [17] proves that it is impossible to get a finite sample confidence bound to hold for all \(\delta\in(0,1)\). Our result does not contradict this since the restriction on the allowable \(\delta\) is _implicit_ in Theorem 3.1. Equation (5) gives that, for every \(t\in\mathbb{N}\), as \(\delta\searrow 0\), \(\mathcal{B}(t,\delta)\nearrow\infty\). However, from Assumption 2.2, if \(\mathcal{B}(t,\delta)\geq G\), then the statement of Theorem 3.1 is vacuous. Thus, Theorem 3.1 gives non-vacuous bounds only for \(\delta\in(\delta_{min}^{(t)},1)\) where \(\delta_{min}^{(t)}:=\inf_{\delta>0}\{\mathcal{B}(t,\delta)<G\}\).
### Uniform over time bound
As a corollary of Theorem 3.1, we get the following bound that holds uniformly over all time.
**Corollary 3.8**.: _There exists an universal constant \(A>0\) such that, when clipped SGD in Equation (3) is run with parameters in Theorem 3.1,_
\[\mathbb{P}\left[\exists t\in\mathbb{N}:\|\widehat{\theta}_{t}-\theta^{*}\| \geq A\max\left(\frac{\sigma^{3}}{\sqrt{t}},\frac{\sigma\sqrt{\ln\left(\frac {t^{3}}{\delta}\right)}}{\sqrt{t}}\right)\right]\leq\delta,\]
_holds for every \(\delta\in(0,1)\)._
The proof follows by taking a union bound over all \(t\geq 1\), i.e., summing both sides of the bound in Theorem 3.1 over \(t\geq 1\) and noticing that \(\sum_{t\geq 1}\frac{1}{t(t+1)}=1\). The bounds in Theorem 3.1 and Corollary 3.8 are _dimension free_, i.e., the term \(d\) does not appear in the bounds. The moment bound \(\sigma\) plays the role of dimension. In particular, suppose that all distributions in the family \(\mathcal{P}\) have covariance matrices bounded in the positive semi-definite sense by \(\Sigma\in\mathbb{R}^{d\times d}\). In this case, one may take \(\sigma^{2}=\text{Trace}(\Sigma)\), which then plays the role of the dimension.
In the special case when the samples \((X_{t})_{t\geq 1}\) are i.i.d. with a sub-gaussian distribution with mean \(\theta^{*}\) and covariance matrix \(\Sigma\), it is known [1, 17, 18] that for all \(\delta\in(0,1)\),
\[\mathbb{P}\bigg{[}\exists t\in\mathbb{N}:\left\|\frac{1}{t}\sum_ {s=1}^{t}X_{s}-\theta^{*}\right\|\geq\\ \sqrt{2\lambda_{max}(\Sigma)\left(1+\frac{1}{t}\right)\ln\left( \frac{(t+1)^{d}}{\delta}\right)}\bigg{]}\leq\delta, \tag{6}\]
holds, where \(\lambda_{max}(\Sigma)\) is the highest eigen-value of the covariance matrix \(\Sigma\). Thus, for the special case of sub-gaussian
distributions, Equation (6) has a better dependence on time \(t\) compared to our Corollary 3.8. The improved dependence on time arises as Equation (6) is based on the construction of a self normalized martingale and using the martingale stopping theorem to obtain uniform over time bounds while Corollary 3.8 is based on a simple union bound.
However, Equation (6) is not dimension free and depends on the scale of the problem through the term \(d\lambda_{max}(\Sigma)\), which by definition is larger than \(\text{Trace}(\Sigma)\). In many high dimensional settings, \(d\lambda_{max}(\Sigma)\) is much larger than \(\text{Trace}(\Sigma)\), and thus algorithms and bounds depending explicitly on \(d\) are undesirable [14, 10]. For uni-variate heavy-tailed distributions, a sequence of works [10, 10] establishes confidence bounds with sharp dependence on time by extending the martingale recipe developed in [11]. In our work, we are able to get dimension-free bounds for heavy-tailed distributions, but at the cost of the compactness Assumption 2.2, which is not needed in [1]. It is an open question whether we can get dimension-free bounds with the improved time-dependence of the kind in Equation (6) without the compactness assumption.
## 4 Change-Point Detection Algorithm
Our algorithm is described in Algorithm 1 and is based on the following idea. A change point is detected in the time-interval \([r,t]\) if there exists \(r<s<t\) such that the confidence interval around the estimated mean of the observations \(X_{r:s}\) is separated from the confidence interval around the estimated mean of the observations \(X_{s+1:t}\). Further, in order to accommodate multiple change-points, the algorithm _restarts_ after every change detection, similar to [1]. It is known that the standard empirical mean is a poor estimator when the underlying distributions can potentially be heavy-tailed, as its confidence interval under only Assumption 2.3 is wide [10]. To attain better confidence intervals, we use the clipped-SGD estimator in Equation (3), which gives a confidence interval for the estimated mean at every failure probability \(\delta\in(0,1)\) simultaneously. Having multiple confidence intervals is crucial, as we show that adaptively testing different time intervals at carefully chosen confidence levels (Line \(8\) of Algorithm 1) leads to the bounded FPR guarantee.
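To make the recipe concrete, the following is a schematic sketch of this detector. For brevity the segment means are plain averages and the confidence radius is a generic stand-in for \(\mathcal{B}(\cdot,\cdot)\); the paper instead uses the clipped-SGD estimates and the exact thresholds of Line \(8\), so the constants and the \(\delta\)-splitting below are illustrative.

```python
import numpy as np

def conf_radius(n, delta, sigma=1.0):
    # Stand-in for B(n, delta): any valid high-probability bound on the squared estimation error.
    return sigma ** 2 * np.log(2.0 * n ** 2 * (n + 1) / delta) / n

def detect_changes(stream, delta=0.1):
    """Schematic detector: restart the window after each detection and, inside the
    current window, test every split s by comparing the two segment estimates."""
    detections, window = [], []
    for t, x in enumerate(stream):
        window.append(np.atleast_1d(np.asarray(x, dtype=float)))
        n = len(window)
        csum = np.cumsum(np.stack(window), axis=0)        # prefix sums for fast segment means
        for s in range(1, n):
            left_mean = csum[s - 1] / s
            right_mean = (csum[-1] - csum[s - 1]) / (n - s)
            gap2 = float(np.sum((left_mean - right_mean) ** 2))
            d = delta / (n * (n + 1))                     # union-bound style split of delta
            if gap2 > conf_radius(s, d) + conf_radius(n - s, d):
                detections.append(t)
                window = []                               # restart after a detection
                break
    return detections

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.5, 1, 300)])
print(detect_changes(data))   # typically a single detection shortly after index 300
```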
### Connections to GLR
Restating our algorithm, a change point is detected in a time-interval \([t_{0},t]\) if
\[\exists s\in(t_{0},t)\text{ s.t. }\|\widehat{\theta}_{t_{0}:s}-\widehat{ \theta}_{s+1:t}\|^{2}\geq\mathcal{C}(t_{0},s,t,\delta),\]
where the function \(\mathcal{C}(\cdot)\) is given in Line \(8\) of Algorithm 1. In the above re-statement, the estimates \(\widehat{\theta}_{t_{0}:s}\) and \(\widehat{\theta}_{s+1:t}\) are robust estimates of the mean based on the sets of observations \(\{X_{t_{0}},\cdots,X_{s}\}\) and \(\{X_{s+1},\cdots,X_{t}\}\) respectively. The Improved-GLR of [13] uses a detector that is structurally similar to the above equation, except that they _(i)_ use the empirical mean as they are dealing with sub-gaussian random variables, and _(ii)_ use a function \(\mathcal{C}(\cdot)\) derived from the Laplace method that gives confidence bounds with better dependence on time, but is not dimension free. In contrast, we use the robust mean estimator given by clipped-SGD, and the function \(\mathcal{C}(\cdot)\) is derived from confidence guarantees that only require the existence of the second moment, make no other tail assumptions, and yield dimension-free bounds. The cost, however, is that the confidence bound derived from clipped SGD has a weaker dependence on time compared to that obtained by Laplace's method [13].
### False-Positive Guarantee
We will prove the following result on Algorithm 1. For a given process \(\mathfrak{M}\) and every \(r\in\mathbb{N}\), denote by \(\tau_{c}^{(r)}:=\inf\{\tau_{c}:\tau_{c}>r\}\) the deterministic time of the first change-point after time \(r\).
**Theorem 4.1** (False Positives).: _When Algorithm 1 is run with parameters \(\lambda=2G\), \(\eta_{t}=\frac{2}{(t+\gamma)}\) for \(\gamma=\max\left(120\lambda\sigma(\sigma+1),320\sigma^{2}+1\right)\) and \(\delta\in(0,1)\),_
\[\sup_{\mathfrak{M},r}\mathbb{P}[\exists t\in[r,\tau_{c}^{(r)}),\text{ s.t. }\mathcal{A}_{t}=1|\mathcal{A}_{r}=1]\leq\delta,\]
_holds almost-surely._
Proof is in Appendix in Section C.1. This result states that with probability at-most \(\delta\), a true change-point _does not_ lie between any two consecutive detections made by the algorithm. This theorem implies the following lemma.
**Lemma 4.2**.: _Under the conditions of Theorem 4.1, the FPR condition in Equation (1) holds._
The proof is in the Appendix in Section C.2. We emphasize that the guarantee in Lemma 4.2 is a _worst-case guarantee_. In other words, no matter the underlying distribution, as long as Assumptions 2.2 and 2.3 are met, Algorithm 1 will not have more than a \(\delta\) fraction of false-positives.
### Worst-Case Detection Delay Guarantee
**Lemma 4.3**.: _If Algorithm 1 is run with the parameters from Theorem 4.1, then for every \(n\in\mathbb{N}\), \(\Delta>0\) and \(\delta^{\prime}\in(0,1)\)_
\[\mathcal{D}(n,\Delta,\delta^{\prime})\leq\inf\bigg\{d\in\mathbb{N}:\Delta^{2}\geq\mathcal{B}\left(n-1,\frac{\delta^{\prime}}{2}\right)+\mathcal{B}\left(d,\frac{\delta^{\prime}}{2}\right)+\mathcal{B}\left(n-1,\frac{\delta}{2(n+d+1)(n+d)}\right)+\mathcal{B}\left(d,\frac{\delta}{2(n+d+1)(n+d)}\right)\bigg\}, \tag{7}\]
_where \(\mathcal{D}(\cdot)\) and \(\mathcal{B}(\cdot)\) are in Eqns (2) and (5) respectively._
Proof is in the Appendix in Section D. Lemma 4.3 is an _upper bound on the worst case delay_. In other words, for any pre- and post-change distribution with norm of the means differing by \(\Delta\), Algorithm 1 will detect this change within delay of \(\mathcal{D}(n,\Delta,\delta^{\prime})\) with probability at-least \(1-\delta^{\prime}\).
For many specific choices of pre- and post-change distribution families, however, we expect the observed detection delay to be much smaller than predicted by Lemma 4.3. This bound is conservative as it is worst-case over all distributions. In Figure 1(a) we plot the bound in Lemma 4.3 for a fixed \(\delta^{\prime}=\delta=0.1\) as \(n\) and \(\Delta\) vary. We use the constants given in Section 5.1 to plot Figure 1(a). In Figure 1(b), we plot the empirically observed detection delay for a sequence of \(32\)-dimensional Pareto distributed random vectors with shape parameter \(2.01\). As can be seen in Figure 1, the observed detection delay is much smaller than that indicated by Lemma 4.3, which is a worst case over all distributions.
_Remark 4.4_.: In the special case when the observations are Bernoulli random variables, the R-BOCPD algorithm of [1] gives a smaller detection delay compared to ours: our detection delay bound in Lemma 4.3 has additional poly-logarithmic factors of \(\log(n/\delta)\) and sub-optimal constants compared to R-BOCPD. However, our bound holds for _any_ family of distributions, including high-dimensional and heavy-tailed ones, while R-BOCPD can only be applied to Bernoulli distributions.
**Corollary 4.5** (Un-detectable Change).: _If \(\Delta\leq\mathcal{O}\left(\frac{\log\left(\frac{n}{\delta}\right)}{\sqrt{n}}\right)\), then \(\mathcal{D}(n,\Delta,\delta^{\prime})=\infty\) for all \(\delta^{\prime}\in(0,1)\), i.e., the delay bound in Lemma 4.3 is vacuous._
_Remark 4.6_.: The undetectable region consists of the grey/white areas of Figure 1(a). However, since Lemma 4.3 is only an upper-bound, the fact that \(\mathcal{D}(n,\Delta,\delta^{\prime})=\infty\) _does not imply_ that our algorithm cannot detect the change (cf. Figure 1(b)).
_Remark 4.7_.: In the case of sub-gaussian exponential families, [10] gives a lower bound for changes that are not detectable by _any_ algorithm. When Algorithm 1 is applied to sub-gaussian random variables from an exponential family, the detection-delay bound in Lemma 4.3 is sub-optimal by poly-logarithmic factors in \(\log(n/\delta)\) compared to the lower bound. However, Algorithm 1 and the delay bound in Lemma 4.3 hold for any class of distributions subject to Assumptions 2.3 and 2.2, while the bounds in
Figure 1: Figure \((a)\) plots the heat-map of \(\mathcal{D}(n,\Delta,\delta^{\prime})\) from Lemma 4.3 for fixed \(\delta^{\prime}=0.1\). The white cells represent infinity. Figure \((b)\) plots the \(90\)th quantile (\(\delta^{\prime}=0.1\)) of the observed delay for Pareto distribution \(d=32\) over \(30\) runs. As can be seen, the observed detection delay in \((b)\) is much smaller than the worst case delay in \((a)\).
[Maillard, 2019] only applies to sub-gaussian observations from a known exponential family.
_Remark 4.8_.: In parallel work, the FCS detector of [Shekhar and Ramdas, 2023], when combined with the heavy-tailed Catoni-style confidence sequences of [Wang and Ramdas, 2023], is shown to detect univariate mean changes as long as \(\Delta\succeq\sqrt{\log(\log(n)/\alpha)/n}\). Whether this rate is achievable in multivariate settings is left for future work.
### Change-point localization
In practice, it is also crucial to identify the location where the change point occurred. In this section, we describe how to modify Algorithm 1 to also output an estimate of the location of the change, in addition to detecting the existence of a change. Recall that for every \(r\in\mathbb{N}\), \(t_{r}^{(\mathcal{A})}\in\mathbb{N}\cup\{\infty\}\) is the stopping time denoting the \(r\)th time Algorithm \(\mathcal{A}\) detects a change point. We modify Algorithm 1 so that it additionally outputs, for every \(r\in\mathbb{N}\), a time interval \([s_{1;r}^{(\mathcal{A})},s_{2;r}^{(\mathcal{A})}]\subseteq[t_{r-1}^{(\mathcal{A})},t_{r}^{(\mathcal{A})}]\) that contains a change-point \(\tau_{c}\).
In order to do so, we need an additional definition. For every \(r<s<t\) and \(\delta\in(0,1)\), denote by \(\mathfrak{B}(r,s,t,\delta)\in\{0,1\}\) the indicator variable
\[\mathfrak{B}(r,s,t,\delta)=\mathbf{1}\bigg(\|\widehat{\theta}_{r:s}-\widehat{\theta}_{s+1:t}\|_{2}^{2}>\mathfrak{B}_{1}+\mathfrak{B}_{2}\bigg), \tag{8}\]
where \(\mathfrak{B}_{1}=\mathcal{B}\left(s-r,\frac{\delta}{2(t-r)(t-r+1)}\right)\) and \(\mathfrak{B}_{2}=\mathcal{B}\left(t-s-1,\frac{\delta}{2(t-r)(t-r+1)}\right)\). The estimates of the location of the change in a time-interval \([r,t]\) are all those time instants \(s\in[r,t]\) such that \(\mathfrak{B}(r,s,t,\delta)=1\). Line \(12\) in Algorithm 2 in Section A of the Appendix precisely defines the estimator. The empirical performance of this method is shown in Figure 3. We observe that this produces an accurate and sharp estimate of the change-point location in simulations.
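Under the same simplifying assumptions as the detector sketch above (plain segment averages and a user-supplied confidence-radius function in place of \(\mathcal{B}\)), the interval construction of Equation (8) can be sketched as follows; the helper simply collects the split points whose indicator fires and reports the first and last of them.

```python
import numpy as np

def localize(window, delta, conf_radius):
    """Return (s1, s2): the smallest and largest split points whose indicator in Eq. (8) fires.
    `window` holds the samples since the last restart; `conf_radius(n, d)` is any valid radius."""
    n = len(window)
    samples = np.stack([np.atleast_1d(np.asarray(x, dtype=float)) for x in window])
    csum = np.cumsum(samples, axis=0)
    hits = []
    for s in range(1, n):
        gap2 = float(np.sum((csum[s - 1] / s - (csum[-1] - csum[s - 1]) / (n - s)) ** 2))
        d = delta / (2 * n * (n + 1))
        if gap2 > conf_radius(s, d) + conf_radius(n - s, d):
            hits.append(s)
    return (min(hits), max(hits)) if hits else None

rng = np.random.default_rng(1)
window = list(np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 100)]))
radius = lambda n, d: np.log(2.0 * n ** 2 * (n + 1) / d) / n   # illustrative stand-in
print(localize(window, delta=0.1, conf_radius=radius))          # an interval containing the change at 200
```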
## 5 Experiments
In this section, we give numerical evidence to show that Algorithm 1 can be applied across a variety of settings. Line \(8\) of Algorithm 1 relies on confidence bounds for high-dimensional estimation in which the global constants are not optimized. This is an artifact of the proof analysis in robust estimation [Lugosi and Mendelson, 2019, Vershynin, 2018]. Thus, we modify the absolute constants used in Theorem 4.1 as follows. We use \(\gamma=\max\left(4\lambda\sigma(\sigma+1),8\sigma^{2}+1\right)\), where the numerical constants are reduced relative to the definition in Theorem 4.1. The constant \(C_{t}\) is modified to \(C_{t}=\max\left(\frac{0.5\sigma^{4}}{G^{2}\lambda^{2}},\frac{\lambda\sqrt{\ln\left(\frac{2t^{2}(t+1)}{\delta}\right)}}{\gamma^{2}G}\right)\). In addition, we use the following definition of \(\mathcal{B}(\cdot,\cdot)\):
\[\mathcal{B}(t,\delta):=C_{t}\bigg[\frac{\gamma^{2}G^{2}}{t+1}+\left(\frac{2\sigma^{2}}{\lambda}+\sigma^{2}\right)\frac{1}{2(t+1)}+\frac{2\lambda^{2}\ln\left(\frac{2t^{2}(t+1)}{\delta}\right)\sigma(\sigma+1)}{(t+\gamma)\sqrt{t+1}}\bigg], \tag{9}\]
where \(C_{t}\) and \(\gamma\) are the modified values stated above. Further, in all simulations we take \(\Theta=\mathbb{R}^{d}\), i.e., the whole space.
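For reference, the modified bound can be collected into a single helper. The sketch below is a direct transcription of Equation (9) with the constants stated above; the default values \(G=12\) and \(\sigma=1\) are simply those used in the synthetic experiments of Section 5.1.

```python
import numpy as np

def B_practical(t, delta, G=12.0, sigma=1.0):
    """Confidence radius of Eq. (9) with the empirically tuned constants of Section 5."""
    lam = 2.0 * G                                                  # lambda = 2G, as in Theorem 4.1
    gamma = max(4.0 * lam * sigma * (sigma + 1.0), 8.0 * sigma ** 2 + 1.0)
    log_term = np.log(2.0 * t ** 2 * (t + 1) / delta)
    C_t = max(0.5 * sigma ** 4 / (G ** 2 * lam ** 2),
              lam * np.sqrt(log_term) / (gamma ** 2 * G))
    return C_t * (gamma ** 2 * G ** 2 / (t + 1)
                  + (2.0 * sigma ** 2 / lam + sigma ** 2) / (2.0 * (t + 1))
                  + 2.0 * lam ** 2 * log_term * sigma * (sigma + 1.0) / ((t + gamma) * np.sqrt(t + 1)))

print(B_practical(400, 0.1))   # the bound evaluated after 400 samples at delta = 0.1
```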
### Synthetic simulations
Here, we demonstrate that Algorithm 1 with the choice of hyperparameters in Equation (9) is practical and can be applied across a variety of data generating distributions, either heavy-tailed, high-dimensional or both, and still obtains bounded false-positive rates and a much lower detection delay than the conservative bound in Lemma 4.3 would indicate.
#### 5.1.1 Setup
In Figure 2, we construct synthetic situations and introduce change-points, with each change lasting \(400\) time-units. In all experiments, we choose the family of distributions \(\mathfrak{M}\) such that \(\sigma=1\) and \(G=12\). At each time \(t\), a sample is drawn from the appropriate distribution detailed below and presented to the change-point algorithm. The true change-points and the median detection times, along with the 95th-percentile upper and lower confidence bands, are shown in Figure 2. These are estimated by averaging \(30\) independent runs for each setting in Figure 2.
**Heavy-tailed distribution:** In Figures 2(a), 2(b) and 2(i), the sample at every time-point is drawn from a Pareto distribution with shape parameter \(2.01\). This implies that the third central moment of the distribution is infinite. The mean of the samples in the time-durations \(t\in[0,400)\cup[800,1200)\) is \(0\) in all figures, and the mean at times \(t\in[400,800)\cup[1200,1600)\) is \(\Delta=0.5,1,1\) respectively in Figures 2(a), 2(b) and 2(i). In Figures 2(c), 2(d) and 2(j), we consider the observation at time \(t\) to be a \(32\)-dimensional isotropic random vector whose norm has a Pareto distribution with shape parameter \(2.01\). The mean vector at times \([0,400)\cup[800,1200)\) is \(0\in\mathbb{R}^{32}\), and at times \(t\in[400,800)\cup[1200,1600)\) it is \(\frac{\Delta}{\sqrt{32}}[1,\cdots,1]\in\mathbb{R}^{32}\), where \(\Delta=0.5,1,1\) respectively in Figures 2(c), 2(d) and 2(j).
**Gaussian distribution:** In Figures 2(e) and 2(f), the sample at every time-point is drawn from a unit-variance Gaussian distribution. The mean of the samples in the time-durations \(t\in[0,400)\cup[800,1200)\) in both figures is \(0\), and the mean at times \(t\in[400,800)\cup[1200,1600)\) in the two figures 2(e) and 2(f) is \(\Delta=0.5\) and \(\Delta=1\) respectively. In Figures 2(g) and 2(h), we consider the observation at time \(t\) to be a \(32\)-dimensional isotropic Gaussian random vector with covariance on each axis being \(1/\sqrt{32}\). The mean vector at times \([0,400)\cup[800,1200)\) is \(0\in\mathbb{R}^{32}\), and at times \(t\in[400,800)\cup[1200,1600)\) it is \(\frac{\Delta}{\sqrt{32}}[1,\cdots,1]\in\mathbb{R}^{32}\).
**Bernoulli distribution:** In Figures 2(k) and 2(l), the data are \(\{0,1\}\)-valued Bernoulli random variables whose means at times \([0,400)\cup[800,1200)\) are \(0.7\) and \(0.85\) respectively in the two figures, and whose means at times \([400,800)\cup[1200,1600)\) are \(0.3\) and \(0.15\) respectively.
#### 5.1.2 Baselines
We consider the Improved-GLR of [10] and R-BOCPD of [1] as baselines since they have been empirically demonstrated to be state-of-the-art, and they are the only other algorithms to possess finite-sample, non-asymptotic FPR guarantees. The Improved-GLR can be applied to any distribution, although its theoretical guarantees only hold for sub-Gaussian distributions. The R-BOCPD algorithm is only applicable to binary data, and thus we only use it in the Bernoulli-distributed setting.
#### 5.1.3 Results
**Figure 2 shows that our algorithm is the only one to attain bounded FPR across heavy-tailed, Gaussian, high-dimensional, and Bernoulli distributions.**
For the Pareto distribution, Figures 1(h) and 1(j) show that the Improved-GLR algorithm produces a large number of false positives. Intuitively, this occurs because the Improved-GLR algorithm assumes sub-Gaussian tails, and thus large deviations that are typical of the heavy-tailed Pareto distribution are mistaken for a change (see also Figure 6). In contrast, from Figures 1(a), 1(b), 1(c), 1(d) and 1(j), we see that our algorithm consistently attains bounded false-positive rates and finite detection delay across choices of \(\Delta\) and dimension \(d\).
On Gaussian-distributed data, both our Algorithm 1 and the Improved-GLR obtain similar performance in terms of false-positive rates; however, the median detection time of our algorithm is larger than the 95th-percentile detection time of the Improved-GLR. On Bernoulli-distributed data, all methods attain similar false-positive guarantees; however, the specialized R-BOCPD algorithm is superior in terms of detection delay compared to ours and the Improved-GLR.
In Table 1, we summarize Figure 2 by measuring _regret_. For any OCPD algorithm \(\mathcal{A}\), we can define a function \(R^{(\mathcal{A})}:[T]\rightarrow\mathbb{N}\) where \(R^{(\mathcal{A})}(t)=\sum_{s\leq t}\mathcal{A}_{s}\) is the total number
Figure 3: Plots showing that Algorithm 2 can detect and localize change-points across a variety of settings.
Figure 2: Empirical performance of Algorithm 1 in a variety of scenarios. Exact details of each plot are given in Section 5.1.
of change-points detected up to time \(t\). Similarly, for any \(t\in[T]\), the ground-truth function \(R^{*}(t)=\max\{c:\tau_{c}\leq t\}\) is the number of true changes up to time \(t\). The regret of algorithm \(\mathcal{A}\) is defined as \(\sum_{t=1}^{T}|R^{\mathcal{A}}(t)-R^{*}(t)|\). This measure is non-negative and is \(0\) if and only if the output of the algorithm matches the ground truth. In Table 1, we give the median value of the regret along with a \(95\)% confidence interval. We observe in Table 1 that our method achieves lower regret across a variety of situations, whether the data is heavy-tailed, light-tailed, high-dimensional, or discrete.
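For completeness, the regret measure above can be computed as in the following sketch; the helper name and the example detection times are ours, not taken from the paper.

```python
# Regret of a change-point detector: sum_t |R^A(t) - R^*(t)|, where R^A(t) and
# R^*(t) count detections and true changes up to time t, respectively.
import numpy as np

def regret(detection_times, true_change_points, T):
    R_alg = np.zeros(T, dtype=int)
    R_true = np.zeros(T, dtype=int)
    for t in detection_times:
        R_alg[t:] += 1                      # R^A(t) = number of alarms up to t
    for tau in true_change_points:
        R_true[tau:] += 1                   # R^*(t) = number of true changes up to t
    return int(np.abs(R_alg - R_true).sum())

# True changes at 400, 800, 1200; a detector that fires roughly 30 steps late.
print(regret([430, 832, 1229], [400, 800, 1200], T=1600))   # 30 + 32 + 29 = 91
```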
#### 5.1.4 Change-point localization
In Figure 3, we demonstrate the sharpness of change-point localization (detailed in Algorithm 2). The setting in Figure 3 is identical to that of Figure 2, with the boundary of the shaded region representing the \(5\)th quantile of the starting point and the \(95\)th quantile of the ending point of the change-location interval output in Line \(12\) of Algorithm 2. The localization region is biased towards the right, which is expected since our algorithm is designed to minimize false positives even in the worst case.
### Real-data
In Figure 4, we show the performance of Algorithm 1 and the Improved-GLR on the well-log dataset [12]. This dataset consists of \(4050\) nuclear-magnetic-response measurements in the range \([6\times 10^{4},10^{5}]\), taken during the drilling of a well. The data are used to interpret the geophysical structure of the rock surrounding the well, and the variations in mean reflect the stratification of the earth's crust. We preprocess the data by dividing it by \(10^{4.5}\) and run Algorithm 1 with \(G=10\), \(\sigma=1\) and the Improved-GLR with \(\sigma=1\). The detected change-points are shown in Figure 4, which indicates that Algorithm 1 is comparable to the Improved-GLR in terms of false positives.
## 6 Conclusions
We introduced a new method based on clipped-SGD to detect change-points with a guaranteed finite-sample FPR, without parametric or tail assumptions. The key technical contribution is an anytime online mean-estimation algorithm that provides a confidence bound for the mean at all confidence levels simultaneously. We also give a finite-sample, high-probability bound on the detection delay as a function of the gap between the means and the number of pre-change observations. We further corroborate empirically that ours is the only algorithm to detect change-points with bounded FPR across multi-dimensional heavy-tailed, Gaussian, and binary-valued data streams.
Our work opens several interesting directions for future work. Sharp confidence intervals for estimating the mean of a random vector without assuming the existence of a variance were obtained in [3, 10]. Extending the tools therein to further relax the second-moment assumption we considered is a natural direction for future work. Another open question is whether the martingale methods can be extended to the high-dimensional setting to obtain dimension-free confidence bounds. Further, we observe in simulations that our method attains 'sharp' localization empirically. Understanding the three-way
\begin{table}
\begin{tabular}{c|c|c||c|c|c}
**Distribution** & **d** & \(\mathbf{\Delta}\) & **Algorithm 1** & **Improved GLR**[10] & **R-BOCPD**[1] \\ \hline \multirow{4}{*}{Normal} & \(1\) & \(1\) & \(274\pm 38\) & \(\mathbf{64\pm 45}\) & \multirow{4}{*}{N/A} \\ & \(32\) & \(1\) & \(\mathbf{300\pm 6}\) & \(2400\pm 0\) & \\ & \(1\) & \(0.5\) & \(694\pm 191\) & \(\mathbf{356\pm 150}\) & \\ & \(32\) & \(0.5\) & \(\mathbf{1427\pm 14}\) & \(2400\pm 1\) & \\ \hline \multirow{4}{*}{Pareto} & \(1\) & \(1\) & \(\mathbf{296\pm 35}\) & \(19913\pm 8143\) & \multirow{4}{*}{N/A} \\ & \(32\) & \(1\) & \(\mathbf{302\pm 7}\) & \(1616\pm 921\) & \\ \cline{1-1} & \(1\) & \(0.5\) & \(\mathbf{868\pm 365}\) & \(1891\pm 663\) & \\ \cline{1-1} & \(32\) & \(0.5\) & \(\mathbf{1431\pm 14}\) & \(1667\pm 653\) & \\ \hline \multirow{2}{*}{Bernoulli} & - & \(0.7\) & \(515\pm 49\) & \(181\pm 23\) & \(\mathbf{23\pm 479}\) \\ \cline{1-1} & - & \(0.5\) & \(1509\pm 53\) & \(1466\pm 762\) & \(\mathbf{63\pm 380}\) \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative summary of Figure 2 comparing regret, where lower is better. Our method achieves lower regret across a variety of settings of distribution, dimension, and change magnitude.
Figure 4: Performance of change-point detection of Algorithm 1 and the Improved-GLR on real data.
trade-off between sharpness of localization, FPR, and detection delay is an important area of future work.
**Acknowledgements** AS thanks Aaditya Ramdas for several useful comments that improved the presentation.
|
2305.10549 | Indirect Rate Distortion Functions with $f$-Separable Distortion
Criterion | We consider a remote source coding problem subject to a distortion
function. Contrary to the use of the classical separable distortion criterion,
herein we consider the more general, $f$-separable distortion measure and study
its implications on the characterization of the minimum achievable rates (also
called $f$-separable indirect rate distortion function (iRDF)) under both
excess and average distortion constraints. First, we provide a single-letter
characterization of the optimal rates subject to an excess distortion using
properties of the $f$-separable distortion. Our main result is a single-letter
characterization of the $f$-separable iRDF subject to an average distortion
constraint. As a consequence of the previous results, we also show a series of
equalities that hold using either indirect or classical RDF under $f$-separable
excess or average distortions. We corroborate our results with two application
examples in which new closed-form solutions are derived, and based on these, we
also recover known special cases. | Photios A. Stavrou, Yanina Shkel, Marios Kountouris | 2023-05-17T20:08:21Z | http://arxiv.org/abs/2305.10549v1 | # Indirect Rate Distortion Functions with \(f\)-Separable Distortion Criterion
###### Abstract
We consider a remote source coding problem subject to a distortion function. Contrary to the use of the classical separable distortion criterion, herein we consider the more general, \(f\)-separable distortion measure and study its implications on the characterization of the minimum achievable rates (also called \(f\)-separable indirect rate distortion function (iRDF)) under both excess and average distortion constraints. First, we provide a single-letter characterization of the optimal rates subject to an excess distortion using properties of the \(f\)-separable distortion. Our main result is a single-letter characterization of the \(f\)-separable iRDF subject to an average distortion constraint. As a consequence of the previous results, we also show a series of equalities that hold using either indirect or classical RDF under \(f\)-separable excess or average distortions. We corroborate our results with two application examples in which new closed-form solutions are derived, and based on these, we also recover known special cases.
## I Introduction
The mathematical analysis of the lossy source coding under a fidelity criterion, called rate distortion theory [1], was developed under the assumption that an encoder observes an information source \(\mathbf{x}\) with distribution \(p(x)\) defined on the alphabet space \(\mathcal{X}\), and the aim is for the decoder to reconstruct in a minimal end-to-end rate-constrained manner, its representation \(\widehat{\mathbf{x}}\) defined on an alphabet \(\widehat{\mathcal{X}}\) within a distortion measure \(d:\mathcal{X}\times\widehat{\mathcal{X}}\mapsto[0,\infty)\). When the information source generates a sequence of \(n\) realizations, the source sequence induces the distribution \(p(x^{n})\) on the Cartesian product alphabet space \(\mathcal{X}^{n}\), with its reconstruction alphabet being \(\widehat{\mathcal{X}}^{n}\). For the latter case, Shannon in [1] extended the single-letter expression of the distortion measure to the \(n\)-letter expression \(d^{n}:\mathcal{X}^{n}\times\widehat{\mathcal{X}}^{n}\mapsto[0,\infty)\) by taking the arithmetic mean of single-letter distortions, i.e.,
\[d^{n}(x^{n},\widehat{x}^{n})=\frac{1}{n}\sum_{i=1}^{n}d(x_{i},\widehat{x}_{i}), \tag{1}\]
which is often encountered as _separable_, _additive_ or _per-letter_ distortion measure.
A natural extension of the lossy source coding problem, called _indirect_ or _remote_ lossy source coding, was proposed almost fifteen years later in [2]. Therein, the authors considered the case where the encoder observes a noisy version of the source \(\mathbf{x}\), say \(\mathbf{z}\), and the goal is to reconstruct \(\widehat{\mathbf{x}}\) with minimal rates subject to an average distortion \(d:\mathcal{X}\times\widehat{\mathcal{X}}\mapsto[0,\infty)\). A major result in [2] is that for stationary memoryless sources, the fundamental limit in the asymptotic regime corresponds to the classical lossy source coding problem with an amended average distortion constraint. Subsequently, this problem and some of its variants, e.g., non-asymptotic analysis, excess distortion measures, multi-terminal systems, were revisited by many researchers, see e.g., [3, 4, 5, 6, 7, 8, 9, 10, 11, 12] and references therein.
All the aforementioned efforts in [3, 4, 5, 6, 7, 8, 9, 10, 11], consider separable distortion penalties. On one hand, the separability assumption is natural and quite appealing when it comes to the derivation of tractable characterizations of the fundamental trade-offs between the coding (or compressed) rate and its corresponding distortion. On the other hand, the separability assumption is very restrictive because it only models distortion penalties that are _linear functions_ of the single-letter distortion in the source reconstruction. However, in real-world applications, distortion measures may be highly _non-linear_. To address this issue and inspired by [13], here we consider a much broader class of distortion measures, namely, \(f\)-separable distortion measures.
In this work, we derive the following new results: (i) a single-letter characterization of the minimal rates subject to an excess distortion using properties of the \(f\)-separable distortion (see Lemma 1); (ii) a single-letter characterization of the \(f\)-separable iRDF (obtained for finite alphabets) subject to an average distortion constraint that is obtained under relatively mild regularity conditions and by making use of a strong converse theorem [8] (see Theorem 1); (iii) new series of equalities under \(f\)-separable excess or average distortion constraints using indirect or classical RDFs (see Corollary 1 and Theorem 1); (iv) two application examples in which new analytical solutions are derived for various types of \(f\)-separable average distortions; we also explain how these analytical expressions recover known results as special cases (see Examples 1, 2). It is worth mentioning that from (ii), we also derive the implicit solution of the optimal minimizer that achieves the characterization of the \(f\)-separable iRDF (see Corollary 2). This result can be readily used to derive new Blahut-Arimoto type of algorithms [14, 15] for a much richer class of distortion penalties.
## II Problem Formulation
We consider a memoryless source described by the tuple \((\mathbf{x},\mathbf{z})\) with probability distribution \(p(x,z)\) in the product
alphabet space \(\mathcal{X}\times\mathcal{Z}\). The remote information of the source is in \(\mathbf{x}\) whereas \(\mathbf{z}\) is the noisy observation at the encoder side. The goal is to study the remote source coding problem [2, 5, 6] under an \(f\)-separable distortion measure.
Formally, the system model (without the distortion penalties) is illustrated in Fig. 1 and can be interpreted as follows. An _information source_ is a sequence of \(n\)-length independent and identically distributed (i.i.d) RVs \((\mathbf{x}^{n},\mathbf{z}^{n})\). The _encoder (E)_ and the _decoder (D)_, are modeled by the mappings
\[f^{E}:\mathcal{Z}^{n}\rightarrow\mathcal{W},\ \ g^{D}:\mathcal{W}\to \mathcal{\widehat{X}}^{n} \tag{2}\]
where the index set \(\mathcal{W}\in\{1,2,\ldots,M\}\).
We consider a per-letter distortion measure, responsible for penalizing the remote information source in Fig. 1, given by \(d:\mathcal{X}\times\mathcal{\widehat{X}}\mapsto[0,\infty)\), and its corresponding \(n\)-letter expression given by \(d^{n}:\mathcal{X}^{n}\times\mathcal{\widehat{X}}^{n}\mapsto[0,\infty)\). This setting has recently gained attention in the context of goal-oriented semantic communication [16, 17], where \(\mathbf{x}\) can represent the semantic or intrinsic information of the source, which is not directly observable, whereas \(\mathbf{z}\) is the noisy observation of the source at the encoder side.
Next, we define the precise terminology of the noisy lossy source codes for the single-letter and the multi-letter case (without restricting to i.i.d processes at this stage).
**Definition 1**.: _(Noisy lossy source codes) Consider constants \(\epsilon\in[0,1)\), \(D\geq 0\), and an integer \(M\)._
**(1)** _We say that a noisy lossy source-code \((f^{E},g^{D})\) is an \((M,D)\)-noisy lossy source code on \((\mathcal{X},\mathcal{Z},\widehat{\mathcal{X}},d)\) such that \(\mathbf{x}-\mathbf{z}-\widehat{\mathbf{x}}\), if \(\mathbf{E}[d(\mathbf{x},\widehat{\mathbf{x}})]\leq D\), where \(\widehat{\mathbf{x}}=g^{D}\big{(}f^{E}(\mathbf{z})\big{)}\)._
**(2)** _We say that a noisy lossy source-code \((f^{E},g^{D})\) is an \((M,D,\epsilon)\)-noisy lossy source code on \((\mathcal{X},\mathcal{Z},\widehat{\mathcal{X}},d)\) such that \(\mathbf{x}-\mathbf{z}-\widehat{\mathbf{x}}\), if \(\mathbf{P}[d(\mathbf{x},\widehat{\mathbf{x}})>D]\leq\epsilon\) where \(\widehat{\mathbf{x}}=g^{D}(f^{E}(\mathbf{z}))\)._
**(3)** _If \((f^{E},g^{D})\) is an \((M,D)\)-noisy lossy source code on \((\mathcal{X}^{n},\mathcal{Z}^{n},\mathcal{\widehat{X}}^{n},d^{n})\) such that \(\mathbf{x}^{n}-\mathbf{z}^{n}-\widehat{\mathbf{x}}^{n}\), we say that \((f^{E},g^{D})\) is an \((n,M,D)\)-noisy lossy source code._
**(4)** _If \((f^{E},g^{D})\) is an \((M,D,\epsilon)\)-noisy lossy source code on \((\mathcal{X}^{n},\mathcal{Z}^{n},\mathcal{\widehat{X}}^{n},d^{n})\) such that \(\mathbf{x}^{n}-\mathbf{z}^{n}-\widehat{\mathbf{x}}^{n}\), we say that \((f^{E},g^{D})\) is an \((n,M,D,\epsilon)\)-noisy lossy source code._
We remark the following special case of Definition 1.
**Remark 1**.: _(On Definition 1) In our analysis, we will also consider as a special case the classical (noiseless) lossy source codes subject to similar single-letter and multi-letter distortion measures as in the case of noisy lossy source coding. This means that we will use special cases of Definition 1. For example, for a noiseless lossy source code, Definition 1,_ **(1)**_, will be modified as follows_
* _we say that a lossy source-code_ \((f^{E},g^{D})\) _is an_ \((M,D)\)_-lossy source code on_ \((\mathcal{X},\mathcal{\widehat{X}},d)\) _if_ \(\mathbf{E}[d(\mathbf{x},\widehat{\mathbf{x}})]\leq D\)_, where_ \(\widehat{\mathbf{x}}=g^{D}(f^{E}(\mathbf{x}))\) _(because_ \(\mathbf{x}=\mathbf{z}\)_)._
_Definition 1,_ **(2)**_-_**(4)**_, are modified accordingly._
Using [13, Definition 1], we consider an _f-separable distortion measure_ associated with the remote information source of the setup in Fig. 1 defined as follows
\[d_{f}^{n}(x^{n},\hat{x}^{n})\triangleq f^{-1}\Bigg{(}\frac{1}{n}\sum_{i=1}^{ n}f(d(x_{i},\hat{x}_{i}))\Bigg{)} \tag{3}\]
where \(f(\cdot)\) is a continuous, increasing function on \([0,\infty)\).
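As a small numerical illustration (ours, not taken from the paper), the snippet below evaluates (3) for the Hamming per-letter distortion and the example choice \(f(d)=e^{\rho d}\); the function names are hypothetical.

```python
# f-separable distortion of Eq. (3): f^{-1}( (1/n) * sum_i f(d(x_i, xhat_i)) ).
import numpy as np

def f_separable_distortion(x, x_hat, f, f_inv, d):
    vals = np.array([f(d(xi, xhi)) for xi, xhi in zip(x, x_hat)])
    return f_inv(vals.mean())

rho = 2.0
hamming = lambda a, b: 0.0 if a == b else 1.0
f = lambda v: np.exp(rho * v)          # continuous, increasing on [0, inf)
f_inv = lambda v: np.log(v) / rho

x, x_hat = [0, 1, 1, 0, 1], [0, 1, 0, 0, 0]
print(f_separable_distortion(x, x_hat, f, f_inv, hamming))   # about 0.63 > arithmetic mean 0.4
```

With a convex increasing \(f\) such as this one, letters with large distortion dominate the \(f\)-mean, so the resulting value exceeds the arithmetic mean obtained when \(f\) is the identity.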
In the sequel, we give the definitions of indirect and direct (or classical) RDFs under \(f\)-separable distortion measures. To do it, we need the following definition of achievability.
**Definition 2**.: _(Achievability) Suppose that a sequence of distortion measures \(\{d^{n}:\ n=1,2,\ldots\}\) on \((\mathcal{X}^{n},\mathcal{\widehat{X}}^{n})\) is given, such that \(\mathbf{x}^{n}-\mathbf{z}^{n}-\widehat{\mathbf{x}}^{n}\). Then, we define the following statements._
**(1)** _The rate distortion tuple \((R,D)\) is indirectly achievable if there exists a sequence \((n,M_{n},D^{n})\)-noisy lossy source codes such that \(\limsup_{n\rightarrow\infty}\frac{1}{n}\log M_{n}\leq R\), \(\limsup_{n\rightarrow\infty}D^{n}\leq D\)._
**(2)** _The rate distortion tuple \((R,D)\) is indirectly and excess distortion achievable if for any \(\gamma>0\) there exists a sequence \((n,M_{n},D+\gamma,\epsilon_{n})\)-noisy lossy source codes such that \(\limsup_{n\rightarrow\infty}\frac{1}{n}\log M_{n}\leq R\), \(\limsup_{n\rightarrow\infty}\epsilon_{n}=0\), where \(\epsilon_{n}\) denotes the decoding error probability, i.e., \(\epsilon_{n}=\mathbf{P}\left[\mathbf{x}^{n}\neq g^{D}(f^{E}(\mathbf{z}^{n}))\right]\)._
If we assume sequences of noiseless lossy source codes, we say that a rate distortion tuple \((R,D)\) is directly (and excess distortion) achievable in analogous way to Definition 2, with \(\mathcal{X}^{n}=\mathcal{Z}^{n}\). This means that the sequence of distortion measures \(\{d^{n}:n=1,2,\ldots\}\) can be defined either on \((\mathcal{Z}^{n},\mathcal{\widehat{X}}^{n})\) or on \((\mathcal{X}^{n},\mathcal{\widehat{X}}^{n})\).
**Definition 3**.: _(iRDF) Given a single-letter distortion measure \(d\colon\mathcal{X}\times\mathcal{\widehat{X}}\rightarrow[0,\infty)\) and a continuous, increasing function \(f\) on \([0,\infty)\), let \(\{d_{f}^{n}:\ n=1,2,\ldots\}\) be a sequence of \(f\)-separable distortion measures. Then,_
\[\mathcal{I}_{f,d}(D)=\inf\{R:(R,D)\text{ is indirectly achievable}\} \tag{4}\]
_and \(\mathcal{\widehat{I}}_{f,d}(D)=\inf\bigl{\{}R:(R,D)\text{ is indirectly and excess distortion achievable}\bigr{\}}\). If \(f\) is the identity function, then we have a sequence of separable distortion measures; in this case we omit the subscript \(f\) and write \(\mathcal{I}_{d}(D)\) and \(\mathcal{\widehat{I}}_{d}(D)\)._
**Definition 4**.: _(Direct RDF) Given a single-letter distortion measure \(d\colon\mathcal{Z}\times\mathcal{\widehat{X}}\to[0,\infty)\) and a continuous, increasing function \(f\) on \([0,\infty)\), let \(\{d_{f}^{n}:\ n=1,2,\ldots\}\) be a sequence of \(f\)-separable distortion measures. Then,_
\[\mathcal{R}_{f,d}(D)=\inf\{R:(R,D)\text{ is directly achievable}\} \tag{5}\]
_and \(\mathcal{\widehat{R}}_{f,d}(D)=\inf\{R\): (R, D) is directly and excess distortion achievable\(\}\). If \(f\) is the identity function, we omit the subscript \(f\) and write \(\mathcal{R}_{d}(D)\) and \(\mathcal{\widehat{R}}_{d}(D)\)._
We give the following remark for the previous two Definitions.
Fig. 1: System model.
**Remark 2**.: _(On Definitions 3, 4) In this work, our goal is to characterize the \(f\)-separable iRDFs \(\mathcal{I}_{d,f}(D)\) and \(\widehat{\mathcal{I}}_{d,f}(D)\) for a given distortion measure \(d(\cdot,\cdot)\) and a function \(f(\cdot)\). In addition to the \(f\)-separable iRDFs, we consider the following three special cases: (1) separable RDFs \(\mathcal{R}_{d}(D)\) and \(\widehat{\mathcal{R}}_{d}(D)\), (2) separable iRDFs \(\mathcal{I}_{d}(D)\) and \(\widehat{\mathcal{I}}_{d}(D)\), and (3) \(f\)-separable RDFs \(\mathcal{R}_{d,f}(D)\) and \(\widehat{\mathcal{R}}_{d,f}(D)\). To state our results, we compare these different classes of RDFs to each other. While the iRDFs are defined over some space \((\mathcal{X},\mathcal{Z},\widehat{\mathcal{X}},d)\), it is possible to generate modified direct RDFs from iRDFs, in which case these are defined over the space \((\mathcal{Z},\widehat{\mathcal{X}},\hat{d})\), where \(\hat{d}\colon\mathcal{Z}\times\widehat{\mathcal{X}}\to[0,\infty)\) is an amended distortion measure. In general, the underlying space for the direct RDFs should be clear from context. For example, \(\mathcal{R}_{d}(D)\) refers to an RDF on \((\mathcal{X},\widehat{\mathcal{X}},d)\), while \(\mathcal{R}_{\widehat{d}}(D)\) refers to an RDF on \((\mathcal{Z},\widehat{\mathcal{X}},\hat{d})\)._
## III Prior Work
Next, we discuss more extensively some prior results that will be used in our main results.
### _RDF under Average and Excess Constraints_
For i.i.d sources with finite alphabets \((\mathcal{X},\widehat{\mathcal{X}})\) and bounded distortion measure \(d\), the RDF is given by
\[\mathcal{R}_{d}(D)=\inf_{q(\widehat{x}|x)\colon\,\mathbf{E}[d( \mathbf{x},\widehat{\mathbf{x}})]\leq D}I(\mathbf{x};\widehat{\mathbf{x}}).\]
See e.g., [18, Theorem 10.2.1] and [19, Theorem 5.2.1]. Moreover, we know that for stationary ergodic sources with a bounded distortion measure,
\[\mathcal{R}_{d}(D)=\widehat{\mathcal{R}}_{d}(D). \tag{6}\]
That is, the RDF is the same under average and excess distortion constraints [19, Theorem 5.9.1]. We also know that for stationary ergodic sources \(\widehat{\mathcal{R}}_{d}(D)\) satisfies the so-called _strong converse_[8, 20]. Finally, the second order asymptotic expansion of \(\widehat{\mathcal{R}}_{d}(D)\) is given as well, see e.g., [21, 22], but this type of analysis is beyond the scope of the present paper.
### _iRDF_
For i.i.d sources with finite alphabets \((\mathcal{X},\mathcal{Z},\widehat{\mathcal{X}})\) and bounded distortion measure \(d\), the iRDF is given by
\[\mathcal{I}_{d}(D) =\inf_{\begin{subarray}{c}q(\widehat{x}|z)\colon\\ \mathbf{E}[d(\mathbf{x},\widehat{\mathbf{x}})]\leq D\end{subarray}}I(\mathbf{ z};\widehat{\mathbf{x}})\] \[\stackrel{{(a)}}{{=}}\inf_{\begin{subarray}{c}q( \widehat{x}|z)\colon\\ \mathbf{E}[\widehat{d}(\mathbf{z},\widehat{\mathbf{x}})]\leq D\end{subarray}}I( \mathbf{z};\widehat{\mathbf{x}})\equiv\mathcal{R}_{\widehat{d}}(D) \tag{7}\]
where \((a)\) follows from [2] (see also Remark 2) and \(\mathcal{R}_{\widehat{d}}(D)\) is the direct RDF for \((\mathcal{X},\mathcal{Z})\) with the amended distortion given by \(\hat{d}(z,\widehat{x})=\sum_{\mathcal{X}}p(x|z)d(x,\widehat{x})\). In other words, the indirect rate distortion problem reduces to a direct rate distortion problem with a modified per-letter distortion measure [2, 5, 6, 8]. Moreover, for i.i.d sources, the iRDF is the same under average and excess distortion constraints
\[\mathcal{I}_{d}(D)=\widehat{\mathcal{I}}_{d}(D) \tag{8}\]
and the strong converse also holds [8]. Finally, for this problem, the second-order asymptotic analysis has been addressed in [8] where it was shown that the equivalence between direct and indirect problems no longer holds in the second-order (dispersion) sense.
### _f-Separable RDF_
Similar equivalence results hold for \(f\)-separable RDFs. Specifically, for i.i.d sources
\[\mathcal{R}_{f,d}(D)=\mathcal{R}_{d}(f(D))=\inf_{\begin{subarray}{c}q( \widehat{x}|x)\\ \mathbf{E}[\widehat{d}(\mathbf{x},\widehat{\mathbf{x}})]\leq f(D)\end{subarray}}I (\mathbf{x};\widehat{\mathbf{x}}) \tag{9}\]
where \(\mathcal{R}_{\widehat{d}}(\cdot)\) is the separable RDF for \((\mathcal{X},\widehat{\mathcal{X}})\) with the amended distortion given by \(\bar{d}(x,\widehat{x})=f(d(x,\widehat{x}))\), see [13]. More generally, it is shown in [13] that for the \(f\)-separable rate distortion problem
\[\widehat{\mathcal{R}}_{f,d}(D)=\widehat{\mathcal{R}}_{\bar{d}}(f(D)). \tag{10}\]
That is, under excess distortion criterion, the \(f\)-separable RDF reduces to the classical separable case without any assumption on the underlying source. In fact for stationary ergodic sources, this result extends to both average and excess distortion criteria under some regularity assumptions (see [13, Theorem 1]),
\[\mathcal{R}_{f,d}(D)=\widehat{\mathcal{R}}_{f,d}(D). \tag{11}\]
We remark that the generalizations of the classical rate-distortion problem to indirect and \(f\)-separable rate distortion problems have intriguing parallels. Both generalizations could be expressed in terms of a classical amended rate distortion problem. The same insight holds when we apply both generalizations simultaneously. As we will see next, the resulting rate-distortion function could be expressed in terms of the classical amended rate distortion problem.
## IV Single-letter characterization of the operational rates for i.i.d sources
In this section, we characterize the \(f\)-separable iRDFs for the setup in Fig. 1 for i.i.d sources. Specifically, our main result states that for i.i.d sources over finite alphabets (under mild regularity assumptions) we have that
\[\mathcal{I}_{f,d}(D)=\mathcal{R}_{\bar{d}}(f(D)) \tag{12}\]
where \(\mathcal{R}_{\bar{d}}(D)\) is the RDF for \((\mathcal{X},\mathcal{Z},\tilde{d})\) with the amended distortion given by \(\tilde{d}(z,\widehat{x})=\sum_{\mathcal{X}}p(x|z)f(d(x,\widehat{x}))\).
First, we give a lemma in which we characterize the \(f\)-separable iRDF under the excess distortion criterion.
**Lemma 1**.: _(\(f\)-separable iRDF under excess distortion) Given a single-letter distortion measure \(d\colon\mathcal{X}\times\widehat{\mathcal{X}}\to[0,\infty)\) and a continuous, increasing function \(f\) on \([0,\infty)\),_
\[\widehat{\mathcal{I}}_{f,d}(D)=\widehat{\mathcal{I}}_{\bar{d}}(f(D)) \tag{13}\]
_where \(\widehat{\mathcal{I}}_{\bar{d}}(f(D))\) is computed subject to the single-letter separable distortion measure \(\bar{d}(x,\widehat{x})=f(d(x,\widehat{x}))\)._
Next, we make assumptions that will be used to derive the single-letter information theoretic characterization to our problem. These assumptions are a counterpart of the assumptions utilized in [13, Theorem 1]; however, due to the difficulty of the indirect rate distortion problem, these assumptions are more restrictive, e.g., we only consider finite alphabets.
**Assumptions**.: _Suppose that the following statements are true._
* _The joint process_ \(\{(\mathbf{x}^{n},\mathbf{z}^{n}):\,n=1,2,\ldots\}\) _is_ \(\mathrm{i.i.d}\) _sequence of random variables, namely,_ \(p(x^{n},z^{n})=p(x)p(z|x)\times\ldots\times p(x)p(z|x)=p(x|z)p(z)\times\ldots \times p(x|z)p(z)\)_, for any_ \(n\)_;_
* _The single-letter distortion_ \(d(\cdot,\cdot)\) _is such that_ \[\max_{(x,\widehat{x})\in\mathcal{X}\times\widehat{\mathcal{X}}}d(x,\widehat{ x})<\infty;\] (14)
* _The alphabets_ \((\mathcal{X},\mathcal{Z},\widehat{\mathcal{X}})\) _are finite._
In particular, assumption (A2) rules out pathological rate-distortion functions for which finite distortion is only possible at full rate.
**Corollary 1**.: _(Consequence of Lemma 1) Under Assumptions_ **(A1)**_-_**(A3)**_, a consequence of Lemma 1 is the following series of equalities_
\[\widehat{\mathcal{I}}_{f,d}(D)=\widehat{\mathcal{I}}_{\bar{d}}(f(D))=\widehat {\mathcal{R}}_{f,\bar{d}}(D)=\widehat{\mathcal{R}}_{\bar{d}}(f(D)) \tag{15}\]
_where_
\[\bar{d}(x,\widehat{x}) =f(d(x,\widehat{x})) \tag{16}\] \[\hat{d}(z,\widehat{x}) =f^{-1}\Bigg{(}\sum_{x}p(x|z)f(d(x,\widehat{x}))\Bigg{)}\] (17) \[\bar{d}(z,\widehat{x}) =\sum_{x}p(x|z)f(d(x,\widehat{x})). \tag{18}\]
Proof:: The first equality, \(\widehat{\mathcal{I}}_{f,d}(D)=\widehat{\mathcal{I}}_{\bar{d}}(f(D))\), is shown in Lemma 1. We have that \(\widehat{\mathcal{I}}_{\bar{d}}(f(D))=\widehat{\mathcal{R}}_{\bar{d}}(f(D))\) from (6), (7) and (8). Finally, \(\widehat{\mathcal{R}}_{f,\bar{d}}(D)=\widehat{\mathcal{R}}_{\bar{d}}(f(D))\) follows from (10). This completes the proof.
Next, we show the same result for the average rate-distortion functions.
**Theorem 1**.: _(\(f\)-separable iRDF under average distortion) Under Assumptions_ **(A1)**_-_**(A3)**_, the \(f\)-separable iRDF under an average distortion constraint satisfies the following equality_
\[\mathcal{I}_{f,d}(D)=\mathcal{I}_{\bar{d}}(f(D)) \tag{19}\]
_where \(\bar{d}(x,\widehat{x})\) is given in (16). In particular, this implies that under Assumptions_ **(A1)**_-_**(A3)**_,_
\[\mathcal{I}_{f,d}(D)=\widehat{\mathcal{I}}_{f,d}(D)=\mathcal{R}_{f,\bar{d}}( D)=\mathcal{R}_{\bar{d}}(f(D)) \tag{20}\]
_and_
\[\mathcal{I}_{f,d}(D)=\inf_{\begin{subarray}{c}q(\widehat{x}|z)\\ \mathbf{E}[\bar{d}(z,\widehat{\mathbf{x}})]\leq f(D)\end{subarray}}I(\mathbf{ z};\widehat{\mathbf{x}}) \tag{21}\]
_where \(\hat{d}(z,\widehat{x})\) and \(\bar{d}(z,\widehat{x})\) are given by (17) and (18), respectively._
Proof:: Equations (20) and (21) follow from (19) and the results in Section III. Namely, we have that \(\mathcal{I}_{\bar{d}}(f(D))=\mathcal{R}_{\bar{d}}(f(D))\) from (7); \(\mathcal{R}_{f,\bar{d}}(D)=\mathcal{R}_{\bar{d}}(f(D))\) from (10) and (11), and \(\mathcal{I}_{\bar{d}}(f(D))=\widehat{\mathcal{I}}_{\bar{d}}(f(D))=\widehat{ \mathcal{I}}_{f,d}(D)\) from (8) and Lemma 1. Likewise, (21) is a consequence of (19) and (7).
It remains to show (19). To do it, we need the following useful lemma.
**Lemma 2**.: _Suppose that the remote source \((\mathbf{x}^{n},\mathbf{z}^{n})\) and the sequence of distortion measures \(\{d^{n}\}_{n=1}^{\infty}\) are such that_
\[\limsup_{n\to\infty}\sup_{(x^{n},\widehat{x}^{n})}d^{n}(x^{n},\widehat{x}^{n} )\leq\Delta<\infty. \tag{22}\]
_Then, if the rate-distortion pair \((R,D)\) is excess distortion achievable, it is achievable under the average distortion._
First note that \(f\)-separable iRDF can be upper bounded as follows:
\[\mathcal{I}_{f,d}(D)\stackrel{{(a)}}{{\leq}}\widehat{\mathcal{I }}_{f,d}(D)\stackrel{{(b)}}{{=}}\widehat{\mathcal{I}}_{\bar{d}}(f(D ))\stackrel{{(c)}}{{=}}\mathcal{I}_{\bar{d}}(f(D)) \tag{23}\]
where \((a)\) is a consequence of Assumption **(A2)** and Lemma 2; \((b)\) follows from Lemma 1; \((c)\) follows from the equivalence between excess and average iRDF, see (8).
The other direction,
\[\mathcal{I}_{f,d}(D)\geq\mathcal{I}_{\bar{d}}(f(D)) \tag{24}\]
is a consequence of the strong converse by [8]. This completes the proof.
One pleasing consequence of Theorem 1 is the following corollary.
**Corollary 2**.: _(Implicit solution of \(\mathcal{I}_{\bar{d}}(f(D))\)) The characterization in (21) via (19) admits the following implicit solution to its minimizer_
\[p^{*}(\widehat{x}|z)=\frac{e^{s\bar{d}(z,\widehat{x})}p^{*}(\widehat{x})}{ \sum_{\widehat{x}}e^{s\bar{d}(z,\widehat{x})}p^{*}(\widehat{x})}, \tag{25}\]
_where \(s<0\) is the Lagrange multiplier associated with the amended distortion penalty \(\mathbf{E}[\bar{d}(z,\widehat{x})]\leq f(D)\) and \(p^{*}(\widehat{x})=\sum_{z}q^{*}(\widehat{x}|z)p(z)\) is the \(\widehat{\mathcal{X}}\)-marginal of the output i.i.d process \(\widehat{\mathbf{x}}^{n}\). Moreover, the optimal parametric solution of (21) via (20) when \(\mathcal{I}_{f,d}(D)>0\) is given by_
\[\mathcal{I}_{f,d}(D^{*})=sf(D^{*})-\sum_{z}p(z)\log\Bigg{(}\sum_{\widehat{ x}}e^{s\bar{d}(z,\widehat{x})}p^{*}(\widehat{x})\Bigg{)}. \tag{26}\]
By taking \(p(z|x)\) to be a noiseless channel, Corollary 2 gives us an implicit solution for \(\mathcal{R}_{f,d}(D)\) which was suggested in [13].
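As mentioned in Section I, the implicit solution of Corollary 2 can serve as the basis for Blahut-Arimoto-type algorithms. The following is a minimal sketch of such a fixed-point iteration for a fixed slope \(s<0\); the amended distortion follows (18), but the iteration schedule, stopping rule, and all variable names are our own assumptions and are not specified in the paper.

```python
# Blahut-Arimoto-style iteration on the fixed point (25): alternately update the
# output marginal p*(xhat) and the conditional p*(xhat|z), then report the rate
# I(z; xhat) (in nats) and the average amended distortion E[d_bar(z, xhat)].
import numpy as np

def amended_distortion(p_x_given_z, d, f):
    """d_bar(z, xhat) = sum_x p(x|z) f(d(x, xhat)), Eq. (18).
    p_x_given_z[x, z] = p(x|z); d[x, xhat] is the per-letter distortion; f is vectorized."""
    return p_x_given_z.T @ f(d)

def ba_iteration(p_z, d_bar, s, n_iter=500):
    n_xhat = d_bar.shape[1]
    q_xhat = np.full(n_xhat, 1.0 / n_xhat)              # output marginal p*(xhat)
    for _ in range(n_iter):
        w = q_xhat[None, :] * np.exp(s * d_bar)         # unnormalized p*(xhat|z), Eq. (25)
        q_xhat_given_z = w / w.sum(axis=1, keepdims=True)
        q_xhat = p_z @ q_xhat_given_z                    # marginal update
    eps = 1e-300
    rate = np.sum(p_z[:, None] * q_xhat_given_z
                  * np.log((q_xhat_given_z + eps) / (q_xhat[None, :] + eps)))
    dist = np.sum(p_z[:, None] * q_xhat_given_z * d_bar)
    return rate, dist

# Usage: rate, dist = ba_iteration(p_z, amended_distortion(p_x_given_z, d, f), s=-2.0)
```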
## V Examples
In what follows, we give two examples to demonstrate the impact of \(f\)-separable distortion measures to a popular class of finite alphabet sources.
**Example 1**.: _(Binary memoryless sources) Let the joint process \((\mathbf{x}^{n},\mathbf{z}^{n})\) form an i.i.d sequence of RVs such that
\(\mathcal{X}=\mathcal{Z}=\widehat{\mathcal{X}}=\{0,1\}\), furnished with the classical single-letter Hamming distortion, i.e.,_
\[d(x,\widehat{x})=\begin{cases}0,&\text{ if }x=\widehat{x}\\ 1&\text{ if }x\neq\widehat{x}.\end{cases} \tag{27}\]
_Moreover, let \(\mathbf{x}_{i}\sim Bernoulli(\frac{1}{2})\) and a binary memoryless channel that induces a transition probability of the form_
\[p(z|x)=\begin{bmatrix}1-\beta&\beta\\ \beta&1-\beta\end{bmatrix}\!,\ \ \beta\in\bigg{[}0,\frac{1}{2}\bigg{)}. \tag{28}\]
Using the above input data, we obtain the following theorem.
**Theorem 2**.: _(Closed-form solution) For the previous inputs and for any continuous, increasing function \(f(\cdot)\), we obtain_
\[\mathcal{I}_{f,d}(D)=\mathcal{I}_{d}(f(D))=\] \[\bigg{[}1-h_{b}\bigg{(}\frac{f(D)-(1-\beta)f(0)-\beta f(1)}{(1- \beta)f(1)+\beta f(0)-(1-\beta)f(0)-\beta f(1)}\bigg{)}\bigg{]}^{+} \tag{29}\]
_where \([\cdot]^{+}=\max\{0,\cdot\}\), \(f(D)\in\Big{[}(1-\beta)f(0)+\beta f(1),\frac{f(0)+f(1)}{2}\Big{]}\) and \(h_{b}(\cdot)\) denotes the binary entropy function._
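To see where the endpoints of the admissible interval for \(f(D)\) come from, the following short computation (not part of the original statement) evaluates the amended distortion (18) for this example; it uses the Hamming distortion (27), the channel (28), and \(\mathbf{x}\sim Bernoulli(\frac{1}{2})\), which together give \(p(\mathbf{x}\neq\mathbf{z}\,|\,\mathbf{z})=\beta\):

\[\bar{d}(z,\widehat{x})=\sum_{x}p(x|z)f(d(x,\widehat{x}))=\begin{cases}(1-\beta)f(0)+\beta f(1),&\text{ if }\widehat{x}=z\\ (1-\beta)f(1)+\beta f(0),&\text{ if }\widehat{x}\neq z.\end{cases}\]

Always reproducing \(\widehat{x}=z\) attains the left endpoint \((1-\beta)f(0)+\beta f(1)\), averaging the two values (the zero-rate point, since \(\mathbf{z}\) is uniform) gives the right endpoint \(\frac{f(0)+f(1)}{2}\), and these are exactly the quantities appearing in the numerator and denominator of (29).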
In Fig. 2 we illustrate some plots of (29) for various functions \(f(\cdot)\) and different distortion levels \(D\). It should be noted that, due to the nature of the indirect rate distortion problem compared to the classical rate distortion problem, there are different minimum distortion thresholds for which the curves are well-defined. In particular, when the function \(f\) is exponential, with \(\beta=0.01\) and \(\rho=9.2\), Fig. 2 demonstrates that the \(f\)-separable iRDF curve is non-convex, monotonic and well-defined for \(D\in(D^{\mathrm{exp}}_{\min},D^{\mathrm{exp}}_{\max}]=\Big{(}\frac{1}{\rho} \log(1-\beta+\beta\exp(\rho)),\frac{1}{\rho}\log\Big{(}\frac{1+\exp(\rho)}{2 }\Big{)}\Big{]}\). Similarly, if the function \(f\) is a third-order polynomial with \(\beta=0.15\) and \(a=0.4\), or quadratic with \(\beta=0.001\), then from Fig. 2 we observe that \(\mathcal{I}_{f,d}(D)\) is again non-convex, monotonic and well-defined for \(D\in\Big{(}D^{\mathrm{pol}}_{\min},D^{\mathrm{pol}}_{\max}\Big{]}=\Big{(} \sqrt[3]{(1-a)^{3}\beta-a^{3}(1-\beta)}+a\), \(\sqrt[3]{\frac{(1-a)^{3}-a^{3}}{2}}+a\Big{]}\) and for \(D\in(D^{qua}_{\min},D^{qua}_{\max}]=\Big{(}\sqrt{\beta},\sqrt{\frac{1}{2}} \Big{]}\), respectively. Clearly, if in Fig. 2 we consider the function \(f\) to be the identity map, then, as Fig. 2 demonstrates, we obtain \(\mathcal{I}_{f,d}(D)=\mathcal{I}_{d}(f(D))=\mathcal{R}_{d}(D)\) and the closed-form solution of (29) recovers the solution of [3, Exercise 3.8], i.e.,
\[\mathcal{I}_{f,d}(D)=\bigg{[}1-h_{b}\bigg{(}\frac{D-\beta}{1-2\beta}\bigg{)} \bigg{]}^{+}\text{if }D\in[\beta,\tfrac{1}{2}]. \tag{30}\]
This example further emphasizes the impact of the \(f\)-separable (non-linear) distortion constraint on the indirect rate distortion curve, as opposed to classical separable (linear) distortions, for which the indirect rate-distortion curve is always convex.
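For readers who wish to reproduce curves like those in Fig. 2, the snippet below (ours) evaluates the closed form (29) numerically; it assumes the exponential choice \(f(D)=e^{\rho D}\), which is consistent with the interval endpoints quoted above, and binary logarithms, so that the leading \(1\) in (29) corresponds to one bit.

```python
# Numerical evaluation of the closed form (29) for the binary example.
import numpy as np

def h_b(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def iRDF_binary(D, beta, f):
    f0, f1, fD = f(0.0), f(1.0), f(D)
    num = fD - (1 - beta) * f0 - beta * f1
    den = (1 - beta) * f1 + beta * f0 - (1 - beta) * f0 - beta * f1
    return max(0.0, 1.0 - h_b(num / den))

beta, rho = 0.01, 9.2                  # exponential case of Fig. 2, assumed f(D) = exp(rho*D)
f = lambda v: np.exp(rho * v)
D_min = np.log(1 - beta + beta * np.exp(rho)) / rho
D_max = np.log((1 + np.exp(rho)) / 2) / rho
for D in np.linspace(D_min + 1e-3, D_max, 5):
    print(round(D, 3), round(iRDF_binary(D, beta, f), 3))
```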
_Special case._ If in Example 1 we assume that in (28) we have \(\beta=0\), then our problem recovers the solution of [13, eq. (44)] for \(\mathbf{x}\sim Bernoulli(\frac{1}{2})\).
**Example 2**.: _Let the joint process \((\mathbf{x}^{n},\mathbf{z}^{n})\) form an i.i.d sequence of RVs such that \(\mathcal{X}=\widehat{\mathcal{X}}=\{0,1\}\), \(\mathcal{Z}=\{0,e,1\}\) furnished with the Hamming distortion in (27). Moreover, let \(\mathbf{x}_{i}\sim Bernoulli(\frac{1}{2})\) and a binary memoryless erasure channel that induces a transition probability of the form_
\[p(z|x)=\begin{bmatrix}1-\delta&0\\ \delta&\delta\\ 0&1-\delta\end{bmatrix}\!,\ \ \delta\in[0,1]. \tag{31}\]
Using the above input data, we obtain the following theorem.
**Theorem 3**.: _(Closed-form solution) For the previous input data, and for any continuous, increasing function \(f(\cdot)\) we obtain_
\[\mathcal{I}_{f,d}(D)=\mathcal{I}_{d}(f(D))=\] \[\bigg{[}(1-\delta)\bigg{(}\log(2)-h_{b}\bigg{(}\frac{f(D)-\frac{ \delta}{2}f(1)-f(0)(1-\frac{\delta}{2})}{(1-\delta)(f(1)-f(0))}\bigg{)}\bigg{)} \bigg{]}^{+} \tag{32}\]
_where \(f(D)\in\Big{[}(1-\frac{\delta}{2})f(0)+\frac{\delta}{2}f(1),\frac{f(1)+f(0)}{ 2}\Big{]}\)._
_Special case._ If the chosen \(f\)-separable distortion measure is additive (the function \(f\) is the identity map), then the closed-form solution of (32) recovers the solution of [8, Eq. (76)], which in turn admits the closed-form solution
\[\mathcal{I}_{f,d}(D)=\Bigg{[}(1-\delta)\bigg{(}\log(2)-h_{b}\bigg{(}\frac{D- \frac{\delta}{2}}{1-\delta}\bigg{)}\bigg{)}\bigg{]}^{+} \tag{33}\]
where \(D\in\big{[}\frac{\delta}{2},\frac{1}{2}\big{]}\).
## Acknowledgement
The work of P. A. Stavrou and M. Kountouris has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation programme (Grant agreement No. 101003431).
Fig. 2: Computation of \(\mathcal{I}_{f,d}(D)\) for various functions \(f(\cdot)\) and single-letter Hamming distance. |
2306.11307 | Transforming Graphs for Enhanced Attribute Clustering: An Innovative
Graph Transformer-Based Method | Graph Representation Learning (GRL) is an influential methodology, enabling a
more profound understanding of graph-structured data and aiding graph
clustering, a critical task across various domains. The recent incursion of
attention mechanisms, originally an artifact of Natural Language Processing
(NLP), into the realm of graph learning has spearheaded a notable shift in
research trends. Consequently, Graph Attention Networks (GATs) and Graph
Attention Auto-Encoders have emerged as preferred tools for graph clustering
tasks. Yet, these methods primarily employ a local attention mechanism, thereby
curbing their capacity to apprehend the intricate global dependencies between
nodes within graphs. Addressing these impediments, this study introduces an
innovative method known as the Graph Transformer Auto-Encoder for Graph
Clustering (GTAGC). By melding the Graph Auto-Encoder with the Graph
Transformer, GTAGC is adept at capturing global dependencies between nodes.
This integration amplifies the graph representation and surmounts the
constraints posed by the local attention mechanism. The architecture of GTAGC
encompasses graph embedding, integration of the Graph Transformer within the
autoencoder structure, and a clustering component. It strategically alternates
between graph embedding and clustering, thereby tailoring the Graph Transformer
for clustering tasks, whilst preserving the graph's global structural
information. Through extensive experimentation on diverse benchmark datasets,
GTAGC has exhibited superior performance against existing state-of-the-art
graph clustering methodologies. | Shuo Han, Jiacheng Liu, Jiayun Wu, Yinan Chen, Li Tao | 2023-06-20T06:04:03Z | http://arxiv.org/abs/2306.11307v3 | # Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method
###### Abstract
Graph Representation Learning (GRL) is an influential methodology, enabling a more profound understanding of graph-structured data and aiding graph clustering, a critical task across various domains. The recent incursion of attention mechanisms, originally an artifact of Natural Language Processing (NLP), into the realm of graph learning has spearheaded a notable shift in research trends. Consequently, Graph Attention Networks (GATs) and Graph Attention Auto-Encoders have emerged as preferred tools for graph clustering tasks. Yet, these methods primarily employ a local attention mechanism, thereby curbing their capacity to apprehend the intricate global dependencies between nodes within graphs. Addressing these impediments, this study introduces an innovative method known as the Graph Transformer Auto-Encoder for Graph Clustering (GTAGC). By melding the Graph Auto-Encoder with the Graph Transformer, GTAGC is adept at capturing global dependencies between nodes. This integration amplifies the graph representation and surmounts the constraints posed by the local attention mechanism. The architecture of GTAGC encompasses graph embedding, integration of the Graph Transformer within the autoencoder structure, and a clustering component. It strategically alternates between graph embedding and clustering, thereby tailoring the Graph Transformer for clustering tasks, whilst preserving the graph's global structural information. Through extensive experimentation on diverse benchmark datasets, GTAGC has exhibited superior performance against existing state-of-the-art graph clustering methodologies.
## Introduction
Graph representation learning (GRL) is a powerful technique for effectively representing graph-structured data, enabling the extraction of information on graph structure and complex node relationships [24]. GRL has found extensive use in various downstream tasks, including node classification [1], link prediction [1], and graph clustering [1].
Graph clustering [13]is a current area of interest in machine learning, where data is first expressed as a graph and then transformed into a graph partitioning problem [23]. Compared to other clustering methods, graph clustering methods have demonstrated superior performance [25]. These methods can cluster arbitrarily shaped data, overcoming the limitations of traditional clustering methods, which are only effective for clustering convex-shaped data. Graph clustering [26] is an essential task in graph analysis [27], aimed at uncovering the intrinsic structure and interaction patterns of graphs through the clustering analysis of nodes and edges. As an unsupervised learning method, graph clustering algorithms typically rely on a similarity measure to compute the distance or similarity between data points and employ clustering algorithms to group them into distinct clusters.
In recent years, the attention mechanism, originally stemming from natural language processing (NLP), has found burgeoning applications in graph learning. A notable example is the Graph Attention Network (GAT) [28], an avant-garde neural network architecture specifically tailored to handle graph-structured data. By employing masked self-attentional layers, GAT transcends the constraints of preceding methods dependent on graph convolutions or their approximations, thereby facilitating the implicit allocation of diverse weights to distinct nodes within a neighborhood. An extension of this concept, GAT-STC [1], integrates Spatial-Temporal Clustering into the Graph Attention Network, thereby augmenting GNN-based traffic predictions in Intelligent Transportation Systems (ITS) through the incorporation of recent-aware and periodic-aware features. Complementing this, [10] unveil a clustering algorithm that capitalizes on multi-layer features within graph attention networks, effectively redressing the neglect of shallow features prevalent in deep ensemble clustering.
Graph Attention Auto-Encoders synergize the robust capabilities of attention mechanisms and the principles of unsupervised learning integral to auto-encoders. Their primary function involves distilling a significant representation or encoding from a comprehensive dataset. Uniquely tailored to manage graph-structured data, these auto-encoders extend the application scope of traditional auto-encoders to this specialized domain. Recently, research interest has surged in the area of graph clustering [26, 24],
employing Graph Attention Auto-Encoders as a focal tool.
However, both GAT-based and Graph Attention Auto-Encoder-based approaches suffer from a limitation in that they utilize a local attention mechanism Zhao et al. (2022), which may not fully consider the influence of global information on graph clustering tasks. The notion of global information encompasses the comprehensive interconnections and interdependencies that exist among nodes within a graph Li et al. (2021). Disregarding this crucial information may result in suboptimal clustering outcomes Ostroumova Prokhorenkova and Samosvat (2014).
In recent years, the Graph Transformer architecture Yun et al. (2019) has gained increasing attention in graph representation learning. It naturally overcomes several limitations of graph neural networks (GNNs) Scarselli et al. (2008) by avoiding their strict structural induction bias and encoding the graph structure through positional encoding. As a generalization of Transformer Neural Network architectures for arbitrary graphs, Graph Transformer extends the key design principles of Transformers Dwivedi and Bresson (2021) from NLP to graphs in general. The Graph Transformer achieves global attention by employing Laplacian eigenvectors to encode node positions and integrating an attention mechanism that enables nodes to attend to every other node in a given graph Rampasek et al. (2022). This powerful feature allows for comprehensive and accurate analysis of the graph structure, leading to superior performance in a variety of graph-based applications.
Similar to the Graph Attention Network (GAT), the Graph Transformer architecture is mainly applicable to graph classification and node-level classification tasks Min et al. (2022). Its attention mechanism is based on computing the similarity between node features and aggregating features from neighboring nodes to update node features Li et al. (2020). However, this approach is not directly applicable to graph clustering because there is no explicit notion of node similarity or feature aggregation in clustering tasks Jin et al. (2021).
To address the challenges associated with attributed graph clustering, we introduce GTAGC (Graph Transformer Auto-Encoder for Graph Clustering). This innovative approach seamlessly integrates the Graph Transformer into a graph autoencoder framework, thereby enhancing its ability to effectively comprehend global relationships and dependencies among graph nodes. The GTAGC model ingeniously amalgamates graph embedding, graph autoencoder, and Graph Transformer architectures, creating a powerful synergy. This fusion enables the capture of both local and global graph information, thereby yielding superior clustering results. The graph embedding component efficiently reduces the dimensionality of the input graph data. Concurrently, the integration of the Graph Transformer within the autoencoder framework bolsters the modeling of long-range node dependencies, underscoring the profound influence of node interconnections. Moreover, the graph autoencoder component preserves the structural subtleties of the graph within the generated embeddings, thereby further enhancing the effectiveness of clustering.
GTAGC crafts an enhanced representation of the original graph, preserving node structural similarities. By reducing differences between the original and projected graphs, it surpasses traditional graph attention network (GAT) limitations, emphasizing global node interconnections for improved clustering accuracy. Our approach blends graph embedding to discern node structural similarities, later applied to clustering Chen et al. (2022). Transitioning between embedding and clustering, GTAGC adeptly leverages the Graph Transformer, maintaining the graph's structural integrity. The main contributions of this paper are listed as follows.
* We propose a novel graph clustering method called Graph Transformer Auto-Encoder for Graph Clustering (GTAGC), which is designed for goal-oriented graph clustering tasks. By ingeniously amalgamating the Graph Auto-Encoder with the Graph Transformer, the GTAGC successfully harnesses the capability to apprehend global dependencies between nodes. To the best of our knowledge, this is the first method to effectively utilize the Graph Transformer in graph clustering, providing a unique contribution to the field of graph clustering.
* Our proposed method combines the Graph Transformer and graph autoencoder techniques to overcome the Graph Transformer's limitation of not being directly applicable to graph clustering. Specifically, we leverage the strengths of graph autoencoder to alternate between graph embedding and clustering, resulting in a more effective approach to graph clustering.
* Extensive experimental results on several benchmark datasets demonstrate the superiority of the proposed method against the existing state-of-the-art graph clustering methods.
## Related Work
This section briefly reviews key advancements in graph and attribute clustering, emphasizing major contributions and their limitations.
### Graph Embedding
Graph embedding transforms complex data into vectors or tensors, compacting high-dimensional data into dense vectors Cai et al. (2018). Techniques are categorized into matrix decomposition-based and deep learning-based approaches.
In matrix decomposition, Cao et al.'s GraRep model employ a log-transfer probability matrix and Singular Value Decomposition (SVD) Cao et al. (2015), while the HOPE method uses generalized SVD to emphasize asymmetric transitivity in directed networks Ou et al. (2016).
Deep learning-based embeddings include SDNE, combining graph similarities Wang et al. (2016), DNGR, utilizing random walks and deep auto-encoders Cao et al. (2016), VGAE, integrating graph convolutional networks with variational auto-encoders for undirected graph embedding Kipf and Welling (2016), and Li et al.'s SCDMLGE, a semi-supervised model combining deep metric learning with graph embedding Li et al. (2020).
Figure 1: Illustration of the proposed GTAGC. Given a graph G = (V,E), the Laplacian eigenvector matrix is computed based on the graph, which is then utilized for position embedding of the inputs, i.e., the adjacency matrix and the feature matrix. Subsequent to this process, the data is passed through the transformer’s H-head self-attention mechanism. The output is obtained by multiplying the result with the matrix O. After a residual connection and normalization, the output undergoes a Feed-Forward Network (FFN) layer, followed by another residual connection and normalization step. After passing through several identical network layers, the final clustering result is acquired.
### Graph Clustering
Graph clustering groups graph vertices into clusters, focusing on dense connections within clusters and sparse connections between them.
Schaeffer et al. [17] surveyed graph clustering, explaining its definition, methodologies, and evaluation. Zhou et al. [14] explored clustering with structural and attribute similarities, introducing the SA Cluster algorithm. Wang et al. [11] presented a multi-level fusion-based deep graph clustering network, while Guo et al. [12] developed an end-to-end framework combining a variational graph autoencoder with generative models.
Recent innovations include contrastive learning in graph clustering. Techniques like Simple Contrastive Graph Clustering (SCGC) [13] optimize contrastive learning for deep graph clustering. The Hard Sample Aware Network (HSAN) [13] further refines this approach by introducing a holistic similarity measure and a dynamic sample weighting strategy.
## Proposed Method
In this section, we introduce a novel graph clustering method, termed Graph Transformer Auto-Encoder for Graph Clustering (GTAGC). This method integrates the Graph Transformer into a graph autoencoder framework, thereby augmenting its capacity to discern global relationships and dependencies among graph nodes effectively.
### Main Idea
The Graph Transformer Auto-Encoder for Graph Clustering (GTAGC) model, proposed in this study, represents a sophisticated deep learning algorithm meticulously designed for efficient graph clustering. Constituted by two principal components, the GTAGC model integrates a Graph Transformer encoder with a dedicated clustering module, synergizing their functionalities to achieve the targeted clustering objectives.
### Graph Transformer Encoder Module
The Graph Transformer encoder is designed to take the graph as input and embed each node into a low-dimensional space while preserving the graph structure. The clustering module then utilizes the embeddings to group the nodes into clusters.
Before encoding, the Laplacian filter [10] was employed to perform neighbor information aggregation in the following manner:
\[\widetilde{X}=X(I-\widetilde{L})^{t} \tag{1}\]
In this equation, \(\widetilde{L}\) represents the symmetric normalized graph Laplacian matrix, while \(t\) refers to the layer number of the Laplacian filter. Furthermore, \(\widetilde{X}\) represents the smoothed attribute matrix.
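A minimal sketch of this smoothing step is given below; it assumes the usual definition \(\widetilde{L}=I-D^{-1/2}AD^{-1/2}\) of the symmetric normalized Laplacian, interprets the filter \((I-\widetilde{L})^{t}\) as acting along the node dimension of \(X\), and uses a function name of our own choosing.

```python
# Laplacian smoothing of Eq. (1): apply (I - L~)^t = (D^{-1/2} A D^{-1/2})^t to
# the node dimension of the feature matrix X (shape N x D), with A symmetric (N x N).
import numpy as np

def laplacian_smoothing(X, A, t=2):
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # I - L~
    X_tilde = X.copy()
    for _ in range(t):                                        # t layers of the filter
        X_tilde = A_norm @ X_tilde
    return X_tilde
```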
After that, the Graph Transformer encoder is composed of \(L\) Graph Transformer layers, where each layer takes node features \(X\) and the adjacency matrix \(A\) as input and outputs a new set of node features \(Z\). Specifically, the output of the \(l\:th\) layer can be mathematically represented as:
\[\mathbf{Z}^{(l)}=\text{GraphTransformerLayer}(\widetilde{\mathbf{X}}^{(l-1)}, \mathbf{A}) \tag{2}\]
where the GraphTransformerLayer function operates on the node feature matrix produced by layer \(l-1\).
The Graph Transformer encoder consists of several Graph Transformer (GT) layers with a self-attention mechanism and positional encoding. Each GT layer takes as input a feature matrix \(\widetilde{X}\in\mathbb{R}^{N\times D}\), where \(N\) is the number of nodes in the graph and \(D\) is the dimensionality of the input features. The GT layer also takes in an adjacency matrix \(A\in\mathbb{R}^{N\times N}\), which represents the connectivity of the graph, and a binary mask \(M\in\mathbb{R}^{N\times N}\), which masks out the self-connections in the graph.
The GT layer first applies a linear transformation to the input features \(\widetilde{X}\) using a weight matrix \(W\in\mathbb{R}^{D\times F}\), where \(F\) is the number of output features. This results in a new feature matrix \(H\in\mathbb{R}^{N\times F}\):
\[H=\widetilde{X}W \tag{3}\]
The GT layer then computes a global self-attention mechanism on the transformed features \(H\). To do this, it learns two attention weight matrices: \(a_{self}\in\mathbb{R}^{F\times 1}\) and \(a_{local}\in\mathbb{R}^{F\times 1}\), which are used to compute the attention scores for the node itself and its neighboring nodes, respectively. The attention scores for each node are then combined and multiplied with the binary mask \(M\) to obtain a dense attention matrix \(D\in\mathbb{R}^{N\times N}\):
\begin{table}
\begin{tabular}{c l} \hline
**Notation** & **Description** \\ \hline \(X\) & Feature matrix of the graph \\ \(Z\) & Embedded node feature matrix \\ \(L\) & Number of Graph Transformer layers \\ \(\widetilde{X}\) & Smoothed attribute matrix \\ \(\widetilde{L}\) & Symmetric normalized graph Laplacian matrix \\ \(t\) & Layer number of the Laplacian filter \\ \(A\) & Adjacency matrix of the graph \\ \(M\) & Binary mask of the graph \\ \(D\) & Dimensionality of the input features \\ \(F\) & Number of output features \\ \(W\) & Weight matrix of the linear transformation \\ \(H\) & New feature matrix after linear transformation \\ \(a_{global}\) & Global Attention weight matrices \\ \(a_{local}\) & Local Attention weight matrices \\ \(D\) & Dense attention matrix \\ \(A^{\prime}\) & New adjacency matrix \\ \(\alpha^{\prime}_{self}\) & Normalized attention vector \\ \(Q\), \(K\), \(V\) & Matrices of node feature vectors \\ \(d_{k}\) & Dimensions of the key vectors \\ \(Y\) & Output of the clustering layer \\ \(\alpha\) & Hyperparameter controlling the balance \\ & between losses \\ \hline \end{tabular}
\end{table}
Table 1: Summary of Notations
\[D=LeakyReLU(\gamma Ha_{local}^{T}+Ha_{global}^{T})^{T}\odot M \tag{4}\]
where \(\odot\) denotes element-wise multiplication and \(LeakyReLU\) is the LeakyReLU activation function. The coefficient \(\gamma\) determines the trade-off between the contribution from neighborhood attention and global attention mechanisms.
The GT layer then applies the attention matrix \(D\) to the adjacency matrix \(A\) by element-wise multiplication. This results in a new adjacency matrix \(A^{\prime}\in\mathbb{R}^{N\times N}\), where the non-zero entries correspond to the attention scores computed by the self-attention mechanism:
\[A^{\prime}_{i,j}=\begin{cases}A_{i,j}D_{i,j}&\text{if }A_{i,j}>0\\ -9\times 10^{15}&\text{otherwise}\end{cases} \tag{5}\]
The GT layer then applies the self-attention mechanism again, this time to the transformed features \(H\), using a learned weight vector \(a^{\prime}_{self}\in\mathbb{R}^{F\times 1}\) to compute an attention score for each node. The scores are then normalized using the softmax function, yielding \(\alpha^{\prime}_{self}\in\mathbb{R}^{N\times 1}\):
\[\alpha^{\prime}_{self}=Softmax(Ha^{\prime}_{self}) \tag{6}\]
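The computation in Eqs. (3)-(6) can be sketched as below. This is one plausible NumPy reading, not the paper's code: in particular, the way the two attention vectors are broadcast into an \(N\times N\) score matrix in Eq. (4), and the length-\(F\) shape assumed for all three attention vectors (`a_self` stands for the learned vector \(a^{\prime}_{self}\) of Eq. (6)), are assumptions made for illustration.

```
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def gt_attention(X_tilde, A, M, W, a_local, a_global, a_self, gamma=0.75):
    """One plausible reading of Eqs. (3)-(6).

    X_tilde : (N, D) smoothed features    W                         : (D, F)
    A, M    : (N, N) adjacency / mask     a_local, a_global, a_self : (F,)
    """
    H = X_tilde @ W                                   # Eq. (3)

    # Eq. (4): combine local and global scores into a dense N x N matrix.
    # Broadcasting a column of local scores against a row of global scores
    # is an assumption about how two length-F vectors yield an N x N matrix.
    local = (H @ a_local)[:, None]                    # (N, 1)
    glob = (H @ a_global)[None, :]                    # (1, N)
    D_attn = leaky_relu(gamma * local + glob) * M     # (N, N)

    # Eq. (5): keep attention on existing edges, push non-edges to -9e15.
    A_prime = np.where(A > 0, A * D_attn, -9e15)

    # Eq. (6): per-node self-attention scores, normalized with softmax.
    alpha_self = softmax(H @ a_self, axis=0)          # (N,)
    return H, A_prime, alpha_self
```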
```
Input: Feature matrix \(X\in\mathbb{R}^{N\times D}\), Adjacency matrix \(A\in\mathbb{R}^{N\times N}\), Binary mask \(M\in\mathbb{R}^{N\times N}\), \(\alpha\)
Output: Transformed feature matrix \(Z^{(L)}\), Clustering probabilities \(\hat{y}\)
Apply Laplacian filter: \(\widetilde{X}=X(I-\widetilde{L})^{t}\)
for \(l=1\) to \(L\) do
    Apply linear transformation: \(H=\widetilde{X}W\)
    Compute global self-attention mechanism: \(D=LeakyReLU(\gamma Ha_{local}^{T}+Ha_{global}^{T})^{T}\odot M\)
    Update adjacency matrix: \(A^{\prime}_{i,j}=A_{i,j}D_{i,j}\) if \(A_{i,j}>0\), otherwise \(-9\times 10^{15}\)
    Compute self-attention vector: \(\alpha^{\prime}_{self}=Softmax(Ha^{\prime}_{self})\)
    Compute output features: \(H^{\prime}=(\omega_{1}A^{\prime}+\omega_{2}I)\theta H\)
    Apply FFNN with ReLU activation and batch normalization: \(y=BN(\text{W}_{2}FFNN(H^{\prime})+\text{b}_{2})\)
    Apply softmax to output: \(\hat{y}=\text{Softmax}(y)\)
    Compute clustering probabilities: \(Y=\text{ClusteringLayer}(Z^{(L)})\)
    Compute loss: \(L=\text{ReconstructionLoss}+\alpha\cdot\text{ClusteringLoss}\)
end for
return \(Z^{(L)}\), \(\hat{y}\)
```
**Algorithm 1**Graph Transformer Auto-Encoder for Graph Clustering (GTAGC)
Finally, the GT layer computes the final output features \(H^{\prime}\in\mathbb{R}^{N\times F}\) by aggregating the transformed features \(H\) using the normalized attention matrix \(A^{\prime}\) and the normalized attention vector \(a^{\prime}_{self}\):
\[H^{\prime}=(\omega_{1}A^{\prime}+\omega_{2}I)\theta H \tag{7}\]
where \(I\) is the identity matrix and \(\theta\) denotes the learnable parameters of the GT layer. The output of the GT layer is then passed through a feedforward neural network (FFNN) with ReLU activation and batch normalization, which is defined as follows:
\[FFNN(x)=\text{ReLU}(BN(\text{W}_{1}x+\text{b}_{1})) \tag{8}\]
\[y=BN(\text{W}_{2}FFNN(H^{\prime})+\text{b}_{2}) \tag{9}\]
where \(W_{1}\), \(W_{2}\), \(b_{1}\), and \(b_{2}\) are the weight matrices and bias vectors of the FFNN, respectively, and BN denotes batch normalization. The final output of the model is obtained by applying a softmax function to y:
\[\hat{y}=\text{Softmax}(y) \tag{10}\]
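Putting Eqs. (7)-(10) together, a single forward pass of the output stage can be sketched as follows. The sketch is illustrative only: the parameter shapes, the simplified batch normalization (without learnable scale and shift), the reading of \((\omega_{1}A^{\prime}+\omega_{2}I)\theta H\) as \((\omega_{1}A^{\prime}+\omega_{2}I)H\theta\), and the row-softmax applied to \(A^{\prime}\) so that the \(-9\times 10^{15}\) mask entries vanish are all assumptions rather than details prescribed by the paper.

```
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def batch_norm(x, eps=1e-5):
    # Simplified inference-style normalization over the node dimension;
    # learnable scale/shift are omitted for brevity (an assumption).
    return (x - x.mean(axis=0, keepdims=True)) / np.sqrt(x.var(axis=0, keepdims=True) + eps)

def gt_layer_output(H, A_prime, theta, W1, b1, W2, b2, w1=0.5, w2=0.5):
    """Sketch of Eqs. (7)-(10): aggregation, FFNN, and softmax output."""
    N = H.shape[0]
    A_norm = softmax(A_prime, axis=1)                        # masked (-9e15) entries vanish
    H_prime = (w1 * A_norm + w2 * np.eye(N)) @ H @ theta     # Eq. (7)
    ffnn_out = relu(batch_norm(H_prime @ W1 + b1))           # Eq. (8)
    y = batch_norm(ffnn_out @ W2 + b2)                       # Eq. (9)
    return softmax(y, axis=1)                                # Eq. (10)
```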
The hyperparameters of the model, including the number of GT layers, the size of the hidden state, the learning rate, and the dropout rate, are selected by grid search on a validation set.
The Graph Transformer layer comprises a self-attentive mechanism and a feedforward neural network. The self-attentive mechanism calculates each node's attention coefficient based on each node's features and its neighbors' features. Then, the feedforward neural network performs a non-linear transformation of the weighted sum of the neighbors' features to produce the features of the output nodes. The self-attention mechanism can be expressed as follows:
\[\text{Attention}(Q,K,V)=\text{Softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}} \right)V \tag{11}\]
where \(Q\), \(K\) and \(V\) are matrices of node feature vectors and \(d_{k}\) are the dimensions of the key vectors. After \(L\) Graph Transformer layers, the output node features \(Z^{(L)}\) are fed into the clustering module to produce the final clustering results.
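Eq. (11) is the standard scaled dot-product attention; a minimal NumPy version is shown below for reference (the toy input shapes are illustrative).

```
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Eq. (11): Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores = scores - scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

# toy usage: 5 nodes with 8-dimensional features attending to each other
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(5, 8))
out = scaled_dot_product_attention(Q, K, V)                # shape (5, 8)
```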
### Clustering Module
Inspired by DAEGC [20], our clustering module operates within an unsupervised learning paradigm, translating node features into a probability set that indicates clustering propensities. Without reliance on ground truth labels, a binary cross-entropy loss function is used to minimize misclassifications, enhancing clustering precision. The output of the clustering layer is formalized as follows:
\[Y=\text{ClusteringLayer}(Z^{(L)}) \tag{12}\]
where \(Y\) signifies the output resulting from the clustering layer function.
The defining equation for the clustering layer function is:
\[\text{ClusteringLayer}(Z)=\frac{\exp(ZW)}{\sum_{j=1}^{k}\exp(ZW_{j})} \tag{13}\]
In the given equation, \(Z\) is a matrix of embedded node features, and \(W\) represents a matrix of weights. The clustering layer's output provides clustering probabilities for each node, indicating potential association with different clusters.
The clustering module's loss function is a weighted combination of reconstruction loss, measuring divergence between the input and its reconstruction, and clustering loss, penalizing discrepancies in predicted clustering. The total loss, denoted by \(L\), is:
\[L=\text{ReconstructionLoss}+\alpha\cdot\text{ClusteringLoss} \tag{14}\]
Here, \(\alpha\) is a non-negative hyperparameter controlling the balance between the two losses, optimizing clustering performance during training.
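A minimal sketch of Eqs. (13) and (14) follows. The specific reconstruction and clustering losses used below (squared error against a reconstructed adjacency, cross-entropy against a target assignment \(P\)) are illustrative stand-ins, since the paper does not spell out their exact form here.

```
import numpy as np

def clustering_layer(Z, W):
    """Eq. (13): softmax of ZW over the k clusters -> soft assignments (N, k)."""
    logits = Z @ W
    logits = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def total_loss(A, A_recon, Y, P, alpha=1.0, eps=1e-12):
    """Eq. (14): L = ReconstructionLoss + alpha * ClusteringLoss.

    Illustrative choices (assumptions): squared error between the adjacency
    A and its reconstruction, and cross-entropy between the soft assignments
    Y and a target distribution P.
    """
    reconstruction = np.mean((A - A_recon) ** 2)
    clustering = -np.mean(np.sum(P * np.log(Y + eps), axis=1))
    return reconstruction + alpha * clustering
```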
## Experiments
In this section, we conducted a series of comprehensive experiments on three widely used datasets [14], namely Citeseer, Cora, and Pubmed, to evaluate the effectiveness of the proposed method.
### Experimental Settings
The Graph Transformational Attentional Graph Clustering (GTAGC) model was trained on an NVIDIA 3090 GPU using PyTorch, guided by a framework inspired by the Deep Attentional Embedded Graph Clustering (DAEGC) model [22]. The training was end-to-end and capped at 200 epochs to align with the DAEGC baseline.
The model's input included a normalized adjacency matrix and a standardized feature matrix for the nodes. The training utilized the Adam Optimizer with a learning rate of 0.001, and early stopping was implemented with patience of 80 epochs to prevent overfitting. If no improvement in validation loss was observed, the best-performing model parameters were restored.
### Performance Comparison
To assess the effectiveness and robustness of the proposed Graph Transformer Auto-Encoder for Graph Clustering (GTAGC), a comparison was conducted against a range of established methods. These methods include K-means [15], Spectral Clustering [23], GraphEncoder [24], TADW [25], GAE [26], VGAE [27], ARVGE [28], ARGE [29], DAEGC [28], S\({}^{2}\)GC [26] and GC-VAE [25]. We adopt the code from the original papers for all baseline methods, setting hyperparameters based on author recommendations or applying our own tuning if guidance is lacking. Acknowledging that hardware and operating environment may influence experimental results, we compare our reproduced outcomes with those from the original study, subsequently selecting the optimal values.
The performance of these models was gauged across three distinct datasets: Citeseer, Cora, and Pubmed. Four performance metrics were employed to measure the quality of clustering - accuracy (ACC) [1], normalized mutual information (NMI) [1], F-score [10], and adjusted rand index (ARI) [12].
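These four metrics can be computed as sketched below. Clustering accuracy uses Hungarian matching between predicted clusters and ground-truth labels, which is the usual convention; reporting the F-score as macro-F1 after the same best mapping is an assumption, since the exact variant is not specified here.

```
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import (adjusted_rand_score, f1_score,
                             normalized_mutual_info_score)

def clustering_accuracy(y_true, y_pred):
    """Best-match accuracy: map predicted cluster ids to labels with the
    Hungarian algorithm, then count correct assignments."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                              # co-occurrence counts
    row, col = linear_sum_assignment(-cost)          # maximize matched pairs
    mapping = dict(zip(row, col))
    remapped = np.array([mapping[p] for p in y_pred])
    return (remapped == y_true).mean(), remapped

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]                          # same clusters, permuted ids
acc, remapped = clustering_accuracy(y_true, y_pred)
nmi = normalized_mutual_info_score(y_true, y_pred)
ari = adjusted_rand_score(y_true, y_pred)
f = f1_score(y_true, remapped, average="macro")      # macro-F1 after best mapping
```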
### Main Results
We present a comprehensive analysis of the performance of GTAGC on three benchmark datasets, namely Cora, Citeseer, and Pubmed. The results are presented in Table 2, providing detailed insights into the efficacy of GTAGC compared to state-of-the-art graph clustering methods. The values highlighted in red and blue correspond to the highest and second-highest outcomes, respectively.
The presented results shed light on GTAGC's distinguished performance, showcasing its robustness and effectiveness across three widely adopted datasets. Moreover, its commendable results under diverse evaluation metrics, namely ACC, NMI, F-score, and ARI, underline its applicability and prowess in different analytical settings.
On the Citeseer dataset, GTAGC excels, standing out as the top performer across every evaluation metric. This demonstrates its ability to adapt to and excel in different performance aspects, providing holistic and effective solutions for graph clustering. These results are particularly notable given the competitiveness of other cutting-edge techniques it is compared with, clearly demonstrating GTAGC's superior potential and efficacy.
Moving to the Cora dataset, GTAGC achieves the highest scores in ACC, NMI, and F-score, affirming its leading position in graph clustering. Although second in ARI, it still delivers a competitive score, underscoring its consistent effectiveness across metrics.
With the Pubmed dataset, GTAGC demonstrates adaptability and resilience. While not leading in ACC, it outperforms many other methods, showcasing its robust performance across various datasets. Its strong rankings in NMI and F-score and a second-best score in ARI further highlight GTAGC's reliability and consistency in clustering results.
In conclusion, these experimental results offer compelling evidence endorsing GTAGC as a trustworthy and robust approach to graph clustering. Its consistently high performance across diverse datasets and metrics, along with its demonstrated ability to compete with or outperform other leading techniques, confidently affirms GTAGC's versatility, reliability, and overall effectiveness.
Figure 2: The 2D visualization of the proposed GTAGC on Citeseer and Cora dataset.
### Ablation Studies
We conducted an ablation study on the Graph Transformational Attentional Graph Clustering (GTAGC) model to assess the individual contributions of each component, using DAEGC as the baseline [22]. The results are detailed in Table 3.
In the Citeseer dataset, the Laplacian filter significantly enhanced accuracy (ACC), normalized mutual information (NMI), and F-score. Incremental improvements were observed with positional encoding, and the global attention mechanism further augmented all metrics, leading the GTAGC model to record the highest scores in ACC, NMI, and F-score.
In the Cora dataset, despite minor declines with the Laplacian filter and positional encoding, the global attention mechanism substantially improved all metrics. Consequently, the GTAGC model excelled in ACC, NMI, and ARI, surpassing existing methods.
In summary, the study emphasizes the vital role of the Laplacian filter, positional encoding, and global attention mechanism in the GTAGC model's superior clustering performance. The combined effect of these components resulted in enhanced accuracy and overall clustering quality, as demonstrated by the GTAGC model's leading performance.
### Hyper-parameter Analysis
We investigated the impact of the hyperparameter \(\gamma\), a coefficient governing the equilibrium between local neighborhood and global attention mechanisms. As depicted in Figure 3, clustering outcomes vary with changes in the balance of these attention weights. Notably, optimal clustering is achieved when \(\gamma\) is set to 0.75, rendering the local neighborhood attention three times the weight of global attention. This finding underscores the importance of a balanced approach that judiciously integrates both global and local information among nodes, thereby optimizing clustering quality.
## Conclusion
In this paper, we introduce the Graph Transformer Auto-Encoder for Graph Clustering (GTAGC), a pioneering method for attributed graph clustering. Distinct from existing Graph auto-encoder-based approaches, GTAGC enhances flexibility and efficiency in graph clustering through a global attention mechanism. To our knowledge, this constitutes the first integration of Graph Transformer within attributed graph clustering tasks. Future work will explore the incorporation of diverse Graph Transformer variants to further augment the model's capabilities.
\begin{table}
\begin{tabular}{c|c|c c c c c c c c c c c} \hline \hline Dataset & Metric & K-means & Spectral & GraphEncoder & TADW & GAE & VGAE & ARVGE & ARGE & DAEGC & S\({}^{2}\)GC & GC-VAE & GTAGC \\ \hline \multirow{4}{*}{Citeseer} & ACC & 0.544 & 0.308 & 0.225 & 0.529 & 0.408 & 0.603 & 0.544 & 0.573 & 0.672 & **0.691** & 0.666 & **0.708** \\ & NMI & 0.312 & 0.090 & 0.033 & 0.320 & 0.174 & 0.343 & 0.261 & 0.350 & 0.397 & **0.429** & 0.409 & **0.452** \\ & F-score & 0.413 & 0.257 & 0.301 & 0.436 & 0.297 & 0.460 & 0.529 & 0.546 & 0.636 & **0.647** & 0.634 & **0.657** \\ & ARI & 0.285 & 0.082 & 0.010 & 0.286 & 0.141 & 0.344 & 0.245 & 0.341 & 0.410 & - & **0.415** & **0.469** \\ \hline \multirow{4}{*}{Cora} & ACC & 0.500 & 0.398 & 0.301 & 0.536 & 0.596 & 0.592 & 0.638 & 0.640 & 0.704 & 0.696 & **0.707** & **0.717** \\ & NMI & 0.317 & 0.297 & 0.059 & 0.366 & 0.397 & 0.408 & 0.450 & 0.449 & 0.528 & **0.547** & 0.536 & **0.540** \\ & F-score & 0.376 & 0.332 & 0.230 & 0.401 & 0.415 & 0.456 & 0.627 & 0.619 & 0.682 & 0.658 & **0.695** & **0.703** \\ & ARI & 0.239 & 0.174 & 0.046 & 0.240 & 0.293 & 0.347 & 0.352 & **0.496** & - & 0.482 & **0.489** \\ \hline \multirow{4}{*}{Pubmed} & ACC & 0.562 & 0.496 & 0.531 & 0.565 & 0.605 & 0.619 & 0.635 & 0.653 & 0.671 & **0.710** & **0.682** & 0.678 \\ & NMI & 0.262 & 0.147 & 0.210 & 0.224 & 0.232 & 0.216 & 0.232 & 0.248 & 0.263 & **0.332** & 0.297 & **0.318** \\ \cline{1-1} & F-score & 0.559 & 0.471 & 0.506 & 0.481 & 0.479 & 0.478 & **0.678** & 0.657 & 0.659 & **0.703** & 0.669 & 0.664 \\ \cline{1-1} & ARI & 0.227 & 0.098 & 0.184 & 0.177 & 0.221 & 0.201 & 0.225 & 0.244 & 0.278 & - & **0.298** & **0.290** \\ \hline \end{tabular}
\end{table}
Table 2: Experimental Results on three Datasets.
Figure 3: Sensitivity analysis of the hyper-parameter \(\gamma\) on Citeseer and Cora dataset.
\begin{table}
\begin{tabular}{c|c|c c c c|c} \hline \hline Dataset & Metric & Baseline & Laplacian filter & positional encoding & global attention & ours \\ \hline \multirow{4}{*}{Citeseer} & ACC & 0.672 & 0.6832 & 0.6688 & 0.6829 & **0.7078** \\ & NMI & 0.397 & 0.4118 & 0.4125 & 0.4253 & **0.4523** \\ & F-score & 0.636 & 0.6366 & 0.6254 & 0.6325 & **0.6573** \\ & ARI & 0.410 & 0.4304 & 0.4173 & 0.4350 & **0.4685** \\ \hline \multirow{4}{*}{Cora} & ACC & 0.704 & 0.6913 & 0.6669 & 0.6880 & **0.7171** \\ & NMI & 0.528 & 0.5154 & 0.5145 & 0.5362 & **0.5402** \\ \cline{1-1} & F-score & 0.682 & 0.4642 & 0.4300 & 0.6594 & **0.7027** \\ \cline{1-1} & ARI & 0.496 & 0.6785 & 0.6375 & 0.4574 & **0.4886** \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study of GTAGC.
|
2305.07035 | Shhh! The Logic of Clandestine Operations | An operation is called covert if it conceals the identity of the actor; it is
called clandestine if the very fact that the operation is conducted is
concealed. The paper proposes a formal semantics of clandestine operations and
introduces a sound and complete logical system that describes the interplay
between the distributed knowledge modality and a modality capturing coalition
power to conduct clandestine operations. | Pavel Naumov, Oliver Orejola | 2023-05-10T22:15:58Z | http://arxiv.org/abs/2305.07035v1 | # Shhh! The Logic of Clandestine Operations
###### Abstract
An operation is called covert if it conceals the identity of the actor; it is called clandestine if the very fact that the operation is conducted is concealed. The paper proposes a formal semantics of clandestine operations and introduces a sound and complete logical system that describes the interplay between the distributed knowledge modality and a modality capturing coalition power to conduct clandestine operations.
## 1 Clandestine Games
In this paper, we study games in which coalitions can engage in concealed operations. The US Department of Defense Dictionary of Military and Associated Terms distinguishes between covert and clandestine operations. Covert operations are planned and executed to conceal the identity of the actor. An operation is clandestine when the very fact that the operation is conducted is concealed [12]. Thus, every clandestine operation is covert, but not every covert operation is clandestine. The focus of the current work is on clandestine operations.
An example of a clandestine operation is the 1962 Operation Anadyr, conducted by the Soviet Union armed forces as a prelude to the Cuban Missile Crisis [16]. The operation consisted of the delivery and deployment of ballistic missiles with nuclear warheads in Cuba to prevent an invasion of the island by the United States. Figure 1 depicts our representation of the Cuban Missile Crisis as a _clandestine game_ between three players: the Americans (\(a\)), the Cubans (\(c\)), and the Russians (\(r\)). Operation Anadyr was executed by the Cubans and the Russians and consisted in transitioning the world from state \(w\) to state \(w^{\prime}\). Propositional variable \(m\) denotes the statement "Missiles are deployed in Cuba". It is false in state \(w\) and true in state \(w^{\prime}\). Operation Anadyr was _concealed_ in the sense that the Americans were not able to detect the transition of the world from state \(w\) to state \(w^{\prime}\). In the diagram, the indistinguishability of these two states to the Americans is shown using a dashed line.
Although states \(w\) and \(w^{\prime}\) are indistinguishable to Americans, this does not prevent them from discovering the transition from state \(w\) to state \(w^{\prime}\) by executing an operation of their own. In fact, they did just that on October 14th, 1962, by conducting a clandestine operation Mission 3101 [12]. Mission 3101 consisted of a U-2 spy plane secretly flying over Cuban territory to collect military intelligence. Mission 3101 also was concealed in the sense that, as shown in the diagram, the Cubans and the Russians were not able to detect its execution that transitioned the world from state \(w^{\prime}\) to state \(v\). If the same Mission 3101 were to be executed in state \(w\), it would hypothetically transition the world from state \(w\) to state \(u\). The Americans can distinguish state \(v\) from state \(u\) based on the reconnaissance photos taken by the spy plane. This explains how the Americans were able to detect the execution of Operation Anadyr through operation Mission 3101.
Coalition power in games with imperfect information has been studied before in _synchronous_ settings where all agents act at once and, thus, everyone is aware that something happened [23, 13, 14, 15, 16, 17, 18, 19]. To capture clandestine operations it is crucial to use semantics in which an agent might be unaware of the game transitioning from one state to another as a result of the actions of other agents. Such a behaviour could be modelled, for example, by extending the semantics of the above logical systems with a single \(sleep\) action. Additionally, it should be required that any agent executing action \(sleep\) should not be able to distinguish the initial and the final state of any transition during which the agent used \(sleep\). This approach would also need to settle who learns what if two or more disjoint coalitions execute clandestine operations synchronously.
For the sake of the clarity of presentation, in this paper, we
Figure 1: Cuban Missile Crisis Game.
define the semantics of clandestine operations in terms of a class of asynchronous games that we call _clandestine games_, described in the definition below.
In this paper, we will assume a fixed set of agents \(\mathcal{A}\). By a coalition we mean any (possibly empty) subset of \(\mathcal{A}\). For any coalition \(C\), by \(\overline{C}\) we denote the complement of set \(C\) with respect to set \(\mathcal{A}\).
**Definition 1**.: _Let a **clandestine game** be any such tuple \((W,\{\sim_{a}\}_{a\in\mathcal{A}},\Delta,M,\pi)\) that_
1. \(W\) _is a set of "states"._
2. \(\sim_{a}\) _is an "indistinguishability" equivalence relation on set_ \(W\) _for each agent_ \(a\in\mathcal{A}\)_. We write_ \(w\sim_{C}u\) _if_ \(w\sim_{a}u\) _for each agent_ \(a\in C\)_._
3. \(\Delta\) _is a nonempty set of "operations"._
4. \(M\) _is a set of tuples_ \((w,C,\delta,u)\)_, where_ \(w,u\in W\) _are states,_ \(C\subseteq\mathcal{A}\) _is a coalition, and_ \(\delta\in\Delta\) _is an operation. It is assumed that set_ \(M\)_, called "mechanism", satisfies the following two conditions_ 1. _[label=()]_ 2. **concealment**: _for any two states_ \(w,u\in W\)_, any coalition of agents_ \(C\subseteq\mathcal{A}\)_, any operation_ \(\delta\in\Delta\)_, if_ \((w,C,\delta,u)\in M\)_, then_ \(w\sim_{\overline{C}}u\)_,_ 3. **nontermination**_: _for any state_ \(w\in W\)_, any coalition of agents_ \(C\subseteq\mathcal{A}\)_, and any operation_ \(\delta\in\Delta\)_, there is at least one state_ \(u\in W\) _such that_ \((w,C,\delta,u)\in M\)_._
#### 2.2.5 \(\pi(p)\) is a subset of \(W\) for each propositional variable \(p\).
The diagram in Figure 1 depicts an example of a clandestine game with four states (\(w\), \(w^{\prime}\), \(u\), and \(v\)) and two operations [1]. The indistinguishability relations are shown by dashed lines and the mechanism is depicted by directed lines. The diagram omits loop operations. This means, for example, that if _the Americans_ execute Operation Anadyr in any of the states, then the game transitions back to the same state. The nontermination condition 4(b) guarantees that no operation can terminate a game without reaching some state.
In a real-world setting, a variety of operations might be performed by any coalition. Some of them satisfy the concealment condition 4(a) of Definition 1, the others might not. We excluded non-concealed operations from our games to keep the presentation simple. If such operations are added to the models and the quantifier over operations \(\delta\) in item 5 of Definition 2 below is simultaneously restricted to concealed operations only, then the soundness and the completeness results of this paper will remain true and no changes to their proofs will be necessary.
In this paper, we propose a sound and complete logical system for reasoning about coalition power to conduct clandestine operations. The rest of the paper is organized as follows. In the next section, we discuss the interplay between knowledge and actions and explain why existing coalition power modalities do not capture the properties of clandestine operations. Then, we define the syntax and semantics of our logical system. In the section Coalition-Informant-Adversary Principle, we introduce and discuss the most non-trivial axiom of our system. In the section that follows, we list the remainder of the axioms. After that, we sketch the completeness of our logical system. The proof of soundness and some details of the completeness are in the appendix.
## 2 Knowledge and Actions
In this section, we discuss how different forms of knowledge can be captured in the existing modal logics for reasoning about coalition power and explain why the power to perform a clandestine operation is not expressible in these logics.
When discussing the interplay between knowledge and actions, it is common to distinguish _ex-ante_, _interim_, and _ex-post_ knowledge. They refer to an agent's (or a coalition's) knowledge before the action, at the moment of the action, and after the action, respectively. One of the first logical systems describing the interplay between distributed knowledge modality \(\mathsf{K}_{C}\) and coalition power modality \(\mathsf{S}_{C}\) was introduced in [1]. Using their language, one can write \(\mathsf{K}_{C}\mathsf{S}_{C}\varphi\) to state that coalition \(C\) knows _ex-ante_ (before the action) that it has a strategy (joint action) to achieve \(\varphi\). Using the same language, one can write \(\mathsf{S}_{C}\mathsf{K}_{C}\varphi\) to state that coalition \(C\) has a strategy that would result in \(\varphi\) being known _ex-post_ to the coalition. The language of [1] cannot be used to express _interim_ knowledge. However, this could be done using "seeing to it" modality [1, 1, 1, 10].
Knowing that a strategy exists, as in \(\mathsf{K}_{C}\mathsf{S}_{C}\varphi\), is different from actually knowing the strategy. If a coalition \(C\) knows _ex-ante_ what strategy it can use to achieve \(\varphi\), then we say that the coalition has a _know-how_ strategy to achieve \(\varphi\) and denote this by \(\mathsf{H}_{C}\varphi\). Unless the coalition has perfect recall, knowing _ex-ante_ a strategy to achieve \(\varphi\) does not imply knowing ex-ante a strategy that results in knowing ex-post that \(\varphi\) is achieved. The latter, however, could be expressed as \(\mathsf{H}_{C}\mathsf{K}_{C}\varphi\). The interplay between coalitional know-how modality \(\mathsf{H}_{C}\) and distributed knowledge modality \(\mathsf{K}_{C}\) has been studied in [1, 1, 10, 11, 12].
In epistemic models, knowledge is usually captured by an indistinguishability relation between states. For example, in Figure 2 (left), \(w_{1}\Vdash\mathsf{K}_{C}\mathsf{S}_{C}p\). In other words, coalition \(C\) knows ex-ante that it has a strategy to achieve \(p\). This is true because the coalition has such a strategy not only in state \(w_{1}\)
Figure 2: Knowledge and Actions.
but also in state \(w_{2}\), indistinguishable to the coalition from \(w_{1}\). Note that this is not a know-how strategy because the required strategy in state \(w_{1}\) (strategy \(\delta_{1}\)) is different from the required strategy in state \(w_{2}\) (strategy \(\delta_{2}\)). Thus, \(w_{1}\Vdash\neg\mathsf{H}_{C}p\). Note also that in state \(w_{1}\) coalition \(C\) does not have a strategy to achieve ex-post knowledge of \(p\). We write this as \(w_{1}\Vdash\neg\mathsf{S}_{C}\mathsf{K}_{C}p\). This is true because state \(u_{1}\) is indistinguishable from state \(u_{0}\) where \(p\) is not satisfied.
The situation is different in Figure 2 (centre). Here, coalition \(C\) has a strategy in state \(w_{1}\) to achieve \(p\), but the coalition does not know this ex-ante because it cannot distinguish state \(w_{1}\) from state \(w_{2}\) where such a strategy does not exist. Using our notations, \(w_{1}\Vdash\mathsf{S}_{C}p\) and \(w_{1}\Vdash\neg\mathsf{K}_{C}\mathsf{S}_{C}p\). Note, however, that in this setting the coalition also has a strategy to achieve ex-post knowledge of \(p\) because \(p\) is satisfied not only in state \(u_{1}\) but also in state \(u_{0}\), indistinguishable to \(C\) from state \(u_{1}\). We write this as \(w_{1}\Vdash\mathsf{S}_{C}\mathsf{K}_{C}p\).
The clandestine operations that we consider in this paper are know-how strategies. Furthermore, for the reason we discuss in the next section, they are know-how strategies to achieve ex-post knowledge. This alone would not require a new modality because it can be captured in existing know-how logics as \(\mathsf{H}_{C}\mathsf{K}_{C}\varphi\). However, the last formula does not account for the concealed nature of clandestine operations. We capture the latter by requiring the initial and the final state of the operation to be indistinguishable to the _complement_ \(\overline{C}\) of coalition \(C\). Strategy \(\delta\) depicted in Figure 2 (right) is a clandestine operation of coalition \(C\) to achieve \(p\). In this paper, we introduce a new modality \(\Box_{C}\varphi\) to denote the existence of a clandestine operation of coalition \(C\) to achieve \(\varphi\). This modality is not definable through existing modalities of coalition power, know-how, and seeing-to-it, because these existing modalities cannot capture the indistinguishability (by the complement of coalition \(C\)) of the initial and the final state of the operation.
## 3 Syntax and Semantics
Language \(\Phi\) of our logical system is defined by the grammar
\[\varphi:=p\mid\neg\varphi\mid\varphi\to\varphi\mid\mathsf{K}_{C}\varphi\mid \Box_{C}\varphi,\]
where \(p\) is a propositional variable and \(C\) is a coalition. We read formula \(\mathsf{K}_{C}\varphi\) as "coalition \(C\) knows \(\varphi\)", and formula \(\Box_{C}\varphi\) as "coalition \(C\) knows which clandestine operation it can execute to achieve \(\varphi\)". In both cases, the knowledge is distributed. We assume that Boolean constants \(\top\) and \(\bot\) as well as disjunction \(\vee\) are defined in the standard way. We use \(\mathsf{K}_{C,D}\varphi\) and \(\Box_{C,D}\varphi\) as shorthand for \(\mathsf{K}_{C\cup D}\varphi\) and \(\Box_{C\cup D}\varphi\) respectively.
In the definition below, item 5 gives formal semantics of modality \(\Box_{C}\varphi\), see Figure 3.
**Definition 2**.: _For any state \(w\in W\) of a clandestine game \((W,\{\sim_{a}\}_{a\in\mathcal{A}},\Delta,M,\pi)\) and any formula \(\varphi\in\Phi\), satisfiability relation \(w\Vdash\varphi\) is defined recursively as_
1. \(w\Vdash p\) _if_ \(w\in\pi(p)\)_,_
2. \(w\Vdash\neg\varphi\) _if_ \(w\nVdash\varphi\)_,_
3. \(w\Vdash\varphi\to\psi\) _if_ \(w\nVdash\varphi\) _or_ \(w\Vdash\psi\)_,_
4. \(w\Vdash\mathsf{K}_{C}\varphi\) _if_ \(u\Vdash\varphi\) _for any_ \(u\in W\) _such that_ \(w\sim_{C}u\)_,_
5. \(w\Vdash\Box_{C}\varphi\) _if there is a nonempty coalition_ \(C^{\prime}\subseteq C\) _and an operation_ \(\delta\in\Delta\) _such that for any states_ \(w^{\prime},u,u^{\prime}\in W\)_, if_ \(w\sim_{C}w^{\prime}\)_,_ \((w^{\prime},C^{\prime},\delta,u)\in M\)_, and_ \(u\sim_{C}u^{\prime}\)_, then_ \(u^{\prime}\Vdash\varphi\)_._
In item 5 of the above definition, we introduce coalition \(C^{\prime}\) to capture the fact that in order for a coalition \(C\) to know a clandestine operation to achieve a certain goal, not all members of the coalition \(C\) have to take an active part in it.
Recall that Definition 1 allows for some operations to be conducted by the empty coalition. Such operations can change the state of the game. However, according to the concealment condition of Definition 1, such change is not noticeable to any agent in the game. Informally, these operations could be thought of as nondeterministic transitions of the game that occur independently from the actions of the agents and are not noticeable to them. The presence of such transitions is not significant for the results in this paper. We do not exclude them for the sake of generality. At the same time, in Definition 2, we require coalition \(C^{\prime}\) to be nonempty. Intuitively, a coalition can ask some of its members to conduct an operation, but the coalition cannot ask the empty coalition, because operations of the empty coalition are system transitions not controlled by the agents. The restriction of \(C^{\prime}\) to nonempty coalitions is significant for our results.
Item 5 of Definition 2 uses state \(w^{\prime}\) to express that the clandestine operation \(\delta\) not only exists but is known to coalition \(C\). Note that this knowledge, captured through the statement \(w\sim_{C}w^{\prime}\), is the knowledge of the whole coalition \(C\), not just of its part \(C^{\prime}\) that executes the operation. In other words, we assume that some members of the coalition \(C\) could be passive _informants_. We explore this in the Coalition-Informant-Adversary axiom of our logical system.
Formula \(\Box_{C}\varphi\) states that coalition \(C\) knows a clandestine operation to achieve \(\varphi\). Because clandestine games are asynchronous, an important question is for how long \(\varphi\) will remain true after the operation. If another coalition can "undo" the operation without \(C\) even noticing, then coalition \(C\) could only be sure that \(\varphi\) holds at the very moment the operation is completed. To avoid this, in item 5 of Definition 2, we require \(\varphi\) to be satisfied not only in the completion state \(u\) of the operation \(\delta\), but also in all states \(u^{\prime}\) indistinguishable from state \(u\) by coalition \(C\). In other words, statement \(\varphi\) remains true until at least one of the members of coalition \(C\) takes part in another clandestine operation1.
Footnote 1: If non-concealed operations are added to Definition 1 as described in the previous section, then \(\varphi\) will remain true until at least one of the members of coalition \(C\) becomes aware that another operation took place.
Figure 3: Towards item 5 of Definition 2.
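To make item 5 of Definition 2 concrete, the following Python sketch evaluates \(\Box_{C}\varphi\) by brute force on a tiny hand-built game. The game (states, partitions, mechanism, valuation) is an illustrative fragment invented for this sketch, not the Cuban Missile Crisis game of Figure 1, and the code does not enforce the nontermination condition of Definition 1.

```
from itertools import combinations, product

# Toy clandestine game: two agents, four states, one operation.
AGENTS = {"a", "b"}
STATES = {"w", "w1", "u", "u1"}

# Indistinguishability: each agent's equivalence classes over STATES.
# Agent "a" cannot tell w from w1; agent "b" cannot detect the operation
# (concealment), so w ~_b u and w1 ~_b u1.
INDIST = {
    "a": [{"w", "w1"}, {"u"}, {"u1"}],
    "b": [{"w", "u"}, {"w1", "u1"}],
}

# Mechanism M: tuples (state, executing coalition, operation, next state).
M = {("w", frozenset({"a"}), "op", "u"),
     ("w1", frozenset({"a"}), "op", "u1")}
OPERATIONS = {"op"}
VALUATION = {"p": {"u", "u1"}, "q": {"u"}}


def same_class(agent, s, t):
    return any(s in block and t in block for block in INDIST[agent])


def indist(coalition, s, t):
    """s ~_C t : s and t are indistinguishable to every agent in C."""
    return all(same_class(a, s, t) for a in coalition)


def nonempty_subsets(coalition):
    items = sorted(coalition)
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            yield frozenset(combo)


def box(coalition, prop, w):
    """Item 5 of Definition 2: C knows a clandestine operation, executed by
    some nonempty C' contained in C, after which prop is guaranteed to hold."""
    for c_prime in nonempty_subsets(coalition):
        for delta in OPERATIONS:
            counterexample = any(
                indist(coalition, w, w1)
                and (w1, c_prime, delta, u) in M
                and indist(coalition, u, u1)
                and u1 not in VALUATION[prop]
                for w1, u, u1 in product(STATES, repeat=3))
            if not counterexample:
                return True
    return False


print(box({"a"}, "p", "w"))   # True: op achieves p from both w and w1
print(box({"a"}, "q", "w"))   # False: from w1 the op leads to u1, where q fails
```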
## 4 Coalition-Informant-Adversary Principle
The most interesting axiom of our logical system is a principle that captures strategic information dynamics between three sets of agents: a _coalition_ that conducts a clandestine operation, a group of _informants_ who passively cooperate with the coalition by sharing knowledge but do not participate in the operation itself, and a group of _adversaries_ who do not cooperate with the coalition at all. To understand this principle, let us first consider its simplified form without the adversaries: for any disjoint coalitions \(C\) and \(I\),
\[\mathsf{K}_{I}(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi)\to(\square_{C} \varphi\to\square_{C,I}\psi). \tag{1}\]
The assumption \(\square_{C}\varphi\) of this principle states that before the operation (_ex-ante_) the coalition knows which clandestine operation it should conduct in order to know after the operation (_ex-post_) that \(\varphi\) is true. The other assumption \(\mathsf{K}_{I}(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi)\) of this principle refers to the _ex-ante_ knowledge of a group of informants \(I\). Because the operation is clandestine and \(C\cap I=\varnothing\), the _ex-ante_ and _ex-post_ knowledge of \(I\) is the same. Thus, statement \(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi\) will have to be true after the operation. In other words, after the operation, coalition \(C\) will know that not only condition \(\varphi\) but also condition \(\psi\) is true. Coalition \(C\) alone, however, does not know this _ex-ante_ and thus, it alone does not know an operation to achieve condition \(\psi\). Nevertheless, recall that coalition \(I\) knows \(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi\) _ex-ante_. Thus, it knows _ex-ante_ that \(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi\) will have to be true after any clandestine operation that does not involve \(I\). Therefore, the union of the coalitions \(C\) and \(I\) knows _ex-ante_ the operation that \(C\) can conduct to achieve \(\psi\). That is, \(\square_{C,I}\psi\).
Note that the purpose of modality \(\mathsf{K}_{I}\) in the assumption \(\mathsf{K}_{I}(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi)\) of principle (1) is to make sure that statement \(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi\) is preserved during the clandestine operation of coalition \(C\). If one were to consider an additional coalition, that we refer to as an adversary coalition \(A\), then replacing modality \(\mathsf{K}_{I}\) with \(\mathsf{K}_{A,I}\) still guarantees that statement \(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi\) is preserved during the operation (as long as \(A\) is also disjoint with \(C\)). Thus, one might think that the following form of principle (1) is also valid:
\[\mathsf{K}_{A,I}(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi)\to(\square_{C} \varphi\to\square_{C,I}\psi).\]
This statement, however, is not true. Assumptions \(\mathsf{K}_{A,I}(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi)\) and \(\square_{C}\varphi\) can only guarantee that coalition \(C\) knows _ex ante_ an operation to achieve \(\varphi\). If this operation is executed, then coalition \(C\cup I\) will know _ex-post_ that \(\varphi\) is true, but _they might not know ex-ante that they will know this ex-post_. To make sure that they indeed have such _ex-ante_ knowledge, one more knowledge modality should be added to the formula:
\[\mathsf{K}_{C,I}\mathsf{K}_{A,I}(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi) \to(\square_{C}\varphi\to\square_{C,I}\psi).\]
Finally, note that instead of preserving \(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi\), it is enough to be able to preserve the statement \(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C,I}\psi\):
\[\mathsf{K}_{C,I}\mathsf{K}_{A,I}(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C,I}\psi )\to(\square_{C}\varphi\to\square_{C,I}\psi).\]
As it turns out, the above formula is the final and the most general form of the Coalition-Informant-Adversary principle. In this paper, we show that this principle, in combination with several much more straightforward other axioms, forms a logical system that can derive all universally valid properties of clandestine operations.
## 5 Axioms
In addition to propositional tautologies in the language \(\Phi\), our logical system contains the following axioms:
1. Truth: \(\mathsf{K}_{C}\varphi\to\varphi\),
2. Negative Introspection: \(\neg\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\neg\mathsf{K}_{C}\varphi\),
3. Distributivity: \(\mathsf{K}_{C}(\varphi\to\psi)\to(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\psi)\),
4. Monotonicity: \(\mathsf{K}_{C^{\prime}}\varphi\to\mathsf{K}_{C}\varphi\), where \(C^{\prime}\subseteq C\),
5. Strategic Introspection: \(\square_{C}\varphi\to\mathsf{K}_{C}\square_{C}\varphi\),
6. Coalition-Informant-Adversary: if \(C\cap(I\cup A)=\varnothing\), then \(\mathsf{K}_{C,I}\mathsf{K}_{A,I}(\mathsf{K}_{C}\varphi\to\mathsf{K}_{C,I}\psi) \to(\square_{C}\varphi\to\square_{C,I}\psi)\),
7. Nontermination: \(\neg\square_{C}\bot\),
8. Empty Coalition: \(\neg\square_{\varnothing}\varphi\).
We write \(\vdash\varphi\) if a formula \(\varphi\) is provable from the above axioms using the Modus Ponens and the two Necessitation inference rules:
\[\frac{\varphi,\varphi\to\psi}{\psi}\qquad\quad\frac{\varphi}{\mathsf{K}_{C} \varphi}\qquad\frac{\varphi,\quad C\neq\varnothing}{\square_{C}\varphi}.\]
We write \(X\vdash\varphi\) if the formula \(\varphi\) is provable from the theorems of our logical system and the set of additional axioms \(X\) using only the Modus Ponens inference rule. The next two lemmas state well-known properties of S5 modality \(\mathsf{K}\).
The next five lemmas are used in the proof of the completeness of our logical system. We give their proofs in the appendix.
**Lemma 1**.: _If \(\varphi_{1},...,\varphi_{n}\vdash\psi\), then \(\mathsf{K}_{C}\varphi_{1},...,\mathsf{K}_{C}\varphi_{n}\vdash\mathsf{K}_{C}\psi\)._
**Lemma 2**.: \(\vdash\mathsf{K}_{C}\varphi\to\mathsf{K}_{C}\mathsf{K}_{C}\varphi\)_._
**Lemma 3**.: \(\vdash\mathsf{K}_{F}\mathsf{K}_{E}\mathsf{K}_{F}\varphi\to\square_{F}\varphi\)_, where \(F\nsubseteq E\)._
is a sequence of labels along the path leading from the root of the tree to a node of the tree.
Let us now define canonical clandestine game \(M(X_{0})=(W,\{\sim_{a}\}_{a\in\mathcal{A}},\Delta,M,\pi)\) for an arbitrary maximal consistent set of formulae \(X_{0}\).
**Definition 3**.: _Set \(W\) consists of all finite sequences \(X_{0},C_{1},\ldots,C_{n},X_{n}\), such that \(n\geq 0\) and_
1. \(X_{i}\) _is a maximal consistent set of formulae for all_ \(i\geq 1\)_,_
2. \(C_{i}\subseteq\mathcal{A}\) _is a coalition for all_ \(i\leq n\)_,_
3. \(\{\varphi\in\Phi\mid\mathsf{K}_{C_{i}}\varphi\in X_{i-1}\}\subseteq X_{i}\)_, for all_ \(i\leq n\)_._
We define a tree structure on the set of states \(W\) by saying that state \(w=X_{0},C_{1},X_{1},C_{2},\ldots,C_{n},X_{n}\) and state \(w::C_{n+1}::X_{n+1}\) are connected by an undirected edge labeled with all agents in coalition \(C_{n+1}\). For example, for the tree depicted in Figure 4, state \(X_{0},C_{2},X_{2}\) is adjacent to state \(X_{0},C_{2},X_{2},C_{8},X_{8}\) and the edge between them is labelled with all agents in coalition \(C_{8}\).
**Definition 4**.: _For any two states \(w,w^{\prime}\in W\) and any agent \(a\in\mathcal{A}\), let \(w\sim_{a}w^{\prime}\) if all edges along the simple path between \(w\) and \(w^{\prime}\) are labelled with agent \(a\)._
Note that, in the above definition, the path might consist of a single node.
**Lemma 6**.: _Relation \(\sim_{a}\) is an equivalence relation on set \(W\)._
**Definition 5**.: _Set of operations \(\Delta\) is the set of all formulae in language \(\Phi\)._
Informally, operation \(\varphi\in\Delta\) is a clandestine operation in the canonical game that achieves \(\varphi\) unnoticeable to the agents outside of the coalition that performed the operation and makes the result known to the coalition. This intuition is captured in the definition below. Throughout the paper, by \(hd(w)\) we denote the last element of the sequence \(w\).
**Definition 6**.: _Canonical mechanism \(M\) is a set of all tuples \((w,C,\varphi,u)\) where \(w,u\in W\) are states, \(C\subseteq\mathcal{A}\) is a coalition, and \(\varphi\in\Phi\) is a formula, such that \(w\sim_{\overline{C}}u\) and if \(\Box_{C}\varphi\in hd(w)\), then \(\mathsf{K}_{C}\varphi\in hd(u)\)._
Note that the requirement \(w\sim_{\overline{C}}u\) in the above definition implies that mechanism \(M\) satisfies the concealment condition from Definition 1. Next, we show that \(M\) also satisfies the nontermination condition.
**Lemma 7**.: _For any state \(w\), any coalition \(C\subseteq\mathcal{A}\), and any formula \(\varphi\in\Phi\), there is a state \(u\in W\) such that \((w,C,\varphi,u)\in M\)._
Proof.: We consider the following two cases separately:
**Case I:**\(\Box_{C}\varphi\in hd(w)\). Let
\[X=\{\mathsf{K}_{C}\varphi\}\cup\{\psi\mid\mathsf{K}_{\overline{C}}\psi\in hd( w)\}.\]
_Claim. Set \(X\) is consistent._
Proof of Claim.: Assume the opposite. Thus, there are formulae \(\mathsf{K}_{\overline{C}}\psi_{1}\),..., \(\mathsf{K}_{\overline{C}}\psi_{n}\in hd(w)\) such that \(\psi_{1},\ldots,\psi_{n}\vdash\neg\mathsf{K}_{C}\varphi\). Hence, \(\mathsf{K}_{\overline{C}}\psi_{1},\ldots,\mathsf{K}_{\overline{C}}\psi_{n} \vdash\mathsf{K}_{\overline{C}}\neg\mathsf{K}_{C}\varphi\) by Lemma 1. Then, \(hd(w)\vdash\mathsf{K}_{\overline{C}}\neg\mathsf{K}_{C}\varphi\) by the assumption \(\mathsf{K}_{\overline{C}}\psi_{1},\ldots,\mathsf{K}_{\overline{C}}\psi_{n} \in hd(w)\). Thus, \(hd(w)\vdash\neg\Box_{C}\varphi\) by Lemma 4 and the Modus Ponens inference rule. Then, \(\Box_{C}\varphi\notin hd(w)\) because set \(hd(w)\) is consistent, which contradicts the assumption of the case.
Let \(X^{\prime}\) be any maximal consistent extension of set \(X\) and \(u\) be the sequence \(w::\overline{C}::X^{\prime}\). Then, \(u\in W\) by Definition 3 as well as the choice of sets \(X\) and \(X^{\prime}\).
Finally, note that \(w\sim_{\overline{C}}u\) by Definition 4 because \(u=w::\overline{C}::X^{\prime}\). Also, \(\mathsf{K}_{C}\varphi\in X\subseteq X^{\prime}=hd(u)\) by the choice of sets \(X\) and \(X^{\prime}\) and the choice of sequence \(u\). Therefore, \((w,C,\varphi,u)\in M\) by Definition 6.
**Case II:**\(\Box_{C}\varphi\notin hd(w)\). Take \(u\) to be world \(w\). Therefore, \((w,C,\varphi,u)\in M\) by Definition 6. This concludes the proof of the lemma.
**Definition 7**.: \(\pi(p)=\{w\in W\mid p\in hd(w)\}\)_._
This concludes the definition of the canonical model \(M(X_{0})=(W,\{\sim_{a}\}_{a\in\mathcal{A}},\Delta,M,\pi)\).
## 7 Completeness
As usual, the proof of completeness is using an "induction" (or "truth") lemma to connect the syntax of our system with the semantics of the canonical model. In our case, this is Lemma 13. The next five lemmas are auxiliary statements that will be used in different cases of the induction step in the proof of Lemma 13.
**Lemma 8**.: \(\mathsf{K}_{D}\varphi\in X_{n}\) _iff \(\mathsf{K}_{D}\varphi\in X_{n+1}\) for any formula \(\varphi\in\Phi\), any \(n\geq 0\), and any state \(X_{0},C_{1},X_{1},C_{2},\ldots,X_{n},C_{n+1},X_{n+1}\in W\), and any coalition \(D\subseteq C_{n+1}\)._
Proof.: If \(\mathsf{K}_{D}\varphi\in X_{n}\), then \(X_{n}\vdash\mathsf{K}_{D}\mathsf{K}_{D}\varphi\) by Lemma 2. Hence, \(X_{n}\vdash\mathsf{K}_{C_{n+1}}\mathsf{K}_{D}\varphi\) by the Monotonicity axiom, the assumption \(D\subseteq C_{n+1}\), and the Modus Ponens inference rule. Thus, \(\mathsf{K}_{C_{n+1}}\mathsf{K}_{D}\varphi\in X_{n}\) by the maximality of set \(X_{n}\). Therefore, \(\mathsf{K}_{D}\varphi\in X_{n+1}\) by Definition 3.
Suppose that \(\mathsf{K}_{D}\varphi\notin X_{n}\). Hence, \(\neg\mathsf{K}_{D}\varphi\in X_{n}\) by the maximality of set \(X_{n}\). Thus, \(X_{n}\vdash\mathsf{K}_{D}\neg\mathsf{K}_{D}\varphi\) by the Negative Introspection axiom and the Modus Ponens inference rule. Hence, \(X_{n}\vdash\mathsf{K}_{C_{n+1}}\neg\mathsf{K}_{D}\varphi\) by the Monotonicity axiom, the assumption \(D\subseteq C_{n+1}\), and the Modus Ponens inference rule. Then, \(\mathsf{K}_{C_{n+1}}\neg\mathsf{K}_{D}\varphi\in X_{n}\) by the maximality of set \(X_{n}\). Thus, \(\neg\mathsf{K}_{D}\varphi\in X_{n+1}\) by Definition 3. Therefore, \(\mathsf{K}_{D}\varphi\notin X_{n+1}\) because set \(X_{n+1}\) is consistent.
Figure 4: Tree Construction.
**Lemma 9**.: _If \(\mathsf{K}_{C}\varphi\in hd(w)\) and \(w\sim_{C}u\), then \(\varphi\in hd(u)\)._
Proof.: Assumption \(w\sim_{C}u\) implies that all edges along the unique simple path between nodes \(w\) and \(u\) are labeled with all agents in coalition \(C\). Thus, \(\mathsf{K}_{C}\varphi\in hd(u)\) by Lemma 8. Hence, \(hd(u)\vdash\varphi\) by the Truth axiom and the Modus Ponens inference rule. Therefore, \(\varphi\in hd(u)\) because set \(hd(u)\) is maximal.
**Lemma 10**.: _If \(\mathsf{K}_{C}\varphi\notin hd(w)\), then there is \(u\in W\) such that \(w\sim_{C}u\) and \(\varphi\notin hd(u)\)._
Proof.: Consider set \(X=\{\neg\varphi\}\cup\{\psi\mid\mathsf{K}_{C}\psi\in hd(w)\}\).
_Claim._: _Set \(X\) is consistent._
Proof of Claim.: Suppose the opposite. Thus, there are formulae \(\mathsf{K}_{C}\psi_{1},\ldots,\mathsf{K}_{C}\psi_{n}\in hd(w)\) such that \(\psi_{1},\ldots,\psi_{n}\vdash\varphi\). Hence, \(\mathsf{K}_{C}\psi_{1},\ldots,\mathsf{K}_{C}\psi_{n}\vdash\mathsf{K}_{C}\varphi\) by Lemma 1. Then, \(hd(w)\vdash\mathsf{K}_{C}\varphi\) by the assumption \(\mathsf{K}_{C}\psi_{1},\ldots,\mathsf{K}_{C}\psi_{n}\in hd(w)\). Thus, \(\mathsf{K}_{C}\varphi\in hd(w)\) because set \(hd(w)\) is maximal, which contradicts the assumption of the lemma.
Let \(X^{\prime}\) be any maximal consistent extension of set \(X\) and \(u\) be the sequence \(w::C::X^{\prime}\). Then, \(u\in W\) by Definition 3 as well as the choice of sets \(X\) and \(X^{\prime}\).
Finally, \(\neg\varphi\in X\subseteq X^{\prime}=hd(u)\) by the choice of sets \(X\) and \(X^{\prime}\) and the choice of sequence \(u\). Therefore, \(\varphi\notin hd(u)\) because set \(hd(u)\) is consistent.
**Lemma 11**.: _For any formula \(\square_{C}\varphi\in hd(w)\) and any three states \(w^{\prime},u,u^{\prime}\in W\), if \(w\sim_{C}w^{\prime}\), \((w^{\prime},C,\varphi,u)\in M\), and \(u\sim_{C}u^{\prime}\), then \(\varphi\in hd(u^{\prime})\)._
Proof.: Assumption \(\square_{C}\varphi\in hd(w)\) implies that \(hd(w)\vdash\mathsf{K}_{C}\square_{C}\varphi\) by the Strategic Introspection axiom and the Modus Ponens inference rule. Hence, \(\mathsf{K}_{C}\square_{C}\varphi\in hd(w)\) because set \(hd(w)\) is maximal. Thus, \(\square_{C}\varphi\in hd(w^{\prime})\) by Lemma 9 and the assumption \(w\sim_{C}w^{\prime}\). Then, \(\mathsf{K}_{C}\varphi\in hd(u)\) by Definition 6 and the assumption \((w^{\prime},C,\varphi,u)\in M\). Therefore, \(\varphi\in hd(u^{\prime})\) by Lemma 9 and the assumption \(u\sim_{C}u^{\prime}\).
**Lemma 12**.: _If \(\square_{F}\varphi\notin hd(w)\), then for any nonempty coalition \(E\subseteq F\) and any action \(\delta\in\Delta\), there are states \(w^{\prime},u,u^{\prime}\) such that \(w\sim_{F}w^{\prime}\), \((w^{\prime},E,\delta,u)\in M\), \(u\sim_{F}u^{\prime}\), and \(\varphi\notin hd(u^{\prime})\)._
In the proof of this lemma located below, we consecutively construct states \(w^{\prime}\), \(u\), and \(u^{\prime}\). To guarantee that state \(u^{\prime}\) could be constructed after state \(u\), we construct a state \(u\) such that set \(hd(u)\) contains formula \(\neg\mathsf{K}_{F}\varphi\). In this case, by Lemma 10, there must exist a state \(u^{\prime}\) such that \(u\sim_{F}u^{\prime}\), and \(\varphi\notin hd(u^{\prime})\).
One might think that state \(u\) could be constructed from state \(w^{\prime}\) in a similar fashion by guaranteeing first that set \(hd(w^{\prime})\) contains formula \(\neg\mathsf{K}_{\overline{E}}\mathsf{K}_{F}\varphi\). However, there is a problem because Definition 6 states that if set \(hd(w^{\prime})\) contains formula \(\square_{E}\delta\), then set \(hd(u)\), in addition to formula \(\neg\mathsf{K}_{F}\varphi\), must also contain formula \(\mathsf{K}_{E}\delta\). Thus, there are two possible ways sets \(hd(w^{\prime})\) and \(hd(u)\) could be constructed:
1. Set \(hd(u)\) contains \(\neg\mathsf{K}_{F}\varphi\) and \(\mathsf{K}_{E}\delta\). In this case, set \(hd(w^{\prime})\) must contain formula \(\neg\mathsf{K}_{\overline{E}}\neg(\neg\mathsf{K}_{F}\varphi\wedge\mathsf{K}_ {E}\delta)\). The last formula is equivalent to \(\neg\mathsf{K}_{\overline{E}}(\mathsf{K}_{E}\delta\rightarrow\mathsf{K}_{F} \varphi)\),
2. Set \(hd(u)\) contains only formula \(\neg\mathsf{K}_{F}\varphi\). In this case, set \(hd(w^{\prime})\) must contain formulae \(\neg\mathsf{K}_{\overline{E}}\neg\mathsf{K}_{F}\varphi\) and \(\neg\square_{E}\delta\).
We visualise these two cases on the diagram in Figure 5.
Unfortunately, there is no way to decide upfront which of these two ways could be used to construct a consistent set \(hd(w^{\prime})\). Thus, in the proof below we attempt to concurrently construct both versions of the set \(hd(w^{\prime})\) and prove that one of the two attempts succeeds by resulting in a consistent set \(hd(w^{\prime})\). Finally, note that in both cases we must also guarantee that \(w\sim_{F}w^{\prime}\). To achieve this, we include in set \(hd(w^{\prime})\) all such formulae \(\psi\) that \(\mathsf{K}_{F}\psi\in hd(w)\).
In the proof below, the two different attempts to create a set \(hd(w^{\prime})\) are carried out by defining sets \(X\) and \(Y\) and proving that at least one of them is consistent. Set \(hd(w^{\prime})\) is later defined as a maximal consistent extension of either set \(X\) or set \(Y\) depending on which one is consistent.
Proof.: Consider the following two sets of formulae:
\[X = \{\neg\mathsf{K}_{\overline{E}}(\mathsf{K}_{E}\delta \rightarrow\mathsf{K}_{F}\varphi)\}\cup\{\psi\mid\mathsf{K}_{F}\psi\in hd(w)\},\] \[Y = \{\neg\square_{E}\delta,\neg\mathsf{K}_{\overline{E}}\mathsf{K}_{F} \varphi\}\cup\{\psi\mid\mathsf{K}_{F}\psi\in hd(w)\}.\]
Claim.: _Either set \(X\) or set \(Y\) is consistent._
Proof of Claim.: Suppose the opposite. Thus, there are
\[\mathsf{K}_{F}\psi_{1},\ldots,\mathsf{K}_{F}\psi_{m},\mathsf{K}_{F}\psi_{1}^{ \prime},\ldots,\mathsf{K}_{F}\psi_{n}^{\prime}\in hd(w) \tag{2}\]
such that
\[\psi_{1},\ldots,\psi_{m} \vdash \mathsf{K}_{\overline{E}}(\mathsf{K}_{E}\delta\rightarrow\mathsf{K }_{F}\varphi),\] \[\psi_{1}^{\prime},\ldots,\psi_{n}^{\prime} \vdash \square_{E}\delta\vee\mathsf{K}_{\overline{E}}\mathsf{K}_{F}\varphi.\]
Then, by the Strategic Introspection axiom,
\[\psi_{1},\ldots,\psi_{m} \vdash \mathsf{K}_{\overline{E}}(\mathsf{K}_{E}\delta\rightarrow\mathsf{K }_{F}\varphi),\] \[\psi_{1}^{\prime},\ldots,\psi_{n}^{\prime} \vdash \mathsf{K}_{E}\square_{E}\delta\vee\mathsf{K}_{\overline{E}} \mathsf{K}_{F}\varphi.\]
Figure 5: Towards the Proof of Lemma 12.
Hence, by Lemma 1,
\[\mathsf{K}_{F}\psi_{1},\ldots,\mathsf{K}_{F}\psi_{m} \vdash \mathsf{K}_{F}\mathsf{K}_{\overline{E}}(\mathsf{K}_{E}\delta\to \mathsf{K}_{F}\varphi),\] \[\mathsf{K}_{F}\psi^{\prime}_{1},\ldots,\mathsf{K}_{F}\psi^{ \prime}_{n} \vdash \mathsf{K}_{F}(\mathsf{K}_{E}\Box_{E}\delta\lor\mathsf{K}_{ \overline{E}}\mathsf{K}_{F}\varphi).\]
Thus, by assumption (2),
\[\begin{array}{rcl}hd(w)&\vdash&\mathsf{K}_{F}\mathsf{K}_{\overline{E}}( \mathsf{K}_{E}\delta\to\mathsf{K}_{F}\varphi),\\hd(w)&\vdash&\mathsf{K}_{F}(\mathsf{K}_{E}\Box_{E}\delta\lor\mathsf{K}_{ \overline{E}}\mathsf{K}_{F}\varphi).\end{array} \tag{3}\]
The last statement, by Lemma 5, assumption \(E\subseteq F\) of the lemma, and the Modus Ponens inference rule, implies that
\[\begin{array}{rcl}hd(w)\vdash\mathsf{K}_{E}\Box_{E}\delta\lor\mathsf{K}_{F }\mathsf{K}_{\overline{E}}\mathsf{K}_{F}\varphi.\end{array}\]
Then, by the Truth axiom and propositional reasoning,
\[\begin{array}{rcl}hd(w)\vdash\Box_{E}\delta\lor\mathsf{K}_{F}\mathsf{K}_{ \overline{E}}\mathsf{K}_{F}\varphi.\end{array} \tag{4}\]
Recall that set \(E\) is nonempty by the assumption of the lemma. Thus, there is at least one \(e\in E\). Then, \(e\in F\) by the assumption \(E\subseteq F\) of the lemma. Hence, \(e\in F\setminus\overline{E}\). Thus, \(F\nsubseteq\overline{E}\). Then, \(\vdash\mathsf{K}_{F}\mathsf{K}_{\overline{E}}\mathsf{K}_{F}\varphi\to\Box_{F}\varphi\) by Lemma 3. At the same time, \(hd(w)\not\vdash\Box_{F}\varphi\) by the assumption \(\Box_{F}\varphi\notin hd(w)\) of the lemma and the maximality of the set \(hd(w)\). Then, \(hd(w)\not\vdash\mathsf{K}_{F}\mathsf{K}_{\overline{E}}\mathsf{K}_{F}\varphi\) by the contraposition of the Modus Ponens inference rule. Hence, \(\neg\mathsf{K}_{F}\mathsf{K}_{\overline{E}}\mathsf{K}_{F}\varphi\in hd(w)\) because set \(hd(w)\) is maximal. Thus, by propositional reasoning using statement (4),
\[\begin{array}{rcl}hd(w)\vdash\Box_{E}\delta.\end{array} \tag{5}\]
At the same time, assumption \(E\subseteq F\) of the lemma implies that \((F\setminus E)\cup\overline{F}=\overline{E}\). Then, \(E\cap((F\setminus E)\cup\overline{F})=E\cap\overline{E}=\varnothing\). Hence, the following formula
\[\begin{array}{rcl}\mathsf{K}_{E,F\setminus E}\mathsf{K}_{\overline{F},F \setminus E}(\mathsf{K}_{E}\delta\!\to\!\mathsf{K}_{E,F\setminus E}\varphi) \to(\Box_{E}\delta\!\to\!\Box_{E,F\setminus E}\varphi)\end{array}\]
is an instance of the Coalition-Informat-Adversary axiom where \(C=E\), \(I=F\setminus E\), and \(A=\overline{F}\). Thus, using statement (5) and propositional reasoning,
\[\begin{array}{rcl}hd(w)\vdash\mathsf{K}_{E,F\setminus E}\mathsf{K}_{ \overline{F},F\setminus E}(\mathsf{K}_{E}\delta\!\to\!\mathsf{K}_{E,F\setminus E }\varphi)\to\Box_{E,F\setminus E}\varphi.\end{array}\]
Note that \(E\cup(F\setminus E)=F\) and \(\overline{F}\cup(F\setminus E)=\overline{E}\) by the assumption \(E\subseteq F\) of the lemma. In other words,
\[\begin{array}{rcl}hd(w)\vdash\mathsf{K}_{F}\mathsf{K}_{\overline{E}}( \mathsf{K}_{E}\delta\to\mathsf{K}_{F}\varphi)\to\Box_{F}\varphi.\end{array}\]
Then, \(hd(w)\vdash\Box_{F}\varphi\) by statement (3) and the Modus Ponens inference rule. Therefore, \(\Box_{F}\varphi\in hd(w)\) because set \(hd(w)\) is maximal, which contradicts assumption \(\Box_{F}\varphi\notin hd(w)\) of the lemma. \(\boxtimes\)
The claim that we just proved states that either set \(X\) or set \(Y\) is consistent. We consider these two cases separately.
**Case I:** set \(X\) is consistent. Let \(X^{\prime}\) be any maximal consistent extension of the set \(X\) and let state \(w^{\prime}\) be the sequence \(w::F::X^{\prime}\). Note that \(w^{\prime}\in W\) by Definition 3 and the choice of set \(X\), set \(X^{\prime}\), and sequence \(w^{\prime}\). Also, \(w\sim_{F}w^{\prime}\) by Definition 4 and the choice of sequence \(w^{\prime}\).
Note that \(\neg\mathsf{K}_{\overline{E}}(\mathsf{K}_{E}\delta\to\mathsf{K}_{F}\varphi) \in X\subseteq X^{\prime}=hd(w^{\prime})\) by the choice of set \(X\), set \(X^{\prime}\), and sequence \(w^{\prime}\). Thus, \(\mathsf{K}_{\overline{E}}(\mathsf{K}_{E}\delta\to\mathsf{K}_{F}\varphi) \notin hd(w^{\prime})\) because set \(hd(w^{\prime})\) is consistent. Hence, by Lemma 10, there is a state \(u\in W\) such that \(w^{\prime}\sim_{\overline{E}}u\) and \(\mathsf{K}_{E}\delta\to\mathsf{K}_{F}\varphi\notin hd(u)\). Then, \(\mathsf{K}_{E}\delta\in hd(u)\) and \(\mathsf{K}_{F}\varphi\notin hd(u)\) because \(hd(u)\) is a maximal consistent set. Statements \(w^{\prime}\sim_{\overline{E}}u\) and \(\mathsf{K}_{E}\delta\in hd(u)\) imply that \((w^{\prime},E,\delta,u)\in M\) by Definition 6. Statement \(\mathsf{K}_{F}\varphi\notin hd(u)\) implies that there is a state \(u^{\prime}\in W\) such that \(u\sim_{F}u^{\prime}\) and \(\varphi\notin hd(u^{\prime})\) by Lemma 10.
**Case II:** set \(Y\) is consistent. Let \(Y^{\prime}\) be any maximal consistent extension of the set \(Y\) and let state \(w^{\prime}\) be the sequence \(w::F::Y^{\prime}\). As in the previous case, \(w^{\prime}\in W\) by Definition 3 and the choice of set \(Y\), set \(Y^{\prime}\), and sequence \(w^{\prime}\). Also, \(w\sim_{F}w^{\prime}\) by Definition 4 and the choice of \(w^{\prime}\).
Note that \(\neg\mathsf{K}_{\overline{E}}\mathsf{K}_{F}\varphi\in Y\subseteq Y^{\prime}=hd(w ^{\prime})\) by the choice of set \(Y\), set \(Y^{\prime}\), and sequence \(w^{\prime}\). Thus, \(\mathsf{K}_{\overline{E}}\mathsf{K}_{F}\varphi\notin hd(w^{\prime})\) as set \(hd(w^{\prime})\) is maximal consistent. Hence, by Lemma 10, there is a state \(u\in W\) such that
\[\begin{array}{rcl}w^{\prime}\sim_{\overline{E}}u&\text{and}&\mathsf{K}_{F} \varphi\notin hd(u).\end{array} \tag{6}\]
At the same time, \(\neg\Box_{E}\delta\in Y\subseteq Y^{\prime}=hd(w^{\prime})\) by the choice of set \(Y\), set \(Y^{\prime}\), and sequence \(w^{\prime}\). Thus, \(\Box_{E}\delta\notin hd(w^{\prime})\) because set \(hd(w^{\prime})\) is consistent. Then, \((w^{\prime},E,\delta,u)\in M\) by Definition 6 and because \(w^{\prime}\sim_{\overline{E}}u\) by statement (6).
Finally, \(\mathsf{K}_{F}\varphi\notin hd(u)\) by statement (6). Therefore, by Lemma 10, there exists a state \(u^{\prime}\in W\) such that \(u\sim_{F}u^{\prime}\) and \(\varphi\notin hd(u^{\prime})\). This concludes the proof of the lemma. \(\Box\)
The next "truth lemma" follows from the four previous lemmas in the standard way. Due to the space constraint, we give its proof in the appendix.
**Lemma 13**.: \(w\Vdash\varphi\) _iff \(\varphi\in hd(w)\)._
**Theorem 1**.: _If \(X\not\vdash\varphi\), then there is a state \(w\) of a clandestine game such that \(w\Vdash\chi\) for each formula \(\chi\in X\) and \(w\nVdash\varphi\)._
Proof.: If \(X\not\vdash\varphi\), then set \(X\cup\{\neg\varphi\}\) is consistent. Let \(w\) be any maximal consistent extension of this set. Then, \(w\Vdash\chi\) for each formula \(\chi\in X\) and \(w\Vdash\neg\varphi\) by Lemma 13. Therefore, \(w\nVdash\varphi\) by item 2 of Definition 2. \(\Box\)
## 8 Conclusion
In this paper, we proposed a sound and complete logical system that describes the properties of the clandestine power modality \(\Box_{C}\varphi\). A natural generalization of our work could be a study of a "partially clandestine" modality \(\Box_{C}^{F}\varphi\), which stands for "coalition \(C\) knows an operation that it can use to achieve \(\varphi\) unnoticed by anyone outside the (friendly) coalition \(F\)".
It is also possible to consider a broader class of clandestine operations that achieve a goal through several consecutive clandestine actions of the given coalition. This type of multi-step operation is similar to the multi-step strategies studied in know-how logics [18, 19, 20, 21].
|
2304.00473 | Kernel-level Rootkit Detection, Prevention and Behavior Profiling: A
Taxonomy and Survey | One of the most elusive types of malware in recent times that pose
significant challenges in the computer security system is the kernel-level
rootkits. The kernel-level rootkits can hide its presence and malicious
activities by modifying the kernel control flow, by hooking in the kernel
space, or by manipulating the kernel objects. As kernel-level rootkits change
the kernel, it is difficult for user-level security tools to detect the
kernel-level rootkits. In the past few years, many approaches have been
proposed to detect kernel-level rootkits. It is not much difficult for an
attacker to evade the signature-based kernel-level rootkit detection system by
slightly modifying the existing signature. To detect the evolving kernel-level
rootkits, researchers have proposed and experimented with many detection
systems. In this paper, we survey traditional kernel-level rootkit detection
mechanisms in literature and propose a structured kernel-level rootkit
detection taxonomy. We have discussed the strength and weaknesses or challenges
of each detection approach. The prevention techniques and profiling
kernel-level rootkit behavior affiliated literature are also included in this
survey. The paper ends with future research directions for kernel-level rootkit
detection. | Mohammad Nadim, Wonjun Lee, David Akopian | 2023-04-02T07:12:35Z | http://arxiv.org/abs/2304.00473v1 | # Kernel-level Rootkit Detection, Prevention and Behavior Profiling: A Taxonomy and Survey
###### Abstract
One of the most elusive types of malware in recent times, posing significant challenges to computer security systems, is the kernel-level rootkit. Kernel-level rootkits can hide their presence and malicious activities by modifying the kernel control flow, by hooking in the kernel space, or by manipulating kernel objects. Because kernel-level rootkits change the kernel itself, it is difficult for user-level security tools to detect them. In the past few years, many approaches have been proposed to detect kernel-level rootkits. It is not very difficult for an attacker to evade a signature-based kernel-level rootkit detection system by slightly modifying an existing signature. To detect evolving kernel-level rootkits, researchers have proposed and experimented with many detection systems. In this paper, we survey traditional kernel-level rootkit detection mechanisms in the literature and propose a structured kernel-level rootkit detection taxonomy. We discuss the strengths and weaknesses or challenges of each detection approach. Prevention techniques and literature related to profiling kernel-level rootkit behavior are also included in this survey. The paper ends with future research directions for kernel-level rootkit detection.
Kernel-level Rootkit · Detection · Taxonomy · Survey · Operating System Security
## 1 Introduction
The kernel is a core part of a computer operating system (OS) that plays an important role in managing computer resources. To conduct highly privileged arbitrary malicious operations, attackers compromise the OS kernel by loading a malicious kernel module (a kernel-level rootkit) into the kernel space. Kernel-level rootkits are among the most sophisticated and destructive tools available to attackers because of their ability to hide their presence while holding high (root) privileges. It is generally difficult for an ordinary user to notice the presence of a kernel rootkit in the system. The lack of protection and isolation in kernel space makes it vulnerable to kernel-level rootkit attacks, which can perform many malicious operations such as process hiding, module hiding, network communication hiding, and sensitive information gathering. Because the kernel is the lowest level of an operating system and has the highest privileges to access resources, an attacker can access the resources of an operating system by exploiting kernel vulnerabilities. Recently, kernel-level rootkit techniques have been employed by more and more malware to gain high privilege in the OS kernel and hide malicious activities. The ZeroAccess malware used rootkit techniques to hide itself on an infected machine and to download other malware from a botnet [1]; it infected millions of Microsoft Windows machines. The Zacinlo malware leverages rootkit techniques to propagate adware on the Windows 10 operating system [2].
The detection module for kernel-level rootkits can be located at different layers of a system. Based on the location of the detection module, kernel-level rootkit detection mechanisms can be grouped into three categories: host-based, virtualization-based, and external hardware-based. Starting from primitive host-based detection methods, virtualization-based mechanisms have gained popularity and largely replaced host-based ones, because host-based methods are themselves vulnerable to kernel-level rootkits. Although hardware-based detection techniques show good performance, they come at great cost. Detection methods can also be classified into two categories: static and dynamic. Static methods classify malicious kernel drivers or modules by analyzing their code to distinguish malicious behavioral features. However, code obfuscation can make it difficult to statically analyze a kernel module, so dynamic methods have been proposed to address the obfuscation problem. The basic idea of dynamic detection is to execute the kernel-level rootkit in a suitable environment and observe its run-time behavior; the observed behavior is then used as a signature to detect the rootkit in a production environment. Some existing techniques use an emulator to execute the kernel-level rootkit, with the limitation that rootkits relying on specific hardware devices may not behave correctly in the emulator. Another approach is to execute the rootkit in virtual machines with full operating system capabilities. Based on their working principle, kernel-level rootkit detection approaches can also be classified as signature-based, behavior-based, cross-view-based, and integrity-based. A kernel-level rootkit can be detected by monitoring kernel data structure invariants and creating hypothesized signatures. The hardware events that occur during the execution of system calls differ between a legitimate and an infected system and thus reveal the behavior of a kernel-level rootkit. Fingerprints of a kernel-level rootkit infection can also be traced in volatile memory to perform cross-view detection. Access control policies can be enforced to protect the integrity of the OS kernel against kernel-level rootkits. Researchers are also focusing on learning-based detection techniques, because machine learning and deep learning have demonstrated high accuracy in automatically detecting known and unknown malware.
Several works have surveyed prior malware analysis, classification, and detection techniques [3, 4, 5]. Based on their interaction with the operating system, Rutkowska [13] proposed a classification taxonomy of stealthy malware. Although the kernel-level rootkit is part of the malware family, it is highly distinct from other types of malware. The advantages and disadvantages of technologies for writing and detecting kernel-level rootkits are briefly discussed in [6]. Tyler Shields [7] presented a brief history and the evolution of rootkits, overviewing detection techniques for different types of rootkits including application-level, library-level, kernel-level, firmware-level, and virtualized rootkits. Finally, Shields' paper [7] analyzed the impact that rootkits have on the digital forensics process. A comprehensive and structured view of prior kernel-level rootkit detection mechanisms was documented by Joy et al. [8], who classified the detection mechanisms into three categories based on the position of the detection module. A survey of rootkit techniques was presented by Kim et al. [9]; it describes both user-level and kernel-level rootkit techniques using rootkit samples and briefly covers different hooking techniques such as SSDT hooking, IDT hooking, and inline function hooking. Bravo and Garcia [10] discussed the classification and techniques of rootkits, followed by rootkit detection approaches. Li et al. [11] surveyed the core implementation details of kernel malware by studying several Linux kernel malware samples. Rudd et al. [12] surveyed, in detail, the stealth technologies widely adopted by kernel-level rootkits, including different hooking techniques as well as the DKOM technique. Not only the stealth techniques but also their countermeasures are overviewed in that paper; most importantly, prior machine-learning-based countermeasures against stealth techniques are discussed briefly, and the authors identified some flawed algorithmic assumptions that hinder malware recognition in the machine learning approach.
### _Problem Statement_
Although the number of kernel-level rootkit attacks is small compared to all reported malware infections, the impact of kernel-level rootkits is fairly large in terms of malicious activities. The elusive nature of kernel-level rootkits makes them difficult to detect; nevertheless, different approaches have been introduced to detect them. There has been a lack of work that details most of the contemporary research on kernel-level rootkit detection techniques in a structured way. In addition, a comparison of strengths and weaknesses or challenges between the different detection approaches needs to be provided. The state-of-the-art research on kernel-level rootkit prevention and behavior profiling also needs to be discussed in detail.
### _Contribution_
The contributions of this study are briefly as follows:
1. This survey is an endeavor to provide a broad and structured overview of the extensive research on kernel-level rootkit detection techniques.
2. We have proposed a solution taxonomy on the kernel-level rootkit detection mechanism (figure 1).
3. Strengths and weaknesses are compared between different kernel-level rootkit detection approaches.
4. Learning-based techniques for kernel-level rootkit detection are covered in detail in this study.
5. Prior literature on profiling the elusive behavior of kernel-level rootkits is included in this survey, along with contemporary research on kernel-level rootkit prevention techniques.
The rest of the paper is organized as follows. Section 2 briefly describes kernel-level rootkit attack approaches; Section 3 categorizes the kernel-level rootkit detection techniques in the literature; Section 4 overviews kernel-level rootkit prevention techniques and existing literature on profiling kernel-level rootkit behavior; Section 5 describes future research directions; and Section 6 concludes this survey.
## 2 Kernel-Level Rootkit
The first generation of rootkits were mainly user-level rootkits that concealed themselves as disk-resident system programs by mimicking system process files. Those rootkits are easy to detect and remove using file integrity tools and user-level security software, so modern rootkits have evolved from disk residency to memory residency to evade detection by file integrity tools. The second generation of rootkits modify the control flow of the computer system to execute malicious code by using different hooking techniques; the return value or functionality requested from the operating system can be altered by executing the malicious code. User-mode hooking is comparatively easier to detect than kernel-mode hooking, as it is implemented in user space. Kernel-mode hooking usually injects malicious code into the kernel space of an OS via a device driver, which makes it difficult to detect by user-mode intrusion detection systems (IDS) and other security tools. The System Service Descriptor Table (SSDT), the Interrupt Descriptor Table (IDT), and I/O Request Packet (IRP) function tables are the most common targets for implementing kernel hooks. The execution of malicious code by second-generation rootkits leaves a memory footprint in both user space and kernel space that can be detected and analyzed. The third generation of rootkits are mostly kernel-level rootkits; despite their limited applications, they are difficult to detect because they can modify dynamic kernel data structures. The Direct Kernel Object Manipulation (DKOM) attack, implemented by third-generation rootkits, targets dynamic data structures in the kernel whose values change during runtime. Kernel-level rootkit techniques can be summarized into the following categories: System Service Hijacking (system call table hooking, replacing the system call table), Dynamic Kernel Object Hooking (virtual file system hooking), and Direct Kernel Object Manipulation (DKOM).
### System Service Hijacking
A system call is basically an interface between user-level processes and the operating system; user-level programs access system resources through this interface. The addresses of all system call routines are stored in a table called the system call table or system service descriptor table. System calls can be attacked by kernel-level rootkits in different ways. For example, attackers can replace a legitimate system call with their own malicious system call by modifying the system call address in the system call table. Attackers can also change the control flow of a system call by modifying the code at the target address, usually by inserting jump instructions that pass control to the malicious code. Additionally, the whole system call table can be replaced by attackers with their own version of the table by overwriting the memory that contains the system call table address [19]. Another important hooking target is the Interrupt Descriptor Table (IDT). The processor uses the IDT to determine the correct response to interrupts and exceptions. As interrupts have no return values, interrupt requests can only be denied by hooking the IDT. In a multiprocessor system, an attacker needs to hook all IDTs, as each CPU has its own IDT.
### Dynamic Kernel Object Hooking
The OS kernel uses the Virtual File System (VFS) to handle file system operations across different types of file systems such as EXT2, EXT3, and NTFS. Thus, VFS is a layer between the actual file systems and the user-level programs that make file-handling system calls to access files. Different data structures are used by VFS to achieve a common file model, such as the file object, inode object, and dentry object. A kernel-level rootkit can modify the field of the file object data structure that contains a pointer to the file_operations structure (f_op) to hide itself without modifying the system call table. Function pointers to inode operation functions, such as the lookup function, are stored in the inode data structure. A kernel-level rootkit can hide a process by modifying the function pointer of the lookup function in the inode data structure of the process directory (/proc) [14].
Figure 1: Proposed taxonomy of the Kernel-level rootkit detection approaches.
### Direct Kernel Object Manipulation (DKOM)
Kernel-level rootkits can also modify kernel data structures by using the DKOM technique. As DKOM aims to modify dynamic kernel data structures, it is harder to detect than kernel hooking, because dynamic objects change during normal runtime operation. Malicious process hiding is a typical example of the DKOM technique. In Windows OS, an _EPROCESS_ data structure is associated with each process. To hide a malicious process, kernel-level rootkits modify the _EPROCESS_ data structures, which are maintained in a doubly linked list: unlinking an element from this process list makes the process invisible to both user-mode and kernel-mode programs. Besides hiding processes, kernel device drivers and active ports can also be hidden using this technique. Implementing DKOM is extremely difficult, because an incorrect change to an operating system kernel data structure may result in system crashes.
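To make the unlinking idea concrete, the following illustrative Python sketch (toy code, not OS code) simulates a doubly linked process list in the spirit of the _EPROCESS_ list: unlinking a node hides it from list traversal while the object itself remains in memory, which is exactly the inconsistency that the cross-view detectors discussed later exploit.

```python
class Process:
    """Toy stand-in for an _EPROCESS-style record in a circular doubly linked list."""
    def __init__(self, pid, name):
        self.pid, self.name = pid, name
        self.prev = self.next = self

def insert(head, proc):
    # Insert proc just before the sentinel head (i.e., at the tail of the list).
    tail = head.prev
    tail.next, proc.prev = proc, tail
    proc.next, head.prev = head, proc

def walk(head):
    # Enumerate processes by following forward links, as a task lister would.
    node, pids = head.next, []
    while node is not head:
        pids.append(node.pid)
        node = node.next
    return pids

def dkom_unlink(proc):
    # DKOM-style hiding: bypass the node by rewiring its neighbors' pointers.
    proc.prev.next = proc.next
    proc.next.prev = proc.prev

head = Process(0, "head")            # sentinel node, analogous to the list head
all_objects = []                     # "memory" view: every allocated object
for pid, name in [(101, "init"), (202, "sshd"), (666, "evil")]:
    p = Process(pid, name)
    insert(head, p)
    all_objects.append(p)

dkom_unlink(all_objects[-1])         # hide the malicious process

linked_view = set(walk(head))                      # what list traversal sees
memory_view = {p.pid for p in all_objects}         # what a memory scan sees
print("hidden PIDs:", memory_view - linked_view)   # -> {666}
```

The hidden object still exists in "memory" even though the list walk no longer reports it, which is why memory-scanning and cross-view techniques remain effective against DKOM.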
Table 1 summarizes the kernel-level rootkit detection approaches selected for this study based on environment (Host, Virtual Machine, Emulator), focused feature (Static, Dynamic) and operating system (Windows, Linux, macOS).
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|c|c|c|c|} \hline & & \multicolumn{3}{c|}{Environment} & \multicolumn{3}{c|}{Focused} & \multicolumn{3}{c|}{Operating} \\ & & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{Feature} & \multicolumn{3}{c|}{System} \\ \cline{3-11} & & & Prior Works & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} \\ \cline{3-11} & & & & Prior Works & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} & \multicolumn{3}{c|}{} \\ \cline{3-11} & & & & & & & & & & \\ \hline \multirow{11}{*}{DKOM} & Kruegel et al. [15], Levine et al. [19; 20; 21], KRGuard [23; 24] & \(\surd\) & & & \(\surd\) & & & \(\surd\) & \\ \cline{2-11} & Zhou and Makris [22] & & & \(\surd\) & \(\surd\) & & & \(\surd\) & \\ \cline{2-11} & DataGene [25; 26] & & & \(\surd\) & & \(\surd\) & & \(\surd\) & \\ \hline \multirow{11}{*}{DKOM} & Ring and Cole [27], DCFI-Checker [35] & \(\surd\) & & & & \(\surd\) & \(\surd\) & \\ \cline{2-11} & KernelGuard [28] & & & \(\surd\) & \(\surd\) & & \(\surd\) & \\ \cline{2-11} & HookScout [29] & & & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \\ \cline{2-11} & Nunchecker [31; 32], Wang et al. [33], KLrtD [41] & & \(\surd\) & & & \(\surd\) & & \(\surd\) & \\ \cline{2-11} & Patchfinder [34] & \(\surd\) & & & \(\surd\) & & & \(\surd\) & \\ \cline{2-11} & Blacksheep [37], dAnubis [36] & & & \(\surd\) & & \(\surd\) & \(\surd\) & \\ \cline{2-11} & Fluorescence [38] & & \(\surd\) & & & \(\surd\) & \(\surd\) & \\ \cline{2-11} & Wang [39] & \(\surd\) & & & & \(\surd\) & \(\surd\) & \\ \hline \multirow{11}{*}{DKOM} & Strider GhostBuster [45] & \(\surd\) & \(\surd\) & & \(\surd\) & \(\surd\) & \(\surd\) & \\ \cline{2-11} & Wampler and Graham [50; 51] & \(\surd\) & & \(\surd\) & & & \(\surd\) & \\ \cline{1-1} \cline{2-11} & Molina et al. [42], KeRTD [43], Rkfinder [60], HyBIS [63], & & & & \(\surd\) & \(\surd\) & \\ \cline{1-1} \cline{2-11} & WinWizard [64], Dolan-Gavitt et al. [55] & & \(\surd\) & & & \(\surd\) & \(\surd\) & \\ \cline{1-1} \cline{2-11} & Lycsoid [49] & & \(\surd\) & & & \(\surd\) & \(\surd\) & \\ \cline{1-1} \cline{2-11} & XView [53], SigGENE [56] & & & \(\surd\) & & \(\surd\) & \(\surd\) & \\ \cline{1-1} \cline{2-11} & BeCFI [71] & \(\surd\) & & & & \(\surd\) & \(\surd\) & \\ \cline{1-1} \cline{2-11} & SigGraph [57] & & & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) & \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the Kernel-level rootkit detection approaches selected for this study.
## 3 Kernel-level rootkit detection
Kernel-level rootkit detection approaches can be categorized into six major classes: signature-based, behavior-based, cross-view-based, integrity-based, external hardware-based, and learning-based. Each major category can then be sub-categorized according to its underlying working principles.
### Signature-based Detection
Signature-based detection is one of the most common techniques used to address software threats. In this type of detection, tools rely on a predefined repository of static signatures (fingerprints) that represent known threats. Different signature-based kernel-level rootkit detection techniques are discussed in detail in this section. The strengths and weaknesses or challenges of the signature-based kernel-level rootkit detection approaches are shown in Table 2.
#### 3.1.1 Module Static Analysis
The most common way of inserting a kernel-level rootkit into memory is through a loadable kernel module (LKM). The runtime behavior of kernel-level rootkits differs significantly from that of regular kernel modules or device drivers. Before a module is loaded into the kernel, its binary can be checked for malicious instruction-sequence signatures that either perform a write operation to an illegal memory area or calculate an address in kernel space using a forbidden kernel symbol reference and perform a write operation using the calculated address. A similar approach is proposed by Kruegel et al. [15] to detect kernel-level rootkit modules by leveraging symbolic execution. This method is ineffective against malicious code injection into the kernel that does not use the module loading interface.
#### 3.1.2 Checking File Directories
Some primitive detection tools look into file directories to detect kernel-level rootkits, since some rootkits create a specific directory name in a certain location (e.g., the 'Knark' rootkit creates a directory named _'/proc/knark'_). Detection is performed by checking a set of predefined directories. Detection tools such as Chkrootkit [16] and OSSEC [17] combine file directory signature checking with other techniques to detect kernel-level rootkits. However, this type of detection can easily be evaded by slightly modifying the directory name.
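As a minimal illustration of this style of check, the hedged Python sketch below scans for a few well-known rootkit artifacts such as the '/proc/knark' directory mentioned above. The signature list is purely illustrative; it is not the actual database shipped with Chkrootkit or OSSEC, and the two extra paths are hypothetical placeholders.

```python
import os

# Illustrative signature list; real tools such as Chkrootkit ship far larger,
# regularly updated databases of known rootkit paths.
KNOWN_ROOTKIT_PATHS = [
    "/proc/knark",          # directory created by the Knark rootkit
    "/dev/.hidedir",        # hypothetical example of a suspicious hidden device entry
    "/usr/lib/.rootkit",    # hypothetical placeholder for other published signatures
]

def scan_known_paths(paths=KNOWN_ROOTKIT_PATHS):
    """Return the subset of known rootkit paths that exist on this system."""
    return [p for p in paths if os.path.exists(p)]

if __name__ == "__main__":
    hits = scan_known_paths()
    if hits:
        print("possible rootkit artifacts found:", hits)
    else:
        print("no known rootkit paths present")
```

As the text notes, a rootkit author only needs to rename the directory to defeat this kind of check, which is why path signatures are usually combined with other techniques.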
#### 3.1.3 Checking System Call Table
As system calls are used to access system resources, the system call table is the object most frequently targeted by kernel-level rootkits. The system call table data structure stores the system call addresses in kernel memory. A kernel-level rootkit can tamper with system calls in three ways: by modifying a system call address in the system call table to a malicious address; by overwriting the first few instructions of a system call with a jump instruction that executes malicious code; or by redirecting the entire system call table to a new kernel memory location. Samhain Labs developed the kern_check [18] program, which compares the current system call table with the original system call table addresses stored in the '_/boot/System.map_' file of a Linux system to detect kernel-level rootkits that overwrite the system call table. Modifying the system call code itself is more complicated for the attacker; comparing hash values with those of the uninfected system calls can indicate such a modification. Levine et al. [19] modified the kern_check program to detect system call table redirection. They assumed that the implementation of each malicious system call is unique to a particular kernel-level rootkit, yielding a signature that can be used to categorize kernel-level rootkits [20, 21]. Zhou and Makris [22] used several x86 hardware conventions to detect system call table and system call routine modification. KRGuard [23, 24] uses recent hardware features of the processor to detect kernel-level rootkits that modify the system call table. However, this technique cannot detect DKOM attacks, since by nature they do not affect the system calls.
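A rough sketch of the comparison logic used by kern_check-style tools is given below: trusted routine addresses are taken from the distribution's System.map, and any live system call table entry that no longer points at the recorded address is reported. Reading the live table requires kernel memory access (for example, a small helper kernel module or, on old kernels, /dev/kmem), so that step is deliberately left abstract here; the function and file names are otherwise standard Linux paths.

```python
def load_system_map(path="/boot/System.map"):
    """Trusted name -> address mapping taken from the distribution's System.map."""
    trusted = {}
    with open(path) as f:
        for line in f:
            addr, _type, name = line.split()[:3]
            trusted[name] = int(addr, 16)
    return trusted

def read_syscall_table_entries():
    """Placeholder for the privileged part of a kern_check-style tool.

    Acquiring the live sys_call_table entries needs kernel memory access; a real
    tool would return something like {"sys_getdents64": 0xffffffff812345, ...}.
    """
    raise NotImplementedError("needs a kernel-memory acquisition helper")

def find_hooked_entries(system_map_path="/boot/System.map"):
    trusted = load_system_map(system_map_path)
    live = read_syscall_table_entries()
    # Any live entry that no longer points at the address recorded for the
    # corresponding sys_* routine suggests system call table hooking.
    return {name: (hex(trusted.get(name, 0)), hex(addr))
            for name, addr in live.items() if trusted.get(name) != addr}
```

Note that this check only covers table-entry redirection; detecting overwritten instructions inside a routine requires hashing the routine code, and a fully redirected table requires locating the table pointer itself.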
#### 3.1.4 Kernel Data Access Pattern
Kernel-level rootkits have evolved from injecting malicious code to maliciously reusing legitimate code. Unique data patterns exist when a kernel-level rootkit tampers with core kernel data. Kernel memory access information, such as the accessing code, the accessed memory type, and the accessed offset, can be used to create data access behavior signatures. DataGene [25, 26], a data-centric OS kernel malware characterization prototype, analyzes the data access behavior of the dynamic kernel objects of the monitored OS at runtime by using memory allocation events. These data access signatures can be used to detect classes of kernel-level rootkits that share the same data access pattern. The access patterns are not only common within a similar class of rootkits but are also found across a variety of different classes.
### Behavior-based Detection
Behavior-based detection evaluates an attack based on its intended actions or behavior. Attempts to perform actions that are clearly abnormal or unauthorized indicate that the action is malicious, or at least suspicious. Different behavior-based kernel-level rootkit detection techniques are discussed in detail in this section. The strengths and weaknesses or challenges of the behavior-based kernel-level rootkit detection approaches are shown in Table 3.
#### 3.2.1 Detecting Hidden Objects on Host
Intruders often install a kernel-level rootkit and later securely remove its binary from the disk, modifying the kernel directly in memory without leaving any trace for traditional file discovery techniques. This type of rootkit can only be detected by monitoring the behavior of hidden objects such as processes, modules, and network connections. Ring and Cole [27] presented the design of a software-based forensics system that is capable of recovering evidence of kernel-level rootkits from volatile memory. The design was implemented as a loadable kernel module that collects all running processes, dynamic kernel memory, system call addresses, all loadable kernel modules, and desired process information. The system freezes the processes, mounts the hard drive in read-only mode, and stores the evidence on removable media to avoid corruption by the kernel-level rootkit.
#### 3.2.2 Kernel Memory Access Behavior
Static kernel data are easy to locate from the kernel symbol table and can be protected without any tracking by applying policies to memory writes within the protected memory range. Since dynamic kernel data can be allocated in any portion of memory, the location of the data must first be tracked before any illegal memory access can be detected. Watchpoints, which watch memory accesses to a pointer to the protected data structure, need to be implemented to track a dynamic data structure pointer and the data it points to. Illegal memory accesses can then be observed by detecting data structure modifications from unauthorized functions. Based on the characteristics of the kernel source code, one can enforce which kernel code is allowed to or prohibited from accessing protected kernel data. KernelGuard [28] is an example of detecting and preventing kernel-level rootkits using kernel memory accesses.
| Approaches | Strengths | Challenges/Weaknesses |
| --- | --- | --- |
| Module static analysis | No need to load the module. | Increased module loading time. |
| Checking file directories | Fast detection. | Easy to evade by slight modification. |
| Checking system call table | Easy to detect modification. | Values need to be stored, and DKOM attacks cannot be detected. |
| Kernel data access pattern | Classes of kernel-level rootkits can be detected. | Performance overhead can occur. |

Table 2: Signature-based Detection of Kernel-level Rootkit: Strengths and Challenges/Weaknesses.
#### 3.2.3 Function Pointer Hooks
A kernel-level rootkit can target dynamically allocated function pointers in kernel data structures to modify persistent control flow. The large number of kernel objects and function pointers, together with closed-source operating systems, can make it difficult to generate an effective hook detection policy. HookScout [29] uses binary code analysis to track function pointers and generate a hook detection policy without access to the OS kernel source code.
#### 3.2.4 Execution Path Analysis
An analysis [30] of Linux kernel-level rootkits shows that a significant number of them persistently violate control-flow integrity. The number of certain hardware events that occur during the execution of a kernel function differs if the control flow of that kernel function has been maliciously modified. These events can easily be counted using hardware performance counters (HPC), part of the performance monitoring unit in most modern processors. NumChecker [31, 32], a virtual machine monitor (VMM) based framework, detects malicious modification of a system call by control-flow-modifying kernel-level rootkits in the guest VM by checking the number of certain hardware events in the host OS during the system call's execution. Wang et al. [33] extended their hardware performance counter-based detection approach by collecting the hardware event samples locally but analyzing them remotely: the remote analyzer reduces the computing resource overhead of the monitored system, and a compressive sensing technique [137] for compressing fine-grained HPC profiles minimizes the I/O bandwidth required for data transmission. Patchfinder [34], developed by Rutkowska, analyzes the execution path of system calls by counting the number of instructions used to execute each system call; the instruction counts of an uninfected system need to be collected beforehand for comparison with the suspected system. This approach is not suitable for detecting DKOM attacks. DCFI-Checker [35] checks dynamic control-flow integrity by counting the executed branch instructions using a performance monitoring counter.
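The counting idea can be sketched with Linux's perf tool rather than raw PMU programming: count retired instructions while a syscall-heavy probe command runs and flag large deviations from a previously recorded clean baseline. This is a coarse user-space approximation, assuming a Linux system where `perf stat` is installed and permitted; NumChecker and Patchfinder perform the equivalent measurement per system call inside a VMM or the kernel, which is far more precise.

```python
import statistics
import subprocess

PROBE_CMD = ["ls", "-lR", "/usr/share/doc"]   # exercises getdents64/stat-heavy code paths

def count_instructions(cmd=PROBE_CMD):
    """Run cmd under `perf stat` and return the retired-instruction count."""
    out = subprocess.run(["perf", "stat", "-x,", "-e", "instructions", "--"] + cmd,
                         capture_output=True, text=True, check=False)
    # With -x, perf writes CSV-style counter lines to stderr: value,unit,event,...
    for line in out.stderr.splitlines():
        fields = line.split(",")
        if len(fields) > 2 and "instructions" in fields[2] and fields[0].strip().isdigit():
            return int(fields[0])
    raise RuntimeError("could not parse perf output; is perf installed and allowed?")

def baseline(runs=10):
    samples = [count_instructions() for _ in range(runs)]
    return statistics.mean(samples), statistics.stdev(samples)

def looks_anomalous(mean, stdev, threshold=5.0):
    # A hooked execution path runs extra instructions, so a large positive
    # deviation from the clean baseline is treated as suspicious.
    return (count_instructions() - mean) > threshold * stdev
```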
#### 3.2.5 Device Driver Behavior
In Windows OS, a kernel-level rootkit typically takes the form of a device driver. To detect this type of rootkit, a comprehensive picture of the device driver needs to be obtained by observing events such as the execution of the driver's code, the invocation of kernel functions, and access to the hardware. dAnubis [36] analyzes a device driver's behavior by instrumenting the emulation environment and provides a human-readable report. Along with common kernel-level rootkit techniques such as hooking, kernel patching, and DKOM, dAnubis gives an overview of the driver's interaction with other drivers and its interface to user-space processes.
#### 3.2.6 Anomaly Within a Herd
By taking advantage of the similarity among a group of analogous machines in a distributed system, one can effectively detect anomalies caused by kernel-level rootkits. Physical memory dumps can be used to compare configuration, kernel code, kernel data, and kernel entry points in order to identify an anomalous machine. As long as the majority of machines are uncompromised and viable memory dumps are available, Blacksheep [37] can single out compromised machines while properly handling anti-virus software and self-modifying code used for security purposes. Fluorescence [38] is a detection approach that requires only limited knowledge of the kernel to detect virtual machines infected by a kernel-level rootkit within a herd of similar virtual machines. The location of the page global directory and the processor's instruction set are used to concisely fingerprint each kernel, and deep learning and clustering approaches are used in Fluorescence to find the anomalous virtual machines.
#### 3.2.7 Rule-based Invariants
As a kernel-level rootkit modifies kernel data structures and kernel objects, it leaves inconsistencies in the system. One can define rules that hold for a clean system and treat any deviation from these rules as an attack. For example, one can define a rule stating that, in Linux, the process views derived from the _task_struct_ list and the _run_list_ data structures should agree. Wang [39] introduced a rule-based approach that chooses different data structures in different layers and performs an information calculation process to define rules as invariants based on that information. KLrHD [41] extracts whitelist rules from normal kernel execution during an inference phase and uses those rules to check for data structure integrity violations during the integrity checker phase.
### Cross-view-based Detection
The basic idea of cross-view-based detection is to compare two different views of the system. We can divide cross-view-based detection into two sub-categories: high-level view vs. low-level view, and inside-the-box view vs. outside-the-box view. In the first category, it is easier to extract the views, but the data can be compromised by the kernel-level rootkit. In the second category, it is difficult to construct the view from outside the box, but the data are safe from the kernel-level rootkit. An overview of the cross-view-based detection approach is shown in figure 2. The strengths and weaknesses or challenges of the cross-view-based kernel-level rootkit detection approaches are shown in Table 4.
#### 3.3.1 High-level View Vs Low-level View
#### 3.3.1.1 Multiple System Utilities
Any discrepancy between the outputs gathered by multiple system utilities from user space can point to a kernel-level rootkit. Molina et al. [42] proposed a live forensic tool based on this idea. However, the data of the forensic tool can be compromised by active kernel-level rootkits, since the tool runs in user space with a lower privilege than the rootkits.
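As a concrete, user-space-only illustration of this idea (and therefore subject to the same limitation that an active kernel-level rootkit can tamper with both views), the sketch below compares the PIDs reported by `ps` with those visible as numeric directories under /proc on a Linux system.

```python
import os
import subprocess

def pids_from_ps():
    """PIDs as reported by the ps utility (header suppressed with 'pid=')."""
    out = subprocess.run(["ps", "-e", "-o", "pid="], capture_output=True, text=True)
    return {int(tok) for tok in out.stdout.split()}

def pids_from_procfs():
    """PIDs enumerated directly from the /proc filesystem."""
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

def cross_view_differences():
    # A PID visible in one enumeration but not the other is suspicious; note that
    # short-lived processes cause benign one-off differences, so only persistent
    # gaps across repeated runs should be treated as evidence of hiding.
    return pids_from_ps() ^ pids_from_procfs()

if __name__ == "__main__":
    print("PIDs present in only one view:", sorted(cross_view_differences()))
```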
| Approaches | Strengths | Challenges/Weaknesses |
| --- | --- | --- |
| Detecting hidden objects on host | Software implementation to store evidence. | Needs to rely on the host OS. |
| Kernel memory access behavior | Dynamic data can be protected. | Needs OS kernel source code. |
| Function pointer hooks | No need to access OS kernel source code. | Detection system running inside the host can be subverted. |
| Execution path analysis | Enhanced security with reduced performance overhead. | Vulnerable against DKOM attacks. |
| Device driver behavior | Malicious device driver behavior can be emulated. | Unable to analyze kernel rootkit injection that does not use a device driver. |
| Anomaly within a herd | Effective for homogeneous corporate networks and clouds. | Does not work if the majority of machines are compromised. |
| Rule-based invariants | No prior knowledge of the kernel-level rootkit needed. | A large set of invariants is required. |

Table 3: Behavior-based Detection of Kernel-level Rootkit: Strengths and Challenges/Weaknesses.
#### 3.3.1.2 Device Driver at Low-level
A low-level view of the running system can be portrayed using a device driver implemented in the kernel. However, this approach is vulnerable to kernel-level rootkits, as the device driver and the rootkit run with the same privilege; an access control list can be enforced to avoid such subversion. Kernel Rootkit Trojan Detection (KeRTD) [43], a cross-view-based solution implemented in the host, uses view differences to detect kernel-level rootkits. DeepScanner [44], implemented as a loadable kernel module (LKM) in Linux, uses inter-structure signature and imported signature concepts to scan kernel memory for hidden processes, sockets, and kernel modules according to proposed invariants; the output of system utilities including _ps_, _netstat_, and _lsmod_ is used for a cross-view comparison to detect kernel-level rootkits. Strider GhostBuster [45] also uses a driver to perform a low-level scan and compares the result with a high-level scan.
#### 3.3.1.3 Memory Dump Inside Host
Korkin and Nesterov proposed the Malware Analysis System for Hidden Knotty Anomalies (MASHKA) [46] for memory dumping and analysis of a host, which can be used to detect kernel-level rootkits. MASHKA uses encryption to protect the saved dump file from modification. The analysis system is implemented on Windows OS and uses a dynamic bit signature (DBS) to obtain the full process list from the EPROCESS structures in the memory dump file, which can then be compared with the list obtained by system utility tools. The system is also able to detect hidden drivers. The authors additionally discussed the possibility of deploying MASHKA as security as a service (SaaS) in the cloud.
#### 3.3.2 Inside-the-box View Vs Outside-the-box View
#### 3.3.2.1 Live Kernel Object Mapping
Snapshot-based memory mapping is time-specific, and kernel memory can be manipulated by the kernel-level rootkit within the time gap between two memory snapshots. Moreover, not all data structures have an invariant from which an untampered view can be created. By capturing the allocation and deallocation events of a kernel object, a live untampered view of that kernel object can be mapped. A difference between the set of kernel objects found by
Figure 2: An overview of cross-view-based kernel-level rootkit detection mechanism.
traversing the kernel memory and the live untampered view indicates an anomaly caused by a kernel-level rootkit. Using this approach, LiveDM [47] detects DKOM-based kernel-level rootkits. KOP [48] can map kernel objects, which can be used to detect objects hidden by a kernel-level rootkit.
#### 3.3.2.2 Process List Length Hypothesis
The lengths of the process lists obtained from a low-level and a high-level view can be compared to detect processes hidden by a kernel-level rootkit. On an idle system, it suffices to take a single instance of the two process lists and compare them, but on an active system, without perfect synchronization, there could be false positives. Lycosid [49] obtains a trusted view of guest processes from within a VMM and overcomes this problem by taking many pairs of measurements over time and then performing a paired-sample hypothesis test to estimate the number of hidden processes.
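The hedged sketch below reproduces only the statistical core of this idea: given repeated, time-aligned measurements of a trusted (e.g., VMM-level) process count and an untrusted (guest-reported) count, a paired t-test on the differences indicates whether processes are being systematically hidden. The measurement values are synthetic placeholders, and scipy's `ttest_rel` (with the `alternative` argument, scipy >= 1.6) stands in for Lycosid's exact procedure.

```python
from scipy import stats

def hidden_process_test(trusted_counts, untrusted_counts, alpha=0.01):
    """Paired-sample test: is the trusted count systematically larger?"""
    diffs = [t - u for t, u in zip(trusted_counts, untrusted_counts)]
    # One-sided paired t-test; requires scipy >= 1.6 for `alternative`.
    result = stats.ttest_rel(trusted_counts, untrusted_counts, alternative="greater")
    est_hidden = sum(diffs) / len(diffs)   # mean difference ~ number of hidden processes
    return result.pvalue < alpha, est_hidden, result.pvalue

# Synthetic example: roughly one process is consistently unreported by the guest,
# with small timing noise from processes starting/exiting between measurements.
trusted   = [153, 151, 155, 152, 154, 153, 152, 156, 153, 154]
untrusted = [152, 150, 154, 152, 153, 152, 151, 154, 152, 153]
print(hidden_process_test(trusted, untrusted))
```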
#### 3.3.2.3 System Call Address Distribution

Knowledge about the distribution of system call addresses in a clean system can be a good measure for detecting kernel-level rootkits. Wampler and Graham [50] proposed a statistical technique that compares the distribution of system call addresses in a clean system and a suspicious system. Experiments with a couple of kernel-level rootkits showed that the 'largest extreme value' distribution, together with the Anderson-Darling (AD) test [138], can be used to detect kernel-level rootkits. The authors later experimented with the Enyelkm kernel-level rootkit, which attacks the system via system call target modification [51]. In a system call target modification attack, the system call table does not need to be changed; only the first few instructions of a system call are overwritten with a jump instruction that redirects the control flow to malicious code. The authors first disassembled the running kernel to collect all conditional and unconditional jump instructions and then analyzed the memory address operands of those instructions, with the appearance order of these memory addresses considered as a second dimension. A normality-based detection is then used to detect the malicious addresses.
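A rough sketch of this statistical test is shown below, assuming the system call (or jump-target) addresses have already been extracted for a known-clean and a suspect kernel. Following the papers above, the Anderson-Darling statistic against an extreme-value (Gumbel) distribution is compared between the two samples; scipy's `anderson` function and the simple ratio-based threshold are only convenient stand-ins for the authors' exact procedure.

```python
from scipy.stats import anderson

def ad_statistic(addresses):
    """Anderson-Darling statistic of the address sample against a Gumbel
    (largest-extreme-value) distribution, in the spirit of Wampler and Graham."""
    return anderson([float(a) for a in addresses], dist="gumbel_r").statistic

def flag_suspicious(clean_addresses, suspect_addresses, tolerance=2.0):
    # A hooked table introduces outlying (e.g., module-space) addresses, which
    # inflates the fit statistic relative to the clean baseline.
    clean_stat = ad_statistic(clean_addresses)
    suspect_stat = ad_statistic(suspect_addresses)
    return suspect_stat > tolerance * clean_stat, clean_stat, suspect_stat
```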
#### 3.3.2.4 System Call Events

Due to the semantic gap, it is difficult to acquire knowledge about guest kernel data structures from the virtual machine monitor, and advanced attacks can tamper with the layout of the guest kernel data structures [52]. The semantic gap problem for reconstructing process information can be overcome by intercepting and interpreting the system call events of the guest operating system. Executed instructions can be tracked to intercept the beginning and return of a system call event, and the system call along with its parameters can then be interpreted by reading certain hardware register values. XView [53] constructs an outside-the-box view of the active process list from system call events and compares it with the output of inside-the-box system utility tools to detect hidden processes. VMDetector [54] uses system call events to construct active process lists from the kernel-level and VMM-level views and then compares them with a user-level view to detect hidden processes.
#### 3.3.2.5 Dynamic Data Structure Signature

Kernel-level rootkits often use the DKOM technique to hide processes, threads, and modules. The hidden objects can be detected by scanning the kernel memory for data structure object signatures and performing a cross-view detection. A kernel-level rootkit can modify non-essential fields of the data structures to evade memory-scanning detection that relies on brittle signatures; robust signatures are instead built from data structure fields that make the object invalid if changed. Similar work has been presented by Dolan-Gavitt et al. [55], who showed that it is possible to evade memory scanning by modifying the non-essential fields of the EPROCESS data structure in Windows OS. Profiles of the robust fields of data structure objects during execution are also used as signatures to detect kernel-level rootkits. SigGENE [56] profiles kernel object features during malicious code execution. SigGraph [57] generates graph-based structural invariant signatures that achieve high accuracy in recognizing kernel data structure instances.
#### 3.3.2.6 Volatile Memory Traces
A kernel-level rootkit may hide malicious modules, processes, network connections, etc., but it still leaves a footprint in volatile memory while it executes. Kernel-level rootkits that do not use DKOM techniques are easier to detect by simply reconstructing the corresponding data structure's view from volatile memory. For example, _PsActiveProcessHead_ and _init_task_ are the heads of the process list in Windows and Linux, respectively, and one can traverse the complete process list starting from these positions. Xie and Wang [58] applied this approach to other data structures to detect kernel-level rootkits. However, this approach is vulnerable to the DKOM technique, since DKOM modifies the data structures in memory; the dynamic data structure signatures described in Section 3.3.2.5 can be used to locate all data structure objects instead. Volatility [59] is a well-known framework for reconstructing data structure views from volatile memory. Rkfinder [60] generates an abstract view of the system state to reveal inconsistencies by integrating major capabilities of the Volatility framework. The drawback of memory forensic tools is their dependency on up-to-date kernel information for the target OS. HyperLink [61] is an implementation of partial retrieval of process information using memory forensics without requiring the OS kernel source code. Other works, such as Hua and Zhang [62], HyBIS [63], WinWizard [64], and Zaki and Humphery [65], also leverage memory traces to detect kernel-level rootkits. MAS [66] uses memory traversal to determine the visibility, to system tools, of data objects found in a memory snapshot.
Most of the prior research on kernel-level rootkit detection has focused on Windows and Linux-based operating systems. Case and Richard [67] proposed new memory forensics and analysis techniques for Mac OS X, motivated by Windows and Linux-based detection strategies. The authors described the system service functionalities that can be abused and developed a Volatility plugin for each of those services to detect tampering or malicious use. Volafox [68] is a memory analysis toolkit for Mac OS X that can be used to detect malicious modification of memory by a kernel-level rootkit. Kyeong-Sik Lee [69], the prime developer of Volafox, described the memory forensic technique adopted by Volafox.
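To illustrate how such memory-image cross-views are typically assembled in practice, the sketch below drives the Volatility 3 command-line tool over a Windows memory dump and diffs the PIDs reported by the linked-list walker (windows.pslist) against those recovered by pool scanning (windows.psscan). The `vol` command name, the `-q`/`-r json` flags, and the "PID" column key are assumptions about the local installation and output format and may need adjusting; they are not prescribed by the surveyed papers.

```python
import json
import subprocess

def vol_pids(image, plugin):
    """Run a Volatility 3 plugin and return the set of PIDs it reports.

    Assumes the `vol` entry point with a JSON renderer (`-r json`); adjust to
    your installation (e.g. `vol.py`, or parse the default table output).
    """
    out = subprocess.run(["vol", "-q", "-r", "json", "-f", image, plugin],
                         capture_output=True, text=True, check=True)
    return {int(row["PID"]) for row in json.loads(out.stdout) if "PID" in row}

def dkom_hidden_pids(image="memory.dmp"):
    listed  = vol_pids(image, "windows.pslist")   # walk of the active process list
    scanned = vol_pids(image, "windows.psscan")   # signature scan of pool memory
    # Processes found by scanning but absent from the linked list are the
    # classic symptom of DKOM-based process hiding.
    return scanned - listed
```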
#### 3.3.2.7 CPU Execution Time Metric
CPU execution time can be a reliable source for constructing a view of the running process list, as it is very difficult to forge this value. One can hook the tap points (execution points where monitoring can be performed) of process data structure object creation and deletion and then count the CPU execution time of each executed process, using a hash table to store the accumulated CPU time per process. AUTOTAP [70] uncovers such tap points for kernel data structure objects. A cross-view comparison between the resulting running process list and the output of system utilities can then detect hidden processes.
#### 3.3.2.8 Hidden Control Flow
A kernel-level rootkit introduces unintended or hidden control flow by injecting new instructions or misusing existing ones. Since every instruction must be issued to the processor, it is impossible for a kernel-level rootkit to fool the processor by modifying control flow. One can construct a hardware view of the number of branch instructions issued to the processor with the support of the performance monitoring counter; a cross-view comparison with the software view of executed instructions then reveals the hidden control flow. BeCFI [71] is an implementation of this approach.
#### 3.3.2.9 Process Switching
By monitoring process switching and mapping memory, it is possible to construct a semantic view of the processes running inside a guest VM. One can monitor process switches by checking kernel stack switching and extract the corresponding raw memory using memory mapping; the raw memory is then translated into high-level semantics with the help of a semantic library. RMVP [72] builds a real-time process monitor in this way to detect hidden processes.
#### 3.3.2.10 Walking Through Linked List
One can construct a kernel view of the loaded modules and the list of running processes by walking through the corresponding linked lists, and the output of system utility tools can then be used for a cross-view detection. This approach is vulnerable to DKOM attacks, as the kernel-level rootkit unlinks the data object from the linked list. XenKIMONO [73] uses this approach for cross-view-based detection along with integrity measurement.
| Approaches | Strengths | Challenges/Weaknesses |
| --- | --- | --- |
| **High-level view vs. Low-level view** | | |
| Multiple system utilities | Can be implemented inside the host. | Vulnerable against modern kernel-level rootkits. |
| Device driver at low-level | Scanning can be done in a short time. | Has the same privileges as the kernel-level rootkit. |
| Memory dump inside host | Kernel memory can be dumped inside the host with encryption. | Kernel-level rootkit can subvert the detection system. |
| **Inside-the-box view vs. Outside-the-box view** | | |
| Live kernel object mapping | Untampered view cannot be manipulated by the kernel-level rootkit. | Obfuscation techniques can confuse the detector. |
| Process list length hypothesis | Trusted view is constructed outside the host. | Only applicable to detecting hidden processes. |
| System call address distribution | Effective for system call target modification attacks. | Natural outliers may cause disturbance. |
| System call events | Active running process list can be monitored. | Unable to detect hidden modules. |
| Dynamic data structure signature | Dynamic kernel objects can be detected. | Signatures can be evaded. |
| Volatile memory traces | Detection system can be implemented remotely. | Depends on OS kernel information, and transient attacks may remain undetected. |
| CPU execution time metric | Difficult to forge the execution time value. | Needs to store a hash table. |
| Hidden control flow | Impossible to fool the processor by modifying control flow. | Need for hardware support increases overhead. |
| Process switching | Real-time processes can be monitored. | Only hidden processes can be detected. |
| Walking through linked list | Kernel objects hidden from system utilities can be easily detected. | Vulnerable against DKOM attacks. |

Table 4: Cross-view-based Detection of Kernel-level Rootkit: Strengths and Challenges/Weaknesses.
### Integrity-based Detection
The kernel-level rootkit tampers with the integrity of both the static and the dynamic regions of the operating system. While some research focuses only on static region integrity, recent research focuses on dynamic region integrity, as modern kernel-level rootkits mostly alter dynamic data structures. It is comparatively easier to check the integrity of the static region, since the dynamic region changes during runtime. The strengths and weaknesses or challenges of the integrity-based kernel-level rootkit detection approaches are shown in Table 5.
#### 3.4.1 Static Region Integrity
#### 3.4.1.1 Write Attempt to Read-only Memory Section
In modern computer architectures, certain sections of memory are read-only as part of the memory protection interface; kernel-level rootkits modify these sections by running with the highest privilege. Significant research in this area was done by Garfinkel and Rosenblum [74], who built Livewire at the hypervisor layer to detect any write attempt to sensitive read-only memory sections by leveraging the isolation, inspection, and interposition properties of the virtual machine monitor. System states and events from the VMM are intercepted by a policy engine, which decides whether to pause the VM or refuse access to hardware resources; the policy engine acts as an intrusion detection system (IDS) with strong isolation and good visibility into the state of the monitored host. Paladin [75, 76] detects kernel-level rootkits by monitoring write accesses to the memory image of the kernel, various jump tables, and system files. StackSafe [77] also checks for write attempts to the kernel code. OSck [78] detects static control-flow-modifying kernel-level rootkits by write-protecting kernel text, read-only data, and special machine registers. Zhang et al. [79] use the Kernel-based Virtual Machine (KVM) to protect the static kernel code and static kernel data structures against write attempts to those sections.
#### 3.4.1.2 Hashing Known Memory Region
Rootkit signatures or low-level filesystem scans can easily be fooled by advanced kernel-level rootkits. Unauthorized kernel modifications caused by a kernel-level rootkit can instead be detected by periodically checking hashes of the static data structures and the kernel code segment. Pioneer [80] uses a software-based code attestation approach to periodically verify kernel code segment hashes with the SHA-1 hash function. XenKIMONO [81] uses the MD5 hashing algorithm to monitor the integrity of the kernel text and jump tables. Psyo-Virt [82] computes hashes of critical kernel text using SHA-512. RootkitDet [83] registers the kernel and the potential LKMs of the guest OS beforehand and compares SHA-1 checksums to detect malicious modification of legitimate code by kernel-level rootkits. Patagonix [84] verifies the integrity of all executing binaries by inspecting the code as it executes in memory, using an external database [85]; another corresponding work is Kvm-SMA [86], a security management architecture that monitors the integrity of guest VMs without any modification to the guest VM. Win et al. [87] proposed hashing only 8 bytes starting at the offset of the 9th byte to reduce the overhead. EPA-RIMM [88] leverages System Management Mode (SMM), a privileged x86 CPU mode, to measure kernel integrity by periodically checking SHA-256 hash values of particular memory regions, control registers, and model-specific registers. SGX-Mon [89] leverages Intel SGX [90] to place the integrity monitor inside a user-space enclave and uses the CRC-32 and SHA-256 algorithms for checksum operations. System call addresses and system call hash values are used in CloudMon [91] to detect kernel-level rootkits in cloud environments. State-based control-flow integrity (SBCFI) [30] also uses a hash function to validate the kernel text, including static control-flow transfers.
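The measurement loop shared by these systems reduces to the minimal sketch below: acquire the bytes of each monitored region, hash them, and compare against a known-good value. How the bytes are acquired (hypervisor mapping, SMM, an SGX enclave, or a PCI card as in Section 3.5) is platform specific and is therefore abstracted behind a callable; the demo region at the end is an in-process stand-in, not real kernel memory.

```python
import hashlib
import time

def measure(read_region):
    """Hash the current contents of a monitored region.

    `read_region` is a callable returning bytes; in a real monitor it would map
    guest kernel text via the hypervisor, SMM, an enclave, or a PCI card.
    """
    return hashlib.sha256(read_region()).hexdigest()

def integrity_monitor(regions, interval_s=5.0, rounds=3):
    """regions: {name: read_region callable}; returns the names that changed."""
    baseline = {name: measure(rd) for name, rd in regions.items()}
    tampered = set()
    for _ in range(rounds):
        time.sleep(interval_s)
        for name, rd in regions.items():
            if measure(rd) != baseline[name]:
                tampered.add(name)   # unauthorized modification of a static region
    return tampered

# Self-contained demo with a fake, in-process stand-in for "kernel text".
fake_kernel_text = bytearray(b"\x90" * 4096)
print(integrity_monitor({"kernel_text": lambda: bytes(fake_kernel_text)},
                        interval_s=0.1, rounds=2))
```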
#### 3.4.1.3 Access Control Policy
The integrity of the kernel can be protected by imposing an access control policy on sensitive kernel objects such as the kernel text, the system call table, and the interrupt descriptor table. The policy module can easily be implemented in the VMM layer, as it has a higher privilege than the OS kernel. Xu et al. [92] described a flexible and fine-grained access control policy for kernel integrity protection based on the usage control model (UCON), with decision continuity and attribute mutability properties.
#### 3.4.1.4 Page-level Dynamic Tracing
A legitimate system call always executes unmodified pages, whereas a hooked system call executes modified or newly allocated pages. The page-level execution sequence of each system call and the contents of these pages are monitored to create a secure control-flow database. Zhan et al. [93] presented a dynamic page-level kernel control-flow integrity checking solution for the cloud.
#### 3.4.2 Dynamic Region Integrity
#### 3.4.2.1 Function Pointers Verification
A kernel-level rootkit can modify the OS control flow by making a function pointer point to malicious code. Such rootkits can be detected by checking whether function pointers point to untrusted code. KOP [48] performs a systematic analysis of function pointers in a kernel memory snapshot, which can be used to detect kernel-level rootkits. In kernel memory, the EIP register stores the address of the next instruction to be executed and the EBP register contains the address located just behind the return address; if a function pointer executed in kernel mode points to an address outside the valid kernel code regions, a kernel control-flow integrity violation is triggered. This approach is used in StackSafe [77] to verify control-flow integrity. OSck [78] verifies function pointers against the type graph specified by the kernel code to detect kernel-level rootkits that modify dynamic control flow. MAS [66] uses memory traversal to verify that function pointers point to trusted code. SBCFI [30] considers the dynamic state of the kernel and verifies that function pointers point to valid code in order to validate dynamic control-flow transfers.
#### 3.4.2.2 Kernel Data Layout Partitioning

Kernel memory can be partitioned with different access control policies to restrict access to the data in a protected region. Loaded modules and drivers can be restricted to writing only driver data and selected portions of the core kernel data, while only trusted core kernel code is allowed to write any kernel data. In Linux, the kernel code spans from _text to _etext. Sentry [94, 95] specifies which data objects can be written by which kernel code regions using a kernel memory access control policy.
#### 3.4.2.3 Secure Page Mapping

The data that need to be protected are listed in a page table, and the virtual addresses that may legally modify the protected dynamic data are whitelisted. Any virtual address outside the whitelist attempting to modify protected dynamic data indicates a suspicious attempt by a kernel-level rootkit, and an instruction trying to modify a protected virtual address that is not registered in the whitelist is skipped. MOSKG [96] implements secure page mapping on multiple operating systems to protect critical kernel data.
#### 3.4.2.4 Event-based Behavior Pattern
Traditional kernel-level rootkits can be analyzed to characterize the malicious behavior patterns of OS events, including register accesses, memory accesses, and system calls. If any such pattern is matched during normal OS runtime, an integrity checker runs to check for kernel invariant violations: the static memory region is checked with hash values, and the dynamic kernel data are checked against sequences of basic events, as in BehaviorKI [97].
### External Hardware-based Detection
Kernel-level rootkits can also be detected using external hardware devices, in which case the detection system is isolated from the monitored system. Although this approach is not very popular, there are still some effective solutions based on it. This detection approach can be divided into two sub-categories: snap-based and snoop-based. Figure 3 shows a simplified overview of external hardware-based detection using a PCI card. The strengths and weaknesses or challenges of the external hardware-based kernel-level rootkit detection approaches are shown in Table 6.
#### 3.5.1 Snap-based Detection.
#### 3.5.1.1 Hashing Known Memory Region
By utilizing a Peripheral Component Interconnect (PCI) add-in card, host memory can be retrieved for examination without the knowledge or intervention of the host kernel. A monitor placed inside the add-in card creates known good hashes for the kernel text, the text of LKMs, and critical data structures, and then periodically checks for changes.
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline Approaches & Strength & Challenges/Weaknesses \\ \hline \multicolumn{3}{c}{**Static Region Integrity**} \\ \hline Write attempt to read-only memory section. & Kernel-level rootkit can be prevented. & Only the static region can be protected. \\ Hashing known memory region. & Difficult to fool or tamper with the value. & Need to store a hash table. \\ Access control policy. & Integrity of the kernel can be protected. & Policy modules need to be implemented. \\ Page-level dynamic tracing. & Improved execution time over branch or instruction monitoring. & DKOM attacks cannot be detected. \\ \hline \multicolumn{3}{c}{**Dynamic Region Integrity**} \\ \hline Function pointer verification. & Static and dynamic function pointers can be verified. & May require OS kernel source code. \\ Kernel data layout partitioning. & Sensitive members of important data structures can be protected. Can be implemented in different OSes. & Requires code revision of the OS kernel source code. \\ Event-based behavior pattern. & Behavior pattern will trigger the integrity checking. & Whitelist can suffer from a lack of completeness and the extent of protection is not sufficient. \\ \hline \end{tabular}
\end{table}
Table 5: Integrity-based Detection of Kernel-level Rootkit: Strengths and Challenges/Weaknesses.
Copilot [98] is one of the first external hardware-based kernel-level rootkit detection systems. Copilot uses the MD5 hashing algorithm and depends on some specific features of the IBM PCI bus. Wang and Dasgupta [99] proposed a kernel-level rootkit detection system in which external hardware checks part of the OS kernel integrity, and that verified part in turn checks the other static parts of the kernel using cryptographic hashes. GRIM [100] leverages the GPU architecture to improve the detection rate of snap-based systems and shows the impact of multiple hashing algorithms on the detection rate.
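A minimal sketch of the snapshot-and-hash idea is shown below. It is not Copilot itself: a plain byte string stands in for DMA-acquired host memory, and the region names and offsets are hypothetical.

```python
# Snapshot-and-hash integrity checking in the spirit of Copilot-style monitors:
# hash a byte snapshot of a supposedly immutable region and compare it against
# a trusted baseline taken at a known-good time.
import hashlib

def region_digest(snapshot: bytes, offset: int, length: int) -> str:
    """SHA-256 digest of one monitored region inside a memory snapshot."""
    return hashlib.sha256(snapshot[offset:offset + length]).hexdigest()

def check_regions(snapshot: bytes, baseline: dict) -> list:
    """Return the names of regions whose current hash differs from baseline."""
    violations = []
    for name, (offset, length, good_hash) in baseline.items():
        if region_digest(snapshot, offset, length) != good_hash:
            violations.append(name)
    return violations

if __name__ == "__main__":
    known_good = b"\x90" * 4096        # stand-in for kernel text at boot time
    baseline = {"kernel_text": (0, 4096, hashlib.sha256(known_good).hexdigest())}
    tampered = b"\x90" * 4090 + b"\xcc" * 6   # simulated inline hook
    print(check_regions(tampered, baseline))  # -> ['kernel_text']
```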
#### 3.5.1.2 Data Structure Invariants
Sophisticated kernel-level rootkits have evolved to tamper with dynamic kernel data structures instead of the static kernel memory region. An external PCI-based monitor can be used to access low-level kernel data structures of the host and to model a set of constraints that remain correct at runtime for an unmodified kernel. Petroni et al. [101] demonstrated such constraints for detecting kernel-level rootkits. Gibraltar [102, 103] also uses an external PCI card to hypothesize and infer invariants on kernel data structures to detect kernel-level rootkits.
#### 3.5.2 Snoop-based Detection
#### 3.5.2.1 Write Operation to Immutable Region
The operation of the host system can be monitored from an independent system outside the host by snooping the bus traffic of the host system. Any modification to the immutable kernel region of the host OS becomes detectable by snooping the write operations on those addresses. Vigilare [104, 105] is claimed to be the first external hardware-based kernel-level rootkit detection system with the snooping capability to monitor kernel integrity.
#### 3.5.2.2 Event-Triggered Mutable Object Monitoring
KI-Mon [106] is an event-triggered external hardware-based kernel integrity monitor for mutable kernel objects. To report the address-value pair of a memory modification on a monitored object, KI-Mon generates an event. The system detects VFS modification with hardware-assisted whitelist-based verification events and uses callback-based semantic verification events to detect LKM-hiding modification. The authors extended their work [107] to the ARM architecture to demonstrate its efficacy in terms of KI-Mon's performance overhead and processor usage.
Figure 3: A simplified overview of external hardware-based detection using PCI card.
### Learning-based Detection
With the increase of cybercrime in recent years, the automatic detection of known and unknown attacks has become important in modern security systems. Learning-based detection is an excellent approach to automatically detect known and unknown attacks with high accuracy. Figure 4 shows a general overview of the learning-based detection approach. The strengths and weaknesses or challenges of the learning-based kernel-level rootkit detection approaches are shown in Table 7. Table 8 shows a summary of the learning-based detection approaches for the kernel-level rootkit.
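The generic train-then-predict workflow of Figure 4 can be sketched in a few lines of Python with scikit-learn. The feature vectors below are synthetic placeholders; real systems would extract features from drivers, memory dumps, or hardware counters as described in the following subsections, and the classifier choice is illustrative.

```python
# Minimal sketch of the learning-based detection workflow (Figure 4).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# 200 samples x 10 features; label 1 = rootkit-infected trace, 0 = benign.
X_benign = rng.normal(0.0, 1.0, size=(100, 10))
X_rootkit = rng.normal(0.8, 1.0, size=(100, 10))
X = np.vstack([X_benign, X_rootkit])
y = np.array([0] * 100 + [1] * 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                  # training phase
y_pred = model.predict(X_test)               # prediction on new data
print("accuracy:", accuracy_score(y_test, y_pred))
```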
#### 3.6.1 Emulating Kernel Driver Behavior
A learning algorithm can be applied to a set of kernel driver run-time features, derived from the execution behavior in an emulator, to distinguish between malicious and legitimate kernel drivers. Limbo [108] is more of a
\begin{table}
\begin{tabular}{l l l} \hline \hline Approaches & Strength & Challenges/Weaknesses \\ \hline \multicolumn{3}{c}{**Snap-based Detection**} \\ \hline Hashing known memory region. & Difficult to fool or tamper with the value. & Transient attacks can evade detection. \\ Data structure invariants. & Both control and non-control modifications can be detected. & OS kernel source code may be required, and invariants can be incomplete. \\ \hline \multicolumn{3}{c}{**Snoop-based Detection**} \\ \hline Write operation to immutable region. & Transient attacks can be detected. & Cannot detect DKOM attacks. \\ Event-triggered mutable object monitoring. & DKOM attacks can be detected. & Additional cost for external hardware. \\ \hline \hline \end{tabular}
\end{table}
Table 6: External Hardware-based Detection of Kernel-level Rootkit: Strengths and Challenges/Weaknesses.
Figure 4: A general overview of the learning-based detection approach. In the training phase, a learning model is trained on training data and optimized using hyper-parameters. The trained model is then used to predict the output of new data fed into the system.
preventive approach that forces the kernel driver to execute in an emulated environment and extracts the features of the kernel driver. The selection of kernel driver features is based on their run-time behaviors and binary attributes. Limbo used a Naive Bayes classifier training tool to distinguish between legitimate and malicious Windows kernel drivers with the extracted features as input. The author classified the features into seven categories, in which each member's value is either a logical value (true or false) or an integer count. As Limbo executes the kernel driver in the emulator to extract features, it adds extra delay to the loading time of the kernel driver.
#### 3.6.2 Statically Analyzing Kernel Driver
The obfuscation employed in kernel-level rootkit binaries makes static analysis difficult. Still, kernel-level rootkits can be detected through static analysis by disassembling the kernel driver and extracting features such as general behavior, communications, suspicious behaviors, etc. Musavi and Kharrazi [109] focused on static analysis to detect kernel-level rootkits. When a user-level application installs or drops a driver, the detection process disassembles the driver to extract a set of features and uses a binary classifier to distinguish between malicious and legitimate drivers.
#### 3.6.3 Virtual Memory Access Pattern
The memory access patterns of legitimate and infected executions of an application differ if a kernel-level rootkit modifies the associated control flow or data structures. Instead of distinguishing malicious and benign applications, Xu et al. [110] proposed to use virtual memory access patterns to distinguish exploited executions from legitimate executions of each application. For each system call, four types of memory accesses are used as the feature set to train the machine learning model.
#### 3.6.4 Event Counts Using Hardware Performance Counter
Events associated with hardware-related activities, such as clock cycles, cache hits/misses, branch behavior, and memory resource access patterns, can be counted using HPCs. The event counts will differ from normal counts if a kernel-level rootkit modifies the control flow of the OS kernel. This approach does not work against DKOM attacks, as no malicious code is executed during trace collection. Singh et al. [111] designed five different synthetic rootkits, each with a single rootkit functionality, and used those rootkits to identify the most important HPCs. The authors used four machine learning classifiers (SVM, OC-SVM, Naive Bayes, and Decision Tree) to train the machine learning model with HPC trace data.
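The sketch below compares the same classifier families on synthetic HPC-style event counts. It is not the experimental setup of Singh et al.; the event rates, counter names, and class labels are invented for illustration only.

```python
# Comparing SVM, Naive Bayes, Decision Tree, and a one-class SVM on
# synthetic HPC-style event-count features.
import numpy as np
from sklearn.svm import SVC, OneClassSVM
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Columns: e.g. [branch misses, cache misses, iTLB misses, retired syscalls].
benign = rng.poisson(lam=[50, 200, 30, 15], size=(150, 4)).astype(float)
infected = rng.poisson(lam=[70, 260, 55, 5], size=(150, 4)).astype(float)
X = np.vstack([benign, infected])
y = np.array([0] * 150 + [1] * 150)

for name, clf in [("SVM", SVC()), ("NaiveBayes", GaussianNB()),
                  ("DecisionTree", DecisionTreeClassifier(random_state=1))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")

# The one-class SVM is trained on benign traces only and flags outliers (-1).
oc = OneClassSVM(nu=0.05).fit(benign)
print("flagged infected fraction:", (oc.predict(infected) == -1).mean())
```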
#### 3.6.5 Volatile Memory Traces
Memory forensic analysis can also be combined with a learning-based approach to detect kernel-level rootkits. Volatility [59] plugins can be used to extract features from memory dumps. The extracted features may include hidden kernel modules, abnormal driver objects, SSDT hooking, abnormal callbacks and timers, orphan threads, and other hooking behaviors. TKRD [112] experimented with memory dump features using seven machine learning classifiers and evaluated their performance. Nadim et al. [139, 140] also proposed characteristic features of kernel-level rootkits extracted from volatile memory traces to train learning-based models.
#### 3.6.6 Access Operation to Code, Data, and Register
The run-time behavior of a kernel module can be divided into the following three categories: code access, data access, and hardware register access. A hardware-assisted virtualization technique can be used to isolate the memory region and register accesses of a kernel module, and then the behavior of that kernel module can be extracted. The behavior features of a kernel module may include important kernel API invocations, executing code in the kernel data region, write operations to the kernel memory area, write operations to important hardware registers, etc. VKRD [113] experimented with these features to train multiple machine learning algorithms. As the features are either binary or counter values, they used the Min-Max normalization method to normalize the values.
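Min-Max normalization itself is a one-line transformation; a small sketch is given below. The feature columns are hypothetical stand-ins for the binary and counter values just described.

```python
# Min-Max normalization of binary/counter features to the range [0, 1].
import numpy as np

def min_max_normalize(X: np.ndarray) -> np.ndarray:
    X = X.astype(float)
    col_min = X.min(axis=0)
    col_range = X.max(axis=0) - col_min
    col_range[col_range == 0] = 1.0          # constant columns map to 0
    return (X - col_min) / col_range

# Example rows: [kernel API calls, code writes, data writes, register writes]
features = np.array([[12, 0, 3, 1],
                     [40, 1, 9, 0],
                     [ 7, 0, 1, 0]])
print(min_max_normalize(features))
```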
#### 3.6.7 System Call Execution Time
Since a large number of kernel-level rootkits modify the control flow by altering system calls, system call execution times can be an important feature for detecting kernel-level rootkits. Luckett et al. [114] proposed a behavior-based analysis of system call execution times. The authors used neural networks to classify system calls for detecting the presence of a rootkit within a system.
#### 3.6.8 Process Execution Behavior Profile
Deviation from dynamic intra-process execution behavior profiles based on architecture-level semantics can be used to detect kernel-level rootkits. The key insight of this mechanism is that a kernel-level rootkit leaves abnormal traces in architecture-level semantics by maliciously modifying the kernel objects that distort the execution flow of benign processes. Hardware events such as data dependencies between registers, OS privilege transitions, and branches in the program execution flow can be incorporated to interpret the program data/control transfer flow as features. Zhou and Makris [115] introduced a hardware-assisted machine learning-based rootkit detection mechanism that first identifies the process class and then employs Kernel Density Estimation (KDE) to indicate a compromise in process behavior caused by a kernel-level rootkit.
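A hedged sketch of the KDE step follows: a density model is fitted on clean per-process behavior vectors, and a run is flagged when its log-likelihood falls below a threshold chosen on clean data. The features and threshold quantile are illustrative assumptions, not the configuration used by Zhou and Makris.

```python
# KDE-based deviation scoring for per-process behavior vectors.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(2)
clean_runs = rng.normal(0.0, 1.0, size=(500, 3))     # architecture-level features
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(clean_runs)

# Threshold at the 1st percentile of clean-run scores (1% false-alarm budget).
threshold = np.quantile(kde.score_samples(clean_runs), 0.01)

new_runs = np.vstack([rng.normal(0.0, 1.0, size=(5, 3)),   # benign runs
                      rng.normal(4.0, 1.0, size=(5, 3))])  # distorted by a rootkit
flags = kde.score_samples(new_runs) < threshold
print(flags)   # expected: mostly False for the first 5, True for the last 5
```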
\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt} p{142.3pt}} \hline \hline Approaches & Strength & Challenges/Weaknesses \\ \hline Emulating kernel driver behavior. & Prevents malicious drivers from loading. & Additional delay in driver loading time. \\ Statically analyzing kernel driver. & Analysis can be done inside the host. & Detector is vulnerable to advanced kernel-level rootkits. \\ Virtual memory access pattern. & Malware leaves fingerprints on program memory accesses. & DKOM attacks may remain undetected. \\ Event counts using HPC. & Control-flow modification can be detected with high accuracy. & DKOM attacks have no impact on HPCs. \\ Volatile memory traces. & Detection system can be implemented separately. & Transient attacks can evade detection. \\ Access operation to code, data, registers. & Target kernel module can be isolated from kernel space. & Memory isolation may introduce significant performance overhead. \\ System call execution times. & System calls need to be executed to perform malicious activities. & May have no impact on DKOM attacks. \\ Process execution behavior profile. & Immune to software tampering. & Hardware assistance will cause performance overhead. \\ \hline \hline \end{tabular}
\end{table}
Table 7: Learning-based Detection of Kernel-level Rootkit: Strengths and Challenges/Weaknesses.
## 4 More Kernel-Level Rootkit Literature
In this section, we discuss the prior literature on preventing kernel-level rootkits, on profiling kernel-level rootkit behavior, and on widely used tools for detecting kernel-level rootkits.
### Kernel-level Rootkit Prevention
Zhao et al. [81] proposed a secure virtual file system (SVFS), a prevention system that provides secure data storage against kernel-level rootkits. SVFS stores sensitive files in a dedicated virtual machine separate from application guest virtual machines. All accesses to sensitive data are subject to an access control policy when going through SVFS. Therefore, kernel-level rootkits cannot bypass this protection by compromising the application guest OS. The limitation of SVFS is that it does not prevent a kernel-level rootkit from exploiting the guest OS; it only prevents kernel-level rootkits from running automatically when the guest OS reboots.
Seshadri et al. [116] formulated SecVisor to ensure code integrity for OS kernels by allowing only user-approved code to execute in kernel mode. Hardware memory protections are used to ensure kernel code integrity. Both CPU's memory management unit (MMU) and I/O memory management unit (IOMMU) are modified to ensure that only kernel code confirmed by a user-supplied policy will be executed. By these
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Prior work & Feature & Learning Algorithm & Operating System \\ \hline Limbo [108] & Static attributes of driver’s binary and dynamic attributes like data structure access, descriptor table, and driver-related features. & Naive Bayes & Windows \\ \hline Musavi and Kharrazi [109] & Disassembled driver’s code including kernel function calls, constants, assembly commands, variable types, etc. & & \\ \hline Xu et al. [110] & Virtual memory access pattern of system calls. & Random Forest, SVM, Logistic Regression & Linux Debian \\ \hline Singh et al. [111] & Event counts using Hardware Performance Counters (HPC). & SVM, OC-SVM, Naive Bayes, Decision Tree & Windows 7 \\ \hline TKRD [112] & Volatile memory traces of modules, threads, drivers, IRP and SSDT hooks, callbacks, and timers. & Random Forest, J48, JRip, PART, BayesNet, Naive Bayes, SMO & Windows 7 \\ \hline VKRD [113] & Run-time features of kernel modules such as kernel API invocation, code write, data write, and register access operations. & SVM, Decision Tree, Random Forest, KNN & Windows XP \\ \hline Luckett et al. [114] & System call execution time. & Feed Forward, Nonlinear auto-regressive & Linux Ubuntu \\ \hline Zhou and Makris [115] & Data dependencies on general purpose registers and branches in program execution flow. & KNN, SVM, ANN & Linux \\ \hline \end{tabular}
\end{table}
Table 8: Summary of Learning-based Kernel-level Rootkit Detection Approaches.
modifications, the kernel can be protected against malicious writes via direct memory access (DMA) devices. SecVisor works as a preventive tool against kernel-level rootkits after they load themselves into memory. However, SecVisor does not function if the OS kernel has pages that contain both data and code. Additionally, SecVisor requires modifying the source code of the kernel, which makes it difficult to support closed-source operating systems like Windows.
Butler et al. [117] introduced a rootkit-resistant disk (RRD) that labels all configuration files and system binaries to prevent a compromised operating system from infecting its on-disk image. The RRD is implemented on a network storage device to keep the kernel-level rootkit from becoming persistent. A tightly governed administrative token, required for system write capability, blocks any malicious modification of the immutable memory blocks of the host OS during normal operation.
NICKLE is a virtual machine monitor (VMM)-based kernel-level rootkit detection and prevention system presented by Riley et al. [118]. It uses a memory shadowing scheme to store the authenticated kernel code in shadow memory and, at runtime, transparently routes guest kernel instruction fetches to the shadow memory. The NICKLE system effectively works in Linux and Windows OSes targeting kernel-level rootkits. As NICKLE does not modify kernel code, it easily overcomes the drawbacks of SecVisor. However, NICKLE does not effectively protect self-modifying kernel code, which is present in both Linux and Windows OSes, and it does not support kernel page swapping.
One of the most commonly adopted techniques by kernel-level rootkits to evade detection is hooking kernel objects of the system. To efficiently protect kernel hooks from being hijacked in a guest OS, Wang et al. [119] proposed HookSafe, which relocates kernel hooks to a dedicated page-aligned memory space. Accesses to the kernel hooks are then regulated with hardware-based page-level protection. Besides memory-based kernel hooks, HookSafe also regulates accesses to hardware registers such as the Interrupt Descriptor Table Register (IDTR), the Global Descriptor Table Register (GDTR), the SYSENTER MSR registers, and the DR0-DR7 debug registers. The system successfully prevents modification of protected kernel hooks by real-world kernel-level rootkits.
Oliveira and Wu [120] proposed a solution that protects kernel code and data integrity by preventing kernel-level rootkits. At the architecture level (memory and registers), all write attempts to kernel code and data segments are checked for validity by enforcing Biba's star property [121]. The process associated with an illegal write operation is terminated, but the rest of the system is allowed to continue execution.
Xuan et al. [122] presented DARK, a system that tracks LKMs to prevent kernel-level rootkits. By dynamically switching a running system between virtualized and emulated execution, DARK thoroughly captures the target module's activity in a guest OS. It provides a flexible security policy framework with access control rules to detect malicious modules. The kernel rules are then evaluated against kernel-level rootkits to determine their effectiveness.
Rootkits often reside in storage to survive system reboots and thus pose a serious, persistent security threat. A hypervisor-based file protection scheme was presented by Chubachi et al. [123] to prevent persistent rootkits from residing in storage. The authors run the target OS without a hypervisor to create a security policy and map protected files to a set of regions in the storage in administrator mode. By making the critical file system always read-only, the target OS is then run with a hypervisor in normal mode. As the hypervisor has a higher privilege than the target OS's kernel, kernel-level rootkits are not able to overwrite the security policy by manipulating the kernel.
Grace et al. [124] introduced a hardware virtualization-based architecture to protect commodity OS kernel against kernel-level rootkits. This prevention system can effectively reduce performance overhead without modifying the commodity OS kernel. The authors use page-level redirection of instruction fetches and make them mode-sensitive by redirecting only kernel instruction fetches. However, the proposed prevention system does not protect kernel control-flow integrity and does not support self-modifying kernel code.
Schmidt et al. [125] presented an approach to prevent kernel-level rootkit attacks as well as to detect malware in the cloud computing environment. To load only cryptographically authorized and trusted kernel modules, the OS kernel is modified. By checking the integrity of the authorized kernel modules, kernel-level rootkit attacks through malicious modules can be prevented.
### Profiling Kernel-level Rootkit Behavior
To design an effective kernel-level rootkit detection solution, it is important to profile the behaviors that best reveal kernel-level rootkits. The system proposed by Levine et al. [20] not only detects kernel-level rootkits but also categorizes them, based on the assumption that for a particular kernel-level rootkit the implementation of each malicious system call is uniform. From the archived hash values of malicious system calls, they categorize a new unknown kernel-level rootkit as either a modified version of a previously known kernel-level rootkit or a new one. They conclude that a new kernel-level rootkit retrieved from a honeynet is a combination of two previously known rootkits [126].
One of a kernel-level rootkit's important tasks is to execute malicious code that manipulates the sensitive data accessed by user-level programs to reflect system states via system calls or critical data structures maintained by the kernel. K-Tracer, proposed by Lanzi et al. [127], is a dynamic kernel-level analysis engine for the Windows OS that performs data-flow analysis on sensitive data to extract the malicious behavior of kernel-level rootkits. To identify the rootkit behavior, K-Tracer uses a combination of forward and backward slicing techniques on selectively stimulated kernel events. K-Tracer was implemented in the QEMU [128] emulator environment to perform instruction-level execution tracing, leaving a possibility of evasion by malware that can detect the underlying emulator [129]. This approach also has some limitations against sophisticated rootkit techniques such as DKOM (direct kernel object modification), for which the authors discussed further improvements of the system to counter such sophisticated kernel-level rootkits.
Wang et al. [130] proposed a systematic approach named HookMap to identify the kernel hooks used for hiding the presence of rootkits. By design, kernel-level rootkits attempt to conceal their presence from various system utility programs. HookMap analyzes the kernel-side execution paths of those programs to find the set of kernel hooks that are potentially vulnerable to attack by kernel-level rootkits. The authors manually analyzed Linux-based rootkits and found that all identified kernel hooks are listed in their results. This approach is only effective when applied to kernel-level rootkits that attack the kernel control flow.
HookFinder, a prototype developed by Yin et al. [131], automatically identifies the hooking behavior of malicious code and extracts hook implementation mechanisms without any prior knowledge. To identify a hook, they observe the instruction pointer. Changes in memory and other machine state are labeled as _impacts_. If the instruction pointer is loaded with a marked _impact_ and the execution jumps immediately into the malicious code, they identify the hook. An emulator is used for implementing HookFinder, which provides isolation between the analysis environment and the malware.
PoKeR, a virtualization-based kernel-level rootkit profiler introduced by Riley et al. [132], profiles four aspects: hooking behavior, targeted kernel objects, user-level impact, and injected code. It profiles not only traditional system call hook-based rootkits but also DKOM-based rootkits. To accurately determine the kernel objects that are modified by a kernel-level rootkit, PoKeR uses a combat tracking technique that maintains a map of dynamic kernel objects. The authors used NICKLE as the detection system to generate a kernel-level rootkit detection point.
Rkprofiler [133], an analysis and profiling system for a Windows OS kernel running in a VM, inspects each executed instruction and captures all function calls to construct a call graph of the kernel malware execution. It also tracks dynamic data objects and hardware access events of kernel malware. With the extracted information, Rkprofiler reports the kernel malware behavior in a guest OS. DORF, the Data Only Rootkit Framework [134], is an object-oriented framework designed by Ryan Riley that allows researchers to prototype and test data-only kernel-level rootkit attacks in various Linux distributions and versions. The author also classified kernel-level rootkit attacks based on their influence and clarified their definitions to help defend against them. Using the DORF prototype, researchers can easily test their developed defense systems against various kernel-level rootkits. Kernel-level rootkits not only modify user-level activities like system calls and APIs but also modify kernel-level activities. MrKIP, a system developed by Wang et al. [135], semi-automatically profiles the kernel-space activities of kernel-level rootkits. The invocations of important in-kernel functions, with their associated arguments, construct the behavior profile. New variants of rootkit families can be recognized with those collected behavior profiles.
HProve [136] is a hypervisor-level provenance tracing system that reveals causality dependencies among kernel-level rootkit behaviors and their impacts on the victim system by replaying the kernel-level rootkit attack. The proposed system records the whole-system execution of the guest OS in a lightweight manner and keeps track of a series of kernel functions and memory access traces to sensitive kernel objects.
## 5 Future research directions
Many approaches, including learning-based ones, have been proposed to successfully detect and prevent kernel-level rootkits. However, many challenges that are crucial for highly accurate kernel-level rootkit detection still need to be addressed. In this section, we present conceivable forthcoming research directions that can be considered by researchers as future work.
### Artificial Intelligence

Artificial intelligence (AI) methods have shown success in countless domains in learning complex systems and making informed decisions. This is an umbrella term under which machine learning and deep learning take place. Though there is some published research in the kernel-level rootkit detection domain using AI, it is still not the most popular approach in this domain. Most of the published works in this domain either fail to detect DKOM attacks or introduce performance overhead. Overcoming these drawbacks can be a direction for future research. Unfortunately, there has been a lack of open-source datasets for kernel-level rootkit detection. The prior works on AI-based kernel-level rootkit detection used their own datasets, which are not available to others. A standardized, updated, and publicly available dataset is required to perform detection analysis in an efficient way. Future research will look into building an open-source dataset for kernel-level rootkit detection, which would allow detecting unknown new attacks by training an AI model. Additionally, because the characteristic features of kernel-level rootkits are continuously evolving, the training dataset should dynamically include new samples using incremental learning to keep the AI model effective.
### Container Environment

In recent years, container-based services have been increasingly deployed by service providers for their flexibility and efficiency. We can define a container as a software unit with all dependencies installed that helps applications run quickly and reliably [40]. Unlike virtual machines, containers are isolated using kernel functionalities such as namespaces, cgroups, etc. Despite its benefits of portability and ease of deployment, a container is less secure than a fully isolated virtual machine. The isolation of the container can be invalidated when kernel-level rootkits exploit vulnerabilities existing in the kernel. This may lead to critical security incidents that need to be addressed as future work.
### Zero-Day Attack Detection

Most of the current approaches for detecting kernel-level rootkits are of the postmortem type. They only detect the kernel-level rootkit after the intruders compromise the system. Because it is quite difficult to predict the attack scenario, a highly intelligent and lightweight approach is required to examine the OS behavior at run time and detect a zero-day attack.
## 6 Conclusion

A systematic literature survey of kernel-level rootkit detection approaches is presented in this paper. The reviewed papers have been carefully investigated to provide a broad and structured solution taxonomy for kernel-level rootkit detection. The detection approaches for kernel-level rootkits are classified into six main categories: Signature-based, Behavior-based, Cross-view-based, Integrity-based, External hardware-based, and Learning-based. The strengths and weaknesses or challenges of each detection approach are identified in this paper. Most of the prior kernel-level rootkit detection approaches are cross-view-based and integrity-based. Learning-based detection has been proposed in the last few years; this category is sub-divided based on the features used to train the learning model. The prevention techniques against kernel-level rootkits in the prior literature are also reviewed, along with the literature on profiling kernel-level rootkit behavior. This work introduced a broad overview of kernel-level rootkit detection, prevention, and behavior profiling for future research.
|
2305.17226 | A Finite Element Approach For Modeling Biomembranes In Incompressible
Power-Law Flow | We present a numerical method to model the dynamics of inextensible
biomembranes in a quasi-Newtonian incompressible flow, which better describes
hemorheology in the small vasculature. We consider a level set model for the
fluid-membrane coupling, while the local inextensibility condition is relaxed
by introducing a penalty term. The penalty method is straightforward to
implement from any Navier-Stokes/level set solver and allows substantial
computational savings over a mixed formulation. A standard Galerkin finite
element framework is used with an arbitrarily high order polynomial
approximation for better accuracy in computing the bending force. The PDE
system is solved using a partitioned strongly coupled scheme based on
Crank-Nicolson time integration. Numerical experiments are provided to validate
and assess the main features of the method. | Aymen Laadhari, Ahmad Deeb | 2023-05-26T19:35:07Z | http://arxiv.org/abs/2305.17226v1 | # A Finite Element Approach For Modeling Biomembranes In Incompressible Power-Law Flow
###### Abstract
We present a numerical method to model the dynamics of inextensible biomembranes in a quasi-Newtonian incompressible flow, which better describes hemorheology in the small vasculature. We consider a level set model for the fluid-membrane coupling, while the local inextensibility condition is relaxed by introducing a penalty term. The penalty method is straightforward to implement from any Navier-Stokes/level set solver and allows substantial computational savings over a mixed formulation. A standard Galerkin finite element framework is used with an arbitrarily high order polynomial approximation for better accuracy in computing the bending force. The PDE system is solved using a partitioned strongly coupled scheme based on Crank-Nicolson time integration. Numerical experiments are provided to validate and assess the main features of the method.
## I Introduction
This paper is concerned with the numerical study of the time-dependent dynamics of biomembranes in a surrounding Newtonian or non-Newtonian flow. The coupled fluid-membrane problem is highly nonlinear and computationally expensive to solve.
Blood is a very complex fluid. Its rheology at the macroscopic scale depends both on the individual dynamics of its embedded entities and on their fluid-structure interactions at the microscopic level. Red blood cells, referred to as RBCs, represent its main cellular component; they are responsible for the supply of oxygen and the capture of carbon dioxide. In the laboratory, giant unilamellar vesicles (diameter \(\approx 10\mu\)m) are biomimetic artificial liquid drops, used in vitro and in silico to study RBCs. Understanding the dynamics of RBCs in flow remains a difficult problem in the field of computational physics and at the theoretical level as well, which has led to a growing interest over the past two decades. In the published literature, several works have covered the areas of experimental biology[1], theoretical biology[2], physics[3; 4; 5] and applied mathematics[6; 7].
From a mechanical continuum perspective, Canham[8], Helfrich[9] and Evans[10] independently introduced in the early 1970s a model to describe the mechanics of lipid bilayer membranes, where cellular deformations are driven by the principal curvatures. This results in a highly nonlinear membrane force with respect to shape, see a mathematical derivation for a generalized energy functional based on shape optimization in[11].
Different methods have been developed to study the dynamics of biomembranes in a Newtonian flow. We can distinguish the level set method[12; 13; 14], the phase field method[15], the immersed boundary method[16], the boundary integral method[17], parametric finite elements[7], and the lattice Boltzmann method[18]. From a numerical point of view, iterative and fully explicit decoupling strategies for the membrane-fluid problem are the most commonly used techniques[14; 19]. An explicit treatment of the bending force usually leads to numerical instability problems and severe time step limitations, depending on the local mesh size and bending stiffness. However, only a few works devised semi-implicit[7] or fully implicit time integration schemes[13; 20]. Although stability is improved, a high computational burden is generally incurred with implicit strategies. Other interesting decoupling strategies can be found in[21; 22; 23; 24].
While blood flow behaves like Newtonian fluid in larger diameter arteries at high shear rates, it exhibits non-Newtonian behavior in small diameter arteries with low shear rates at the microscopic scale[25]. Non-Newtonian rheology is mainly due to polymerization and the underlying mechanisms
leading to the activation and deactivation of platelets and the interactions between different microscopic entities. Blood viscosity tends to increase at low shear rates as RBCs aggregate into a roller shape. The Casson, Power-Law, and Quemada models are the most widely used generalised Newtonian rheologies for blood [26; 27]. To our knowledge, such models have not yet been studied for the current problem. In this work, we consider a quasi-Newtonian power law model to describe the hemorheology.
The aim of this paper is to study the dynamics of biomembranes in a complex non-Newtonian incompressible viscous flow. In order to keep a reasonable computational cost compared to a fully mixed formulation, we design a penalty method to account for the local inextensibility of the membrane. Various higher-order finite element approximations are used to better approximate the bending force. We present a set of numerical examples to validate and show the main features of the method.
## II Mathematical setting
### Membrane model
The deformations of the membrane allow minimizing the Canham-Helfrich-Evans [8; 9] bending energy while preserving the local inextensibility of the membrane. Let \(H\) be the mean curvature, corresponding to the sum of the principal curvatures on the membrane. In the two-dimensional
Figure 1: Sketch of the membrane \(\Gamma\) embedded into a computational domain \(\Lambda\), while \(\Omega\) is the inner region.
case, the membrane minimizes the bending energy given by:
\[\mathrm{J}(\Omega)=\frac{k_{b}}{2}\int_{\partial\Omega}\left(H(\Omega)\right)^{2 }\mathrm{d}s, \tag{1}\]
where \(k_{b}\approx 10^{-20}\) to \(10^{-19}\,\mathrm{kg\,m^{2}\,s^{-2}}\) is the bending rigidity modulus. This energy is a variant of the Willmore energy [28]. Let \(T\) be the final time of the experiment. For any time \(t\in[0,T]\), \(\Omega(t)\subset\mathbb{R}^{d}\), \(d=2,3\), is the interior domain of the membrane \(\Gamma(t)=\partial\Omega(t)\), assumed Lipschitz continuous. The membrane is embedded in the domain \(\Lambda\), which is large enough so that \(\Gamma(t)\cap\partial\Lambda=\emptyset\); see Fig. 1. Hereafter, the dependence of \(\Omega\) and \(\Gamma\) upon \(t\) is dropped to lighten the notation.
For a membrane with fixed topology, the Gauss-Bonnet theorem [29] states that the Gaussian curvature energy term, weighted by the Gaussian bending modulus \(k_{g}\), is constant and can be ignored. The spontaneous curvature \(H_{0}\) helps describe the asymmetry of phospholipid bilayers at rest, e.g. when different chemical environments exist on either side of the membrane. We assume \(H_{0}=0\). Let \(\mathbf{n}\) and \(\mathbf{\nu}\) be the outward unit normal vectors to \(\Gamma(t)\) and \(\partial\Lambda\), respectively. We introduce the surface gradient \(\mathbf{\nabla}_{s}\cdot=\left(\mathbf{Id}-\mathbf{n}\otimes\mathbf{n}\right)\,\mathbf{\nabla}\cdot\), the surface divergence \(\mathrm{div}_{s}\cdot=\mathrm{tr}(\mathbf{\nabla}_{s}\cdot)\) and the surface Laplacian \(\Delta_{s}\cdot=\mathrm{div}_{s}\left(\mathbf{\nabla}_{s}\cdot\right)\), where \(\mathbf{Id}\) is the identity tensor. The expression and derivation of the bending force using shape optimization tools can be found in [11].
Membrane deformations are subject to specific constraints. Fluid incompressibility is assumed, that is, \(\mathrm{div}\,\mathbf{u}=0\) in \(\Lambda\). In addition, RBCs are phospholipid bilayers with local membrane inextensibility. This corresponds to a zero surface divergence, i.e. \(\mathrm{div}_{s}\mathbf{u}=0\) on \(\Gamma\), which helps preserve the local perimeter. Global perimeter conservation follows from Reynolds' lemma [13]. As a consequence, a saddle point formulation results in a membrane surface force that balances the jump in the hydrodynamic stress tensor and appears on the right-hand side of (3f).
### Level set description
The motion of the membrane is followed implicitly in a level set framework as the zero level set of a function \(\varphi\). For \(t\in]0,T[\), \(\varphi\) is initialized by a signed distance \(\varphi_{0}\) to \(\Gamma(0)\) and satisfies the transport equation (3a), with \(\mathbf{u}\) the advection velocity and \(\varphi=\varphi_{b}\) on the upstream boundary \(\Sigma_{-}=\{\mathbf{x}\in\partial\Lambda:\mathbf{u}\cdot\mathbf{\nu}(\mathbf{x})<0\}\). Geometric quantities such as \(\mathbf{n}=\nabla\varphi/|\nabla\varphi|\), \(H=\mathrm{div}_{s}\mathbf{n}\) and the bending force are expressed in terms of \(\varphi\) and are then extended to the entire computational domain \(\Lambda\). Over time, a redistancing problem is solved to restore the signed distance property lost by advection [30]. Indeed, a too large or too small gradient of \(\varphi\) close to \(\Gamma\) deteriorates the accurate computation of the surface terms. Let \(\varepsilon\) be a regularization parameter. We introduce the regularized Heaviside \(\mathcal{H}_{\varepsilon}\) and Dirac \(\delta_{\varepsilon}\) functions:
\[\mathcal{H}_{\varepsilon}(\varphi)=\left\{\begin{array}{ll}0,&\text{when } \varphi<-\varepsilon\\ \frac{1}{2}\left(1+\frac{\varphi}{\varepsilon}+\frac{1}{\pi}\sin\left(\frac{ \pi\varphi}{\varepsilon}\right)\right),&\text{when }\left|\varphi\right|\leqslant \varepsilon,\qquad\text{and}\quad\delta_{\varepsilon}(\varphi)=\frac{ \mathrm{d}\mathcal{H}_{\varepsilon}}{\mathrm{d}\varphi}(\varphi).\\ 1,&\text{otherwise}\end{array}\right.\]
Given a function \(\zeta\) defined on \(\Gamma\) and its extension \(\tilde{\zeta}\) to \(\Lambda\), surface integrals are approximated as follows:
\[\int_{\Gamma}\zeta(\mathbf{x})\,\mathrm{d}s\approx\int_{\Lambda}\left|\nabla \varphi\right|\delta_{\varepsilon}\left(\varphi\right)\,\tilde{\zeta}(\mathbf{x}) \,\mathrm{d}x.\]
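A quick numerical sanity check of this regularized surface integral can be done with NumPy on a uniform grid, independently of any finite element machinery. The sketch below uses the signed distance of a circle and an assumed regularization width \(\varepsilon=2h\); it should recover the perimeter \(2\pi R\).

```python
# Check that the regularized delta approximation recovers a circle's perimeter.
import numpy as np

def dirac_eps(phi, eps):
    """delta_eps(phi) = dH_eps/dphi = (1/(2 eps)) (1 + cos(pi phi / eps))."""
    d = np.zeros_like(phi)
    band = np.abs(phi) <= eps
    d[band] = 0.5 / eps * (1.0 + np.cos(np.pi * phi[band] / eps))
    return d

n, R = 400, 0.3
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - R            # signed distance, |grad phi| = 1
eps = 2 * h                               # assumed regularization width

perimeter = np.sum(dirac_eps(phi, eps)) * h * h   # approximates ∫ δ_ε(φ)|∇φ| dx
print(perimeter, 2 * np.pi * R)            # both close to 1.885
```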
### Governing equations
We assume constant densities \(\rho_{i}\) and \(\rho_{o}\) inside and outside of the membrane, respectively. Let us introduce the fluid velocity \(\mathbf{u}\) and the pressure \(p\), which represents a Lagrange multiplier corresponding to the incompressibility constraint on \(\Lambda\). Analogously, a position-dependent surface tension \(\lambda\) helps impose the local inextensibility constraint on \(\Gamma\). Let \(\mathbf{D}(\mathbf{u})=(\mathbf{\nabla}\mathbf{u}+\mathbf{\nabla}\mathbf{u}^{T})/2\) be the shear strain rate tensor, so that the fluid Cauchy stress tensor is \(\mathbf{\sigma}=\mathbf{T}-p\mathbf{I}\), where \(\mathbf{T}\) is the stress deviator. The normal stress jump \([\mathbf{\sigma}\mathbf{n}]_{-}^{+}=\mathbf{\sigma}_{+}\mathbf{n}-\mathbf{\sigma}_{-}\mathbf{n}\) on \(\Gamma\) describes the interactions of the membrane with the surrounding fluid [20], and the stress discontinuity is prescribed by (3f). For a simple shear flow, \(\mathbf{u}_{b}\) is the imposed shear velocity on \(\Sigma_{D}\subset\partial\Lambda\), while natural boundary conditions are prescribed on \(\Sigma_{N}\subset\partial\Lambda\).
We assume a quasi-Newtonian power-law model [26] where the nonlinear constitutive equation expresses the stress deviator with a power-law viscosity function as
\[\mathbf{T}=2\eta\left(\left|\mathbf{D}(\mathbf{u})\right|^{2}\right)\mathbf{D}(\mathbf{u}),\text{ with }\eta\left(\gamma\right)=K\gamma^{(\upsilon-1)/2},\text{ for all }\gamma\in\mathbb{R}_{+}, \tag{2}\]
where \(\upsilon>0\) and \(K\) are the power index and consistency index, respectively. According to [31], \(\upsilon=0.7755<1\) (i.e. a shear-thinning fluid) and \(K=14.67\times 10^{-3}\) Pa s for normal blood samples, obtained using a multiple regression technique. The Newtonian case \(\upsilon=1\) corresponds to a linear stress-strain-rate relationship that reduces the viscosity function to the constant \(\eta(\gamma)=K\). By analogy with the Newtonian case, \(K=\mu_{i}\) and \(K=\mu_{o}\) stand for the values of the consistency index in the intra- and extra-membrane domains, respectively.
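For reference, the viscosity law (2) with the blood parameters of [31] can be evaluated in a couple of lines; the sample shear rates below are illustrative.

```python
# Power-law viscosity (2): eta(gamma) = K * gamma**((v-1)/2), with gamma = |D(u)|^2,
# so that eta(|D|^2) = K * |D|**(v-1) (shear thinning for v < 1).
import numpy as np

K = 14.67e-3      # consistency index [Pa s^v], normal blood [31]
v = 0.7755        # power index

def eta(gamma):
    return K * gamma ** ((v - 1.0) / 2.0)

shear_rates = np.array([1.0, 10.0, 100.0, 1000.0])   # |D(u)|, in 1/s
print(eta(shear_rates**2))   # effective viscosity decreases with shear rate
```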
We perform a dimensionless analysis. Let U be the maximum velocity on \(\Sigma_{D}\) and \(D\) the diameter of a circle having the same membrane perimeter. We consider the dimensionless Reynolds number \(\mathrm{Re}=\rho_{o}UD\mu_{o}^{-1}\) which expresses the ratio between the inertial and viscous forces, and the capillary number \(\mathrm{Ca}=\mu_{o}D^{2}Uk_{b}^{-1}\) which compares the flow force to the bending resistance of the membrane. Furthermore, the parameter \(\beta=\mu_{i}/\mu_{o}\) represents the ratio of consistency indices and corresponds to the viscosity ratio with respect to extracellular viscosity in the Newtonian case. The regularized dimensionless viscosity function is:
\[\mathbf{\mu}_{\varepsilon}(\varphi)|\mathbf{D}(\mathbf{u})|^{\upsilon-1}=\left(\mathcal{ H}_{\varepsilon}(\varphi)+\beta\left(1-\mathcal{H}_{\varepsilon}(\varphi) \right)\right)|\mathbf{D}(\mathbf{u})|^{\upsilon-1}.\]
Following [20], we choose \(\rho_{i}=\rho_{o}\). Let \(\mathbf{\sigma}_{\varepsilon}\) stand for the regularized Cauchy stress tensor. The dimensionless reduced area \(\Xi_{2d}=4\pi|\Omega|/|\Gamma|^{2}\in]0,1]\) compares the area of the interior domain to that of a circle with the same perimeter. The dimensionless coupled problem reads: find \(\varphi\), \(\mathbf{u}\), \(p\) and \(\lambda\) such that
\[\begin{aligned}
\partial_{t}\varphi+\mathbf{u}\cdot\nabla\varphi &= 0 &&\mathrm{in}\;]0,T[\times\Lambda, &&\text{(3a)}\\
\mathrm{Re}\,\left(\partial_{t}\mathbf{u}+\mathbf{u}\cdot\nabla\mathbf{u}\right)-\mathbf{div}\big(\mathbf{\sigma}_{\varepsilon}(\mathbf{D}(\mathbf{u}),p,\varphi)\big) &= 0 &&\mathrm{in}\;]0,T[\times(\Lambda\backslash\partial\Omega), &&\text{(3b)}\\
\mathrm{div}\,\mathbf{u} &= 0 &&\mathrm{in}\;]0,T[\times\Lambda, &&\text{(3c)}\\
\mathrm{div}_{s}\,\mathbf{u} &= 0 &&\mathrm{on}\;]0,T[\times\partial\Omega, &&\text{(3d)}\\
[\mathbf{u}]_{-}^{+} &= 0 &&\mathrm{on}\;]0,T[\times\partial\Omega, &&\text{(3e)}\\
[\mathbf{\sigma}\mathbf{n}]_{-}^{+} &= \mathbf{\nabla}_{s}\lambda-\lambda H\mathbf{n}+\left(2\mathrm{Ca}\right)^{-1}\left(2\Delta_{s}H+H^{3}\right)\mathbf{n} &&\mathrm{on}\;]0,T[\times\partial\Omega, &&\text{(3f)}\\
\varphi &= \varphi_{b} &&\mathrm{on}\;]0,T[\times\Sigma_{-}, &&\text{(3g)}\\
\mathbf{u} &= \mathbf{u}_{b} &&\mathrm{on}\;]0,T[\times\Sigma_{D}, &&\text{(3h)}\\
\mathbf{\sigma}\cdot\mathbf{\nu} &= 0 &&\mathrm{on}\;]0,T[\times\Sigma_{N}, &&\text{(3i)}\\
\varphi(0) &= \varphi_{0} &&\mathrm{in}\;\Lambda, &&\text{(3j)}\\
\mathbf{u}(0) &= \mathbf{u}_{0} &&\mathrm{in}\;\Lambda. &&\text{(3k)}
\end{aligned}\]
Let \(\varepsilon_{\lambda}=10^{-8}\) be the penalty parameter. To make the method straightforward to implement from any level set / Navier-Stokes solver and to considerably reduce the size of the linear system to be solved, the inextensibility constraint is relaxed by introducing a penalty term. Indeed, the corresponding minimization problem is approximated by another minimization problem in which the local inextensibility constraint on the velocity (3d) is penalized. See an analogous penalty method for other applications in [32].
To overcome instability problems when solving the level set equation with the standard Galerkin method, a variety of stabilization methods exist, such as the streamline diffusion method, the subgrid viscosity method, and the Streamline Upwind Petrov-Galerkin (SUPG) method used in this work. The latter introduces a stabilization term by adding diffusion in the streamline direction.
We introduce the functional spaces of admissible velocity \(\mathbf{u}\), pressure \(p\) and level set \(\varphi\):
\[\mathbb{V}(\mathbf{u}_{b}) =\Big{\{}\mathbf{v}\in\big{(}H^{1}\left(\Lambda\right)\big{)}^{d}:\mathbf{ v}=\mathbf{u}_{b},\text{ on }\Sigma_{D}\Big{\}},\qquad\mathbb{Q}=\left\{q\in L^{2}\left(\Lambda\right): \int_{\Omega}q=0\right\},\] \[\mathbb{X}(\mathbf{\varphi}_{b}) =\left\{\mathbf{\psi}\in W^{1,\infty}\left(\Lambda\right)\cap H^{1} \left(\Lambda\right):\text{ }\psi=\varphi_{b},\text{ on }\Sigma_{-}\right\}.\]
To reduce the order of differentiation of \(\varphi\) when evaluating the bending force, we use the Green formula on a closed surface; see e.g. [13]. Testing with appropriate test functions and integrating (3b) over \(\Omega\) and \(\Lambda\backslash\overline{\Omega}\) separately, the variational problem reads:
Find \(\mathbf{u}\in\mathcal{C}^{0}\Big{(}]0,T[,L^{2}(\Lambda)^{d}\Big{)}\cap L^{2}\Big{(}]0,T[,\mathbb{V}(\mathbf{u}_{b})\Big{)}\), \(p\in L^{2}\Big{(}]0,T[,\mathbb{Q}\Big{)}\), and \(\varphi\in\mathcal{C}^{0}\Big{(}]0,T[,L^{2}(\Lambda)\Big{)}\cap L^{2}\Big{(}]0,T[,\mathbb{X}\left(\varphi_{b}\right)\Big{)}\) such that
\[\text{Re}\int_{\Lambda}\left(\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}\right)\cdot\mathbf{v}+\int_{\Lambda}2\mu_{\varepsilon}(\varphi)|\mathbf{D}(\mathbf{u})|^{v-1}\mathbf{D}(\mathbf{u}):\mathbf{D}(\mathbf{v})+\frac{1}{\varepsilon_{\lambda}}\int_{\Lambda}\text{div}_{s}(\mathbf{u})\text{ div}_{s}(\mathbf{v})|\nabla\varphi|\delta_{\varepsilon}(\varphi)\] \[\quad-\int_{\Lambda}p\text{ div }\mathbf{v}+\frac{1}{2\text{Ca}}\int_{\Lambda}\delta_{\varepsilon}(\varphi)|\nabla\varphi|\Big{(}2\mathbf{\nabla}_{s}H\cdot\mathbf{\nabla}_{s}(\mathbf{n}\cdot\mathbf{v})-H^{3}\mathbf{n}\cdot\mathbf{v}\Big{)}=\int_{\Sigma_{N}}\mathbf{\sigma}\mathbf{\nu}\cdot\mathbf{v}, \forall\mathbf{v}\in\mathbb{V}(\mathbf{0}), \tag{4a}\] \[\int_{\Lambda}q\text{ div }\mathbf{u}=0, \forall q\in\mathbb{Q}, \tag{4b}\] \[\int_{\Lambda}\frac{\partial\varphi}{\partial t}\mathbf{\psi}+\int_{\Lambda}\left(\mathbf{u}\cdot\nabla\varphi\right)\mathbf{\psi}+\int_{\Lambda}\xi\left(\tau;\varphi,\psi\right)=0, \forall\psi\in\mathbb{X}\left(0\right). \tag{4c}\]
Here, \(\xi\left(\tau;\varphi,\psi\right)\) stands for the SUPG stabilisation term and \(\tau\) is a stabilization parameter defined element wise to control the amount of diffusion.
## III Numerical approach
The interval \([0,T]\) is divided into \(N\) sub-intervals \([t^{n},t^{n+1})\), \(0\leqslant n\leqslant N-1\), of constant step \(\Delta t\). For \(n>0\), \(\mathbf{u}^{n}\), \(p^{n}\) and \(\varphi^{n}\) are computed by induction to approximate \(\mathbf{u}\), \(p\) and \(\varphi\) at \(t^{n}\). We use the Crank-Nicolson scheme for the time discretization of (3a) and (3b), without the need to bootstrap the initial conditions. We choose this scheme for its simplicity of implementation and because it is a second-order, one-step integrator. The discretized (3a) reads
\[\varphi^{n+1}=\varphi^{n}-\frac{\Delta t}{2}\left(\mathbf{u}\cdot\nabla\varphi^{n+1}+\mathbf{u}\cdot\nabla\varphi^{n}\right)\quad\text{ in }\Lambda.\]
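The following one-dimensional finite-difference demo illustrates the Crank-Nicolson time discretization of the transport equation; it is not the paper's finite element solver, and the grid, speed, and time step are arbitrary choices made for illustration.

```python
# 1D periodic Crank-Nicolson advection demo: phi_t + u phi_x = 0.
import numpy as np

n, L, u, dt, steps = 200, 1.0, 1.0, 0.002, 250
h = L / n
x = np.arange(n) * h
phi = np.exp(-200 * (x - 0.3) ** 2)          # initial profile
phi0_sum = phi.sum()

# Centered first-derivative operator with periodic wrap-around.
D = (np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)) / (2 * h)
A = np.eye(n) + 0.5 * dt * u * D             # implicit half of the convection
B = np.eye(n) - 0.5 * dt * u * D             # explicit half of the convection

for _ in range(steps):
    phi = np.linalg.solve(A, B @ phi)        # phi^{n+1} from phi^n

print("total mass drift:", abs(phi.sum() - phi0_sum) * h)   # ~ machine precision
```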
For the spatial discretization, we consider a partition \(\mathcal{T}_{h}\) of \(\Lambda\) consisting of geometrically conformal open simplicial elements \(K\). We define the mesh size as the diameter of the largest mesh element \(h=\max h_{K}\) with \(K\in\mathcal{T}_{h}\).
We consider a Taylor-Hood finite element approximation for \(\mathbf{u}\) and \(p\). After using a surface Green transformation, the evaluation of the Canham-Helfrich-Evans force requires third-order derivatives of \(\varphi\), which induce numerical oscillations when using lower-order polynomial approximations. To avoid introducing additional mixed variables and additional equations as in [13], higher-degree polynomials are considered for the discretization of \(\varphi\), because the bending force involves high-order derivatives of \(\varphi\). For the SUPG method, the streamline diffusion parameter is chosen numerically proportional to the local mesh size, that is, \(\tau_{K}=Ch_{K}/\max\left\{|\mathbf{u}|_{0,\infty,K},\text{tol}/h_{K}\right\}\), where \(C\) is a scaling constant and \(\text{tol}/h_{K}\) helps to avoid division by zero. To overcome the instability problems induced by an explicit decoupling, we consider a partitioned implicit strategy based on a fixed-point algorithm, as detailed in Alg. 1.
```
1:\(n=0\): let \(\varphi^{0}\) and \(\mathbf{u}^{0}\) be given
2:for\(n=0,\ldots,N-1\)do
3: Initialize \(k=0\), \(\mathbf{u}^{n+1,0}=\mathbf{u}^{n}\), \(\varphi^{n+1,0}=\varphi^{n}\)
4:repeat
5: Compute \(\varphi^{n+1,k+1}\) using \(\mathbf{u}^{n+1,k}\)
6: Compute \(\mathbf{u}^{n+1,k+1}\), \(p^{n+1,k+1}\) using \(\varphi^{n+1,k+1}\)
7: Compute the error \(e^{k}=|\mathbf{u}^{n+1,k+1}-\mathbf{u}^{n+1,k}|_{1,2,\Lambda}/|\mathbf{u}^{n+1,k}|_{0,2,\Lambda}+|\varphi^{n+1,k+1}-\varphi^{n+1,k}|_{0,2,\Lambda}/|\varphi^{n+1,k}|_{0,2,\Lambda}\) and set \(k\gets k+1\)
8:until\(e^{k-1}<10^{-6}\)
9: Update \(\mathbf{u}^{n+1}=\mathbf{u}^{n+1,k}\), \(\varphi^{n+1}=\varphi^{n+1,k}\)
10:endfor
10:endfor
```
**Algorithm 1** Fluid-membrane coupling
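A schematic Python rendering of the time-step coupling in Algorithm 1 is sketched below. The sub-solver callables, norms, and tolerance are placeholders supplied by the caller; this is not the actual FEniCSx implementation.

```python
# Schematic partitioned fixed-point coupling for one time step (Algorithm 1).
def advance_one_step(u_n, phi_n, solve_level_set, solve_navier_stokes,
                     norm, tol=1e-6, max_iter=50):
    """Iterate the level set and flow sub-solvers until the increment is small."""
    u_k, phi_k = u_n, phi_n
    for _ in range(max_iter):
        phi_next = solve_level_set(u_k, phi_n)                # step 5
        u_next, p_next = solve_navier_stokes(phi_next, u_n)   # step 6
        err = (norm(u_next - u_k) / max(norm(u_k), 1e-14)
               + norm(phi_next - phi_k) / max(norm(phi_k), 1e-14))
        u_k, phi_k = u_next, phi_next
        if err < tol:
            break
    return u_k, phi_k, p_next

if __name__ == "__main__":
    # Trivial smoke test with scalar "fields" and identity sub-solvers.
    out = advance_one_step(1.0, 1.0,
                           solve_level_set=lambda u, phi: phi,
                           solve_navier_stokes=lambda phi, u: (u, 0.0),
                           norm=abs)
    print(out)
```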
## IV Numerical examples
### Example 1: Reversible Vortex - Grid convergence.
Simulations were performed using FEniCSx [33]. To evaluate the capability of the level set solver with high-order finite elements, needed later for an accurate evaluation of the highly nonlinear bending force, we consider a reversible vortex test case featuring large deformations of the interface. The computational domain is \(\Lambda=[0,1]^{2}\). A circular interface of radius \(R=0.15\), initially centered at \((0.7,0.7)\), is stretched into thin filaments which are coiled like a starfish by a vortex flow field. The deformations are periodic: the stretching of the membrane unravels and the interface regains its circular shape after one period, at \(t=T\). The maximal deformation occurs at \(t=T/2\); we take \(\psi=3\) and \(T=1\) in the numerical computations. Similar 2D and 3D test cases are widely used to test interface tracking methods. We follow LeVeque's test [34] (Example 9.5) and consider a velocity field at \(\mathbf{x}=(x,y)^{T}\in\Lambda\) given by
\[\mathbf{u}(t,\mathbf{x})=\left(-2\sin(\psi\pi x)^{2}\sin(\psi\pi y)\cos(\psi\pi y)\cos(\pi t/T),\ 2\sin(\psi\pi y)^{2}\sin(\psi\pi x)\cos(\psi\pi x)\cos(\pi t/T)\right)^{T}.\]
The spatial accuracy of the finite element approximations is studied by computing the errors in the \(L^{2}(\Lambda)\) norm on successively refined meshes, with respect to the interpolated exact solution \(\pi_{h}\varphi\) at \(t=T\), where \(\pi_{h}\) represents the Lagrange interpolation operator. Errors are calculated after one stretching period. For \(k\) the degree of the polynomial approximation, the time step \(\Delta t=h^{k}\) is chosen small enough not to significantly influence the overall accuracy. Fig. 2 reports the convergence of the computed errors with respect to the mesh size for several polynomial finite element approximations. Convergence rates are also displayed, showing, for instance, almost second-order accuracy for \(k=1\) and fifth-order accuracy for \(k=4\).
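Observed convergence rates between successive meshes follow from the usual two-point formula; a small helper is sketched below with placeholder error values (not the data of Fig. 2).

```python
# Observed order between two meshes: log(e_coarse/e_fine) / log(h_coarse/h_fine).
import math

def observed_orders(h, err):
    return [math.log(err[i] / err[i + 1]) / math.log(h[i] / h[i + 1])
            for i in range(len(h) - 1)]

h = [0.08, 0.04, 0.02, 0.01]
err_p1 = [3.2e-3, 8.3e-4, 2.1e-4, 5.4e-5]   # illustrative L2 errors for k = 1
print(observed_orders(h, err_p1))            # values close to 2 for P1 elements
```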
Figure 2: Reversible vortex. (Left) Snapshots showing the interface deformations at \(t\in\{0,0.25,0.57,0.75,0.875,1\}\) with \(h=0.01\). (Right) Spatial convergence in \(L^{2}\) norm for high-order finite element approximations.
### Example 2: Dynamics of the biomembrane in Newtonian and quasi-Newtonian flows.
We first proceed to a quantitative validation against some experimental and numerical results available in the literature in the case of a purely Newtonian flow. We set \(\upsilon=1\), a viscosity contrast \(\beta=1\), \(\text{Ca}=10^{2}\) and \(\text{Re}=9\times 10^{-3}\). More details on the physiological values of the Reynolds number at the scale of RBCs are available in [19]. The membrane follows a tank-treading type of motion, called TT, in which it reaches a steady state characterized by a fixed angle of inclination; the surrounding fluid continues its rotation tangentially to the membrane. We consider different values of the reduced area \(\Xi_{2d}\in[0.6,1]\) and calculate the angle of inclination at equilibrium \(\theta^{\star}\). Fig. 3 and Fig. 4 plot the change in \(\theta^{\star}/\pi\) against \(\Xi_{2d}\) in both the Newtonian and quasi-Newtonian cases for different values of the viscosity ratio \(\beta\). The results are compared with those of Kraus et al. [36], Zhao et al. [35], Salac et al. [19] and Laadhari et al. [20], showing good overall consistency. However, note that the values obtained with \(\upsilon=0.7755\) fit slightly better than those of the Newtonian
Figure 4: TT regime: Change in \(\theta^{\star}/\pi\) with respect to \(\Xi_{2d}\) for \(\beta=2.7\). Comparison of non-Newtonian model with results in [20; 35] and measurements in [37].
model, which are a little higher than the other curves. This cannot be confirmed conclusively, given that different dimensionless values, such as Re and Ca, are used in the different experiments. An in-depth study is in progress and will be the subject of a forthcoming work.
Simulations are now performed using a different ratio \(\beta^{\star}=2.7\) in the non-Newtonian case. We calculate the angle of inclination \(\theta^{\star}/\pi\) and compare with some numerical [35] and experimental [37] results available only for larger reduced areas. Fig. 3(right) shows close but slightly higher equilibrium angles when the shape of the membrane becomes close to a circle. The deviations can be mainly due to the non-Newtonian model, but also to the different values of the confinement levels and the boundary conditions used in the different works.
According to a systematic experimental study on individual red blood cells in a simple shear flow, a change in dynamics occurs when the viscosity ratio exceeds a critical value depending on the reduced area [38]. This is the tumbling regime, denoted TB, which is characterized by the periodic rotation of the membrane around its axis. A well-known empirical model was developed by Keller and Skalak [4]. This dynamics was obtained in the simulations with the non-Newtonian model; see Fig. 5 and Fig. 6 for the snapshots of the TT and TB dynamics obtained with the same set of parameters but with \(\beta=1\) and \(\beta=10\), respectively.
## V Conclusion
We have presented in this paper a relatively simple method for simulating the dynamics of an individual red blood cell, or inextensible biological membrane in general, in a surrounding incompressible non-Newtonian flow that better describes the hemorheology in small capillaries.
We validated our framework using high-order finite element approximations in the case of a membrane in a simple shear flow. Simulations have shown that the method is capable of capturing the basic cellular dynamics, namely the well-known tank treading and tumbling motions. This is
part of a larger ongoing work to explore the dynamics of red blood cells in small capillaries, while accounting for cell elasticity [39] in non-Newtonian surrounding flow.
## Acknowledgments
The authors acknowledge financial support from KUST through the grant FSU-2021-027.
|
2307.02734 | Spin transport from order to disorder | Schwinger boson mean-field theory (SBMFT) is a non-perturbative approach
which treats ordered and disordered phases of magnetic systems on equal
footing. We leverage its versatility to evaluate the spin correlators which
determine thermally-induced spin transport (the spin Seebeck effect) in
Heisenberg ferromagnets (FMs) and antiferromagnets (AFs), at arbitrary
temperatures. In SBMFT, the spin current, $J_s$, is made up of
particle-hole-like excitations which carry integral spin angular momentum. Well
below the ordering temperature, $J_s$ is dominated by a magnonic contribution,
reproducing the behavior of a dilute-magnon gas. Near the transition
temperature, an additional, paramagnetic-like contribution becomes significant.
In the AF, the two contributions come with opposite signs, resulting in a
signature, rapid inversion of the spin Seebeck coefficient as a function of
temperature. Ultimately, at high temperatures, the low-field behavior of the
paramagnetic SSE reduces to Curie-Weiss physics. Analysis based on our theory
confirms that in recent experiments on gadolinium gallium garnet, the low-field
spin Seebeck coefficient $\mathcal{S}(T) \propto \chi(T)$, the spin
susceptibility, down to the Curie-Weiss temperature. At lower temperatures in
the disordered phase, our theory shows a deviation of $\mathcal{S}(T)$ relative
to $\chi(T)$ in both FMs and AFs, which increases with decreasing temperature
and arises due to a paramagnetic liquid phase in our theory. These results
demonstrate that the SSE can be a probe of the short-ranged magnetic
correlations in disordered correlated spin systems and spin liquids. | Derek Reitz, Yaroslav Tserkovnyak | 2023-07-06T02:38:43Z | http://arxiv.org/abs/2307.02734v1 | # Spin transport from order to disorder
###### Abstract
Schwinger boson mean-field theory (SBMFT) is a non-perturbative approach which treats ordered and disordered phases of magnetic systems on equal footing. We leverage its versatility to evaluate the spin correlators which determine thermally-induced spin transport (the spin Seebeck effect) in Heisenberg ferromagnets (FMs) and antiferromagnets (AFs), at arbitrary temperatures. In SBMFT, the spin current, \(J_{s}\), is made up of particle-hole-like excitations which carry integral spin angular momentum. Well below the ordering temperature, \(J_{s}\) is dominated by a magnonic contribution, reproducing the behavior of a dilute-magnon gas. Near the transition temperature, an additional, paramagnetic-like contribution becomes significant. In the AF, the two contributions come with opposite signs, resulting in a signature, rapid inversion of the spin Seebeck coefficient as a function of temperature. Ultimately, at high temperatures, the low-field behavior of the paramagnetic SSE reduces to Curie-Weiss physics. Analysis based on our theory confirms that in recent experiments on gadolinium gallium garnet, the low-field spin Seebeck coefficient \(\mathcal{S}(T)\propto\chi(T)\), the spin susceptibility, down to the Curie-Weiss temperature. At lower temperatures in the disordered phase, our theory shows a deviation of \(\mathcal{S}(T)\) relative to \(\chi(T)\) in both FMs and AFs, which increases with decreasing temperature and arises due to a paramagnetic liquid phase in our theory. These results demonstrate the SSE can be a probe of the short-ranged magnetic correlations in disordered correlated spin systems and spin liquids.
_Introduction.--_ Most works in spintronics based on magnetic systems are asymptotic expansions or tailored phenomenological models which can be loosely divided into three categories: the strongly ordered regime that is handled by the Holstein-Primakoff approximation (HPA) and related treatments in 3D, the nonlinear-\(\sigma\) model, or the Landau-Lifshitz-Gilbert phenomenology; the completely disordered paramagnetic Curie-Weiss regime; or criticality described by Landau theory. While the associated theories may work well in their respective small-parameter regimes, they fail outside of them. Moreover, phenomenology must be supported by an underlying fundamental description which contains the basic physical ingredients. The Schwinger boson transformation takes \(\mathrm{SU}(\mathcal{N})\) generators to a product of \(\mathcal{N}\) bosonic operators. The Hamiltonian is then decoupled by a Hubbard-Stratonovich transformation where the mean-field theory is the saddle point (SP), and the order \(n\) fluctuations about the SP scale as \(O(1/\mathcal{N}^{n})\)[1; 2]. This approach, on the other hand, has no small or large parameter for fixed \(\mathcal{N}\sim 1\), but still has the ability to qualitatively capture essential physics in regimes where we do not have an accurate theory.
The spin Seebeck effect is generated by thermalized spin excitations and requires broken symmetry in spin space. Starting at \(T\ll T_{C(N)}\) in ordered magnets, spin Seebeck coefficients theoretically [3; 4; 5; 6; 7; 8] and experimentally [9; 10; 11] are generally expected to be enhanced by increasing temperature, while the opposite holds for paramagnets [12; 13; 14; 15; 16; 17], with the largest signals near the transition temperatures [18; 19; 4; 14]. These results suggest that the optimal regimes for thermoelectric applications may be distinct from the ones best described by HPA or the Curie-Weiss law, for example, which are designed to incorporate disorder or order, respectively, as minor corrections. In SBMFT, the FM, AF, and PM spin Seebeck coefficients reach their maxima around \(T_{C(N)}\), where they reach the same order of magnitude when the Zeeman energy \(\hbar\gamma B\approx J\), the exchange constant. While the SBMFT spin Seebeck coefficients in FMs and PMs have the same sign, in AFs the SSE inverts in sign slightly below \(T_{N}\) due to the competition between antiferromagnetic and paramagnetic fluctuations.
The liquid-gas crossover in Heisenberg FMs and AFs appears as a continuous transition in SBMFT, and occurs at their Curie-Weiss temperatures \(\Theta_{CW}\), with frustration parameter \(f\equiv|\Theta_{CW}|/T_{C(N)}\gtrsim 1\) in 3D. The liquid phase of the Heisenberg model in SBMFT is a simple setting for studying correlations effects in disordered spin systems, in 3D, as shown here, and also 2D [20; 21; 22; 23; 24]. For example, by evaluating the spin correlators involved in thermally-induced spin transport across the paramagnetic phase, we show how spin Seebeck experiments can probe the properties of interacting spin liquids. SBMFT may play an important role for understanding spin transport measurements that can be used to manifest the magnetic properties of spin liquids [25; 26]. This would complement indirect measurements such as the thermal conductivity and can support the limited information extracted from NMR and magnetic susceptibility measurements [27]. Along these lines, we introduce the parameter \(p(T)\equiv\partial_{B}\mathcal{S}/\chi\), the ratio of the SSE to the spin susceptibility, which is \(T\)-independent when a magnet is completely disordered and becomes \(T\)-dependent when short-ranged spin correlations are significant to spin transport. \(p(T)\) is then an indicator for spin correlations in the paramagnetic regime.
_Mean-field theory.--_ The Schwinger boson transformation replaces the spin operators by a product of bosonic creation and annihilation operators, \(\mathcal{S}^{+}=a_{\uparrow}^{\dagger}a_{\downarrow}\), \(\mathcal{S}^{-}=a_{\downarrow}^{\dagger}a_{\uparrow}\), \(S^{z}=\sum_{\sigma}\sigma a_{\sigma}^{\dagger}a_{\sigma}/2\), with the spin length fixed on each site by the constraint \(S=\sum_{\sigma}a_{\sigma}^{\dagger}a_{\sigma}/2\). The SU(2)-preserving mean-field decomposition of the nearest-neighbor Heisenberg Hamiltonian on a bipartite lattice, written in terms of SBs \(a_{\sigma}\) and \(b_{\sigma}\) for sublattices \(\mathcal{A}\) and \(\mathcal{B}\), respectively, is
\[H_{\rm mf}^{\rm SU(2)}=-2J\sum_{\langle ij\rangle}\left[\alpha F_{ij}^{\dagger}F-(1-\alpha)A_{ij}^{\dagger}A\right]+{\rm H.c.}-\mu_{\mathcal{A}}\sum_{i\in\mathcal{A},\sigma}a_{i\sigma}^{\dagger}a_{i\sigma}-\mu_{\mathcal{B}}\sum_{i\in\mathcal{B},\sigma}b_{i\sigma}^{\dagger}b_{i\sigma}. \tag{1a}\]
Here, summing over \(\langle ij\rangle\) avoids double counting, \(F_{ij}=\sum_{\sigma}a_{i\sigma}^{\dagger}b_{j\sigma}/2\) is a "ferromagnetic" contribution, and \(A_{ij}=\sum_{\sigma}\sigma a_{i\sigma}b_{j\sigma}/2\) is an "antiferromagnetic" contribution [28]. These quartic terms are approximated in our MF decomposition by the product of a quadratic term and the mean fields \(F=\langle F_{ij}\rangle\) and \(A=\langle A_{ij}\rangle\), and in the same spirit the spin-length constraints are implemented via two aggregate Lagrange multipliers \(\mu_{\mathcal{A}(\mathcal{B})}\). This decomposition applies to isotropic lattice models where there is a single \(F\) and a single \(A\) parameter. Note that while the exact constraint fixes the sum of the SB species' number operators on each site, \(\mu_{\mathcal{A}(\mathcal{B})}\) instead fix the expectation value of this operator sum on each sublattice. \(\alpha\) is a parameter that is free to vary in the exact Hamiltonian, but parameterizes separate mean-field Hamiltonians [2; 28]. To fix \(\alpha\), we match the poles of the dynamic susceptibilities to the Holstein-Primakoff result at \(T=0\), giving the usual [1] \(\alpha=1\) for the FM and \(\alpha=0\) for the AF, and for simplicity fix these values for \(\alpha\) at all \(T\). In total, the bipartite FM (uniaxial AF below spin flop) has three mean-field parameters: \(F\) (\(A\)), \(\mu\equiv(\mu_{\mathcal{A}}+\mu_{\mathcal{B}})/2\), and \(\delta\mu\equiv(\mu_{\mathcal{A}}-\mu_{\mathcal{B}})/2\). For the most general (Hartree-Fock-Bogoliubov) U(1)-preserving mean-field decomposition, see the Supplemental Material.
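For concreteness, the decoupling behind Eq. (1a) is the standard Hartree-type linearization of each quartic bond operator about its mean field (a generic sketch of this step, not a new result):
\[F_{ij}^{\dagger}F_{ij}\approx\langle F_{ij}^{\dagger}\rangle F_{ij}+F_{ij}^{\dagger}\langle F_{ij}\rangle-\langle F_{ij}^{\dagger}\rangle\langle F_{ij}\rangle=F^{*}F_{ij}+F_{ij}^{\dagger}F-|F|^{2},\]
and analogously for \(A_{ij}^{\dagger}A_{ij}\); the constant \(-|F|^{2}\) (\(-|A|^{2}\)) only shifts the free energy and is therefore not displayed in Eq. (1a).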
When \(T\ll T_{C(N)}\), thermal equilibrium described by the Holstein-Primakoff picture is characterized by a dilute magnon gas with a single band for each sublattice [29], which slightly depolarizes the spin ordering. In SBMFT, there are twice as many bands as in HPA, and each SB band carries half-integer spin. At a glance, the two pictures may seem reconcilable. However, at \(T_{C}\) in FMs the lowest-energy modes of one SB spin species (in the axially-symmetric case, for example) reach zero energy and form a Bose-Einstein condensate, resulting in long-ranged ordering along that species' spin polarization. At \(T_{N}\) in AFs, long-ranged staggered ordering arises from condensation of one spin species on sublattice \(\mathcal{A}\), and the opposite spin species on sublattice \(\mathcal{B}\). Magnons in SBMFT are then spinful excitations associated with transitions from the condensates to the thermal cloud, as shown in Fig. 1. Thus, the SB bands on each sublattice which carry spin opposite to the local order mimic the magnon bands in Holstein-Primakoff. As we will see, these magnonic excitations will dominate spin transport at \(T\ll T_{C(N)}\).
The SU(2)-preserving MFT yields a first-order Curie transition on cubic Bravais lattices, but is second-order on the diamond lattice, possibly due to its higher-order connectivity [30]. The FM mean-field Hamiltonian plus applied field on the diamond lattice, setting \(\delta\mu=0\), after Fourier transforming and casting in terms of sublattice pseudospin, \(\psi_{\mathbf{k}\sigma}=(a_{\mathbf{k}\sigma},b_{\mathbf{k}\sigma})\), is
\[H_{\rm mf}^{\rm FM}=\sum_{\mathbf{k}\sigma}\psi_{\mathbf{k}\sigma}^{\dagger}\left[-( \mu+b\sigma/2)+\mathbf{\eta}_{\mathbf{k}}\cdot\mathbf{\tau}\right]\psi_{\mathbf{k}\sigma}, \tag{2}\]
where \(b\equiv\hbar\gamma B\), \(\mathbf{\eta}_{\mathbf{k}}=JF\left(-\operatorname{Re}\gamma_{\mathbf{k}},\,\operatorname {Im}\gamma_{\mathbf{k}},0\right)\), \(\gamma_{\mathbf{k}}=Z^{-1}\sum_{\mathbf{\delta}}e^{i\mathbf{k}\cdot\mathbf{\delta}}\) is the structure factor, \(\mathbf{\delta}\) is the vector between nearest neighbors on sublattice \(\mathcal{A}\) to \(\mathcal{B}\), and \(\mathbf{\tau}\) is the vector of Pauli matrices. There are four bands with energies
\[\epsilon_{\mathbf{k}\sigma}^{\pm}=JZF(1\pm|\gamma_{\mathbf{k}}|)-(\mu+b\sigma/2), \tag{3}\]
where a factor of \(JZF\) was absorbed into the definition of \(\mu\). The eigenvectors are \(v_{\mathbf{k}\sigma}^{\pm}=(1,\mp|\gamma_{\mathbf{k}}|/\gamma_{\mathbf{k}})/\sqrt{2}\). If \(\mu\) reaches \(-b/2\) the lowest energy branch, \(\epsilon_{\mathbf{k}\uparrow}^{-}\), has zero-energy modes that condense, resulting in long-ranged spin ordering along the \(+\hat{\mathbf{z}}\) axis in the language of SBs [1, 31]. The lower-energy \(\epsilon^{-}\) bands are shown in Fig. 1, and shown along with the high-energy \(\epsilon^{+}\) bands in Supplemental Material Fig. 3. At arbitrary temperatures, the self-consistent mean-field equations for \(F\) and \(S\) give the solutions to \(F(T)\) and either the condensate density \(n_{c}(T)\) or \(\mu(T)\) according to
\[F=-(4N)^{-1}\sum_{\mathbf{k}\sigma\lambda}n_{\mathbf{k}\sigma}^{\lambda}\lambda|\gamma_{ \mathbf{k}}|,\ \ \ S=(4N)^{-1}\sum_{\mathbf{k}\sigma\lambda}n_{\mathbf{k}\sigma}^{\lambda}, \tag{4}\]
where \(n_{\mathbf{k}\sigma}^{\lambda}\) is the Bose-Einstein distribution function for energy \(\epsilon_{\mathbf{k}\sigma}^{\lambda}\), and \(N\) is the number of sites per sublattice. In order to solve Eqs. (4) at \(T<T_{C}\), the sums are
Figure 1: Schematic depiction of the magnonic (1) and paramagnetic-like (2) contributions to \(J_{s}\). Each color specifies a combination of the bands’ lower-indexed spin polarization and upper-indexed pseudospin. In SBMFT for FMs (AFs), at \(T\leq T_{C(N)}\), Bose-Einstein condensation occurs at the lowest-energy modes with momentum \(\mathbf{k}_{\rm e}\). At \(T>T_{C(N)}\) a self-consistent gap \(-\mu\) opens up.
converted to integrals with the contributions from the condensate density separated explicitly: for an arbitrary function \(z\) and a single condensation point at momentum \(\mathbf{k_{c}}\), \(\sum_{\mathbf{k}}z_{\mathbf{k}}/N\approx z(\mathbf{k_{c}})n_{c}+\mathcal{V}\int_{\mathrm{BZ}} d^{3}\mathbf{k}z(\mathbf{k})/(2\pi)^{3}\), where \(n_{c}\equiv N_{c}/N\) and \(\mathcal{V}\) is the unit cell volume.
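To make the numerical procedure concrete, the following is a minimal, schematic sketch (not the production code behind Fig. 2) of how Eqs. (3)-(4) can be solved self-consistently in the gaseous phase \(T>T_{C}\), where \(n_{c}=0\) and the gap \(-\mu>0\) is unknown. The grid resolution, initial guess, temperature, and units (\(J=1\), \(k_{B}=1\), lattice constant \(a=1\), zero field) are illustrative assumptions, apart from \(S=1/2\) and the diamond-lattice structure factor.

```
import numpy as np
from scipy.optimize import fsolve

# Illustrative choices (assumptions, not values from the paper): J = 1, k_B = 1,
# lattice constant a = 1, zero field; Z = 4 nearest neighbors on the diamond lattice.
J, S, Z = 1.0, 0.5, 4
T = 1.0   # any temperature above T_C ~ 0.633 J, i.e., in the gaseous phase

# Uniform k-grid over the cube [0, 4*pi)^3, a supercell of the reciprocal lattice,
# so that the grid average approximates the sublattice sum (1/N) sum_k.
n = 24
k = np.linspace(0.0, 4.0 * np.pi, n, endpoint=False)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
cx, cy, cz = np.cos(kx / 2), np.cos(ky / 2), np.cos(kz / 2)
# |gamma_k| for the diamond structure factor gamma_k = (1/4) sum_delta exp(i k.delta).
gamma_abs = 0.5 * np.sqrt(np.maximum(1.0 + cx * cy + cy * cz + cz * cx, 0.0))

def bose(eps):
    return 1.0 / np.expm1(eps / T)

def residuals(x):
    """Residuals of Eq. (4) in the gaseous phase (n_c = 0, b = 0, gap -mu > 0)."""
    F, mu = x
    n_plus = bose(J * Z * F * (1.0 + gamma_abs) - mu)    # epsilon^+ bands of Eq. (3)
    n_minus = bose(J * Z * F * (1.0 - gamma_abs) - mu)   # epsilon^- bands of Eq. (3)
    # The spin-sigma sum simply contributes a factor of 2 at zero field.
    F_new = -0.5 * np.mean((n_plus - n_minus) * gamma_abs)
    S_new = 0.5 * np.mean(n_plus + n_minus)
    return [F_new - F, S_new - S]

F_sol, mu_sol = fsolve(residuals, x0=[0.2, -1.0])
print(f"F = {F_sol:.3f}, mu = {mu_sol:.3f}, gap -mu = {-mu_sol:.3f}")
```

As \(T\) approaches \(T_{C}\) from above the gap \(-\mu\) closes and the root finder becomes sensitive to the initial guess; a continuation in \(T\), reusing the previous solution as the new guess, is a common remedy.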
On the other hand, we find the Neel transition is second-order on all cubic Bravais lattices, so we take the simple cubic lattice for simplicity. The AF mean-field Hamiltonian with easy-axis anisotropy constant \(K\) plus collinear applied field is
\[H^{\mathrm{AF}}_{\mathrm{mf}}=\sum_{\mathbf{k}\sigma}\psi^{\dagger}_{ \mathbf{k}\sigma}\left[\zeta_{\sigma}-(\delta\mu+b\sigma/2)\tau_{z}\right]\psi_{ \mathbf{k}\sigma}+\\ \sum_{\mathbf{k}\sigma}(i\sigma\psi^{\intercal}_{\mathbf{k}\sigma}\mathbf{ \eta}_{\mathbf{k}}\cdot\mathbf{\tau}\psi_{-\mathbf{k}\overline{\sigma}}/2+\mathrm{H.c.}), \tag{5}\]
where we consider \(b\ll\sqrt{JK}\), the spin-flop field; here \(\zeta_{\sigma}=-\mu-KL^{z}\sigma/2\) for mean staggered spin polarization \(L^{z}=(S^{z}_{\mathcal{A}}-S^{z}_{\mathcal{B}})/2\), \(\mathbf{\eta}_{\mathbf{k}}=JA\left(\mathrm{Im}\,\gamma_{\mathbf{k}},\,\mathrm{Re}\,\gamma_{\mathbf{k}},0\right)\), and \(\psi^{\intercal}\) is the vector transpose. Diagonalizing the Hamiltonian via a Bogoliubov transformation for each \(\sigma\) yields four bands (see SM) with energies
\[\epsilon^{+}_{\mathbf{k}\sigma} =-\delta\mu-b\sigma/2+\epsilon_{\mathbf{k}\sigma},\ \epsilon^{-}_{\mathbf{k}\sigma}=\delta\mu-b\sigma/2+\epsilon_{\mathbf{k}\overline{ \sigma}}, \tag{6}\] \[\epsilon_{\mathbf{k}\sigma} \equiv\sqrt{\zeta_{\sigma}(2JZA+\zeta_{\sigma})+(JZA)^{2}(1- \gamma^{2}_{\mathbf{k}})},\]
where, like for the FM, we shifted \(\mu\) by a factor of \(JZA\), and \(\overline{\sigma}=-\sigma\). Here, the ansatz \(\delta\mu=-b/2\) was found by matching the field splitting of \(\epsilon^{+}_{\mathbf{k}\downarrow}\) and \(\epsilon^{-}_{\mathbf{k}\uparrow}\) to that of the usual AF magnon modes from HPA. This is a self-consistent solution for \(T<T_{N}\), and then \(\delta\mu=0\) for \(T\geq T_{N}\). Analogously to the FM, BEC occurs when the lowest-energy modes of \(\epsilon^{+}_{\uparrow}\) and \(\epsilon^{-}_{\downarrow}\) become gapless at \(\mu=-KL^{z}/2\), so that \(\zeta_{\sigma}=KL^{z}(1-\sigma)/2\)[32], resulting in long-ranged staggered ordering. The modes are depicted in Fig. 1. The equations for \(T<T_{N}\) are obtained by eliminating \(n_{c}(T)\) to give two independent equations for \(A(T)\) and \(L^{z}(T)\), which in the limit \(K\ll J\) (e.g., in Cr\({}_{2}\)O\({}_{3}\), \(K\approx 7\times 10^{-2}J\)[33]) are:
\[A=S+C^{A}-(4N)^{-1}\sum_{\mathbf{k}\sigma}(n^{+}_{\mathbf{k}\sigma}+n^{- }_{\mathbf{k}\overline{\sigma}})\sqrt{1-\gamma^{2}_{\mathbf{k}}}, \tag{7a}\] \[L^{z}=S-C^{z}-(2N)^{-1}\sum_{\mathbf{k}}(n^{+}_{\mathbf{k}\downarrow}+n^ {-}_{\mathbf{k}\overline{\uparrow}})/\sqrt{1-\gamma^{2}_{\mathbf{k}}}, \tag{7b}\]
where \(C_{A}=1/2-(2N)^{-1}\sum_{\mathbf{k}}\sqrt{1-\gamma^{2}_{\mathbf{k}}}\approx 0.13\), \(C_{z}=1/2-(N)^{-1}\sum_{\mathbf{k}}1/\sqrt{1-\gamma^{2}_{\mathbf{k}}}\approx 0.25\), the contributions from the zero-energy modes vanish in Eq. (7a), and Eq. (7b) only contains finite-energy modes. At \(T>T_{N}\): \(L^{z}=0\) and \(\mu(T)\) is no longer fixed so the mean-field equations are:
\[A=(2N)^{-1}\sum_{\mathbf{k}\sigma}\left(n_{\mathbf{k}\sigma}+1/2\right) \sqrt{(-\mu+JZA)^{2}/\epsilon^{2}_{\mathbf{k}\sigma}-1}, \tag{8a}\] \[S=-1/2+(2N)^{-1}\sum_{\mathbf{k}\sigma}\left(n_{\mathbf{k}\sigma}+1/2 \right)(-\mu+JZA)/\epsilon_{\mathbf{k}\sigma}, \tag{8b}\]
where we took \(n^{+}_{\mathbf{k}\sigma}\approx n^{-}_{\mathbf{k}\sigma}\equiv n_{\mathbf{k}\sigma}\) (valid when \(K\ll J\)).
Finally, we compare the SBMFT magnonic excitations to the HPA dispersions in the strongly ordered phases. In the diamond-lattice FM, the lowest-energy modes of the \(\epsilon^{-}_{\uparrow}\) band condense and the two \(\epsilon^{\pm}_{\downarrow}\) bands match the magnon bands from HPA, which reproduces the usual Bloch \(T^{3/2}\) law for demagnetization at \(T\ll T_{C}\)[34]. In the simple-cubic-lattice AF, the lowest-energy modes of the \(\epsilon^{+}_{\uparrow}\) and \(\epsilon^{-}_{\downarrow}\) bands condense at \(T_{N}\) forming staggered ordering while the \(\epsilon^{+}_{\downarrow}\) and \(\epsilon^{-}_{\uparrow}\) bands qualitatively match the magnon bands from HPA. They are \(\epsilon^{+}_{\mathbf{k}\downarrow},\epsilon^{-}_{\mathbf{k}\uparrow}=\pm b+\epsilon_{\mathbf{k}}\), where \(\epsilon_{\mathbf{k}}=\sqrt{\epsilon^{2}_{0}+(JZA)^{2}(1-\gamma^{2}_{\mathbf{k}})}\) with \(\epsilon^{2}_{0}=\epsilon_{K}(\epsilon_{K}+2JZA)\) and \(\epsilon_{K}=KL^{z}\). At \(T\ll T_{N}\), the dispersive term \((JZA)^{2}(1-\gamma^{2}_{\mathbf{k}})\) with \(A/S=1+C_{A}/S\) differs by a constant factor from the HPA value, and the gap \(\epsilon_{0}\) is proportional to \(\epsilon_{K}=K(S-1/2)\) in HPA while it is \(\epsilon_{K}=K(S-1/2+C_{z})\) in SBMFT. The complete numerical solutions of the MFT for \(B=0\) with \(S=1/2\) for the FM, where \(n_{c}\propto S^{z}\), and \(S=3/2\) for the AF, where \(n_{c}\propto L^{z}\), are plotted in Fig. 2 (\(T_{C}=0.633J\) and \(T_{N}=5.12J\) in units where the Boltzmann constant \(k_{B}=1\)).
_Spin transport._-- The net interfacial spin current between a magnetic insulator at \(T_{1}\) and a metal at
Figure 2: Mean-field solutions for the \(S=1/2\) FM on the diamond lattice and the \(S=3/2\) AF on the simple cubic lattice. For the FM (AF), (a) shows \(F\) (\(A\)), (b) shows \(S^{z}\) (\(L^{z}\)) and (c) shows \(-\mu\) in units of \(\mu_{C(N)}=-T_{C(N)}\ln(1/S+1)\). Triangular markers denote the positions of the liquid-gas crossover.
\(T_{2}\) may be computed by treating the interfacial exchange Hamiltonian perturbatively with respect to the bulk. If we consider a ferromagnetic Bravais lattice with interfacial Hamiltonian in momentum space \(H_{\text{int}}=(V/N)\sum_{\mathbf{k},\mathbf{k}^{\prime},q,q^{\prime}}a^{\dagger}_{\mathbf{ k}\uparrow}a_{\mathbf{k}^{\prime}\downarrow}c^{\dagger}_{q\downarrow}c_{q^{ \prime}\uparrow}+\text{H.c.}\), we get via FGR for the interfacial spin current density (in units of energy per area),
\[J_{s}=\frac{g_{\uparrow\downarrow}}{2SN^{2}}\sum_{\mathbf{k},\mathbf{k}^ {\prime}}\epsilon_{\mathbf{k}\mathbf{k}^{\prime}\uparrow\downarrow}\mathbf{\times}\\ \left[n_{1}(\epsilon_{\mathbf{k}\uparrow})-n_{1}(\epsilon_{\mathbf{k}^{ \prime}\downarrow})\right]\left[n_{1}(\epsilon_{\mathbf{k}\mathbf{k}^{\prime}\uparrow \downarrow})-n_{2}(\epsilon_{\mathbf{k}\mathbf{k}^{\prime}\uparrow\downarrow})\right], \tag{9}\]
where \(\epsilon_{\mathbf{k}\mathbf{k}^{\prime}\uparrow\downarrow}\equiv\epsilon_{\mathbf{k} \uparrow}-\epsilon_{\mathbf{k}^{\prime}\downarrow}\), and \(g_{\uparrow\downarrow}\equiv 4\pi SD^{2}V^{2}/\mathcal{A}\)[35] is in units of inverse area where \(D\) is the metal's density of states at the Fermi level in units of \((\text{energy}\cdot\text{volume})^{-1}\) and \(\mathcal{A}\) is the area per site of the interface. Eq. (9) shows that \(J_{s}\) is made up of particle-hole like excitations which carry spin angular momentum. In the bipartite FM and AFs, the SBs on each sublattice split into mixtures of the two pseudospin SBs (for the full expressions for \(J_{s}\) there, see the Supplemental Material). Finally, the spin Seebeck coefficient for \(J_{s}(T_{1},T_{2})\) is defined as \(\mathcal{S}(T)\equiv J_{s}(T+\delta T,T-\delta T)/\delta T\) in the limit \(\delta T\ll T\) of linear response.
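Expanding Eq. (9) to first order in \(\delta T\) (a straightforward step sketched here for clarity), the difference of the magnet and metal occupations reduces to \(2\,\delta T\,\partial_{T}n\), so that the linear-response coefficient reads
\[\mathcal{S}(T)=\frac{g_{\uparrow\downarrow}}{SN^{2}}\sum_{\mathbf{k},\mathbf{k}^{\prime}}\epsilon_{\mathbf{k}\mathbf{k}^{\prime}\uparrow\downarrow}\left[n(\epsilon_{\mathbf{k}\uparrow})-n(\epsilon_{\mathbf{k}^{\prime}\downarrow})\right]\partial_{T}n(\epsilon_{\mathbf{k}\mathbf{k}^{\prime}\uparrow\downarrow}),\]
with all distribution functions evaluated at the common temperature \(T\).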
In the ordered phases, the condensates grow macroscopically large. In the thermodynamic limit, they must be separated from the integrals over the BZ. The contribution to the FM spin Seebeck coefficient on diamond due to the condensate density \(n_{c}\propto S^{z}\) is
\[\mathcal{S}^{\text{FM}}=\frac{g_{\uparrow\downarrow}}{2s}S^{z}\int\frac{d^{3} \mathbf{k}}{(2\pi)^{3}}\partial_{T}\left(\epsilon^{+}_{\mathbf{k}\downarrow}n^{+}_{ \mathbf{k}\downarrow}+\epsilon^{-}_{\mathbf{k}\downarrow}n^{-}_{\mathbf{k}\downarrow} \right), \tag{10}\]
where \(s\equiv S/\mathcal{V}\), and \(\epsilon^{\pm}_{\mathbf{k}\downarrow}\) are the magnon energies. For the AF, we consider an interface which is compensated in aggregate but is comprised of separate islands where the metal couples directly to either one of the two sublattices, and negligibly to the other [36; 7]. In this scenario, the AF spin current is \(J_{s}=J_{s}^{\mathcal{A}}+J_{s}^{\mathcal{B}}\), where \(J_{s}^{\mathcal{A}}\) is generated by the coupling \(H_{\text{int}}^{\text{AF}}=(V/N)\sum_{\mathbf{k},\mathbf{k}^{\prime},q,q^{\prime}}a^{ \dagger}_{\mathbf{k}\uparrow}a^{\dagger}_{\mathbf{k}^{\prime}\downarrow}c^{\dagger}_{ q\downarrow}c_{q^{\prime}\uparrow}+\text{H.c.}\) and \(J_{s}^{\mathcal{B}}\) by \(H_{\text{int}}^{\mathcal{B}}=(V/N)\sum_{\mathbf{k},\mathbf{k}^{\prime},q,q^{\prime}}b^ {\dagger}_{\mathbf{k}\uparrow}b^{\dagger}_{\mathbf{k}^{\prime}\downarrow}c^{\dagger}_ {q\downarrow}c_{q^{\prime}\uparrow}+\text{H.c.}\). The contribution to the AF spin Seebeck coefficient due to the condensate density \(n_{c}\propto L^{z}\) is
\[\mathcal{S}^{\text{AF}}=\frac{g_{\uparrow\downarrow}}{2s}L^{z}\int\frac{d^{3} \mathbf{k}}{(2\pi)^{3}}\frac{2JZA}{\epsilon^{+}_{\mathbf{k}\downarrow}+\epsilon^{-}_ {\mathbf{k}\uparrow}}\partial_{T}\left(\epsilon^{+}_{\mathbf{k}\downarrow}n^{+}_{\bm {k}\downarrow}-\epsilon^{-}_{\mathbf{k}\uparrow}n^{-}_{\mathbf{k}\uparrow}\right). \tag{11}\]
The AF SSE has contributions at the two magnon energies, \(\epsilon^{+}_{\mathbf{k}\downarrow}\) and \(\epsilon^{-}_{\mathbf{k}\uparrow}\), which come with opposite signs since they carry oppositely-oriented spin angular momentum. Eq. (11) at \(T\ll T_{N}\) reproduces the semiclassical Neel spin current derived in Ref. [8].
At larger temperatures, \(J_{s}\) also contains a contribution from scattering between bands in the thermal cloud, as shown in Fig. 1. This contribution is relatively smaller at \(T\ll T_{C(N)}\) and becomes the paramagnetic spin current at \(T>T_{C(N)}\). In order to carry out the two sets of integrals numerically in \(\mathcal{S}^{\text{PM}}\), we approximate the band structure with the low-energy, long-wavelength dispersion: \(\epsilon^{\pm}_{\mathbf{k}\sigma}\approx JFk^{2}-(\mu+b\sigma/2)\) for the FM and \(\epsilon^{\pm}_{\mathbf{k}\sigma}\approx\pm(1-\sigma)b/2+\sqrt{\zeta_{a}^{2}-2Z( JAk)^{2}}\) for the AF. The SBMFT spin Seebeck coefficients are compared to those computed in the same fashion using the Holstein-Primakoff transformation [35], expanded to second order in the magnon over spin densities (defined as the Holstein-Primakoff approximation, HPA), and plotted as a function of temperature in Fig. 3.
In strongly disordered spin systems, spin correlations decay on the scale of the lattice spacing. In SBMFT, this corresponds to \(JF\), \(JA\ll T\) and is described by the gaseous phase of the theory. In the gaseous phase at \(b\ll T\), we get \(\partial_{B}\mathcal{S}^{\text{PM}}=\chi g^{\uparrow\downarrow}\) where \(\chi\equiv\partial_{B}S^{z}/S\) is the normalized spin susceptibility. As \(T\) decreases below \(\Theta_{CW}\) in the SBMFT, this treatment has a continuous liquid-gas phase transition and spin correlations start to become significant. When \(JF\) or \(JA\sim T\), \(\partial_{B}\mathcal{S}^{\text{PM}}\) deviates
Figure 3: The spin Seebeck coefficients for the \(S=1/2\) FM on the diamond lattice and the negative field derivative \(-\partial_{\mathbf{k}}\mathcal{S}\) (with \(b=\hbar\gamma B\) in units of \(J\)) for the \(S=3/2\) AF on the simple cubic lattice computed in the limit \(B\to 0\) using SBMFT and HPA.
Figure 4: Field derivative of the paramagnetic SSE relative to the spin susceptibility in FMs and AFs. \(\partial_{B}\mathcal{S}/g^{\uparrow\downarrow}\) begins to deviate from \(\chi\) at the liquid-gas crossovers denoted by triangular markers.
from \(\chi\). Based on this analysis of the Heisenberg model in SBMFT, we introduce a new frustration parameter \(p(T)\equiv\partial_{B}\mathcal{S}/\chi\), whose temperature-dependence is an indicator for short-ranged spin correlations as shown in Fig. 4 (for comparison purposes, \(\chi\) is also computed in the same fashion as \(\mathcal{S}^{\mathrm{PM}}\) discussed above).
_Conclusion.--_ Experimentally, extracting \(p(T)\equiv\partial_{B}\mathcal{S}/\chi\) (Fig. 4) is complicated since the measured spin Seebeck voltage, \(V(B,T)=\mathcal{S}(B,T)f(T)\), contains additional temperature-dependent factors in \(f(T)\), such as the interfacial thermal conductivity and metallic resistivity [8, 17]. However, we can analyze how the magnetic field profile, of the measured \(V(B,T)\) and theoretical \(\mathcal{S}(B,T)\), evolve with temperature. We illustrate this by comparing our theory for the SSE at \(T\gg T_{C(N)}\) to experiments in gadolinium gallium garnet (GGG) [13, 14]. We identify the field position of the peak in the SSE, at a given temperature, as a quantity which contains information about \(\mathcal{S}(B,T)\), but is independent of \(f(T)\). The peak data points are extracted from SSE field sweeps, and our theoretical values rely solely on the magnet's Curie-Weiss temperature. When we use an independently-measured value for \(\Theta_{CW}\) from the static susceptibility in GGG [37], we find that our theory quantitatively reproduces the experimental SSE peak positions down to \(T\geq 2\) K \(\approx\Theta_{CW}\) (this is the lowest-temperature data currently available; for more details, see the Supplemental Material). At lower temperatures, a similar type of analysis could be used to investigate the emerging effects of short-ranged spin correlations in spin transport.
The sign change of the AF spin Seebeck coefficient as a function of temperature, below spin flop, at \(T^{*}\approx 0.85T_{N}\) (Fig. 3) is another feature which is insensitive to \(f(T)\) because it is unlikely to change sign in the same region of \(T\). The spin Seebeck coefficient in a Landau theory for the Neel transition has the paramagnetic sign [38], which is consistent with the SBMFT result in that the latter finds \(T^{*}\) lies appreciably to the left of the transition temperature. While a bulk thermal gradient can drive an interfacial spin accumulation with the same sign as Eq. (11) [5], this accumulation may be reduced and possibly invert in sign when Umklapp scattering becomes significant. It can reduce the magnon diffusion length and occurs when the temperature becomes comparable to the energy of magnons at the Brillouin zone boundary. This occurs for the lower energy magnon branch before the higher energy branch, possibly leading to a lower value for \(T^{*}\). To give a more quantitative estimate for \(T^{*}\), a bulk spin transport theory for SBs must then be developed.
_Acknowledgements.--_The work was supported by the U.S. Department of Energy, Office of Basic Energy Sciences under Award No. DE-SC0012190.
|
2306.15161 | Wespeaker baselines for VoxSRC2023 | This report showcases the results achieved using the wespeaker toolkit for
the VoxSRC2023 Challenge. Our aim is to provide participants, especially those
with limited experience, with clear and straightforward guidelines to develop
their initial systems. Via well-structured recipes and strong results, we hope
to offer an accessible and good enough start point for all interested
individuals. In this report, we describe the results achieved on the VoxSRC2023
dev set using the pretrained models, you can check the CodaLab evaluation
server for the results on the evaluation set. | Shuai Wang, Chengdong Liang, Xu Xiang, Bing Han, Zhengyang Chen, Hongji Wang, Wen Ding | 2023-06-27T02:44:06Z | http://arxiv.org/abs/2306.15161v2 | # Wespeaker Baselines for VoxSRC2023
###### Abstract
This report showcases the results achieved using the wespeaker toolkit for the VoxSRC2023 Challenge. Our aim is to provide participants, especially those with limited experience, with clear and straightforward guidelines to develop their initial systems. Via well-structured recipes and strong results, we hope to offer an accessible and good-enough starting point for all interested individuals. In this report, we describe the results achieved on the VoxSRC2023 dev set using the pretrained models; you can check the CodaLab evaluation server for the results on the evaluation set. **Any feedback and contributions are always welcome.** **Index Terms**: wespeaker, voxsrc2023
Shuai Wang, Chengdong Liang, Xu Xiang, Bing Han, Zhengyang Chen, Hongji Wang, Wen Ding
Wespeaker Team, WeNet Open Source Community
[email protected]
## 1 The VoxSRC Challenges
The VoxSRC (VoxCeleb Speaker Recognition Challenge) is an annual competition that focuses on the task of speaker recognition using the VoxCeleb dataset. Speaker recognition is a field within audio processing that aims to identify and authenticate individuals based on their unique vocal characteristics.
The VoxSRC Challenge serves as a platform for researchers and practitioners to showcase their advancements in speaker recognition technology. It provides a standardized evaluation framework, allowing participants to compare their methods and algorithms against each other.
VoxSRC 2023 consists of four tracks, which are consistent with the previous year's competition. Tracks 1, 2, and 3 are dedicated to speaker verification, where participants are required to determine whether two speech samples originate from the same person. The evaluation for Tracks 1 and 2 will be conducted on the same dataset, with Track 1's training data restricted to the VoxCeleb2 dev set, while participants can freely use any data for Track 2.
Track 3 aims to promote domain adaptation research, providing an evaluation set from another domain (CnCeleb dataset). It includes a large set of unlabelled data and a small set of labelled data from the target domain to serve as the adaptation data. The objective is to address the challenges of adapting speaker verification models to different domains.
On the other hand, Track 4 focuses on speaker diarisation, challenging participants to accurately segment multi-speaker audio into distinct portions that correspond to individual speakers. This track addresses the problem of determining "who spoke when" in a given audio recording.
## 2 Wespeaker: Speaker Embedding Toolkit for Research & Production
### Open-source speech processing toolkits
In the field of speech processing, the research community has made significant contributions to the open-source domain. Initially, toolkits such as HTK (Hidden Markov Model Toolkit) [1] and Kaldi [2] played a pivotal role in enabling researchers and industry applications. However, the emergence of deep learning toolkits like PyTorch and TensorFlow has brought about a shift in the landscape.
Recently, PyTorch-based toolkits such as SpeechBrain [3] and ESPnet [4] have gained popularity due to their user-friendly interfaces and support for rapid prototyping, making them accessible to new researchers. While these toolkits serve a broad range of applications, WeNet stands out by focusing specifically on end-to-end speech recognition. Its primary aim is to bridge the gap between research advancements and practical deployment in real-world scenarios.
### Wespeaker
In [5], we introduced Wespeaker, a speaker embedding learning toolkit designed for research and production purposes. Wespeaker is characterized by its lightweight code base and emphasis on high-quality speaker embedding learning, demonstrating impressive performance on multiple datasets. While prioritizing accessibility for researchers, Wespeaker also incorporates deployment codes that are compatible with both CPUs and GPUs, thereby facilitating the integration of research findings into practical production systems.
### Design principles
As mentioned in the previous section, there are various speech toolkits that include speaker embedding learning functionality; our proposed wespeaker stands out for its simplicity, effectiveness, and deployment friendliness. The design principles are as follows,
* **Light-weight**: Wespeaker is designed specifically for deep speaker embedding learning with clean and simple code1. It is purely built upon PyTorch and its ecosystem, and has no dependencies on Kaldi [2]. Footnote 1: If you are interested in other tasks such as ASR, KWS, TTS, etc., we have specifically designed toolkits for each of them; please visit [https://github.com/wenet-e2e](https://github.com/wenet-e2e) for more details
* **Production oriented**: All models in Wespeaker can be easily exported by torch Just In Time (JIT) or as the ONNX format, which can be easily adopted in the deployment environment. Sample deployment codes are also provided.
### Supported functionalities
Wespeaker supports different popular speaker embedding learning models, margin based softmax training objectives and several pooling functions.
### Easy hands-on
We have included pretrained models in the toolkit to assist users in quickly verifying results on relevant datasets. However, we would like to emphasize that **we DO NOT recommend users to solely submit results based on the provided single systems**. We encourage users to explore different methods of combining systems, either among the models we provide or with ones trained by themselves.
We provide the python binding for wespeaker for users to quickly try the pretrained models, further details could be found on the project webpage [https://github.com/wentet-e2e/wespeaker/tree/master/runtime/binding/python](https://github.com/wentet-e2e/wespeaker/tree/master/runtime/binding/python)
With the wespeakeruntime package installed, you can easily extract embeddings from WAV files specified in the wav.scp file and save them into embed.ark using the following code:
```
import wespeakerruntime as wespeaker

# Kaldi-style list of input WAV files and the output embedding archive.
wav_scp_path = "path/to/wav.scp"
embed_ark_path = "embed.ark"

# Load a pretrained speaker model for the selected language.
speaker = wespeaker.Speaker(lang='chs')
speaker.extract_embedding_kaldiio(
    wav_scp_path, embed_ark_path
)
```
Moreover, we released several pretrained models, as depicted in Table 2, both in the PyTorch ".pt" format and the runtime ".onnx" format; check [https://github.com/wenet-e2e/wespeaker/blob/master/docs/pretrained.md](https://github.com/wenet-e2e/wespeaker/blob/master/docs/pretrained.md) for details on how to use them.
## 3 Results
### Track 1 & 2
The results on the VoxCeleb1 evaluation dataset and the VoxSRC 2023 development set are presented in Table 1. The models employed for these evaluations are specified in Table 2.
### Track 3
There are various technology roadmaps for unsupervised domain adaptation, and we only provide the results of the VoxCeleb pretrained model in Table 3.
### Track 4
We used the open-source pyannote [20] toolkit as our Voice Activity Detection (VAD) system 2. The ResNet34_LM model was adopted as the speaker embedding extractor. For speaker clustering, we implemented the spectral clustering algorithm and adapted it specifically for the diarization task; a schematic sketch is given below. The results on the VoxConverse dev and test sets are shown in Table 4.
Footnote 2: This is different from the silero VAD used in [5]
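For readers new to the diarization track, the following is a minimal, schematic sketch of clustering per-segment speaker embeddings with spectral clustering. It is not the toolkit's internal implementation; the function name, array shapes, and the assumption that the number of speakers is known are placeholders.

```
import numpy as np
from sklearn.cluster import SpectralClustering

def diarize_segments(embeddings, num_speakers):
    """Assign a speaker label to each VAD segment.

    embeddings: (num_segments, dim) array of per-segment speaker embeddings,
    e.g. extracted with a pretrained ResNet34_LM model on pyannote VAD segments.
    """
    # Cosine-similarity affinity between segments, clipped to stay non-negative.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    affinity = np.clip(normed @ normed.T, 0.0, 1.0)

    clusterer = SpectralClustering(
        n_clusters=num_speakers,
        affinity="precomputed",
        assign_labels="kmeans",
        random_state=0,
    )
    return clusterer.fit_predict(affinity)

# Placeholder usage: 200 segments with 256-dimensional embeddings, 3 speakers.
labels = diarize_segments(np.random.randn(200, 256), num_speakers=3)
print(labels[:10])
```

In practice the number of speakers is usually not known in advance and is estimated before clustering, e.g. from the eigenvalue gap of the affinity matrix.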
## 4 Suggestions for Performance Improvement
Our primary objective is to offer a robust initial model that serves as a strong starting point for further improvement. We aim to provide researchers with a solid foundation from which they can develop and enhance new algorithms. By supplying a sufficiently good initial model, we aspire to facilitate the development of novel methodologies within the research community. We did not specifically optimize for Tracks 2, 3, and 4. Instead, we would like to offer some suggestions on several potential directions to work on:
### Track 2
* Increase Data Volume: Expand the training dataset by adding more data.
* Explore Large Pretrained Models [21]: Consider utilizing large pretrained models like WavLM [22] to leverage their extensive knowledge learned from vast amounts of audio data.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Test set & MISS(\%) & FA(\%) & SC(\%) & DER(\%) \\ \hline VoxConverse dev & 2.7 & 0.2 & 1.8 & 4.8 \\ \hline VoxConverse test & 3.2 & 0.7 & 3.0 & 7.0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results on the VoxConverse dev and test sets
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Datasets & Languages & Pretrained model \\ \hline VoxCeleb & EN & CAM++ / CAM++.LM \\ \hline VoxCeleb & EN & ResNet34 / ResNet34\_LM \\ \hline VoxCeleb & EN & ResNet152\_LM \\ \hline VoxCeleb & EN & ResNet221\_LM \\ \hline VoxCeleb & EN & ResNet293\_LM \\ \hline \hline \end{tabular}
\end{table}
Table 2: Pretrained models provided
\begin{table}
\begin{tabular}{c c c c} \hline \hline Architecture & Mean Normalization & EER(\%) & minDCF \\ \hline \multirow{2}{*}{ResNet34} & N & 14.570 & 0.617 \\ \cline{2-4} & Y & 11.395 & 0.594 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on the validation set of Track3. The \(p_{target}\) value is set to \(0.01\).
* Pretrained ASR Model Initialization: Phoneme information has been proven to be beneficial for building speaker verification systems [23]. Consider initializing your speaker embedding model with pretrained Automatic Speech Recognition (ASR) models. Several papers presented during ICASSP 2023 verified the effectiveness [24, 25].
* Hard Mining Strategy: Finding confused speakers and adding an extra inter-topK penalty on them is an effective way to improve performance in challenges [26, 27, 28]. Some of these methods have already been supported in Wespeaker4. Footnote 4: [https://github.com/wenet-e2e/wespeaker/pull/115](https://github.com/wenet-e2e/wespeaker/pull/115)
### Track 3
* Distribution Alignment: Employ adversarial training or other strategies to align the distributions of the source and target domains.
* Pseudo Label Learning: Utilize clustering algorithms or other methods to assign pseudo labels to unlabeled data from the target domain. It is important to note that these pseudo labels may contain noise, and exploring techniques [28, 29, 30] for training robust systems with noisy labels is a crucial topic.
* Unsupervised PLDA Adaptation: Building upon the implemented PLDA codes, you can incorporate PLDA adaptation mechanisms, such as the Kaldi version [2] and the CORAL series [17, 18] (a schematic CORAL-style transform is sketched below), to further enhance performance5. Footnote 5: A good reference on PLDA adaptation can be found in [31]
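To illustrate the flavor of such statistics-based adaptation, here is a minimal CORAL-style sketch that recolors source-domain embeddings with target-domain second-order statistics. It is a generic illustration rather than the exact recipe of [17, 18]; the embedding arrays and the regularization value are placeholder assumptions.

```
import numpy as np

def coral_adapt(source_emb, target_emb, eps=1.0):
    """Align source embeddings to the target domain (CORAL-style).

    source_emb: (n_src, dim) labelled source-domain embeddings (e.g. VoxCeleb).
    target_emb: (n_tgt, dim) unlabelled target-domain embeddings (e.g. CnCeleb).
    """
    def mat_power(cov, power):
        # Symmetric matrix power via eigendecomposition.
        w, v = np.linalg.eigh(cov)
        return (v * np.power(np.maximum(w, 1e-12), power)) @ v.T

    # Regularized covariances of the two domains.
    c_s = np.cov(source_emb, rowvar=False) + eps * np.eye(source_emb.shape[1])
    c_t = np.cov(target_emb, rowvar=False) + eps * np.eye(target_emb.shape[1])

    # Whiten with the source covariance, then recolor with the target covariance.
    return source_emb @ mat_power(c_s, -0.5) @ mat_power(c_t, 0.5)
```

The adapted source embeddings can then be used, for example, to retrain or adapt the PLDA back-end before scoring target-domain trials.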
### Track 4
* VAD Tuning. Currently, the errors caused by VAD are still quite high; improving the VAD might be a good choice.
* We only include the basic clustering algorithms here; you can try more algorithms and reclustering methods, such as VBx [32, 33].
### Results on the evaluation set
We submitted the best single system (ResNet293) to the CodaLab evaluation server; check the following links for the numbers and rankings.
* Track 1 [https://zeus.robots.ox.ac.uk/competitions/competitions/17#results](https://zeus.robots.ox.ac.uk/competitions/competitions/17#results)
* Track 2: [https://zeus.robots.ox.ac.uk/competitions/competitions/16#results](https://zeus.robots.ox.ac.uk/competitions/competitions/16#results)
* Track 3: [https://zeus.robots.ox.ac.uk/competitions/competitions/14#results](https://zeus.robots.ox.ac.uk/competitions/competitions/14#results)
* Track 4: [https://zeus.robots.ox.ac.uk/competitions/competitions/18#results](https://zeus.robots.ox.ac.uk/competitions/competitions/18#results)
## 5 The Story
The VoxCeleb dataset is the largest open-source, high-quality dataset for speaker recognition, and the wespeaker team has a long history of supporting the VoxCeleb dataset and VoxSRC challenges; the core members have achieved top rankings in previous VoxSRC competitions [7, 34, 35]6. Footnote 6: VoxSRC2019: 1st place, VoxSRC2020: 2nd place, VoxSRC2022: 3rd place
We have observed that there is often a disparity between the results reported in current research papers and the performance achieved in system reports for challenges, even when the training and evaluation data are the same. In order to provide a reliable starting point for researchers, we initiated wespeaker, aimed at delivering a reliable baseline system and a user-friendly toolkit. Moreover, contributors from the WeNet open-source community helped with the efficient data management techniques that enable scaling to industrial-sized datasets, as well as deployment codes for rapid prototyping in production environments.
Knowing that this would be the final VoxSRC challenge, the Wespeaker team is eager to contribute and support the event by providing an easy-to-use toolkit and baseline systems. We hope more participants can enjoy the challenge and focus on algorithm improvements, without struggling with basic experimental setups.
## 6 Acknowledgement
We would like to extend our sincere appreciation to the VoxSRC challenge organizers for their invaluable contribution in open-sourcing this remarkable dataset and organizing such meaningful challenges. We would also like to express our gratitude to the WeNet open-source community, whose dedication and collective efforts have played a pivotal role in the success and growth of wespeaker. Enjoy the challenge, and you are welcome to contribute.
|
2302.07092 | The Gravitational-Wave Signature of Core-Collapse Supernovae | We calculate the gravitational-wave (GW) signatures of detailed 3D
core-collapse supernova simulations spanning a range of massive stars. Most of
the simulations are carried out to times late enough to capture more than 95%
of the total GW emission. We find that the f/g-mode and f-mode of proto-neutron
star oscillations carry away most of the GW power. The f-mode frequency
inexorably rises as the proto-neutron star (PNS) core shrinks. We demonstrate
that the GW emission is excited mostly by accretion plumes onto the PNS that
energize modal oscillations and also high-frequency (``haze") emission
correlated with the phase of violent accretion. The duration of the major phase
of emission varies with exploding progenitor and there is a strong correlation
between the total GW energy radiated and the compactness of the progenitor.
Moreover, the total GW emissions vary by as much as three orders of magnitude
from star to star. For black-hole formation, the GW signal tapers off slowly
and does not manifest the haze seen for the exploding models. For such failed
models, we also witness the emergence of a spiral shock motion that modulates
the GW emission at a frequency near $\sim$100 Hertz that slowly increases as
the stalled shock sinks. We find significant angular anisotropy of both the
high- and low-frequency (memory) GW emissions, though the latter have very
little power. | David Vartanyan, Adam Burrows, Tianshu Wang, Matthew S. B. Coleman, Christopher J. White | 2023-02-06T19:00:01Z | http://arxiv.org/abs/2302.07092v2 | # The Gravitational-Wave Signature of Core-Collapse Supernovae
###### Abstract
We calculate the gravitational-wave (GW) signatures of detailed 3D core-collapse supernova simulations spanning a range of massive stars. Most of the simulations are carried out to times late enough to capture more than 95% of the total GW emission. We find that the f/g-mode and f-mode of proto-neutron star oscillations carry away most of the GW power. The f-mode frequency inexorably rises as the proto-neutron star (PNS) core shrinks. We demonstrate that the GW emission is excited mostly by accretion plumes onto the PNS that energize modal oscillations and also high-frequency ("haze") emission correlated with the phase of violent accretion. The duration of the major phase of emission varies with exploding progenitor and there is a strong correlation between the total GW energy radiated and the compactness of the progenitor. Moreover, the total GW emissions vary by as much as three orders of magnitude from star to star. For black-hole formation, the GW signal tapers off slowly and does not manifest the haze seen for the exploding models. For such failed models, we also witness the emergence of a spiral shock motion that modulates the GW emission at a frequency near \(\sim\)100 Hertz that slowly increases as the stalled shock sinks. We find significant angular anisotropy of both the high- and low-frequency (memory) GW emissions, though the latter have very little power.
The theory of core-collapse supernova (CCSN) explosions has been developed over the last six decades and is now a mature field at the interface of gravitational, particle, nuclear, statistical, and numerical physics. The majority of explosions are thought to be driven by neutrino heating behind a shock wave formed upon the collision of the rebounding inner core with the infalling mantle of the Chandrasekhar mass birthed in the center of stars more massive than \(\sim\)8 M\({}_{\odot}\)[1, 2, 3, 4, 5]. After implosion ensues, this inner white dwarf core, with a mass near \(\sim\) 1.5M\({}_{\odot}\) and a radius of only a few thousand kilometers, requires only hundreds of milliseconds of implosion to achieve a central density above that of the atomic nucleus. At this point, the inner core stiffens, rebounds, and collides with the outer core, thereby generating a shock wave that should be the supernova explosion in its infancy. However, detailed 3D simulations [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] and physical understanding dictate that this shock generally stalls into accretion, but is often reenergized into explosion by heating via the neutrinos emerging from the hot, dense, accreting proto-neutron star (PNS), aided by the effects of vigorous neutrino-driven turbulent convection behind that shock [23, 24, 25, 26, 15, 22]. The delay to explosion can last another few hundred milliseconds, after which the explosion is driven to an asymptotic state over a period of \(\sim\)a few to \(\sim\)10 seconds. An extended period of neutrino heating seems required [27, 15, 5]. The shock wave then takes a minute to a day to emerge from the massive star, and this emergence inaugurates the brilliant electromagnetic display that is the supernova. The outcomes and timescales depend upon the progenitor core density and thermal structure at the time of collapse [28, 29], which itself is an important function of progenitor mass, metallicity, and rotational profile. If a black hole eventually forms, the core must still go through the PNS stage, and it is still possible to launch an explosion, even when a black hole is the residue. There is never a direct collapse to a black hole. A small fraction of supernovae (hypernovae?) may be driven by magnetic jets from the PNS if the cores are rotating at millisecond periods. Otherwise, magnetic effects are generally subdominant, but of persistent interest in the context of pulsar and magnetar birth [30, 31, 32, 33, 34, 35, 36].
Though this scenario is buttressed by extensive simulation and theory, and most 3D (and 2D) models now explode without artifice [8, 12, 13, 15, 17, 20, 21, 22, 26, 37, 38], direct verification of the details and the timeline articulated above are difficult to come by. However, the neutrino and gravitational-wave signatures of this dynamical event would allow one to follow the theoretically expected sequence of events in real time. The detection of 19 neutrinos from SN1987A was a landmark [39, 40], but little was learned, other than that a copious burst of neutrinos, whose properties are roughly in line with theory, attends core-collapse supernovae and the birth of a compact object. The real-time witnessing of the events described as they unfold is the promise of gravitational-wave (GW) detection from a supernova explosion. The detection of these GWs and the simultaneous detection of the neutrinos overlapping in time is the holy grail of the discipline.
The CCSN gravitational wave signal [41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54] from bounce through supernova explosion and into the late-time (many-second) proto-neutron star (PNS) [55, 32, 56] cooling phase (or black hole formation) can be decomposed into various stages with characteristic features, frequency spectra, strains and polarizations [57]. GWs are generated immediately around bounce in core
collapse supernovae by time-dependent rotational flattening (if initially rotating, see e.g. [58]) and prompt post-shock overturning convection (lasting tens of milliseconds) [59, 24], then during a low phase (lasting perhaps \(\sim\)50\(-\)200 milliseconds) during which post-shock neutrino-driven turbulence builds, followed by vigorous accretion-plume-energergized PNS modal oscillations (predominantly a mixed \(\mathrm{f/g}\)-mode early, then a pure f-mode later, see SSII.1) [41, 51]. If an explosion ensues, these components are accompanied by low-frequency (\(\sim\)1\(-\)25 Hertz (Hz)) gravitational-wave "memory" (see SSII.6) due to asymmetric emission of neutrinos and aspherical explosive mass motions [60, 61, 62, 63]. The duration of the more quiescent phase between prompt overturning convection and vigorous turbulence depends upon the seed perturbations in the progenitor core [15], of which the duration of this phase is diagnostic. Relevant for the GW signature is the equation of state (EOS) [64, 65, 59, 66], the rate of neutronization and neutrino cooling of the core [67, 56], and the stellar core's initial angular momentum and mass density distributions. Measurable GW signatures of rotation require particular progenitors that rotate fast, while all other phases/phenomena are expected to be operative in any core-collapse context, except matter memory, which requires an asymmetric explosion. The rotational signature is primarily through its dependence upon the ratio of rotational to gravitational energy (\(T/W\)) [44, 50, 18], and, for fast rotating cores, to the degree of initial differential rotation. Black hole formation will approximately recapitulate the sequence followed by neutron star formation, except that when the black hole forms long after collapse the GW signal ceases abruptly [68] and that due to the shrinking stalled shock radius a spiral mode is excited that quasi-periodically modulates its GW signal (see SSII.2). The termination of the neutrino emission is correspondingly abrupt. Though these phases are generic, their duration, strain magnitudes, and degree of stochastic variation and episodic bursting (due in part to episodic accretion and fallback) is a function of massive star progenitor density structure, degree of rotation, and the chaoticity of the turbulence.
Though neutrino-driven convection generally overwhelms manifestations of the "SASI" (Standing Accretion Shock Instability [69, 70]), the SASI is sometimes discernible as a near 100\(-\)200 Hz subdominant component. However, when the explosion aborts and the average shock radius sinks deeper below \(\sim\)100 km, the so-called "spiral SASI" [71, 46, 72] emerges (SSII.2). This spiral rotational mode has a frequency of \(\sim\)100-200 Hz, is interestingly polarized, and if present clearly modulates both the neutrino and GW signatures until the general-relativistic instability that leads to black hole formation.
Therefore, each phase of a supernova has a range of characteristic signatures in gravitational waves (GWs) that can provide diagnostic constraints on the evolution and physical parameters of a CCSN and on the dynamics of the nascent PNS. Core bounce and rotation, the excitation of core oscillatory modes, neutrino-driven convection, explosion onset, explosion asymmetries, the magnitude and geometry of mass accretion, and black hole formation all have unique signatures that, if measured, would speak volumes about the supernova phenomenon in real time.
For this paper, we have run a broad set of 3D simulations (11 progenitors in total, with progenitors ranging from 9 to 23 \(\mathrm{M}_{\odot}\)) out to the late post-bounce times (up to \(\sim\)6 seconds post-bounce). These late-time 3D simulations illustrate the sustained multi-second gravitational wave signal and are the longest 3D core-collapse simulations with sophisticated neutrino transport performed to date. Simulations out to one second or less do not capture the entire time evolution of the gravitational wave signal. We find that the gravitational wave signal persists out to late times for all models, is strongly correlated with the turbulence interior to the shock (see SSII.7), and is not correlated with proto-neutron star convection (SSII.7.1). We see a memory signature at \(\leq\)25 Hz associated with large-scale ejecta for our exploding models, and a spiral SASI signature at \(\sim\)100 Hz for the non-exploding models. All our models show an early prompt-convection phase at \(\sim\)50 milliseconds (ms) associated with a negative entropy gradient interior to the stalled shock front at \(\sim\)150 km. For most models, this is followed by a quiescent phase of duration \(\leq\)50 ms, after which the strain grows in association with turbulent motion interior to the stalled shock. The 23-M\(\odot\) model shows an interesting exception, as the strain illustrates another feature lasting from \(\sim\)175 to 350 ms, coincident with the shock receding before reviving. As the accretion rate and turbulence diminish, so too does the strain. However, at late times, we see a consistent offset in the strain associated with matter and neutrino memory in the exploding models (SSII.6).
We now proceed to a more detailed discussion of our new results that span a broad progenitor mass range, capture for the first time and for most models the entire gravitational wave signal of CCSNe, and do not suffer from Nyquist sampling problems. In this paper, we focus on initially non-rotating progenitors, whose general behavior should also encompass models for slowly rotating initial cores. We note that even initially non-rotating 3D CCSN models experience core spin up due to stochastic fallback [73], and this effect is a normal by product of sophisticated 3D simulations. In SSI, we provide information on the simulation suite and its characteristics and briefly describe the various models' hydrodynamic developments. Then, in SSII we present our comprehensive set of findings concerning the complete gravitational-wave signature of initially non-rotating core-collapse supernovae, partitioned into subsections that each focus on a different aspect of this signature and its import. In subsection SSII.1, we lay out the basic signal behaviors as a function of progenitor. This section contains our major results and then is followed in SSII.2 by a digression into the GW signature of black hole formation. In SSII.3, we present an interesting finding concerning the dependence of the total radiated GW energy on compactness
[74] and in SSII.4 we note the avoided crossing that is universally manifest in all CCSN GW spectra and seems a consequence of the presence of inner proto-neutron star (PNS) convection and its growth with time. In SSII.5 we discuss the solid-angle dependence of the matter-sourced GW emission, both at high and low (matter "memory") frequencies, and in SSII.6 we present our results concerning neutrino memory at low frequencies. Then, we transition in SSII.7 to a discussion of the predominant excitation mechanism. Finally, in SSIII we recapitulate our basic findings and wrap up with some observations.
## I Setup and hydrodynamics summary
We present in this paper a theoretical study of the gravitational-wave emission of eleven core-collapse supernovae (CCSNe) progenitors, from 9 to 23 M\({}_{\odot}\), evolved in three dimensions using the radiation-hydrodynamic code Fornax[75]. To calculate the quadrupole tensor and the gravitational-wave strains we employ the formalisms of references [76] and [77] (see also SSA) and we dump these data at high cadences near the LIGO sampling rate (Table 1). The progenitor models were selected from [78] for the 14-, 15.01-, and 23-M\({}_{\odot}\) models and from [79] for models between 9 and 12.25 M\({}_{\odot}\). The radial extent of the models spans 20,000 kilometers (km) to 100,000 km, generally increasing with progenitor mass. All the models (except the 9-M\({}_{\odot}\) model on Blue Waters, which had 648 radial zones) were run with 1024\(\times\)128\(\times\)256 cells in radius, \(\theta\), and \(\phi\). We employ 12 neutrino energy groups for each of the \(\nu_{e}\), \(\bar{\nu}_{e}\), and "\(\nu_{\mu}\)"s followed (see [73; 21]) and the SFHo equation of state [80]. The progenitor models are non-rotating, though some degree of rotation is naturally induced due to fallback [73]. These simulations include two of the longest 3D CCSNe simulations run to date, a 11-M\({}_{\odot}\) model evolved past 4.5 seconds post-bounce, and a 23-M\({}_{\odot}\) model evolved to \(\sim\)6.2 seconds post-bounce. All of our models explode except the 12.25- and 14-M\({}_{\odot}\) progenitors, and all models besides the 14-M\({}_{\odot}\) are evolved beyond one second post-bounce. We include four simulations of the 9-M\({}_{\odot}\) model on different high-performance clusters (Frontera, Theta, and Blue Waters) at various stages of code evolution. The low-mass 9-M\({}_{\odot}\) models asymptote early on in diagnostic quantities such as explosion energy and residual mass, and one iteration has been evolved past two seconds post-bounce. Our models, run times, and explosion outcome are summarized in Table 1. Several of the models have been published before, including three of the four 9-M\({}_{\odot}\) iterations (the fourth, the longest simulation, is new) in references [81], [21], and [73].
In Figure 1, we plot the density profiles against enclosed mass for all eleven models studied here. Note the association of the silicon-oxygen (Si/O) interface density drop (see, e.g. [5; 14; 21; 28; 29; 38; 83; 82]) with the onset of successful shock revival. The 14-M\({}_{\odot}\) and 12.25-M\({}_{\odot}\) progenitors lack such a strong interface and do not explode. Low-mass progenitors (e.g. 9-9.5 M\({}_{\odot}\)) have a steep density profile and explode easily (though still with the aid of turbulent convection). Models 11-, 15.01-, and 23-M\({}_{\odot}\) have Si/O interfaces successively further out, and explode successively later.
We show the angle-averaged shock radii at early and late-times in Figure 2. All models except the 14 and 12.25 M\({}_{\odot}\) models explode, with an approximate correlation between progenitor compactness [74] and explosion time. The two non-exploding models experience \(\sim\)10 ms oscillations in the shock radii due to a spiral SASI that manifests itself after \(\sim\)350 ms. We also see a longer secular timescale oscillation of \(\sim\)70 ms in the 12.25-M\({}_{\odot}\) and 14-M\({}_{\odot}\) black-hole formers. We summarize the eleven models in Table 1. The more massive 15.01- and 23-M\({}_{\odot}\) progenitors explode later, with the latter showing shock revival only after \(\sim\)0.5 seconds post-bounce. A later shock revival time, again \(\sim\)0.5 s, was also seen for the 25-M\({}_{\odot}\) progenitor in [21]. After the first \(\sim\)500 ms, the shock velocities settle into approximately asymptotic values that range from 7000 to 16000 km s\({}^{-1}\), inversely correlated crudely with the progenitor mass (see also [84]).
## II Results
### General Gravitational-Wave Signal Systematics of Core-Collapse Supernovae
As stated, we highlight for this study of the GW signatures of core-collapse supernovae eleven of our recent initially non-rotating 3D Fornax simulations. Care has been taken to calculate the quadrupole tensor with a high enough cadence to avoid Nyquist sampling problems, and we have been able to simulate to late enough times to capture what is effectively the entire GW signal after bounce for a large subset of the models. Table 1 provides the duration of each simulation and the minimum Nyquist frequency achieved during each run. We will discuss both matter and neutrino contributions to gravitational wave energy, and the relevant equations are summarized in Appendix A.
In Figure 3, we plot the plus (black) and cross (blue) polarizations in the x-direction of our computational grid of the strain multiplied by the distance to the source. Other orientations yield qualitatively similar numbers for the higher frequency components that dominate the GW power. However, there is a large variation at low frequencies (\(\leq\)25 Hz) of the matter and neutrino memories with solid angle (see SSII.6). Note that the x- and y-axes cover different ranges for each model. A red star on the panels indicates the rough time of explosion, defined loosely. As Figure 3 demonstrates, all the exploding models transition through similar phases. During the first \(\sim\)50 ms there is a burst of emission due to prompt overturning convection driven by the negative entropy gradient produced behind the shock wave as it stalls. The detailed time behavior of this overturn will depend on the initial
accreted perturbations, which will set the number of e-folds to the non-linear phase. However, the basic behavior and timescales are broadly similar. Figure 4 focuses on this early first 0.25 seconds. For the lower-mass progenitors, explosion (the red star) ensues towards the end, or not long after, the prompt signal, and this is followed by the early growth of the second phase. For the more massive progenitors (such as the 23-M\({}_{\odot}\) model), the onset of explosion can be much later. As Figure 3 shows, the growth phase of the GW emission continues beyond what is shown in Figure 4 to a strong peak. That peak phase is powered by the accretion of the infalling plumes during explosion. A core aspect of 3D core-collapse explosions is the breaking of spherical symmetry that allows simultaneous accretion in one direction and explosion in another [21; 5; 72; 28]. For exploding models, the infalling plumes that strike the surface of the PNS can achieve supersonic speeds before impact. For the black hole formers (the 14-M\({}_{\odot}\) and 12.25 M\({}_{\odot}\) models here), the accretion is maintained, but impinges upon the PNS core subsonically. This will have interesting consequences we discuss in SSII.2.
Figure 3 shows that the lower-mass models have smaller strains and that the phase of high strain lasts for a shorter time. For the 9-M\({}_{\odot}\) through 9.5-M\({}_{\odot}\) models, much of the GW emission subsides by \(\sim\)0.25-0.5 seconds, while the high phase lasts \(\sim\)1.2 seconds for the 23-M\({}_{\odot}\) model and continues beyond \(\sim\)1.0 and \(\sim\)1.5 seconds for the 11-M\({}_{\odot}\) and 15.01-M\({}_{\odot}\) models, respectively. These differences reflect the differences in the initial density profiles (Figure 1) and the compactness (see also Figure 11).
After this vigorous phase, the bounding of the accretion plumes subsides, but the signal continues at a low amplitude. Though as much as \(\sim\)95% of the GW energy emission has already occurred, the f-mode continues to the latest times we have simulated as a low hum of progressively increasing frequency1. Hence, we see universally for the exploding models a transition from a high-amplitude, lower-frequency stage (\(\leq\)0.3-1.5 seconds, depending upon the progenitor) to a lower-amplitude high-frequency stage (\(\geq\)1.5 seconds). As Figure 3 indicates, for the exploding models a very-low frequency memory is superposed that represents a permanent metric strain. There is no such matter memory signal for the black-hole formers (SSII.6), but the accretion phase continues for them to very late times, abating only slowly as the mantle continues to accrete the mass of the outer mantle until the general-relativistic instability that leads to a black hole ensues.
Footnote 1: Sonifications of the signals are available upon request.
It has too often been thought that strain signals such as are depicted in Figures 3 and 4 are too noisy to be templated cleanly, and this to a degree is true. There is a lot of stochasticity due to chaotic turbulence. However, the frequency content of these signals tells a different story. In Figures 5 and 6, we plot spectrograms of the gravitational wave power versus time after bounce for our 3D models and see distinct structures. The most obvious feature is the f/g-mode [51; 58; 59; 60; 61] from \(\sim\)400 Hz early, rising to \(\sim\)1000-3000 Hz after \(\sim\)0.8 seconds after bounce. It is in this band that most of the emitted GW power of supernovae resides (see Table 1). This is a natural consequence of the fact that the peak in the eigenfunction of the f-mode is in the PNS periphery where the collisions of the accreta with the core are occurring. Hence, the excitation and the fundamental f-mode eigenfunction overlap nearly optimally. Associated with this feature in the earlier phases is a dark band near \(\sim\)1000-1300 Hz. This has been interpreted as a manifestation of an avoided crossing [51] between a trapped \(\ell=2\) g-mode and the \(\ell=2\) f-mode. All the spectrograms for all our models show the same modal interaction, though at slightly different frequencies. For instance, at two seconds after bounce, the f-mode frequency is \(\sim\)1.75 kHz, 1.8 kHz, 2 kHz, and 2.5 kHz for the 9-, 11-, 12.25-, and 23-M\({}_{\odot}\) models, respectively, reflecting the variation in model PNS masses. Early on power is in the lower frequency component (mostly a trapped g-mode, mixed with the f-mode), and then it jumps to the higher frequency component (mostly the f-mode). This modal repulsion, or "bumping," is a common feature in asteroseismology [88] and seems generic in core-collapse seismology.
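As a concrete illustration of how such time-frequency maps are constructed, the short sketch below applies a standard short-time Fourier analysis (via `scipy.signal.spectrogram`) to a toy, uniformly sampled strain series whose frequency rises with time. The sampling rate, window length, overlap, and the synthetic chirp are illustrative assumptions, not the settings or data used for the figures in this paper.

```python
import numpy as np
from scipy.signal import spectrogram

# Toy, uniformly sampled strain series standing in for D*h_+(t); the
# instantaneous frequency rises from ~400 Hz to ~1600 Hz over two seconds,
# loosely mimicking the ascending f-mode track.
fs = 16384.0                                   # assumed sampling rate [Hz]
t = np.arange(0.0, 2.0, 1.0 / fs)              # post-bounce time [s]
h = 5.0 * np.cos(2.0 * np.pi * (400.0 + 300.0 * t) * t)   # [cm]

# Short-time Fourier analysis; nperseg sets the time/frequency trade-off.
freq, time, Sxx = spectrogram(h, fs=fs, nperseg=2048, noverlap=1536,
                              window="hann", scaling="density")

# Sxx[i, j] is the one-sided power spectral density at freq[i] and time[j];
# plotting log10(Sxx) over the (time, freq) plane yields a spectrogram of the
# kind shown in Figures 5 and 6.
print(freq.shape, time.shape, Sxx.shape)
```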
All the models show the early prompt convection phase, with power from \(\sim\)300 to \(\sim\)2000 Hz. After this, all the models manifest a "haze" of emission that extends above the f/g-mode to frequencies up to \(\sim\)2000 to \(\sim\)5000 Hz. The duration of this haze is from \(\sim\)0.25 to \(\sim\)1.5 seconds and tracks the phase of vigorous accretion (see SSII.7). Individual PNS pulsation modes, likely the \(\ell=2;n=1\) p-mode, can also be seen superposed in this "haze" and extending beyond it to later times. This is particularly the case for the 12.25-, 15.01-, and 23-M\({}_{\odot}\) models. It is only for the models with the most vigorous accretion onto the core that this mode is clearly seen at later times.
The origin of this haze is still a bit unclear, though it is definitely excited by the pummeling accreta (SSII.7). The p-mode frequencies of the PNS for radial node numbers from 1 to 10 reside in this space and we could be seeing an overlapping and unresolved superposition of these modes. However, exploding models experience simultaneous explosion and accretion, the latter through infalling funnels that are few in number, can achieve supersonic speeds, and dance over the PNS surface. It is likely that the time-changing quadrupole moment of these funnels as they impinge upon the PNS surface is the source of this power. The timescales of their deceleration are about right, those timescales have a spread which could translate into a broad feature, and at any particular time they represent a low angular order perturbation. Importantly, however, we don't see this haze for the black-hole formers 12.25-M\({}_{\odot}\) and 14-M\({}_{\odot}\). It is only for the exploding models that there is a breaking of spherical symmetry that
results in simultaneous explosion and supersonic funnel infall. Figure 7 portrays two snapshots of the Mach-number distribution of the inner 100 km of the residue of the 23-M\({}_{\odot}\) model, clearly showing such funnel collisions. The accretion onto the proto-black-hole is always subsonic. Though the haze constitutes at most only a few 10's of percent of the total emission, its origin is clearly an interesting topic for future scrutiny.
In Figure 8, we plot spectrograms for two representative models of the effective strain versus time after bounce. The effective strain is defined as the average of both strain polarizations,
\[h_{\rm eff}=0.5\left(h_{+}+h_{\times}\right)\,. \tag{1}\]
This figure provides a focus on the low-frequency regions. We see for the 14-M\({}_{\odot}\) black-hole former some power near \(\sim\)100 Hz, which we identify with the spiral-SASI [71] (see SSII.2). Such a mode emerges in our model set only for the proto-black-holes (see SSII.2). The red blot in the lower left-hand corner of the left panel may be a signature of the traditional SASI [69]. We generally see little evidence in the GW signature of this SASI, but always see the spiral SASI when the explosion is aborted and the stalled shock recedes. In the right panel of Figure 8, the red band is a signature of the matter memory associated with the asymmetrical explosion of the 23-M\({}_{\odot}\) model. Whether the traditional SASI is seen to the left of this band is unclear, but the early recession of its shock before explosion might be conducive to its brief appearance.
### Signatures of Failed Explosions - The Prelude to Black Hole Formation
If a black hole forms by late-time fallback after many, many seconds to hours after the launching of a stalled shock that seemed to herald a successful explosion (but didn't), then the GW signal will be similar to those seen in the context of successful explosions. If, however, the stalled shock is never "reignited," it will slowly settle to progressively smaller radii and the mantle of the progenitor core will continue to accrete through it onto the PNS. Eventually, the fattening PNS will experience the general-relativistic instability to a black hole, at which time the GW emission abruptly terminates within less than a millisecond. This latter phase could take many seconds to many minutes to reach2. The GW signature of this modality of black hole formation, representatives of which are our 14- and 12.25-M\({}_{\odot}\) models, has particular diagnostic features that set these evolutions apart. First, the breaking of symmetry that results in the simultaneous accretion of lower-entropy plumes with the explosion of high-entropy bubbles does not occur. The result is that for this channel of black hole formation the infalling plumes do not dance over the PNS, do have high Mach numbers, and don't excite the higher-frequency "haze" that we have identified for the exploding models seen in the associated spectrograms (Figures 5 and 6). We do see in Figure 6 power not only in the dominant f-mode, but weakly in an overtone p-mode as well. However, as shown in Table 1 for the 12.25 M\({}_{\odot}\) black-hole former, the fraction of the total GW energy radiated in the f-mode is correspondingly higher, as much as \(\sim\)95% of the total, than for the exploding models that also generate power in the haze.
Footnote 2: For the 14-M\({}_{\odot}\) model, we estimate a black hole formation time (using a maximum baryon mass of 2.477 M\({}_{\odot}\) from Steiner et al. [80] at the onset of collapse) of \(\sim\)500 seconds.
This channel of black hole formation also experiences the emergence of what we identify as the spiral-SASI [71]. This is seen in Figure 2 in the clear \(\sim\)100-200 Hz periodicity of the late-time mean shock position of both the 14- and 12.25-M\({}_{\odot}\) models after \(\sim\)300\(-\)400 milliseconds after bounce and very clearly in the spectrogram of the shock dipole depicted in Figure 9. Generally, this feature emerges after the mean stalled shock radius sinks below \(\sim\)100 km and is not seen in exploding models. The timescale of the periodicity scales roughly with \(\Delta R_{s}/c_{s}+\Delta R_{s}/v_{acc}\), where \(R_{s}\), \(c_{s}\), and \(v_{acc}\) are the shock radius, speed of sound, and post-shock accretion speed [70].
Another feature seen most clearly in Figure 2 in the context of these black-hole formers is a much longer-timescale modulation of the mean shock position with a period near \(\sim\)70 ms. Not seen clearly in the GW spectrograms or strain plots (though there may be a hint in the strain plot for the 14-M\({}_{\odot}\) model), this oscillation may be due to a global pulsation mode associated with the neutrino heating, cooling, and transport of the mantle, but this speculation remains to be verified. Nevertheless, this feature has never before been identified in studies of 3D CCSNe and is interesting in itself. Finally, as Figure 3 suggests for the 14- and 12.25-M\({}_{\odot}\) models, since those cores that form black holes by this channel do not explode, they are expected to have no net low-frequency matter memory component.
### Total Gravitational Wave Energy Radiated
In Figure 10, we plot versus time the integrated radiated gravitational wave energy due to matter motions. Model 23-M\({}_{\odot}\) radiates the most gravitational wave energy (\(\sim\)3.0\(\times\)10\({}^{46}\) erg, or \(\sim\)2\(\times\)10\({}^{-8}\) M\({}_{\odot}\) c\({}^{2}\) after \(\sim\)5 seconds), while the collection of 9-M\({}_{\odot}\) models radiates the least. There are a few important features of this plot. First, we see that we have captured what is basically the entire GW signal for many of the models (the 15.01-M\({}_{\odot}\), 14-M\({}_{\odot}\), and 12.25-M\({}_{\odot}\) model emissions are still climbing). Within \(\sim\)1.5 seconds, most exploding models have radiated \(\geq\)95% of the total energy to be radiated, and after \(\sim\)2 seconds they have radiated \(\geq\)98%. Table 1 provides the total energy radiated via the f/g-mode, as well
as the fraction of this total radiated in the f-mode after 1.5 seconds. Not shown in Table 1 is the fact that more than 95% of the total GW energy radiated after 1.5 seconds is via the f-mode. As indicated for the 12.25-M\({}_{\odot}\) model in Table 1 and Figure 10, due to continuing accretion the black hole formers radiate to later times than the exploding models, and this mostly in the f-mode.
Figure 10 shows that the various phases described in SSII.1 are recapitulated via stair steps until finally asymptoting. Moreover, the continuum of models highlighted in this paper demonstrates collectively that the radiated GW signal energies vary by as much as three orders of magnitude from the lowest-mass to the higher-mass models. This is a consequence of the differences in their initial density profiles (see Figure 1) and, more directly, of the resulting mass accretion histories. Even more directly, as Figure 11 demonstrates, there is a strong monotonic relation for exploding models between the total GW energy radiated and the compactness [74] of the initial progenitor "Chandrasekhar" core3. Though compactness does not correlate with explodability [28; 29; 5; 83; 21], it does seem to correlate with residual neutron star mass, radiated neutrino energy, and, as now indicated in Figure 11, the total gravitational-wave energy radiated. In fact, we derive a power-law with index \(\sim\)0.73 between the two.
Footnote 3: We define the compactness here as \(\xi_{M}=\frac{M/M_{\odot}}{R(M)/1000\,\mathrm{km}}\), where \(M=1.75\) M\({}_{\odot}\).
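The sketch below illustrates, under stated assumptions, how a compactness of this form can be evaluated from a progenitor's enclosed-mass/radius profile and how a power-law index like the \(\sim\)0.73 quoted above can be recovered from a straight-line fit in log-log space. The profile and energy arrays here are placeholders, not the actual model data.

```python
import numpy as np

def compactness(m_enclosed_msun, radius_km, m_ref=1.75):
    """xi_M = (M / Msun) / (R(M) / 1000 km), evaluated at enclosed mass m_ref."""
    # The enclosed-mass array is assumed to increase monotonically with radius.
    r_at_m = np.interp(m_ref, m_enclosed_msun, radius_km)
    return m_ref / (r_at_m / 1000.0)

# Placeholder values standing in for the exploding models: compactness and
# total radiated GW energy [erg].  Real numbers would come from the profiles
# and from integrations like those in Figure 10.
xi = np.array([0.02, 0.05, 0.10, 0.20, 0.40, 0.60])
e_gw = 3.0e46 * (xi / 0.60) ** 0.73

# Power-law fit E_GW = A * xi^p, done as a straight line in log-log space.
p, log_a = np.polyfit(np.log10(xi), np.log10(e_gw), 1)
print(f"fitted power-law index p ~ {p:.2f}")
```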
Finally, we note that the collection of 9-M\({}_{\odot}\) models don't behave exactly the same. This is due to the fact that these models were simulated with slightly different code variants (as we continued to update and upgrade Fornax); one (the 9a model) had small artificial imposed perturbations in the initial model and three different supercomputers were used. The natural chaos in the flow and in the simulations will pick up on any slight variations and amplify them, with the result that the evolution can be slightly different. Though the qualitative behavior of all four of these 9-M\({}_{\odot}\) models is the same, they exploded at slightly different times (see Figure 3), with the resulting different developments of their GW signals. As a consequence, the radiated energy varies by about a factor of two. In a crude sense, we might view this spread as an imperfect indicator of the likely spread in Nature due to the chaos of turbulence for the "same" progenitor, but we have certainly not demonstrated this.
### Avoided Crossing and Trapped g-mode
A distinctive feature seen clearly in the spectrograms for all the models (see Figures 5, 6) is a dark band near \(\sim\)1000\(-\)1300 Hz during the first \(\sim\)0.3\(-\)1.0 seconds of the post-bounce evolution. This is most likely due to an avoided crossing [88] of interfering \(\ell=2\) PNS pulsation modes that are coupled and mixed [51]. The best current thinking is that the interfering modes are a trapped g-mode and the \(\ell=2\) f-mode [86; 51], though much work remains to be done to determine the details and the nature of the couplings. Non-linear mode coupling may be involved. There is also evidence for some GW power in the "bumped" g-mode (that thereafter trends to lower frequencies), seen just after the mode repulsion in Figures 5 and 6, but most clearly in Figure 17 below. This is a qualitatively similar feature to that highlighted recently in [89]. Nevertheless, within \(\sim\)0.3 to \(\sim\)1.0 seconds (depending upon the progenitor) most power is clearly in the f-mode, where it persists thereafter.
G-modes, however, are generally at low frequencies below \(\sim\)500 Hz and don't contribute much to the GW signature. Moreover, the presence of lepton-gradient-driven PNS convection [55; 32] introduces a region in the PNS for which g-modes are evanescent and non-propagating. Figure 12 depicts the evolution of PNS convection for most of the models presented in this work. One sees clearly that the extent of PNS convection starts in a narrow shell, but grows wider with time. By \(\sim\)1.6 to \(\sim\)4.0 seconds PNS convection has grown to encompass the center and most of the residual PNS and will persist beyond the simulation times of this study [56]. During the early post-bounce phase, though most g-modes have frequencies too low to couple with the f- and p-modes, at an early stage before the region of PNS convection has grown too thick it is possible for a g-mode trapped mostly interior to PNS convection to couple with them. With time, the coupling will be broken by the evolving thickness of the convective shell; it is this growth that eventually severs the coupling with the outer regions where the impinging plumes are providing the excitation and that leads to the jump to the pure f-mode. However, much work remains to be done to fully demonstrate the details of this coupling and "bumping" transition. Nevertheless, the manifest presence of this avoided crossing in the GW spectrograms and in the GW signature of core-collapse universally is an interesting direct marker of the presence of PNS convection that deserves further study.
### The Angular Anisotropy of the Matter Gravitational-Wave Emissions
In Figure 13, we plot the matter strain for the 9b, 12.25-, and 23-M\({}_{\odot}\) models as a function of time after bounce to illustrate the anisotropy with various, arbitrarily chosen viewing angles. Note that significant anisotropy manifests at late times in the (low-frequency) memory component, which captures the large-scale asymmetry in the explosion ejecta. The non-exploding 12.25-M\({}_{\odot}\) model, by comparison, has virtually no anisotropy. The early high-frequency component is stochastic, whereas the low-frequency late-time memory shows secular time-evolution that does not average to zero and indicates a metric shift, reaching values of \(\sim\)5 cm for the various massive models (showing a general trend with the progenitor mass/explosion asymmetry). We find similar significant anisotropy in the (low-frequency) neutrino memory component (discussed in SSII.6). Importantly, however, when calculating the total inferred "isotropically-equivalent" radiated GW energy, which is dominated by the higher-frequency component in and near the LIGO band, as a function of angle we find that it varies by \(\sim\)10 to \(\sim\)15% around an angle-averaged mean. This implies that, though the higher-frequency emissions are indeed anisotropic, the integrated high-frequency signals are only weakly dependent on angle.
### The Neutrino Memory Component
In addition to the matter memory, asymmetries in the emission of neutrinos produce another low-frequency memory component, the neutrino memory [60; 61; 62; 63; 90; 91]. In Figure 14, we plot the gravitational wave strain due to anisotropic neutrino emission as a function of time after bounce for the models studied here. The neutrino strain is significantly larger in magnitude than the matter contribution, reaching over 1000 cm for the most massive progenitors. There is generally a hierarchy of strain amplitude with progenitor mass, reflecting the sustained turbulent accretion in more massive progenitors, which results in higher neutrino luminosities and generally more anisotropic explosions. The 11-M\({}_{\odot}\) model is an exception and fields the highest strain amplitude. In addition, the neutrino memory shows much lower frequency evolution and more secular time-evolution than the matter component, which is fundamentally because it is a cumulative time-integral of the anisotropy-weighted neutrino luminosity (see Appendix A as well as [72]). The difference in the mean frequencies of the neutrino and matter memories may, therefore, provide a means someday to distinguish them observationally.
In Figure 15, we plot the gravitational wave energy due to neutrino emissions as a function of time after bounce for our various 3D models. There are several key and distinguishable features. First, like the gravitational wave energy due to matter motions displayed in Figure 10, we see growth by over two orders of magnitude (but not three, as in the former) in the neutrino memory. Additionally, we generally see a hierarchy with progenitor mass, however with the 11-M\({}_{\odot}\) surpassing the 23-M\({}_{\odot}\) until \(\sim\)4.5 seconds. Unlike the matter component of the gravitational wave energy, the neutrino component shows sustained growth for our longest duration model (the 23-M\({}_{\odot}\) model). In comparison with Figure 5 from [92], this emphasizes again the need to carry simulations out to late times to capture the entire signal. Note that, despite the higher strains seen in the neutrino component of the gravitational wave signature and its sustained growth, due to the much smaller frequencies it is still more than two orders of magnitude less energetic than the matter-sourced gravitational wave energy. In addition, though both components capture the development of turbulence, the neutrino component does not show a prompt convective phase and begins to develop \(\sim\)100 to 200 ms later than the matter component.
As with the matter component, the neutrino component is most pronounced for delayed explosions of models with higher compactness reflecting their more vigorous turbulent accretion history and more anisotropic explosions.
### Turbulent Accretion Excites Gravitational Wave Emission
The major excitation mechanism of gravitational waves from core-collapse supernovae and black hole formers is the bounding accretion onto the PNS core [41; 49; 48; 49; 51; 52]. As shown in SSII.1, much of the GW power comes out at the frequencies associated with the pulsational modes of the PNS. To demonstrate the correlation of the gravitational strain with the matter accretion, we first remove the matter memory by applying a high-pass Butterworth filter below 15 Hz to the strains. Then, in Figure 16 we plot the turbulent hydrodynamic luminosity evolution interior to the shock with this filtered strain timeline.
The turbulent hydrodynamic flux is defined following [35] as
\[F_{conv}=\left\langle\left(\frac{1}{2}\rho v_{turb}^{2}+u+p\right)v_{turb}^{r} \right\rangle_{d\Omega}, \tag{2}\]
including the turbulent kinetic energy, the internal energy \(u\), and the pressure \(p\). The radial component of the turbulent velocity is defined here as
\[v_{turb}^{r}=\langle(v^{r}-v_{ave}^{r})^{2}\rangle_{d\Omega}^{1/2}\,, \tag{3}\]
where \(v_{ave}^{r}\) is the density-weighted angle average of the radial velocity. This is calculated at 110 km for all models except the non-exploding 12.25- and 14-M\({}_{\odot}\) progenitors, whose shocks early on sink below this radius; for these models we calculate the turbulent hydrodynamic flux at 2.5 times the PNS radius (defined here as the density cutoff at \(10^{10}\) g cm\({}^{-3}\)).
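A minimal sketch of the strain-filtering step described above is given below, assuming a uniformly sampled strain series; the filter order and sampling rate are illustrative choices. In practice, the filtered strain and \(F_{conv}\) would be interpolated onto a common time grid before any quantitative comparison.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def highpass_strain(h, fs, f_cut=15.0, order=4):
    """Remove the low-frequency (memory) component of a strain series with a
    zero-phase Butterworth high-pass filter above f_cut [Hz]."""
    b, a = butter(order, f_cut, btype="highpass", fs=fs)
    return filtfilt(b, a, h)      # forward-backward filtering avoids phase lag

# Toy strain: a slow "memory" offset plus a fast oscillation; the filter
# retains only the fast part, which is then compared against F_conv.
fs = 8192.0
t = np.arange(0.0, 1.0, 1.0 / fs)
h = 5.0 + 0.5 * np.sin(2.0 * np.pi * 700.0 * t)     # D*h [cm]
h_hp = highpass_strain(h, fs)
print(h_hp.std())
```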
The strong correlation throughout their evolution (even in detail) between the turbulent hydrodynamic flux impinging onto the core and the GW strains demonstrates that turbulent accretion through the shock and onto the PNS core is the major agency of GW excitation and emission in core-collapse supernovae. We note that this correlation was demonstrated even though the flux was angle-averaged and the strain was for emission along the x-axis. No attempt was made to break the flux into \(\ell=2;m=[-2,-1,0,1,2]\) components, yet the correlation is clear.
To provide another perspective, in Figure 17, we show the relation between the gravitational wave energy spectrogram and the accretion rate "power" in frequency
components with \(f>25\) Hz (orange lines) for the 9.5- and 23-M\({}_{\odot}\) models. As the panels demonstrate, after the explosion and the period of heavy infall subsides (during which the effects of individual accretion events overlap), the accretion rate power shows clear correlations with excursions in the gravitational wave energy spectrogram. Spikes and gaps on the spectrogram coincide directly with the peaks and troughs on the curve, meaning that the gravitational waves are excited by, or at least correlated with, the short-period variations in accretion rate onto the core. Such mass accretion rate variations directly tie both episodic fallback and outflow events with the GW emission.
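A simplified, time-integrated variant of this accretion-rate "power" measure is sketched below: it estimates, from a discrete Fourier transform, the fraction of the fluctuation power of \(\dot{M}\) carried by frequencies above 25 Hz. The windowed, time-dependent version plotted as the orange curves would apply the same idea over sliding segments; the toy accretion history here is an assumption.

```python
import numpy as np

def high_frequency_power_fraction(x, fs, f_min=25.0):
    """Fraction of the fluctuation power of a time series above f_min [Hz]."""
    x = x - np.mean(x)                       # remove the mean level
    psd = np.abs(np.fft.rfft(x)) ** 2
    freq = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return psd[freq > f_min].sum() / psd.sum()

# Toy accretion-rate history [Msun/s]: a secular decline plus short-period
# variations; only the latter contribute power above 25 Hz.
fs = 2000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
mdot = 0.3 * np.exp(-t) + 0.01 * np.sin(2.0 * np.pi * 60.0 * t)
print(f"power fraction above 25 Hz: {high_frequency_power_fraction(mdot, fs):.3f}")
```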
We note that the colormap used for these plots reveals, particularly for the 23-M\({}_{\odot}\) model, some power in the g-mode "bumped" by the f-mode and the repulsion between the two modes around \(\sim\)0.8 seconds (see SSII.4). Though weak, for the 23-M\({}_{\odot}\) model this signature continues almost to \(\sim\)2.0 seconds, and perhaps beyond.
#### Possible Secondary Role of Proto-Neutron-Star Convection
Some have suggested that inner PNS convection itself excites much of the GW emission from CCSNe [47]. In Figures 18 and 19, we overplot the angle-averaged convective hydrodynamic luminosity at its peak inside the PNS (see also Figures 12) against the Butterworth filtered strain. We follow Eq. 3, but limit the radius to that of the maximum convective luminosity within the PNS (positive outwards), which lies near \(\sim\)15 km for the various models. As Figures 18 and 19 show there seems to be no correlation between the two. This is particularly clear at late times when the PNS convective flux is still large while the GW emissions have all but subsided. If PNS convection were the agency of excitation at all phases, GW emission would not have subsided to such a degree at later times (from 0.3 to 1.5 seconds after bounce). This comparison demonstrates the importance of simulating to late times to capture the entire GW signal.
However, as Figures 5 and 6 themselves show, the f-mode persists to the latest times and manifests an episodically modulated (see Figure 17), though continuous, signal. We have yet to identify the major excitation mechanism for this component at late times. Anisotropic winds and neutrino emissions from the residual core could be causes, but inner PNS convection may be also a factor here. It is also the case that the mode should ring down over a period given by the dominant damping mechanism. Such mechanisms include sound generation, the back reaction of anisotropic winds, neutrino emission coupling and viscosity, non-linear parent-daughter mode coupling, and numerical dissipation. We reiterate, however, that the f-mode continues to ring and produce a weak GW signal for the duration of all our simulations. Clearly, this topic deserves more detailed scrutiny in the future. Nevertheless, this later phase amounts to only a few percent of the total energy emitted.
## III Conclusions
In this paper, we have presented and analyzed the gravitational-wave signatures of an extensive suite of detailed initially non-rotating 3D core-collapse supernova simulations spanning a wide range of massive-star progenitors. For the first time, most of the published simulations were carried out to late enough times to capture more than 99% of the total GW emission from such events. Moreover, we have endeavored to dump the relevant quadrupole data at a rate sufficient to effectively eliminate Nyquist sampling errors. We see that the f/g-mode and f-mode oscillation modes of the PNS core carry away most of the GW power and that generically there are avoided crossings and modal interactions likely associated with the evolution, extent, and character of lepton-driven PNS convection. The f-mode frequency inexorably rises as the proto-neutron star core shrinks during its Kelvin-Helmholtz contraction phase, driven by neutrino losses, and its power and frequency behavior are central features of the GW emissions from the core-collapse event. Other modes are also seen in the GW spectra, in particular an \(\ell=2;n=1\) p-mode and, perhaps directly, a trapped g-mode, though most g-modes are not excited. Whether other p-modes are in evidence is to be determined.
We demonstrate that the GW emission is powered mostly by accretion plumes onto the PNS that excite its modal oscillations and also produce a "haze" of higher frequency emission also correlated with the phase of violent accretion, after which the signal subsides to be dominated by the chirp of the f-mode signal at low power that nevertheless continues beyond the duration of even these simulations, albeit weakly. The duration of the major phase of emission varies with exploding progenitor and is generally shorter for the lower-mass progenitors (\(\sim\)0.3-0.5 seconds) and longer for the higher-mass progenitors (\(\sim\)1.5 seconds). We find a strong correlation between the total GW energy radiated and the compactness of the progenitor whose mantle explodes as a supernova. Furthermore, we find that the total GW energy emissions can vary by as much as three orders of magnitude from star to star. Hence, there is a severe progenitor dependence that must be factored into any discussion of detectability. For the black-hole forming models, since accretion is not reversed at any time or at any solid angle, their GW signal lasts until the black hole forms, tapering off only slowly until then. In addition, they do not manifest the high-frequency haze seen for the exploding models. For these black-hole formers, we also witness the emergence of a spiral shock motion that modulates the GW emission at a frequency near \(\sim\)100 Hz that slowly increases as the stalled shock sinks.
In Figure 20, we plot the sensitivity curves with the
amplitude spectral densities at 10 kiloparsecs of our 3D models. More massive models generally leave larger footprints, with the 11 M\({}_{\odot}\) studied being the exception. Current and next-generation detectors can observe, for galactic events, a signature spanning orders of magnitude, from subHz to \(\sim\)3000 Hz. While Advanced LIGO/Virgo/Kagra (LVK, [93]) can detect \(\sim\)30\(-\)3000 Hz signals for the more massive progenitors, upcoming detectors, including the Einstein Telescope [94; 95], the Cosmic Explorer [96], BBO [97], and Decigo [98; 99] (at lower frequencies, see also [100]) should be able to detect galactic events for all progenitor masses studied here through almost three orders of magnitude in both frequency and total energy radiated. However, a detailed retrieval analysis, informed by the best signal-processing approaches, has yet to be performed and is an important topic for future work.
Though we have endeavored here to provide a comprehensive look at the gravitational-wave signatures of core collapse, there remains much yet to understand. Topics unaddressed here are the nuclear equation-of-state dependencies, the role of rapid rotation, the possible signatures of strong magnetic fields [89; 36], and the results for other progenitor massive stars and from other stellar evolution codes. Importantly, the analysis of a detected GW signal would be significantly aided if done in concert with a corresponding analysis of the simultaneous neutrino signal. Optimal methodologies with which to extract physical information from such an analysis have yet to be designed. Nevertheless, we have in the GW signal of core-collapse supernovae a direct and real-time window into the supernova mechanism and PNS evolution. Therefore, such a methodology would likely pay rich scientific dividends when astronomy is finally presented with the opportunity to employ it.
## Data availability
The numerical data associated with this article and sonifications of the gravitational wave strains will be shared upon reasonable request to the corresponding author. The gravitational wave strains as well as the quadrupole data are available publicly at [https://dvartany.github.io/data/](https://dvartany.github.io/data/).
## Acknowledgments
We thank Jeremy Goodman, Eliot Quataert, David Radice, Viktoriya Morozova, Hiroki Nagakura, and Benny Tsang for insights and advice during the germination and execution of this project. DV acknowledges support from the NASA Hubble Fellowship Program grant HST-HF2-51520. We acknowledge support from the U. S. Department of Energy Office of Science and the Office of Advanced Scientific Computing Research via the Scientific Discovery through Advanced Computing (SciDAC4) program and Grant DE-SC0018297 (subward 00009650), support from the U. S. National Science Foundation (NSF) under Grants AST-1714267 and PHY-1804048 (the latter via the Max-Planck/Princeton Center (MPPC) for Plasma Physics), and support from NASA under award JWST-GO-01947.011-A. A generous award of computer time was provided by the INCITE program, using resources of the Argonne Leadership Computing Facility, a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. We also acknowledge access to the Frontera cluster (under awards AST20020 and AST21003); this research is part of the Frontera computing project at the Texas Advanced Computing Center [101] under NSF award OAC-1818253. In addition, one earlier simulation was performed on Blue Waters under the sustained-petascale computing project, which was supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters was a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. Finally, the authors acknowledge computational resources provided by the high-performance computer center at Princeton University, which is jointly supported by the Princeton Institute for Computational Science and Engineering (PICCiE) and the Princeton University Office of Information Technology, and our continuing allocation at the National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the U. S. Department of Energy under contract DE-AC03-76SF00098.
## Appendix A General Equations
We follow the method in ([76], see also [77; 72; 102; 47]) to calculate the gravitational wave strain tensor. We calculate the first time derivative of the mass quadrupole with the following formula:
\[q_{ij}=\frac{d}{dt}Q_{ij}=\int d^{3}x\,\rho\left(v_{i}x_{j}+v_{j}x_{i}-\frac{2}{3}\delta_{ij}\,v_{r}r\right), \tag{1}\]
where \(\rho\) is the mass density, \(v\) is the velocity, \(x\) is the Cartesian coordinate, \(Q_{ij}\) is the transverse-traceless quadrupole tensor, and \(r\) is the radius. The transverse-traceless gravitational wave strain tensor \(h_{ij}^{TT}\) is calculated by taking the numerical time derivative of \(q_{ij}\), i.e.,
\[h_{ij}^{TT}=\frac{2G}{c^{4}D}\frac{dq_{ij}}{dt}\,, \tag{2}\]
where \(D\) is the distance to the source. Hereafter, we drop the superscript "TT". We also calculate and dump the quadrupole \(Q_{ij}=\int d^{3}x\,\rho(x_{i}x_{j}-\frac{1}{3}r^{2}\delta_{ij})\), and its numerical derivatives are consistent with the values calculated by the above equation. However, taking numerical derivatives can be viewed as convolving the signal with
a window function, thus it will introduce a bit of extra noise at high frequencies.
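The sketch below shows, for a toy \(q_{ij}\) series, how the strain follows from one numerical time derivative of the dumped \(q_{ij}\) per the two equations above; the source distance, units, and array contents are illustrative assumptions.

```python
import numpy as np

G = 6.674e-8                 # [cm^3 g^-1 s^-2]
c = 2.998e10                 # [cm s^-1]
D = 10.0 * 3.086e21          # assumed source distance: 10 kpc in cm

# Assumed inputs: output times t [s] and q[k, i, j], the dumped first time
# derivative of the mass quadrupole (Eq. 1) at each output k, in cgs units.
t = np.linspace(0.0, 1.0, 16384)
q = np.zeros((t.size, 3, 3))
q[:, 0, 0] = 1.0e46 * np.sin(2.0 * np.pi * 500.0 * t)    # toy component
q[:, 1, 1] = -q[:, 0, 0]                                  # keep the tensor traceless

# Eq. 2: h_ij = (2 G / c^4 D) dq_ij/dt, with the derivative taken numerically.
dq_dt = np.gradient(q, t, axis=0)
h = 2.0 * G / (c**4 * D) * dq_dt
print(f"peak |h_xx| ~ {np.abs(h[:, 0, 0]).max():.2e}")
```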
The "plus" and "cross" polarized strains along \((\theta,\phi)\) direction are given by
\[h_{+}=\frac{G}{c^{4}D}\left(\frac{dq_{\theta\theta}}{dt}-\frac{dq_{\phi\phi}}{dt}\right) \tag{11}\]
and
\[h_{\times}=\frac{2G}{c^{4}D}\frac{dq_{\theta\phi}}{dt} \tag{12}\]
where (from [77])
\[q_{\theta\theta} =(q_{xx}\cos^{2}\phi+q_{yy}\sin^{2}\phi+2\,q_{xy}\sin\phi\cos\phi )\cos^{2}\theta\] \[\quad+q_{zz}\sin^{2}\theta-2\,(q_{xz}\cos\phi+q_{yz}\sin\phi)\sin \theta\cos\theta \tag{13}\]
\[q_{\phi\phi} =q_{xx}\sin^{2}\phi+q_{yy}\cos^{2}\phi-2\,q_{xy}\sin\phi\cos\phi \tag{14}\] \[q_{\theta\phi} =(q_{yy}-q_{xx})\cos\theta\sin\phi\cos\phi+q_{xy}\cos\theta\,( \cos^{2}\phi-\sin^{2}\phi)\] \[\quad+q_{xz}\sin\theta\sin\phi-q_{yz}\sin\theta\cos\phi\,. \tag{15}\]
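A self-contained sketch of these projections is given below: it evaluates \(q_{\theta\theta}\), \(q_{\phi\phi}\), and \(q_{\theta\phi}\) from the Cartesian components of \(dq_{ij}/dt\) and then forms \(h_{+}\) and \(h_{\times}\) for a chosen viewing direction. The distance and the toy input tensor are assumptions.

```python
import numpy as np

def polarizations(dq, theta, phi, G=6.674e-8, c=2.998e10, D=3.086e22):
    """h_+ and h_x along (theta, phi) from the Cartesian components of
    dq_ij/dt; dq has shape (..., 3, 3) and D is an assumed distance in cm."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    qxx, qyy, qzz = dq[..., 0, 0], dq[..., 1, 1], dq[..., 2, 2]
    qxy, qxz, qyz = dq[..., 0, 1], dq[..., 0, 2], dq[..., 1, 2]
    q_tt = ((qxx * cp**2 + qyy * sp**2 + 2.0 * qxy * sp * cp) * ct**2
            + qzz * st**2 - 2.0 * (qxz * cp + qyz * sp) * st * ct)
    q_pp = qxx * sp**2 + qyy * cp**2 - 2.0 * qxy * sp * cp
    q_tp = ((qyy - qxx) * ct * sp * cp + qxy * ct * (cp**2 - sp**2)
            + qxz * st * sp - qyz * st * cp)
    h_plus = G / (c**4 * D) * (q_tt - q_pp)
    h_cross = 2.0 * G / (c**4 * D) * q_tp
    return h_plus, h_cross

# Toy example: a single, traceless d^2Q_ij/dt^2 snapshot (cgs) viewed along x.
dq = np.diag([1.0e49, -1.0e49, 0.0])
print(polarizations(dq, theta=np.pi / 2.0, phi=0.0))
```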
The total energy emitted in gravitational waves is
\[E_{GW}=\int_{0}^{t}dt^{\prime}\sum_{ij}\frac{G}{5c^{5}}\left(\frac{d^{3}Q_{ij}}{dt^{3}}\right)^{2}\,. \tag{16}\]
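Numerically, since the dumped \(q_{ij}\) is already the first time derivative of \(Q_{ij}\), the third derivative appearing above reduces to a second numerical derivative of \(q_{ij}\). A minimal sketch with toy data follows; the array contents and sampling are assumptions.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

G = 6.674e-8     # [cm^3 g^-1 s^-2]
c = 2.998e10     # [cm s^-1]

# Assumed input: q[k, i, j] = dQ_ij/dt (Eq. 1, cgs) on output times t [s].
t = np.linspace(0.0, 1.0, 16384)
q = np.zeros((t.size, 3, 3))
q[:, 0, 0] = 1.0e46 * np.sin(2.0 * np.pi * 500.0 * t)
q[:, 1, 1] = -q[:, 0, 0]

d3Q_dt3 = np.gradient(np.gradient(q, t, axis=0), t, axis=0)   # = d^2 q/dt^2
integrand = (G / (5.0 * c**5)) * (d3Q_dt3**2).sum(axis=(1, 2))
e_gw = cumulative_trapezoid(integrand, t, initial=0.0)        # Eq. 16 [erg]
print(f"radiated energy for the toy signal ~ {e_gw[-1]:.2e} erg")
```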
To calculate gravitational waves from neutrino asymmetries, we follow the prescription of ([91], see also [92]). We include angle-dependence of the observer through the viewing angles \(\alpha\in[-\pi,\pi]\) and \(\beta\in[0,\pi]\). The time-dependent neutrino emission anisotropy parameter for each polarization is defined as
\[\alpha_{S}(t,\alpha,\beta)=\frac{1}{\Lambda(t)}\int_{4\pi}d\Omega^{\prime}\,W _{S}(\Omega^{\prime},\alpha,\beta)\frac{d\Lambda}{d\Omega^{\prime}}(\Omega^{ \prime},t)\,, \tag{17}\]
where the subscript is \(S\in\{+,\times\}\) and the gravitational wave strain from neutrinos is defined as:
\[h_{S}(t,\alpha,\beta)=\frac{2G}{c^{4}D}\int_{0}^{t}dt^{\prime}\,\Lambda(t^{ \prime})\,\alpha_{S}(t^{\prime},\alpha,\beta)\,, \tag{18}\]
where the anisotropy weight is \(W_{S}=D_{S}/N\), with (from [91])
\[D_{+} =[1+(\cos(\phi^{\prime})\cos(\alpha)+\sin(\phi^{\prime})\sin( \alpha))\sin(\theta^{\prime})\sin(\beta)+\cos(\theta^{\prime})\] \[\quad\cos(\beta)]\{[(\cos(\phi^{\prime})\cos(\alpha)+\sin(\phi^{ \prime})\sin(\alpha))\sin(\theta^{\prime})\cos(\beta)\] \[\quad-\cos(\theta^{\prime})\sin(\beta)]^{2}-\sin^{2}(\theta^{ \prime})(\sin(\phi^{\prime})\cos(\alpha)-\cos(\phi^{\prime})\sin(\alpha))^{2}\} \tag{19a}\] \[D_{\times} =[1+(\cos(\phi^{\prime})\cos(\alpha)+\sin(\phi^{\prime})\sin( \alpha))\sin(\theta^{\prime})\sin(\beta)+\cos(\theta^{\prime})\] \[\quad\cos(\beta)]2[(\cos(\phi^{\prime})\cos(\alpha)+\sin(\phi^{ \prime})\sin(\alpha))\sin(\theta^{\prime})\cos(\beta)\] \[\quad-\cos(\theta^{\prime})\sin(\beta)]\sin(\theta^{\prime})(\sin( \phi^{\prime})\cos(\alpha)-\cos(\phi^{\prime})\sin(\alpha))^{2}\] (19b) \[N =[(\cos(\phi^{\prime})\cos(\alpha)+\sin(\phi^{\prime})\sin(\alpha) )\sin(\theta^{\prime})\cos(\beta)-\cos(\theta^{\prime})\] \[\quad\sin(\beta)]^{2}+\sin^{2}(\theta^{\prime})(\sin(\phi^{\prime} )\cos(\alpha)-\cos(\phi^{\prime})\sin(\alpha))^{2}\,. \tag{19c}\] |
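The sketch below evaluates the cumulative integral of Eq. 18 for a toy luminosity and anisotropy history, which makes explicit why the neutrino-sourced strain evolves secularly and saturates at a nonzero memory offset. \(\Lambda(t)\), \(\alpha_{S}(t)\), and the source distance are placeholder assumptions, not model outputs.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

G = 6.674e-8          # [cm^3 g^-1 s^-2]
c = 2.998e10          # [cm s^-1]
D = 3.086e22          # assumed source distance: 10 kpc in cm

# Placeholder total neutrino luminosity Lambda(t) [erg/s] and anisotropy
# parameter alpha_S(t) for one polarization along a chosen (alpha, beta).
t = np.linspace(0.0, 5.0, 5000)                 # post-bounce time [s]
lam = 3.0e52 * np.exp(-t / 3.0)
alpha_s = 3.0e-3 * (1.0 - np.exp(-t / 0.5))

# Eq. 18: the strain is a running time integral, so it does not average to
# zero and levels off at a permanent (memory) offset as the emission abates.
h_s = (2.0 * G / (c**4 * D)) * cumulative_trapezoid(lam * alpha_s, t, initial=0.0)
print(f"late-time D*h offset ~ {D * h_s[-1]:.0f} cm")
```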
2303.08088 | The Evolving Effect Of Cosmic Web Environment On Galaxy Quenching | Farhanul Hasan, Joseph N. Burchett, Alyssa Abeyta, Douglas Hellinger, Nir Mandelker, Joel R. Primack, S. M. Faber, David C. Koo, Oskar Elek, Daisuke Nagai | 2023-03-14T17:19:12Z | http://arxiv.org/abs/2303.08088v2

# The Evolving Effect Of Cosmic Web Environment On Galaxy Quenching
###### Abstract
We investigate how cosmic web structures affect galaxy quenching in the IllustrisTNG (TNG100) cosmological simulations by reconstructing the cosmic web within each snapshot using the DisPerSE framework. We measure the comoving distance from each galaxy with stellar mass \(\log(M_{*}/\mathrm{M}_{\odot})\) \(\geq\) 8 to the nearest node (\(d_{\mathrm{node}}\)) and the nearest filament spine (\(d_{\mathrm{fil}}\)) to study the dependence of both median specific star formation rate (\(\langle\mathrm{sSFR}\rangle\)) and median gas fraction (\(\langle f_{\mathrm{gas}}\rangle\)) on these distances. _We find that the \(\langle\mathrm{sSFR}\rangle\) of galaxies is only dependent on cosmic web environment at \(z<2\)_, with the dependence increasing with time. At \(z\leq 0.5\), 8 \(\leq\) \(\log(M_{*}/\mathrm{M}_{\odot})\) \(<\) 9 galaxies are quenched at \(d_{\mathrm{node}}\lesssim 1\) Mpc, and have significantly-suppressed star formation at \(d_{\mathrm{fil}}\lesssim 1\) Mpc, trends driven mostly by satellite galaxies. At \(z\leq 1\), in contrast to the monotonic drop in \(\langle\mathrm{sSFR}\rangle\) of \(\log(M_{*}/\mathrm{M}_{\odot})\) \(<\) 10 galaxies with decreasing \(d_{\mathrm{node}}\) and \(d_{\mathrm{fil}}\), \(\log(M_{*}/\mathrm{M}_{\odot})\) \(\geq\) 10 galaxies - both centrals and satellites - experience an upturn in \(\langle\mathrm{sSFR}\rangle\) at \(d_{\mathrm{node}}\lesssim 0.2\) Mpc. Much of this cosmic web dependence of star formation activity can be explained by an evolution in \(\langle f_{\mathrm{gas}}\rangle\). Our results suggest that in the past \(\sim\)10 Gyr, low-mass satellites are quenched by rapid gas stripping in dense environments near nodes and gradual gas starvation in intermediate-density environments near filaments. At earlier times, cosmic web structures efficiently channeled cold gas into most galaxies. State-of-the-art ongoing spectroscopic surveys such as SDSS and DESI, as well as those planned with the Subaru Prime Focus Spectrograph, JWST, and Roman, are required to test our predictions against observations.
Cosmic web (330), Large-scale structure of the universe (902), Galaxy quenching (2040), Galaxy evolution (594), Intergalactic filaments (811), Magnetohydrodynamical simulations (1966)
## 1 Introduction
In the standard cosmological model, structure formation in the universe occurs at vastly different scales. Galaxies form stars within tens of kpc and grow inside dark matter (DM) halos that can be two orders of magnitude larger. At even larger scales, galaxies and their DM halos are embedded within an intricate network of strand-like filaments, diffuse sheets, dense nodes, and underdense voids, which is termed the "cosmic web" (e.g., Bond et al., 1996; Springel et al., 2005). While the cosmic web has been studied on both theoretical and observational grounds for decades, it remains one of the major outstanding questions in astrophysics whether and how the large-scale cosmic web environment influences the formation and evolution of galaxies.
A chief question in galaxy evolution is how star formation activity proceeds in galaxies and how it ceases, i.e., how quenching occurs. It has long been known that quenching depends on internal mechanisms characterized by the stellar mass \(M_{*}\) or halo mass \(M_{\mathrm{vir}}\) (e.g., Brinchmann et al., 2004; Cattaneo et al., 2006; Williams et al., 2009; Peng et al., 2010; Darvish et al., 2016), such that galaxy-scale processes, including supernovae (SNe) and Active Galactic Nuclei (AGN) feedback, can regulate and curtail star formation activity. A widely adopted theoretical viewpoint posits that galaxies in halos with mass \(\log(M_{\mathrm{vir}}/\mathrm{M}_{\odot})\gtrsim 11.5-12\) can form stable virial accretion shocks and, therefore, a hot, hydrodynamically stable circumgalactic medium (CGM) that suppresses accretion of cold gas to the interstellar medium (ISM) necessary for star formation (e.g., Dekel & Birnboim, 2006; Keres et al., 2005, 2009; Dekel et al., 2009; Stern et al., 2020, 2021). Lower-mass galaxies (typically with \(\log(M_{*}/\mathrm{M}_{\odot})<\) 10), especially at high redshifts (\(z\gtrsim 2\)), lack this ability to form a stable hot CGM and "self-quench" (e.g., Croton et al., 2006; Gabor & Dave, 2012).
External processes as characterized by their environment have also emerged as crucial factors in determining how galaxies quench (e.g., Elbaz et al., 2007; Peng et al., 2012; Eardley et al., 2015; Moutard et al., 2018; Bluck et al., 2020). However, the exact nature of the relationship between quenching and environment, and what physical mechanisms manifest this relationship, is a topic of widespread debate. In the hot, dense halos of galaxy groups and clusters, hydrodynamical interactions between the halo medium and satellite galaxies - most notably ram pressure stripping (e.g., Bahe & McCarthy, 2015; Boselli et al., 2022) - or tidal interactions between separate galaxies or between galaxies and the halo (e.g., Boselli & Gavazzi, 2006; Marasco et al., 2016) can remove the star-forming ISM of a galaxy. Over longer timescales, gas accretion onto the ISM can be halted, either due to lack of accretion from the intergalactic medium (IGM) to the CGM or from the CGM to the ISM via strangulation or starvation (e.g., Larson et al., 1980; Balogh & Morris, 2000; Peng et al., 2015).
The cosmic web itself has also been invoked in models of galaxy quenching. Aragon Calvo et al. (2019) proposed that "cosmic web detachment," wherein galaxies are detached from cold gas-supplying primordial filaments, can explain much of the observed quenching phenomena across time. Song et al. (2021) suggested that close to the edges of filaments there is coherent, high angular momentum supply of gas to the outer parts of halos, which prevents an efficient transfer of gas from the outer halo to galactic centers - ultimately quenching these galaxies (see also Peng & Renzini, 2020; Renzini, 2020). Pasha et al. (2022) found that cosmological accretion shocks at \(z\sim 2-5\) can produce a hot (\(T>10^{6}\) K) IGM at the edge of sheets, which can quench low-mass centrals at these epochs, as shocks around filaments, groups, and clusters can at lower redshifts (e.g., Birnboim et al., 2016; Zinger et al., 2018; Li et al., 2023).
Studies of the connection between galaxy quenching and the cosmic web in the past decade have yielded mixed results. While many observational studies have found that passive or quenched galaxies are typically located near nodes and filaments (e.g., Kuutma et al., 2017; Kraljic et al., 2018; Laigle et al., 2018; Winkel et al., 2021), some have shown that proximity to cosmic web filaments can also enhance star formation in galaxies (e.g., Darvish et al., 2014; Vulcani et al., 2019). Cosmological hydrodynamical simulations have also provided an inconclusive picture. In the IllustrisTNG simulations (Nelson et al., 2019), Malavasi et al. (2022) found that the specific star formation rate (\(\mathrm{sSFR}=\mathrm{SFR}/M_{*}\)) of galaxies is generally reduced with proximity to nodes and filaments at \(z=0\). Xu et al. (2020) found in the EAGLE simulations (Schaye et al., 2015) a characteristic stellar mass (\(\log(M_{*}/\mathrm{M}_{\odot})\sim 10.5\)) below which galaxies have lower sSFR in nodes than in filaments and above which this dependence vanishes. Both Kotecha et al. (2022) and Zheng et al. (2022) reported evidence, instead, of filaments increasing star formation activity or at least delaying quenching. Therefore, consensus is yet to be reached on the impact of cosmic web environment on galaxy quenching and how this varies with stellar mass and redshift.
In this paper, we employ the IllustrisTNG cosmological simulations to study the impacts of cosmic web environment, particularly the proximity to filaments and nodes, on star formation and gas content in galaxies across cosmic time. We reconstruct the cosmic web in IllustrisTNG using the topologically-motivated DisPerSE framework (Sousbie, 2011; Sousbie et al., 2011). This is the first study of the dependence of star formation quenching on the cosmic web in the TNG100-1 run across many different redshift snapshots. Malavasi et al. (2022) performed a similar analysis of the TNG300-1 run at \(z=0\).
This paper is organized as follows. In Section 2, we describe the simulation data used in this work and methods of reconstructing the cosmic web. We present our results in Section 3. We discuss the physical interpretations of our results and propose observational tests in Section 4, and conclude in Section 5. We adopt the _Planck 2015_ cosmology (Planck Collaboration et al., 2016), with \(H_{0}=67.74~{}\mathrm{km\,s^{-1}\,Mpc^{-1}}\), \(\Omega_{\mathrm{M,0}}=0.3089\), and \(\Omega_{\Lambda,0}=0.6911\). All distances are quoted in comoving units, unless stated otherwise.
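For reference, a cosmology object with these parameters can be set up as below to convert between redshift, cosmic time, and comoving distance; the use of `astropy` here is an assumption of this sketch, not a statement about the tools used in this work.

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# The adopted Planck 2015 parameters.
cosmo = FlatLambdaCDM(H0=67.74 * u.km / u.s / u.Mpc, Om0=0.3089)

z = 1.0
print(cosmo.age(z))                    # cosmic time at z
print(cosmo.comoving_distance(z))      # comoving distance to z
# A physical separation r_phys at redshift z corresponds to a comoving
# separation r_com = r_phys * (1 + z).
```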
## 2 Data and Methods
### TNG Simulations
We analyzed outputs from the IllustrisTNG magneto-hydrodynamical cosmological simulations, which use the AREPO moving-mesh hydrodynamics code (Springel, 2010) to simulate the evolution of gas, stars, DM, and black holes (BH) from the early universe (\(z=127\)) to the present day (\(z=0\)). The public data release of the simulations was presented in Nelson et al. (2019), while introductory results were presented in Pillepich et al. (2018), Nelson et al. (2018), Springel et al. (2018), Marinacci et al. (2018), and Naiman et al. (2018). In particular, we make use of TNG100-1, the highest resolution run of the TNG100 simulation, which has a box size of \(\sim\)110.7 comoving Mpc per side, minimum baryonic and DM particle mass of \(\sim 1.4\times 10^{6}~{}\mathrm{M}_{\odot}\) and \(\sim 7.5\times 10^{6}~{}\mathrm{M}_{\odot}\) respectively, a _Planck 2015_ cosmology (Planck Collaboration et al., 2016), and \(1820^{3}\) initial DM particles. While TNG300-1 provides greater statistics of galaxies and cosmic structures with \(\approx 20\) times the volume of TNG100-1, it has \(\approx\) 1/8 the particle mass resolution. On the other hand, TNG50-1 provides \(\approx 16\times\) greater particle mass resolution than TNG100-1, but has \(\approx\)1/10 the volume.
We obtain galaxy data for all 100 snapshots of the TNG100-1 simulation (hereafter TNG) from the online data repository (Nelson et al., 2019). In each snapshot, "Group" catalogs are constructed using the friends-of-friends (FoF) halo-finding algorithm, while the Subfind algorithm searches for gravitationally bound objects in each FoF group, representing either subhalos or the main (host) halo (Springel et al., 2001; Dolag et al., 2009). We make use of both the group and subhalo catalogs to identify halos and galaxies, respectively.
For each snapshot, we set a minimum stellar mass of \(\log(M_{*}/\mathrm{M}_{\odot})\!=\!8\), which corresponds to a typical minimum observable stellar mass of galaxies in the nearby universe and also ensures that the galaxies are well-resolved, with at least about 100 stellar particles each. Similarly, we set a minimum halo mass - defined as the mass enclosed in a sphere whose mean density is 200 times the critical density of the Universe - of \(\log(M_{200,c}/\mathrm{M}_{\odot})\!=\!9\), which ensures that each galaxy in our catalog resides in a halo that is well-resolved, with at least 100 DM particles. These criteria yield \(\sim\)50,000 galaxies at \(z\!=\!0\) and \(\sim\)11,000 galaxies at \(z\!=\!5\). From these catalogs, we obtain the galaxy comoving position, star formation rate (SFR), stellar mass (\(M_{*}\)), halo mass (\(M_{200,c}\)), halo virial radius (\(R_{200,c}\); the comoving radius at which \(M_{200,c}\) is calculated), and the mass of all gas gravitationally bound to a subhalo (\(M_{\mathrm{gas}}\)). Hereafter, we refer to subhalos as galaxies and to groups as halos.
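For concreteness, a galaxy selection along these lines can be written with the public `illustris_python` package, as sketched below. The catalog field names and unit conventions (masses in \(10^{10}\,\mathrm{M}_{\odot}/h\), positions in ckpc\(/h\)) follow the TNG documentation but should be verified against the data release; the local path and snapshot number are placeholders.

```python
# Minimal sketch: selecting well-resolved galaxies from one TNG snapshot.
# Assumes the public illustris_python package and standard TNG catalog
# fields/units; verify field names and unit conventions before use.
import numpy as np
import illustris_python as il

basePath = "./TNG100-1/output"   # placeholder path to the simulation outputs
snap = 99                        # z = 0 snapshot in TNG100-1
h = 0.6774                       # Planck 2015 little h used by TNG

subs = il.groupcat.loadSubhalos(
    basePath, snap,
    fields=["SubhaloMassType", "SubhaloSFR", "SubhaloPos", "SubhaloGrNr"])
halos = il.groupcat.loadHalos(
    basePath, snap, fields=["Group_M_Crit200", "Group_R_Crit200"])

mstar = subs["SubhaloMassType"][:, 4] * 1e10 / h                   # [Msun]
m200c = halos["Group_M_Crit200"][subs["SubhaloGrNr"]] * 1e10 / h   # host halo [Msun]

keep = (mstar >= 1e8) & (m200c >= 1e9)         # M* and M200c cuts described above
pos_mpc = subs["SubhaloPos"][keep] / h / 1e3   # comoving Mpc
ssfr = subs["SubhaloSFR"][keep] / mstar[keep]  # yr^-1
print(f"{keep.sum()} galaxies pass the cuts at snapshot {snap}")
```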
### Reconstructing the Cosmic Web with DisPerSE
Next, we apply the Discrete Persistent Structures Extractor (DisPerSE) algorithm (Sousbie, 2011; Sousbie et al., 2011) to find cosmic web filaments and nodes in each TNG snapshot where at least 10,000 galaxies matched our selection criteria above. DisPerSE identifies the topology of space in any given volume based on an input distribution of discrete tracers, which in our case are the spatial locations of all galaxies matching our selection criteria above. To do this, it computes the density field from the inputs, using the Delaunay Tessellation Field Estimator (DTFE; Schaap & van de Weygaert, 2000), wherein the entire volume is divided into tetrahedrons, with the positions of individual galaxies as vertices. During the tessellation, the density field at the position of each vertex of the tessellation is smoothed by averaging it with its two nearest neighbors. This is done in order to minimize contamination by shot noise and the detection of small-scale spurious features (see, e.g., Malavasi et al., 2022). DisPerSE calculates the gradient of the density field and identifies critical points where the gradient is zero. These correspond to the voids (minima), saddle points, and nodes (maxima) of the density field. Filaments consist of a series of segments connecting maxima to other critical points.
For each topologically significant pair of critical points, DisPerSE computes the persistence, defined as the ratio of the density values at the two critical points. Persistence measures how robust the identified topological structures - critical points and filament segments - are to local variations of the density field measured from the input galaxy positions. This sets the effective significance level of the detected filaments and allows us to quantify the effect of shot noise in the input data. For our fiducial run, we choose a persistence threshold of \(3\sigma\), which is known to eliminate most spurious filamentary features in the TNG simulations (e.g., Galarraga-Espinosa et al., 2020). Experimenting with cuts of \(4\sigma\) and \(5\sigma\), we find that these miss fainter structures but do not significantly alter our results.
In addition, we apply a smoothing to the positions of the segments of the filamentary skeleton by averaging the initial positions of the extrema of a segment with those of the extrema of contiguous segments. In essence, the skeleton is smoothed by keeping the critical points fixed and averaging the coordinates of each point along a filament with those of its two neighbors. This is done to reduce sharp and/or unphysical shapes of filament segments caused by shot noise. We apply one level of smoothing and find that increasing the amount of smoothing by one level, or removing it entirely, does not have a significant effect on our statistical results.
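For reference, the reconstruction described above corresponds roughly to the following sequence of DisPerSE calls (Delaunay tessellation, a \(3\sigma\) persistence cut, and one level of skeleton smoothing). The executable names and flags follow the DisPerSE documentation, but the input point-file format and the exact output file names depend on the installed version and should be checked before use.

```python
# Rough sketch of the DisPerSE pipeline; file names and flags are indicative only.
import subprocess

points = "galaxies_snap099.ascii"   # x y z positions of the selected galaxies

# 1) Delaunay tessellation / DTFE density estimate with periodic boundaries.
subprocess.run(["delaunay_3D", points, "-btype", "periodic"], check=True)

# 2) Critical points and filaments with a 3-sigma persistence cut.
subprocess.run(["mse", points + ".NDnet", "-nsig", "3", "-upSkl"], check=True)

# 3) One level of skeleton smoothing, exported to ASCII for post-processing.
#    The .NDskl file name produced by mse varies with version; adjust as needed.
skl = points + ".NDnet_s3.up.NDskl"
subprocess.run(["skelconv", skl, "-smooth", "1", "-to", "NDskl_ascii"], check=True)
```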
Furthermore, we experimented with varying the minimum stellar mass cut of our galaxy catalog and cosmic web reconstruction, using cuts of \(\log(M_{*}/\mathrm{M}_{\odot})\geq 9\) and \(\log(M_{*}/\mathrm{M}_{\odot})\!\geq\!10\). These are more realistic minimum masses for comparison with large observational surveys such as the Sloan Digital Sky Survey (SDSS; e.g., Strauss et al., 2002) DR17 (see, e.g., Wilde et al. (2023), and Section 4.3 of this paper). However, the sharp drop in the number of TNG galaxies above these higher mass limits left far fewer input tracers for DisPerSE, which resulted in substantially fewer identified filaments and nodes than in our fiducial \(\log(M_{*}/\mathrm{M}_{\odot})\!\geq\!8\) cut, and particularly very few short filaments (with length \(<\)1 Mpc). These higher mass cuts could therefore bias our results towards more prominent cosmic web features and longer filaments. In practice, varying the minimum \(M_{200,c}\) cut was largely degenerate with varying the minimum \(M_{*}\) cut.
A 2D visual representation of the DisPerSE-identified filaments and nodes superimposed on the distribution of galaxies in TNG is shown in Fig. 1. The three panels show \(x-y\) projections in a 37 Mpc thick slice (corresponding to \(\sim 1/3\) of the total thickness) at \(z\!=\!0\) (left), \(z\!=\!1\) (middle), and \(z\!=\!2\) (right). In each panel, the filament spines and nodes are represented by black curves and grey circles, respectively, while galaxies are represented by scatter points, with sizes proportional to \(M_{*}\) and color-coded by sSFR.
From this visualization, we can qualitatively assess the spatial distribution of star formation activity in galaxies with respect to cosmic web nodes and filaments. At any redshift, higher sSFR galaxies are located throughout the volume, whereas lower sSFR galaxies are almost always located close to a filament spine or a node. From higher to lower redshift (right to left panel), there is a clear decline in global star formation activity which reflects the decline in cosmic star formation rate density after the so-called "cosmic noon" thoroughly chronicled in observations (e.g., Madau & Dickinson, 2014). The number of massive quiescent galaxies increases considerably from \(z\!=\!2\) to \(z\!=\!1\) and even more prominently from \(z=1\) to the present day. A rough qualitative visual check showed that virtually all \(\mathrm{SFR}\!=\!0\) galaxies at low redshift, regardless of mass, live near nodes and/or filaments (we quantify a galaxy's proximity to these cosmic web structures below).
### Defining Distances
To quantitatively study the relationship between the physical properties of a galaxy and its cosmic web environment, we measure two distances for each galaxy at each snapshot: \(d_{\rm node}\) - the comoving Euclidean distance from the center of a galaxy to the center of the nearest identified node - and \(d_{\rm fil}\) - the comoving transverse distance from the center of a galaxy to the nearest identified filament spine. We chose these cosmic web-centric distance characterizations as they are similar to those of Welker et al. (2020) and Malavasi et al. (2022), among others. Depending on the physical mechanisms affecting star formation quenching, the use of physical distances or of other parameters such as local gas density, pressure, or angular momentum is also a viable option for studying the dependence of galaxy properties on the cosmic web environment. In the following, we investigate how star formation quenching and the gas reservoirs of TNG galaxies depend on \(d_{\rm node}\) and \(d_{\rm fil}\) at different masses and redshifts.
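Operationally, both distances can be computed from the parsed skeleton as in the sketch below, where `node_pos` is an \((M,3)\) array of node positions and `segments` is a list of filament-segment endpoint pairs; periodic boundary conditions are ignored for brevity, although they matter near the box edges.

```python
# Sketch of the two distance measures: d_node via a KD-tree over node positions,
# d_fil as the minimum point-to-segment distance over all filament segments.
import numpy as np
from scipy.spatial import cKDTree

def nearest_node_distance(gal_pos, node_pos):
    """Comoving distance from each galaxy to its nearest node (same units as input)."""
    tree = cKDTree(node_pos)
    d_node, _ = tree.query(gal_pos)
    return d_node

def point_segment_distance(p, a, b):
    """Distance from point p to the finite segment with endpoints a and b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / max(np.dot(ab, ab), 1e-12), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def nearest_filament_distance(gal_pos, segments):
    """Transverse distance from each galaxy to the nearest filament segment.

    Brute force for clarity; a spatial index on segment midpoints is faster in practice.
    """
    return np.array([min(point_segment_distance(p, a, b) for a, b in segments)
                     for p in gal_pos])
```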
## 3 Results
### Star Formation And Cosmic Web Environment
We first investigate the relationship between star formation activity in galaxies and their proximity to cosmic web structures. In each snapshot, we divide galaxies into three different stellar mass ranges - \(8\!\leq\!\log(M_{*}/{\rm M}_{\odot})\!<\!9\), \(9\!\leq\!\log(M_{*}/{\rm M}_{\odot})\!<\!10\), and \(\log(M_{*}/{\rm M}_{\odot})\!\geq\!10\) - and into seven bins of the distances \(d_{\rm node}\) and \(d_{\rm fil}\). These bins are chosen such that each has an equal number of galaxies. We measure the median sSFR, \(\langle{\rm sSFR}\rangle\), for each bin of \(d_{\rm node}\) and \(d_{\rm fil}\). Experimenting with different numbers of bins, we find the overall results to be insensitive to the number of bins.
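For concreteness, the binned statistics can be computed as in the following sketch, which takes per-galaxy arrays `dist` (either \(d_{\rm node}\) or \(d_{\rm fil}\)) and `ssfr`; the number of bootstrap resamples is an illustrative choice rather than the value used for the figures.

```python
# Equal-occupancy distance bins, median sSFR per bin, and a bootstrap error
# on that median (the vertical error bars described below).
import numpy as np

def median_profile(dist, ssfr, n_bins=7, n_boot=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    edges = np.quantile(dist, np.linspace(0, 1, n_bins + 1))
    edges[-1] = np.nextafter(edges[-1], np.inf)   # include the maximum in the last bin
    centers, med, err = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (dist >= lo) & (dist < hi)
        vals = ssfr[in_bin]
        boots = [np.median(rng.choice(vals, size=vals.size, replace=True))
                 for _ in range(n_boot)]
        centers.append(np.median(dist[in_bin]))
        med.append(np.median(vals))
        err.append(np.std(boots))
    return np.array(centers), np.array(med), np.array(err)
```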
The results of these binned statistics are presented in Fig. 2, with the top row showing \(\langle{\rm sSFR}\rangle\) as a function of \(d_{\rm node}\), the bottom row showing \(\langle{\rm sSFR}\rangle\) as a function of \(d_{\rm fil}\), and each column representing a different mass range. The \(\langle{\rm sSFR}\rangle\) values are color-coded by redshift; vertical error bars represent \(\pm 1\sigma\) bootstrapped errors on \(\langle{\rm sSFR}\rangle\) in each \(d_{\rm node}\) or \(d_{\rm fil}\) bin, and horizontal error bars represent the width of the bin. Dotted curves represent simple spline interpolations to the \(\langle{\rm sSFR}\rangle\)-\(d_{\rm node}\) and \(\langle{\rm sSFR}\rangle\)-\(d_{\rm fil}\) relations. For a finer look at some intermediate redshifts, each panel also contains an inset showing a color contour plot for each snapshot between \(z\!=\!0.5\) and \(z\!=\!3\). For certain bins, we show an upper limit on \(\langle{\rm sSFR}\rangle\), corresponding to \(\mathrm{SFR}=10^{-2.5}\ {\rm M}_{\odot}\,{\rm yr}^{-1}\), which is the minimum resolvable SFR (averaged over 200 Myr) in TNG due to stochastic star formation with a minimum star particle mass (see Terrazas et al., 2020). For example, for \(8\!\leq\!\log(M_{*}/{\rm M}_{\odot})\!<\!9\) galaxies, the maximum upper limit in sSFR is \(10^{-11.5}\,{\rm yr}^{-1}\), considering the most massive galaxies in this mass range.
In order to separate the effect of nearby nodes from that of filaments alone (because many galaxies that are close to filament spines are also close to nodes), we only considered galaxies that are \(d_{\rm node}\!>\!1\) Mpc away from the nearest node for the bottom row of Fig. 2. The choice of 1 Mpc is motivated by two considerations. 1) This is slightly higher than the virial radius \(R_{200,c}\) of the most massive galaxy cluster in TNG100. Therefore, this ensures that we remove possible halo-centric effects of nearby clusters and groups (which reside in nodes) and isolates the effect of nearby filaments. 2) This is four times the scale radius of the number density profile of galaxies in filaments derived for TNG300 by Galarraga-Espinosa et al. (2020) and corresponds to the width containing almost all of the matter inside of filaments. We vary this cut to be \(d_{\rm node}\!>\!0.5\), \(1.5\), and \(2\) Mpc as well and find that only galaxies at \(z\!\lesssim\!1\) with \(d_{\rm fil}\!<\!1\) Mpc show noticeable changes to our quantitative results, while the qualitative results described below remain unchanged. In essence, this \(d_{\rm node}\) cut ensures that only galaxies at intermediate to high, rather than extremely high local densities, are considered for the \(d_{\rm fil}\) analysis (see Burchett et al. (2020) for an example of how cosmic matter densities relate to different filamentary environments).

Figure 1: 2D visual representation of galaxies in the TNG100 simulation and cosmic web structures identified by DisPerSE at \(z=0\) (left), \(z=1\) (middle), and \(z=2\) (right). In each panel, filament spines are represented by black curves, and nodes are represented by semi-transparent grey circles, while galaxies are represented by scatter points sized by the stellar mass and color-coded by specific star formation rate. The same x-y projection of a slice 37 Mpc thick (\(\sim 1/3\) of the total box width) is shown for each redshift. The global star formation activity declines considerably from higher to lower redshift. Quiescent galaxies at lower redshifts appear to be clustered closer to nodes and filaments.
First, examining the node-centric relationships, we find that \(d_{\rm node}\) is strongly correlated with quenching of star formation in galaxies of all masses at low redshifts (\(z\lesssim 0.5\)). For \(8\!\leq\!\log(M_{*}/{\rm M}_{\odot})\!<\!9\) galaxies at \(z\leq 0.5\), \(\langle{\rm sSFR}\rangle\) vanishes at \(d_{\rm node}\!\lesssim\!1\) Mpc. The increase from low to high \(d_{\rm node}\) is much more gradual at \(z\!=\!1\) (\(\approx\!3\times\) from \(\langle d_{\rm node}\rangle\!\sim\!0.2\) Mpc to \(\langle d_{\rm node}\rangle\!\sim\!15\) Mpc). At \(z\geq 2\) however, there is virtually no dependence of \(\langle{\rm sSFR}\rangle\) on \(d_{\rm node}\). We note that our results at \(z=0\) are broadly in agreement with those of Geha et al. (2012), who found that in SDSS DR8, almost all quenched galaxies with \(7\leq\log(M_{*}/{\rm M}_{\odot})\leq 9\) are found within \(\sim\)1.5 Mpc of a massive host.
For intermediate-mass \(9\!\leq\!\log(M_{*}/{\rm M}_{\odot})\!<\!10\) galaxies, the trends are similar in that there is a large \(\sim\)1 dex rise in \(\langle{\rm sSFR}\rangle\) from the lowest to the highest \(d_{\rm node}\) bin at \(z=0\), a much smaller rise at \(z=0.5\), and effectively no \(d_{\rm node}\) dependence at \(z\!\geq\!1\). The lack of \(d_{\rm node}\)-dependence of \(\langle{\rm sSFR}\rangle\) at \(z\!>\!1\) is also seen for high-mass \(\log(M_{*}/{\rm M}_{\odot})\!\geq\!10\) galaxies, but, interestingly, these galaxies do not show a monotonic increase in \(\langle{\rm sSFR}\rangle\) with \(d_{\rm node}\) at lower redshifts. In fact, following a decline with decreasing \(d_{\rm node}\) at \(d_{\rm node}\gtrsim\!0.2\) Mpc, there is an _upturn_ in \(\langle{\rm sSFR}\rangle\) at \(d_{\rm node}\!\lesssim\!0.2\) Mpc.
Considering the filament-centric relationships, we find both differences and similarities with the node-centric relationships. For low-mass galaxies at \(z=0\), there is a sizeable \(\sim 5\times\) increase in \(\langle{\rm sSFR}\rangle\) from \(\langle d_{\rm fil}\rangle\sim 0.3\) Mpc to \(\langle d_{\rm fil}\rangle\sim 15\) Mpc; however, the rise in \(\langle{\rm sSFR}\rangle\) with \(d_{\rm fil}\) is much smaller at \(z=0.5\) (\(\lesssim 2\times\)) and negligible at \(z\geq 1\). For the intermediate mass range, there is effectively no gradient of \(\langle{\rm sSFR}\rangle\) with \(d_{\rm fil}\) at any redshift. For high-mass galaxies at \(z=0\), we do not see an upturn in \(\langle{\rm sSFR}\rangle\) at the lowest \(d_{\rm fil}\) (unlike at low \(d_{\rm node}\)), but rather a fairly smooth rise of a factor of a few in \(\langle{\rm sSFR}\rangle\) from low to high \(d_{\rm fil}\). At \(z>0.5\), the relationship between \(\langle{\rm sSFR}\rangle\) and \(d_{\rm fil}\) is very weak. Thus, only low-mass and high-mass galaxies at low redshifts are preferentially quenched near filaments, whereas galaxies of all masses are impacted near nodes.

Figure 2: The median sSFR as a function of distance to the nearest node (top row) and filament spine (bottom row) for galaxies with \(8\!\leq\!\log(M_{*}/{\rm M}_{\odot})\!<\!9\) (left panels), \(9\!\leq\!\log(M_{*}/{\rm M}_{\odot})\!<\!10\) (middle panels), and \(\log(M_{*}/{\rm M}_{\odot})\!\geq\!10\) (right panels). The data points and curves are color-coded by redshift as indicated by the discrete color-bars on the right. Each panel contains an inset which is a continuous color contour plot showing the \(\langle{\rm sSFR}\rangle\)-\(d_{\rm node}\) or \(\langle{\rm sSFR}\rangle\)-\(d_{\rm fil}\) relationship for a larger number of intermediate redshifts, to help locate the redshift at which a distance-dependence disappears (see text). Only galaxies with \(d_{\rm node}>1\) Mpc are included for the filament-centric relationships to mitigate the potential halo-centric effects of nearby clusters and groups. Note that some points are shown as upper limits on \(\langle{\rm sSFR}\rangle\). Star formation activity is dependent on \(d_{\rm node}\) and (to a lesser extent) on \(d_{\rm fil}\) only at lower redshifts, while this dependence disappears at \(z\geq 2\).

Figure 3: \(\langle\mathrm{sSFR}\rangle\) in bins of \(d_{\mathrm{node}}\) (top two rows) and \(d_{\mathrm{fil}}\) (bottom two rows) for central galaxies (1\({}^{\mathrm{st}}\) and 3\({}^{\mathrm{rd}}\) rows) and satellite galaxies (2\({}^{\mathrm{nd}}\) and 4\({}^{\mathrm{th}}\) rows) at different redshifts. Star formation in central galaxies is less dependent on cosmic web environment than that in satellite galaxies, which are significantly quenched at low \(d_{\mathrm{node}}\) and \(d_{\mathrm{fil}}\) at low redshifts. Neither centrals nor satellites exhibit a cosmic web dependence of star formation activity at \(z\geq 2\). Insets are not included where no significant relationships between cosmic web environment and sSFR are seen.
One of the most striking findings on the star formation-cosmic web connection, seen in both the filament- and node-centric analyses, is the _disappearance of a dependence of star formation on distance to cosmic web structures at higher redshifts_. The color contour insets included in Fig. 2 show the \(\langle{\rm sSFR}\rangle\)-\(d_{\rm node}\) and \(\langle{\rm sSFR}\rangle\)-\(d_{\rm fil}\) relationships for many snapshots at \(0.5\leq z\leq 3\). These contours flatten out past a certain redshift for all three mass ranges, indicating that star formation activity is essentially independent of proximity to the cosmic web prior to this epoch. Star formation becomes independent of node-centric distance at \(z\sim 1.3\) for \(9\!\leq\!\log(M_{*}/{\rm M}_{\odot})\!<\!10\) galaxies and at \(z\sim 2\) for \(8\!\leq\!\log(M_{*}/{\rm M}_{\odot})\!<\!9\) and \(\log(M_{*}/{\rm M}_{\odot})\!\geq\!10\) galaxies. The \(d_{\rm fil}\)-independence of star formation sets in at \(z\sim 1\) for low-mass and high-mass galaxies, while star formation in intermediate-mass galaxies does not show any significant \(d_{\rm fil}\)-dependence at any redshift. From our analysis, it can be deduced that the cosmic web environment began affecting star formation activity during the latter stages of, or immediately after, the so-called "cosmic noon" of star formation, when the star formation rate density in the universe peaked (ending around \(z\sim 1.5\); e.g., Madau & Dickinson 2014).
### Central And Satellite Galaxies
We separate our galaxy samples into central and satellite galaxies to investigate how star formation depends on cosmic web environment for both galaxy types. At each redshift, we identify the most massive galaxy in a halo as the central galaxy and the rest as satellite galaxies and then repeat the analysis in Section 3.1. The \(\langle{\rm sSFR}\rangle\)-\(d_{\rm node}\) and \(\langle{\rm sSFR}\rangle\)-\(d_{\rm fil}\) relationships are shown for central and satellite galaxies in Fig. 3. We only include the color contour inset plots in the panels where any significant relationship between cosmic web environment and star formation is seen.
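A minimal sketch of this split is given below; `halo_id` and `mstar` are per-galaxy arrays (e.g., SubhaloGrNr and the stellar masses from the selection step above), and TNG's GroupFirstSub field offers an alternative, catalog-based definition of centrals.

```python
# Flag the most massive galaxy (by stellar mass) in each halo as the central;
# everything else in the halo is treated as a satellite.
import numpy as np

def flag_centrals(halo_id, mstar):
    is_central = np.zeros(mstar.size, dtype=bool)
    for h in np.unique(halo_id):
        members = np.where(halo_id == h)[0]
        is_central[members[np.argmax(mstar[members])]] = True
    return is_central   # satellites are ~is_central
```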
In general, we find that at low redshifts, the star formation in satellite galaxies is much more strongly connected to cosmic web environment than that of central galaxies. At \(z\!<\!1\), there is a very modest rise in \(\langle{\rm sSFR}\rangle\) of low-mass centrals with \(d_{\rm node}\) and in intermediate-mass centrals, there is no dependence of \(\langle{\rm sSFR}\rangle\) on \(d_{\rm node}\). In contrast, star formation is effectively quenched close to nodes in low-mass satellites at \(z\!\leq\!0.5\), and in intermediate-mass satellites at \(z\!=\!0\).
High-mass centrals show a strong correlation between \(\langle{\rm sSFR}\rangle\) and \(d_{\rm node}\) at low redshifts. While the rise in \(\langle{\rm sSFR}\rangle\) with \(d_{\rm node}\) is mostly monotonic at \(z\!=\!0\), we see an upturn in \(\langle{\rm sSFR}\rangle\) at low \(d_{\rm node}\) (\(d_{\rm node}\!\lesssim\!0.1\) Mpc) at \(z\!\sim\!0.5-1\), similar to that found for the full galaxy population at the lowest redshifts (Fig. 2). This upturn is also seen in high-mass satellites at \(z\!=\!0\), suggesting that the elevation of star formation activity of high-mass galaxies very close to nodes is applicable for both centrals and satellites (with the caveat that the errors for the satellite relationships are larger). Beyond \(d_{\rm node}\!\sim\!0.1\) Mpc, there is a smooth rise in \(\langle{\rm sSFR}\rangle\) with \(d_{\rm node}\) for satellites at \(z\!\leq\!0.5\) and for centrals at \(z\!\leq\!1\).
With respect to filaments, there is negligible dependence of \(\langle{\rm sSFR}\rangle\) on \(d_{\rm fil}\) in centrals of any mass across cosmic time. Both low-mass and high-mass satellites are effectively quenched at low \(d_{\rm fil}\) at \(z\!=\!0\), while the rise in \(\langle{\rm sSFR}\rangle\) with \(d_{\rm fil}\) is more modest in intermediate-mass satellites. However, the small-number statistics of the high-mass satellite population prevents us from drawing strong conclusions about their star formation dependence. While satellites appear to drive much of the dependence of star formation on proximity to cosmic web filaments and nodes at low redshifts, there is no statistically significant dependence of star formation on cosmic web environment at \(z\!\geq\!2\) for either centrals or satellites.
### Gas Fraction And Cosmic Web Environment
To further investigate the star formation-cosmic web connection, we examine the available gas content in galaxies relative to nodes and filaments. To this end, we measure the gas fraction,
\[f_{\rm gas}=\frac{M_{\rm gas}}{M_{\rm gas}+M_{*}}\,, \tag{1}\]
which is the ratio of gas mass to the sum of gas and stellar mass bound to a galaxy. We calculate the median gas fraction, \(\langle f_{\rm gas}\rangle\), for all galaxies, centrals and satellites, in the same bins of \(d_{\rm node}\) and \(d_{\rm fil}\) for the same mass ranges and redshifts as above. The \(\langle f_{\rm gas}\rangle\)-\(d_{\rm node}\) and \(\langle f_{\rm gas}\rangle\)-\(d_{\rm fil}\) relationships are presented in Figures 4 and 5, respectively. For clarity of presentation, we only include four redshift bins, \(z=0,1,2\), and \(4\) in these plots. As in Fig. 2, the \(\pm 1\sigma\) bootstrapped error-bars on the medians are shown.
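For reference, Eq. (1) can be evaluated and binned with the same machinery used for sSFR above; `m_gas` here denotes the total gravitationally bound gas mass per galaxy, matching the \(M_{\rm gas}\) defined in Section 2.1.

```python
# Gas fraction of Eq. (1); the result can be passed to the same binned-median
# helper sketched earlier (with f_gas in place of sSFR).
import numpy as np

def gas_fraction(m_gas, m_star):
    """Fraction of a galaxy's bound baryonic (gas + stellar) mass that is gas."""
    m_gas, m_star = np.asarray(m_gas), np.asarray(m_star)
    return m_gas / (m_gas + m_star)
```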
Fig. 4 shows that for the full galaxy population, there is a strong dependence of \(\langle f_{\rm gas}\rangle\) on \(d_{\rm node}\) at lower redshifts. At \(z\!=\!0\), low-mass galaxies at \(d_{\rm node}\!\lesssim\!1\) Mpc are completely devoid of gas, which explains why they are quenched in these environments. This is mostly driven by satellite galaxies, as low-mass centrals only see a drop of a few percent in \(\langle f_{\rm gas}\rangle\) from higher to lower \(d_{\rm node}\). In intermediate-mass galaxies, \(\langle f_{\rm gas}\rangle\) drops by an order of magnitude from the highest to the lowest \(d_{\rm node}\) bin at \(z\!=\!0\), which is commensurate with the \(\sim\!1\) dex drop in \(\langle{\rm sSFR}\rangle\) in Fig. 2. This is again primarily driven by a dramatic decline in gas fraction of satellites while centrals only exhibit a modest decline.
In high-mass galaxies, we find a minimum in \(\langle f_{\rm gas}\rangle\) at \(d_{\rm node}\!\approx\!0.7\) Mpc followed by a steep rise at lower \(d_{\rm node}\). This dramatic upturn in gas fraction helps explain the upturn in star formation at low \(d_{\rm node}\) at low redshifts, but this turnover is a) much more pronounced in \(\langle f_{\rm gas}\rangle\) than in \(\langle{\rm sSFR}\rangle\) and b) persists out to \(z=4\) for \(\langle f_{\rm gas}\rangle\) while it only exists out to \(z\sim 1\) for \(\langle{\rm sSFR}\rangle\). The relationship between \(\langle f_{\rm gas}\rangle\) and \(d_{\rm node}\) in high-mass galaxies is largely dictated by central galaxies, which have large gas fractions close to nodes across cosmic time. This, however, does not result in highly enhanced star formation in low-redshift centrals at low \(d_{\rm node}\), possibly implying a lack of star formation efficiency in these environments - possibly due to central AGN heating the gas in these galaxies and suppressing star formation (see discussion below). High-mass satellites also exhibit a small upturn in \(\langle f_{\rm gas}\rangle\) at \(d_{\rm node}\!\lesssim\!0.1\) Mpc at \(z\!\leq\!3\). Unlike with star formation activity, the gas fraction in all \(\log(M_{*}/{\rm M}_{\odot})\geq 10\) galaxies depends on the proximity to nodes even at \(z=4\).
Proximity to filaments is less strongly correlated with \(\langle f_{\rm gas}\rangle\) than proximity to nodes, as shown in Fig. 5. At low redshifts, a monotonic rise in \(\langle f_{\rm gas}\rangle\) with \(d_{\rm fil}\) in low- and intermediate-mass galaxies is caused mostly by the steep rise in the satellite population, but this dependence disappears with increasing redshift. Centrals of all masses show virtually no \(d_{\rm fil}\)-dependence of \(\langle f_{\rm gas}\rangle\) at any redshift, consistent with the lack of dependence of their star formation activity on distance to filaments.
We also note that the declining availability of gas in galaxies should not immediately result in reduced star formation activity; instead, there should be a time lag between the reduction in gas and the reduction in star formation. This is generally consistent with our results: \(\langle f_{\rm gas}\rangle\) decreases at small \(d_{\rm node}\) and \(d_{\rm fil}\) at higher redshift than does \(\langle\)sSFR\(\rangle\) for any given stellar mass range. In satellites and even high-mass centrals, a correlation between \(\langle f_{\rm gas}\rangle\) and \(d_{\rm node}\) exists out to \(z\) = 4 while star formation is independent of \(d_{\rm node}\) at \(z\) \(\geq\) 2. Alternatively, these results can also be explained by star formation being less efficient further from nodes than closer to nodes at earlier times.
Our results in this section suggest the following. 1) Star formation quenching near cosmic web structures is typically preceded by a scarcity of gas; 2) the gas fraction in satellite galaxies is more significantly affected by cosmic web environment than that in central galaxies and drives the general cosmic web dependence of gas fraction; and 3) at later times, high-mass galaxies, including both satellites and centrals, are more gas-rich near the centers of nodes than at the outskirts, leading to increased star formation closer to nodes.

Figure 4: The median gas fraction, \(\langle f_{\rm gas}\rangle\), as a function of \(d_{\rm node}\), for all galaxies (top row), centrals (middle row), and satellites (bottom row), at redshifts \(z=0\), 1, 2, and 4. The general cosmic web dependence of star formation follows from the available gas supply. Satellites drive the gas fraction trends for lower mass galaxies while both centrals and satellites drive the high-mass trends.
## 4 Discussion
### Physical Interpretations
Here, we interpret our results in terms of physical mechanisms governing the evolution of galaxies and large-scale structure. Perhaps the most puzzling result is that the star formation activity of galaxies in TNG does not depend on their large-scale cosmic web environment at \(z\!\geq\!2\), in contrast with later times when significant dependence occurs. At face value, this seems to suggest a rising importance of quenching driven by environment with cosmic time and, indeed, several recent works have found varying degrees of evidence for a lack of small or large-scale environmental dependence of star formation at higher redshifts - in both observations (e.g., Moutard et al., 2018; Chang et al., 2022; Momose et al., 2022) and simulations (e.g., Xu et al., 2020).
#### 4.1.1 The Cosmic Web Dependence After Cosmic Noon
The low-redshift dependence of star formation activity on \(d_{\rm node}\) and \(d_{\rm fil}\) can generally be explained by the variation of gas fraction with these distances. In lower mass galaxies at low redshifts, the monotonic descent in \(\langle{\rm sSFR}\rangle\) towards nodes and filaments is consistent with the corresponding descent in \(\langle f_{\rm gas}\rangle\). On average, low-mass galaxies that are within several hundred kpc of a node or filament effectively stop forming new stars at \(z\!=\!0\), most likely because they have very little to no gas available to do so, a behavior largely driven by satellites.
This points to a picture where dwarf galaxies that accrete onto the halos of more massive centrals are quenched in overdense environments where their gas supplies are depleted. Dwarf satellites located in galaxy clusters and groups are subjected to harsh gaseous environments dominated by warm and hot gas with high densities and long cooling times. In these environments, a combination of physical processes can act together on the gas reservoirs of dwarf satellites.

Figure 5: Same as Fig. 4, but in bins of \(d_{\rm fil}\). Satellites drive the dependence of \(\langle f_{\rm gas}\rangle\) on \(d_{\rm fil}\) more than centrals.
Gas may be removed by ram pressure stripping when these galaxies move through the group/cluster medium or by gravitational (tidal) interactions between the satellites and the central/other satellites or the halo itself. These processes, while often identified as the likely culprits for gas stripping and quenching of low-mass satellites in clusters/groups, typically act on relatively short timescales of \(\lesssim\)500 Myr (e.g., Bahe and McCarthy, 2015; Marasco et al., 2016). Here, we find that low-mass satellites close to nodes are already quenched at \(z\!=\!0.5\) and have greatly reduced gas fractions at \(z\!=\!1\), meaning that the quenched satellites in the local universe had been quenched much earlier. In fact, Donnari et al. (2021) found that in TNG, a large fraction of \(z\!=\!0\)\(\log(M_{*}/\mathrm{M}_{\odot})\!\lesssim\!10\) satellites in groups and clusters were members of other halos whence they experienced environmental quenching before falling into their final host - a phenomenon dubbed "pre-processing" (see also, e.g., Fujita, 2004; Hou et al., 2014).
AGN feedback from massive central galaxies in groups and clusters may also play an important role in quenching star formation in satellites. The TNG model allows for both "ejective" feedback whereby BHs expel star-forming gas from a galaxy and "preventative" feedback whereby BHs heat up the gas and prevent star formation on longer timescales (Zinger et al., 2020). Both these modes of AGN feedback have been observed in galaxies near and far (e.g., Fabian, 2012; King and Pounds, 2015). In particular, Martin-Navarro et al. (2019) showed that stronger BH feedback produces hotter group/cluster media that makes quenching more efficient in satellites. While beyond the scope of this work, it would be valuable to understand how central BH properties such as mass and accretion rate might relate to the cosmic web dependence of star formation.
Satellites at close filament-centric distances and \(d_{\mathrm{node}}\!>\!1\) Mpc would reside in more intermediate density environments than those at close node-centric distances. In either of these types of environments, fresh gas accretion onto the ISM can be stopped by strangulation/starvation such that star formation quenches over longer timescales of \(\sim\)few Gyr (e.g., Peng et al., 2015; Zinger et al., 2018). In a comprehensive analysis of nearby galaxies, Trussler et al. (2020) found that starvation is likely to be the initial prerequisite for quenching across virtually all masses but the remaining cold ISM gas needs to be heated or ejected to complete the quenching process. Low-mass centrals, on the other hand, exhibit a modest cosmic web dependence on gas fraction and consequently, star formation. For these galaxies, so-called "mass quenching" via internal processes (such as feedback) may dominate over environmental effects (e.g., Peng et al., 2010).
In our investigation, we considered the total content of all gas gravitationally bound to a galaxy, without regard to the physical conditions or location of the gas. For instance, we did not measure the fraction of _cool_ gas, which would in principle be a more direct measure of the gas supply available for star formation. Regardless, we find that the gas fraction of low-mass satellites declines dramatically from \(z\!=\!1\) to \(z\!=\!0\) at small \(d_{\mathrm{node}}\) and to a lesser, but still significant, extent at small \(d_{\mathrm{fil}}\), indicating a lack of accretion from the IGM to the CGM. Many hydrodynamical simulations indeed show that accretion of cold gas from the IGM becomes increasingly inefficient over time (e.g., Angles-Alcazar et al., 2017; Hafen et al., 2020). Furthermore, gas in the CGM may be heated substantially or even ejected by SNe (e.g., Pandya et al., 2022) or AGN (as discussed above) to prevent accretion onto the ISM.
In low-redshift \(\log(M_{*}/\mathrm{M}_{\odot})\!\geq\!10\) galaxies, a minimum in gas fraction and star formation activity occurs at \(d_{\mathrm{node}}\!\sim\!0.2\) Mpc, followed by an unexpected rise at smaller \(d_{\mathrm{node}}\). This upturn in \(f_{\mathrm{gas}}\) close to nodes persists out to \(z\!=\!2\), but only manifests in an analogous upturn in star formation at \(z\!\lesssim\!1\). When we limit our sample to even higher mass galaxies, this effect is further accentuated, implying that the highest mass galaxies are primarily responsible. The effect of enhanced star formation very close to nodes is stronger in satellites than in centrals at \(z\!=\!0\) while the converse is true at \(z\!=\!0.5-1\). The fact that both \(\langle\mathrm{sSFR}\rangle\) and \(\langle f_{\mathrm{gas}}\rangle\) are lowest at \(d_{\mathrm{node}}\!\sim\!0.2\) Mpc implies that massive galaxies falling into groups/clusters from the outskirts are more gas-poor and passive relative to galaxies at the center.
It is conceivable that some/much of the gas removed from low-mass satellites in rich groups and clusters ends up in higher mass galaxies, enabling the massive galaxies to form stars at higher rates near the centers of these halos. The cores of many groups and clusters have been observed to be abundant in cold gas, which may temporarily trigger star formation near the center (e.g., McDonald et al., 2012; Olivares et al., 2019). However, this gas is also hypothesized to feed central AGN activity and eventually curtail star formation (see Donahue and Voit, 2022, and references therein). Heating from the central AGN could explain why the rise in star formation at small \(d_{\mathrm{node}}\) is not as dramatic as the rise in gas fraction in high-mass galaxies.
The enhancement of star formation in dense environments is not typically observed in statistical studies of the cosmic web-galaxy connection (e.g., Kraljic et al., 2018; Winkel et al., 2021). But there is evidence - both in observations (e.g., Roediger et al., 2014) and in simulations (e.g., Nelson et al., 2018) - that galaxies in groups/clusters enjoy brighter episodes of star formation via compression of gas from ram pressure, mergers, or other processes, a phenomenon sometimes called "rejuvenation." However, these events are rare in TNG, with only \(10\%\) of \(\log(M_{*}/\mathrm{M}_{\odot})\!>\!11\) galaxies and \(6\%\) of all galaxies at \(z\!=\!0\) ever having experienced them (Nelson et al., 2018). An analysis of satellite galaxies by Martin-Navarro et al. (2021) showed that AGN outflows can clear out the CGM of massive halos, which reduces ram pressure and preserves star formation in satellites along the direction of the outflows (the minor axis of the central). These phenomena of _positive_ AGN feedback can potentially boost star formation in dense environments such as those close to nodes.
It is also possible that the high-density upturn in star formation is a result of some additional mechanism in the simulations funnelling too much gas into galaxies, over-cooling the gas, or otherwise reducing the efficiency of quenching at the highest density environments. Donnari et al. (2021) report that TNG galaxies in dense environments have diverse histories and quenching pathways that may complicate the interpretation of how and when they quench. Moreover, the AGN-driven gas expulsion in TNG is known to be so efficient that there are very few galaxies with intermediate sSFR (i.e., Green Valley galaxies; e.g., Schawinski et al., 2014), creating tension with observations (Terrazas et al., 2020).
#### 4.1.2 No Cosmic Web Dependence Before Cosmic Noon?
There is now a growing body of evidence suggesting that cosmological accretion shocks from the formation of cosmic web structures, similar to those around massive halos, can affect galaxy formation. Birnboim et al. (2016) showed that in filaments with a specific linear mass density, the accretion shocks are unstable. These structures can efficiently siphon cool gas into \(10\,\lesssim\,\log(M_{200,\mathrm{c}}/\mathrm{M}_{\odot})\,\lesssim\,13\) halos at \(z=3\) and \(12\,\lesssim\,\log(M_{200,\mathrm{c}}/\mathrm{M}_{\odot})\,\lesssim\,15\) halos at \(z=0\) (see their Fig. 5). According to the stellar-to-halo-mass relations at these redshifts (e.g., Behroozi et al., 2019), this means that unstable filaments can potentially enhance star formation in galaxies of virtually all masses we studied at higher redshifts, while at lower redshifts only the most massive galaxies would see an increase in star formation via this channel. This phenomenon is a possible pathway for early galaxies close to filaments and nodes to have their sSFR elevated to levels comparable to those far from filaments and nodes.
This explanation necessitates an environmental dependence of _overall star formation activity_ at high \(z\) instead of specifically _quenching_. The net trend of constant sSFR with distance from the cosmic web could be naively interpreted as quenching processes being environment-independent at high \(z\). As noted in Section 3.3, we find evidence of star formation in satellites close to nodes being more efficient than that in galaxies further away at early times (\(z\geq 2\)). A plausible scenario for this is that gas is more efficiently channelled into the centers of nodes, and eventually galaxies, via cold streams at high redshift (e.g., Dekel et al., 2009).
Cosmological accretion shocks can also suppress star formation in galaxies. Zinger et al. (2018) showed that accretion shocks at the outskirts of galaxy clusters can quench satellites, which likely impacts galaxies near nodes in our analysis. In TNG, Li et al. (2023) found that shock-induced stripping of the ISM and CGM can quench low-mass satellites inside clusters at \(z\,<\,\)\(0.11\). Recently, Pasha et al. (2022) found that \(5.5\,<\,\log(M_{*}/\mathrm{M}_{\odot})\,<\,8.5\) central galaxies at \(z\,=\,\)\(2\,-\,5\) can be quenched by shock-heated cosmic sheets (which eventually collapse into filaments and nodes; e.g., Bond et al. (1996)). These shocks directly raise the ambient gas temperature in the vicinity of the sheets and suppress gas accretion and star formation in surrounding galaxies.
The impact of accretion shocks in filaments and nodes on galaxy quenching, as a function of both stellar mass and redshift, may be central to interpreting the results of this paper and therefore deserves detailed investigation. In a follow-up study, we will address this problem by analyzing the gaseous conditions of filaments and nodes - with particular emphasis on accretion shock signatures - in tandem with properties of the galaxies residing within them across cosmic time. This analysis will also allow us to characterize filaments and nodes in more detail and account for the fact that not all filaments or nodes will have the same effect on galaxy formation (e.g., Galarraga-Espinosa et al., 2020 found short and long filaments in TNG to be statistically different populations).
#### 4.1.3 Other Important Physical Considerations
Angular momentum is another important aspect of galaxy formation which may shed additional light on how quenching is affected by the cosmic web. From their analysis of quenching timescales in TNG, Walters et al. (2022) suggested that low angular momentum gas accretion leads to galaxies quenching faster than high angular momentum accretion. In the cosmic web framework, galaxies form in the vorticity-rich regions of filaments, acquire angular momentum, and drift to the nodes (e.g., Dubois et al., 2014; Codis et al., 2015). Simulations have predicted for many years that at \(z\,\gtrsim\,\)\(1.5\), gas and angular momentum are funnelled through cold filamentary streams into the centers of galaxies (e.g., Dekel et al., 2009; Pichon et al., 2011). Over time, as these streams disappear due to heating or other processes, the efficiency of galaxy formation at the centers of filaments and nodes may also decline, potentially explaining the difference in star formation activity in these regions between low and high redshift. Additionally, galactic properties such as mass and sSFR have been found to be correlated with the acquisition of angular momentum from the cosmic web (e.g., Kraljic et al., 2019; Welker et al., 2020). Thus, a complete understanding of how the cosmic web affects quenching needs to account for angular momentum acquisition in galaxies in tandem with proximity to cosmic web structures.
Many of the relationships between star formation and cosmic web environment may result from the assembly of DM halos. Subhalo abundance matching predictions from \(\Lambda\)CDM cosmology are found to agree with observed SDSS galaxy distributions, implying that the local density dependence of galaxy properties stems from the corresponding density dependence of halo properties (Dragomir et al., 2018). In the Bolshoi-Planck simulations, Lee et al. (2017) found that for \(\log(M_{200,\mathrm{c}}/\mathrm{M}_{\odot})\,\lesssim\,12\) halos, halo accretion is higher in low-density environments at \(z\lesssim 1\) and in high-density environments at \(z\gtrsim 1\). However, some \(N\)-body simulations have shown that DM halo properties are independent of cosmic web location at fixed overdensities (e.g., Goh et al., 2019). A detailed analysis of the dependence of halo mass accretion with cosmic web environment in TNG is necessary to disentangle the effect of halo mass growth from baryonic effects in determining the galaxy quenching-cosmic web connection.
### Other Caveats
We consider certain other aspects of our methodology that may affect the robustness of our results as well as the conclusions we draw. The first is how numerical resolution in the
simulation may affect our results. Galarraga-Espinosa et al. (2021) showed that despite the \(\sim\)8 times difference in resolution between TNG300-1 and TNG300-2, there are only minor differences in the distribution (including the DisPerSE reconstruction) and properties of filaments. The large scales of the cosmic web are likely to be well-resolved with any of the TNG runs, but the smaller scales of galaxy formation are more sensitive to resolution. Thus, it would be interesting to compare our results for TNG100-1 with those of TNG50-1, which has \(\sim\)16 times the particle mass resolution of TNG100-1 (e.g., Nelson et al., 2019, 2020).
The input physics model is another potential source of uncertainty for theoretical galaxy evolution studies. Galarraga-Espinosa et al. (2020) found that different baryonic physics implemented in different simulations result in somewhat different matter distribution around filaments but that gravity is still the dominant driver. Xu et al. (2020) investigated the sSFR of galaxies in filaments, nodes, sheets, and voids in the EAGLE simulation which uses somewhat different hydrodynamics and feedback prescriptions from the TNG model (see Schaye et al., 2015). They found that at \(z<1\), galaxies with \(\log(M_{*}/\mathrm{M}_{\odot})\lesssim 10.5\) are less star-forming in nodes than other cosmic web environments, while for more massive galaxies there is virtually no cosmic web dependence, the latter finding being at odds with our results. At \(z>1\), they found no statistical dependence of sSFR on the cosmic web environment, consistent with our findings. In a separate study, Rosas-Guevara et al. (2022) found that the star-forming fraction of \(\log(M_{*}/\mathrm{M}_{\odot})>9\) galaxies in EAGLE decreases with distance to the nearest void at \(z=0\). It would be interesting to apply our methodology to investigate the evolving cosmic web dependence of star formation in other hydrodynamical cosmological simulations such as SIMBA (Dave et al., 2019) and Horizon-AGN (Dubois et al., 2014). Such comparisons might help illuminate the effect of uncertain baryonic processes such as AGN feedback on the relationship between galaxy formation and the cosmic web.
Another crucial check on our results is the cosmic web reconstruction itself. As mentioned in Section 2.2, we experimented with DisPerSE parameter choices such as persistence and smoothing of the filamentary skeleton. The latter did not have any significant effect on our results, and varying the former affected the frequency of identified structures but did not affect any qualitative conclusions. Overall, we consider our results to be robust to parameter choices. There are several other cosmic web reconstruction techniques that have been employed for cosmic web studies, which have advantages and disadvantages relative to the DisPerSE framework (see Libeskind et al., 2018, for a detailed comparison of many of these methods). We are currently applying a new state-of-the-art cosmic web reconstruction algorithm called the Monte Carlo Physarum Machine (MCPM), inspired by the _Physarum polycephalum_ (slime mold) organism (Elek et al., 2021, 2022), to compare to the local density estimation and global cosmic web characterization from DisPerSE. This method produces continuous cosmic matter densities (as opposed to discrete DTFE densities at the locations of galaxies) and has been applied successfully to both theoretical and observational datasets (e.g., Burchett et al., 2020; Simha et al., 2020; Wilde et al., 2023).
We also assess the importance of local galaxy overdensity in shaping the star formation-cosmic web connection. We repeat our analyses in Section 3 by only considering galaxies with local DTFE galaxy overdensity (as computed by DisPerSE) within \(\pm 1\sigma\) of the mean. We find that the resulting relationships with respect to \(d_{\mathrm{node}}\) and \(d_{\mathrm{fil}}\) look strikingly similar to those we report without filtering out galaxies by overdensity, implying that the effect of cosmic web environment on star formation persists beyond just the highest density regions of the universe. However, we stress that local overdensity and global cosmic web environment are necessarily related to each other and it is therefore not trivial to disentangle the effects of one from the other. We defer a detailed characterization of the dependence of \(d_{\mathrm{fil}}\) and \(d_{\mathrm{node}}\) on overdensity across different redshifts in TNG100 to a future work (see Malavasi et al., 2022, for an in-depth mapping of overdensity to cosmic web proximity in TNG300).
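The overdensity control test described above amounts to a simple mask applied before re-running the binned statistics; the sketch below assumes a per-galaxy DTFE overdensity array `delta` taken from the DisPerSE outputs, and applies the \(\pm 1\sigma\) window to the raw values rather than their logarithm.

```python
# Keep only galaxies whose local DTFE overdensity lies within one standard
# deviation of the sample mean, then recompute the distance-binned medians.
import numpy as np

def within_one_sigma(delta):
    mu, sigma = np.mean(delta), np.std(delta)
    return (delta >= mu - sigma) & (delta <= mu + sigma)
```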
Finally, in our interpretations, we neglected the effect of pseudo-evolution of filaments and nodes, i.e., the evolution of the reference density (in our case, the DTFE mean density) instead of a true _physical_ density. Such pseudo-evolution is known to strongly drive the mass-evolution in DM halos, especially at lower redshifts (Diemer et al., 2013), and we defer a detailed investigation of this effect to future work.
### Testing Predictions with Observations
The predictions presented herein from the TNG100 simulation establish clear objectives for observational studies. First, we identified a point in cosmic time where galaxies' star formation activity begins to depend on their location relative to the large-scale cosmic web environment. Confronting this prediction with observations will necessitate wide-field galaxy surveys capable of characterizing the large-scale structure over a large range of redshifts, out to at least \(z=2\). Second, a common theme we observed in both sSFR and gas fraction was an increase at small node-centric distances for high-mass galaxies. This will require both extensive survey data to find these galaxies and, perhaps even more challenging, measurements of their gas contents. Spectroscopic galaxy surveys both underway and planned for the next several years should make serious headway towards at least the first element of this challenging observational experiment.
The current gold standard for wide-field spectroscopic surveys is SDSS, which can provide the lowest redshift anchor point for such a comparison. The quoted SDSS spectroscopic completeness limit of \(m_{r}=17.7\) would correspond to a redshift limit of \(z\sim 0.01\) for the lowest mass galaxies studied here (\(10^{8}\ \mathrm{M}_{\odot}\)). Even at \(z\sim 0.1\), SDSS is only complete to \(\sim 10^{10}\ \mathrm{M}_{\odot}\), covering the most massive bin we study. Thus, SDSS, in principle, is capable of yielding measurements comparable with the dark blue data points in Figures 2 and 3. Although an independent analysis of observational datasets with our methodology is beyond the scope of
this paper, we refer the reader to the work of Kuutma et al. (2017), Crone Odekon et al. (2018), and Winkel et al. (2021), who explore similar relationships with SDSS. Also of note is Kraljic et al. (2018) who employ the Galaxy and Mass Assembly (GAMA) survey, which goes two magnitudes deeper than SDSS, albeit over a much smaller volume. Still, this does not push completeness to the \(z>1\) transition point in cosmic web dependence we report here.
Constraining the higher redshifts will be more difficult, although the various Dark Energy Spectroscopic Instrument (DESI) surveys should enable cosmic web reconstructions at intermediate redshifts. Initial data and results from the Survey Validation phase are beginning to be released now, showing promising prospects for mapping the large-scale structure to \(z\sim 0.5\) for the Bright Galaxy Survey, \(z\sim 1.1\) for the Luminous Red Galaxies (LRGs), \(z\sim 1.6\) for Emission Line Galaxies (ELGs), and possibly beyond (Lan et al., 2023, and references therein). However, each of these samples is likely to contain highly biased tracers of the underlying structure; e.g., the LRGs are by definition passive galaxies and will preferentially reside in the most massive halos, likely tracing nodes. Conversely, the ELGs, being vigorously star-forming, might bias against these very environments. Nevertheless, neither of these samples will suitably represent the full diversity in star formation exhibited by the general population. Deep follow-up surveys with more agnostic selection criteria will be necessary for a fair comparison with our results.
The Subaru Prime Focus Spectrograph (PFS) also offers great promise for mapping out the galaxy-cosmic web connection at higher redshifts (Takada et al., 2014). In particular, the PFS Galaxy Evolution program will observe up to half a million galaxies at redshifts \(0.7\lesssim z\lesssim 7\) (Greene et al., 2022). The largest survey of this program is expected to yield \(>10^{5}\) continuum-selected galaxies down to a stellar mass limit of \(\log(M_{*}/\mathrm{M}_{\odot})\approx 10.5\) over a comoving survey volume of \(\sim 0.1\)\(\mathrm{Gpc}^{3}\) (\(\sim\)100 times the TNG100 volume) at \(0.7\lesssim z\lesssim 2\). Stellar masses, SFRs, and gas properties will be measured for the vast majority of these galaxies. This sample will be complemented by a smaller number of Lyman break galaxies (LBGs) and Lyman-alpha emitters (LAEs) out to \(z\sim 7\), which would be more biased tracers of the cosmic web, as discussed above.
Spectroscopic surveys with _JWST_ and, eventually, the _Nancy Grace Roman Space Telescope_ will yield galaxy datasets ripe for placing in context with the cosmic web mapped by DESI and Subaru PFS. _Roman_, which will map 1700 deg\({}^{2}\) of the sky at infrared wavelengths via the High Latitude Spectroscopic Survey (Wang et al., 2022), should reveal the cosmic web over scales of \(\sim\)1 Gpc as well as the galaxies within it to \(z\sim 2\). In the shorter term, _JWST_, through programs such as JADES (Cameron et al., 2023), will yield galaxy spectra to \(z>5\), albeit over much smaller fields of view (the _JWST_ Micro-shutter Assembly will map scales \(\sim 1\) Mpc across in a single pointing). An amalgamation of several such deep fields will be necessary to mitigate cosmic variance.
## 5 Conclusion
In this study, we investigated the IllustrisTNG simulations to understand how the star formation activity of galaxies depends on their cosmic web environment. We used all \(\log(M_{*}/\mathrm{M}_{\odot})\geq 8\) galaxies to reconstruct the cosmic web in the TNG100 snapshots using the DisPerSE framework. We measured the median sSFR and median \(f_{\mathrm{gas}}\) of galaxies as functions of distance to the nearest cosmic web node (\(d_{\mathrm{node}}\)) and filament spine (\(d_{\mathrm{fil}}\)). Our main results are as follows:
1. The \(\langle\mathrm{sSFR}\rangle\) of galaxies at any mass only depends on \(d_{\mathrm{node}}\) or \(d_{\mathrm{fil}}\) at redshifts \(z\lesssim 2\); _the median star formation is independent of cosmic web environment at \(z\geq 2\)_. This holds true also for central and satellite galaxies separately.
2. In \(\log(M_{*}/\mathrm{M}_{\odot})<10\) galaxies, \(\langle\mathrm{sSFR}\rangle\) increases monotonically with \(d_{\mathrm{node}}\) at \(z\leq 1\), with \(8\leq\log(M_{*}/\mathrm{M}_{\odot})<9\) galaxies being completely quenched at \(d_{\mathrm{node}}<1\) Mpc at \(z\leq 0.5\). \(\langle\mathrm{sSFR}\rangle\) has a shallower increase with \(d_{\mathrm{fil}}\) at these redshifts. These trends are almost entirely driven by satellites.
3. In \(\log(M_{*}/\mathrm{M}_{\odot})\geq 10\) galaxies, the \(\langle\mathrm{sSFR}\rangle\)-\(d_{\mathrm{node}}\) relationship inverts at \(d_{\mathrm{node}}\lesssim 0.2\) Mpc up to \(z=1\), while the \(\langle\mathrm{sSFR}\rangle\)-\(d_{\mathrm{fil}}\) relation does not. The \(\langle\mathrm{sSFR}\rangle\)-\(d_{\mathrm{node}}\) inversion is driven by both satellites and centrals, but the \(\langle\mathrm{sSFR}\rangle\)-\(d_{\mathrm{fil}}\) relationship is due to satellites.
4. Most of these star formation-cosmic web relationships can be explained by the cosmic web dependence of gas fraction in galaxies, although there is evidence of \(\langle f_{\mathrm{gas}}\rangle\) depending more strongly on cosmic web environment than \(\langle\mathrm{sSFR}\rangle\) in some cases.
Our results point to a picture where the influence of the cosmic web environment on quenching galaxies is first established at \(z\sim 2\). In the last \(\sim\)10 Gyr, low-mass dwarf satellites are quenched by their star-forming gas supplies being depleted either on short timescales (e.g., via ram pressure stripping or outflows) or on longer timescales (e.g., via starvation), while star formation in low-mass centrals is far less affected by cosmic web environment. At this epoch, high-mass galaxies at the centers of nodes are more gas-rich and star-forming than their counterparts at the outskirts, which could be due to temporary rejuvenation events, positive AGN feedback, and/or a consequence of the TNG model itself. In the earlier universe (\(>\)10 Gyr ago), cosmic web structures likely aided star formation more than they suppressed it, possibly via unstable filaments feeding cold gas to galaxies or cold streams efficiently funnelling initially high angular momentum gas to the central regions of filaments and nodes.
In a follow-up study, we will investigate how the gaseous physical conditions of filaments and nodes affect galaxy formation in TNG100, in particular how accretion shocks around filaments and nodes affect star formation. Furthermore, we will compare the cosmic web reconstruction from DisPerSE with that from the novel MCPM algorithm (Elek et al., 2021, 2022) to obtain more fine-grained insights into the global and local environmental dependence of star formation across cosmic time. The results of this work provide
important predictions to test against ongoing large spectroscopic surveys such as SDSS and DESI, as well as those ongoing and planned with Subaru PFS, _JWST_ and _Roman_.
We are very grateful to N. Luber and Z. Edwards for help with setting up DisPerSE. We thank the anonymous referee for helpful comments that improved the quality of this manuscript. We thank attendees of the 2022 Santa Cruz Galaxy Workshop and the 2023 KITP Cosmic Web Conference, including F. van den Bosch, J. Woo, H. Aung, J. Powell, C. Pichon, U. Kuchner, C. Welker, S. Simha, K-G. Lee, and R. Momose, for stimulating and interesting conversations on this work. FH, JNB, and AA are supported by the National Science Foundation LEAPS-MPS award \(\#2137452\). OE is supported by an incubator fellowship of the Open Source Program Office at UC Santa Cruz funded by the Alfred P. Sloan Foundation (G-2021-16957). DN is supported by NSF (AST-2206055) and NASA (80NSSC22K0821 & TM3-24007X) grants.
|
2305.03253 | VicunaNER: Zero/Few-shot Named Entity Recognition using Vicuna | Large Language Models (LLMs, e.g., ChatGPT) have shown impressive zero- and
few-shot capabilities in Named Entity Recognition (NER). However, these models
can only be accessed via online APIs, which may cause data leak and
non-reproducible problems. In this paper, we propose VicunaNER, a zero/few-shot
NER framework based on the newly released open-source LLM -- Vicuna. VicunaNER
is a two-phase framework, where each phase leverages multi-turn dialogues with
Vicuna to recognize entities from texts. We name the second phase as
Re-Recognition, which recognizes those entities not recognized in the first
phase (a.k.a. Recognition). Moreover, we set entity correctness check dialogues
in each phase to filter out wrong entities. We evaluate VicunaNER's zero-shot
capacity on 10 datasets crossing 5 domains and few-shot capacity on Few-NERD.
Experimental results demonstrate that VicunaNER achieves superior performance
in both shot settings. Additionally, we conduct comprehensive investigations on
Vicuna from multiple perspectives. | Bin Ji | 2023-05-05T02:46:22Z | http://arxiv.org/abs/2305.03253v1 | # VicunaNER: Zero/Few-shot Named Entity Recognition using Vicuna
###### Abstract
Large Language Models (LLMs, e.g., ChatGPT) have shown impressive zero- and few-shot capabilities in Named Entity Recognition (NER). However, these models can only be accessed via online APIs, which may cause data leak and non-reproducible problems. In this paper, we propose VicunaNER, a zero/few-shot NER framework based on the newly released open-source LLM - Vicuna. VicunaNER is a two-phase framework, where each phase leverages multi-turn dialogues with Vicuna to recognize entities from texts. We name the second phase as _Re-Recognition_, which recognizes those entities not recognized in the first phase (a.k.a. _Recognition_). Moreover, we set entity correctness check dialogues in each phase to filter out wrong entities. We evaluate VicunaNER's zero-shot capacity on 10 datasets crossing 5 domains and few-shot capacity on Few-NERD. Experimental results demonstrate that VicunaNER achieves superior performance in both shot settings. Additionally, we conduct comprehensive investigations on Vicuna from multiple perspectives.
## 1 Introduction
Named Entity Recognition (NER) serves as a precondition for many downstream Natural Language Processing (NLP) tasks such as relation extraction. Deep supervised NER methods require extensive entity annotations and are hard to transfer across domains. Zero- and few-shot NER targets this scenario: it calls for zero or only a few annotated examples and is capable of transferring across domains.
Prototypical networks have been widely investigated for zero/few-shot NER, such as StructShot Yang and Katiyar (2020), CONTaiNER Das et al. (2022), ESD Wang et al. (2022), DecomMetaNER Ma et al. (2022), and EP-Net Ji et al. (2022). However, these networks still require fine-tuning datasets of thousands or tens of thousands of examples.
Brown et al. (2020) demonstrate that scaling up language models significantly improves task-agnostic, few-shot NLP task performance, and they propose GPT-3, the well-known milestone of Large Language Models (LLMs). GPT-3 achieves promising performance in diverse NLP tasks without any gradient updates or fine-tuning. Inspired by GPT-3, numerous LLMs are pre-trained or fine-tuned such as InstructGPT Ouyang et al. (2022), Chinchilla Hoffmann et al. (2022), ChatGPT1, PaLM Driess et al. (2023) and GPT-4 OpenAI (2023). Based on these LLMs, zero- and few-shot NER has been comprehensively investigated. For example, Jimenez Gutierrez et al. (2022) explore biomedical few-shot NER with GPT-3. And based on ChatGPT, He et al. (2023) investigate document-level few-shot NER; Hu et al. (2023) conduct research on zero-shot clinical NER; Wei et al. (2023) propose ChatIE to explore zero-shot information extraction including NER. Although these LLM-based studies achieve strong performance, and sometimes even reach competitiveness with prior best prototypical networks, the LLMs can only be accessed through online APIs, which causes the following problems:
Footnote 1: [https://chatch.openai.com/chat](https://chatch.openai.com/chat)
1. Data leak problem. For example, sensitive data from Samsung was leaked to ChatGPT.2
Footnote 2: [https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/](https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/)
2. Non-reproducibility problem: the LLMs are fine-tuned constantly, but the details are not publicly available (Tu et al., 2023).
Fortunately, some open-source LLMs are available to the public, such as T5 Raffel et al. (2020), OPT Zhang et al. (2022), GLM Zeng et al. (2023), BLOOM Workshop et al. (2023), and LLaMA Touvron et al. (2023). In particular, LLaMA attracts much research attention because: (1) it can be deployed on local servers; and (2) it has evolved into many powerful variants via fine-tuning, such as Alpaca
(Taori et al., 2023), Baize Xu et al. (2023), Koala Geng et al. (2023), and Vicuna Chiang et al. (2023).
With the goal of exploring zero- and few-shot NER approaches free of such API restrictions, we propose VicunaNER, a Vicuna-based framework that can conduct both zero- and few-shot NER. VicunaNER is composed of two phases, which are known as _Recognition_ and _Re-Recognition_, respectively.
1. _Recognition_ consists of multi-turn dialogues with Vicuna. The first turn prompts Vicuna to recognize entities from texts. For each of the recognized entities, we use one dialogue turn to prompt Vicuna to check its correctness. After doing this, _Recognition_ generates a list of entities for each text.3 However, when analyzing the entity results, we observe that _Recognition_ fails to recognize numerous entities, which motivates us to add the _Re-Recognition_ phase. Footnote 3: It is also possible that no entity is recognized.
2. _Re-Recognition_ is also composed of multi-turn dialogues with Vicuna. Given a text and its entities recognized in _Recognition_, Vicuna is prompted to recognize the previously unrecognized entities in the first dialogue turn. Then Vicuna is prompted to check the correctness of the newly recognized entities in the subsequent dialogue turns.
Entities recognized in the two phases are merged as the NER results.
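To make the two-phase control flow concrete, the following is a minimal sketch of how the dialogues could be orchestrated. The `chat` and `parse_entities` helpers, the prompt wording, and the yes/no check format are illustrative assumptions on our part rather than the exact prompts used by VicunaNER; `chat` stands for any routine that sends a prompt to a locally deployed Vicuna instance and returns its reply.

```python
from typing import Callable, List, Optional, Tuple

Entity = Tuple[str, str]  # (entity type, entity mention)

def one_phase(text: str,
              chat: Callable[[str], str],
              parse_entities: Callable[[str], List[Entity]],
              already_found: Optional[List[Entity]] = None) -> List[Entity]:
    """One VicunaNER phase: a recognition turn followed by one
    correctness-check turn per candidate entity."""
    if already_found:
        # Re-Recognition: list the entities from the first phase so that
        # Vicuna is guided to look only for the entities it missed.
        listed = "; ".join(f"{t}: {m}" for t, m in already_found)
        prompt = (f"The entities [{listed}] were already recognized. "
                  f"List any other named entities in the text: {text}")
    else:
        # Recognition: ask for all entities in the text.
        prompt = f"List the named entities (type and mention) in the text: {text}"
    candidates = parse_entities(chat(prompt))

    confirmed = []
    for etype, mention in candidates:
        # Entity correctness check dialogue, one turn per candidate entity.
        answer = chat(f"Is '{mention}' a correct {etype} entity in the text? Answer yes or no.")
        if answer.strip().lower().startswith("yes"):
            confirmed.append((etype, mention))
    return confirmed

def vicuna_ner(text, chat, parse_entities):
    first = one_phase(text, chat, parse_entities)                        # Recognition
    second = one_phase(text, chat, parse_entities, already_found=first)  # Re-Recognition
    return first, second  # merged as described in Section 3.3
```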
We evaluate VicunaNER's zero-shot capacity on 10 datasets spanning 5 domains and its few-shot capacity on Few-NERD Ding et al. (2021). Experimental results show that: (1) under the zero-shot setting, VicunaNER outperforms the ChatGPT-based ChatIE on xxx out of the xxx datasets, even though ChatGPT is more powerful than Vicuna; (2) under the few-shot setting, VicunaNER consistently surpasses the listed baselines, including LLM-based frameworks and prototypical networks. Additionally, we conduct comprehensive investigations to disclose the drawbacks of Vicuna, providing guidance for fine-tuning it in the future.
### Re-Recognition
In principle, _Recognition_ already completes a whole round of zero/few-shot NER, and we could terminate the NER process after obtaining the entity list. However, when analyzing the entities recognized in _Recognition_, we find that Vicuna fails to recognize numerous entities. Hence, we design the _Re-Recognition_ phase to recover those unrecognized entities.
As shown in Figure 1, _Re-Recognition_ consists of multi-turn dialogues with Vicuna, similar to _Recognition_. The only difference lies in the prompts used in the first-turn dialogue of the two phases. To be specific, we add to the prompt used in this phase descriptions of the entities recognized in _Recognition_, as Figure 1-3 shows. The purpose of doing this is to guide Vicuna to recognize only those entities that were not recognized before. We also use a list to manage entities recognized in _Re-Recognition_, as the "**Entity list 2**" in Figure 1 shows.
Although the prompt is designed to ask Vicuna to solely recognize those unrecognized entities, we find that Vicuna still recognizes some already recognized entities, such as the "(Location, Yate)" shown in Figure 1. We attribute it to the fact that Vicuna has limitations in ensuring the factual accuracy of its outputs (Chiang et al., 2023).
For better comprehension, we report real-world prompt examples of this phase in Appendix A.
### Entity Merging
As aforementioned, we obtain one entity list in each of the two phases, but there may be entities that overlap between the two entity lists. Hence, we remove these overlapping entities when merging the two lists to obtain the NER results, as shown in Figure 1.
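As a concrete illustration of this step, the sketch below keeps the Recognition entities and appends only those Re-Recognition entities that are not already present; treating overlap as an exact (type, mention) match is our assumption, since the matching rule is not spelled out above.

```python
def merge_entity_lists(first, second):
    """Merge the Recognition and Re-Recognition entity lists,
    dropping entities recognized in both phases."""
    merged = list(first)
    seen = set(first)
    for entity in second:
        if entity not in seen:  # skip overlapping entities such as ("Location", "Yate")
            merged.append(entity)
            seen.add(entity)
    return merged
```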
### Discussion
#### 3.4.1 Comparison of VicunaNER and ChatIE
Figure 1: The architecture of VicunaNER. It is composed of two phases, namely _Recognition_ and _Re-Recognition_, and each phase consists of multi-turn dialogues with Vicuna. We use a zero-shot NER example to describe the workflow. Texts in the gray background are prompts; entity lists in the red background manage entities recognized by the first-turn dialogue in each phase; entity lists in the green background manage entities recognized in each phase; the entity list in the blue background manages the entities recognized by VicunaNER.

Concurrent with our work, ChatIE is a ChatGPT-based framework that can conduct zero-shot NER, and it also adopts a two-phase architecture. We claim that our VicunaNER is quite different from ChatIE in the following aspects:
1. Our VicunaNER depends on the open-source Vicuna, while ChatIE is built upon the more powerful but restricted ChatGPT API.
2. Our VicunaNER conducts a whole round of NER in each of its two phases, while ChatIE solely extracts entity types in its first phase and recognizes entities according to the extracted types in its second phase.
3. Our VicunaNER can conduct both zero- and few-shot NER tasks, while ChatIE is only designed to perform the zero-shot NER task.
#### 3.4.2 Are More Re-Recognition Phases Necessary?
It seems that adding more _Re-Recognition_ phases could yield better zero/few-shot NER performance. However, we demonstrate that adding more than one _Re-Recognition_ phase only brings tiny performance improvements while greatly increasing model inference time. We conduct experimental investigations on the number of _Re-Recognition_ phases in Section xxx
#### 3.4.3 Entity Form
Following the established line of work [20], we do not prompt Vicuna to output entity locations because it is hard for LLMs to output exact locations. This may cause ambiguity when an entity occurs more than once in a given text but VicunaNER recognizes only some of its occurrences.
|
2301.08829 | Nanofluidics at the crossroads | Nanofluidics, the field interested in flows at the smallest scales, has grown
at a fast pace, reaching an ever finer control of fluidic and ionic transport at
the molecular level. Still, artificial pores are far from reaching the wealth
of functionalities of biological channels that regulate sensory detection,
biological transport and neurotransmission - all while operating at energies
comparable to thermal noise. Here, we argue that artificial ionic machines can
be designed by harnessing the entire wealth of phenomena available at the
nanoscales and exploiting techniques developed in various fields of physics.
As they are generally based on solid-state nanopores, rather than soft
membranes and proteins, they should in particular aim at taking advantage of
their specific properties such as their electronic structure or their ability
to interact with light. These observations call for the design of new ways of
probing nanofluidic systems. Nanofluidics is now at the crossroads, there are
new avenues to build complex ionic machines and this may allow to develop new
functionalities inspired by Nature. | Paul Robin, Lydéric Bocquet | 2023-01-20T23:47:41Z | http://arxiv.org/abs/2301.08829v2 | # Nanofluidics at the crossroads
###### Abstract
Nanofluidics, the field interested in flows at the smallest scales, has grown at a fast pace, reaching an ever finer control of fluidic and ionic transport at the molecular level. Still, artificial pores are far from reaching the wealth of functionalities of biological channels that regulate sensory detection, biological transport and neurotransmission - all while operating at energies comparable to thermal noise. Here, we argue that artificial ionic machines can be designed by harnessing the entire wealth of phenomena available at the nanoscales and exploiting techniques developed in various fields of physics. As they are generally based on solid-state nanopores, rather than soft membranes and proteins, they should in particular aim at taking advantage of their specific properties such as their electronic structure or their ability to interact with light. These observations call for the design of new ways of probing nanofluidic systems. Nanofluidics is now at the crossroads: there are new avenues to build complex ionic machines, and this may allow the development of new functionalities inspired by Nature.
## I Introduction
Nanofluidic transport is ubiquitous in Nature, and living organisms rely on membranes with complex properties to interact with their environment. They are notably involved in sensory detection (audition, touch, thermoception), osmotic regulation and neurotransmission [1; 2; 3; 4; 5; 6; 7]. Similarly, cells use active ion pumps to create transmembrane concentration gradients [8] and decouple their chemical composition from that of the extracellular medium - in biological terms, thermodynamic equilibrium is as good as death.
In these few examples, the functions of biological membranes are governed by specific ion channels that react to certain external stimuli, opening or closing depending on conditions [9; 10]. They have long drawn considerable attention, as biomimetic membranes with similar properties would find use in multiple technological applications, from water desalination to the production of hydrogen or the development of iontronic machines.
The properties of biological systems, however, emerge from a subtle balance of 'soft' processes that operate over energy scales up to a few times the thermal agitation \(k_{B}T\), as determined by the structural and chemical properties of the proteins of ion channels. The resulting excitability of biological membranes is in sharp contrast with 'hard' condensed matter, where existing electronic systems are optimized to work well away from thermal noise. Bridging the gap between these two worlds and creating bio-inspired machines therefore requires finding particular avenues where electronic processes could interact with soft matter.
Over the last decade, research in nanofluidics has made it possible to develop artificial fluidic systems with sizes reaching the molecular limit, using a variety of materials and geometries [11]: carbon or boron nitride nanotubes [12; 13], decorated pores in different types of membranes [14; 15], layered materials (graphene, MoS\({}_{2}\), graphene oxide) [16; 17], etc. Yet, these devices are still greatly limited in terms of functionalities, the understanding of the physical phenomena occurring at the nanoscale, and their upscaling for potential applications.
Major challenges include the precise description of electrolytes confined in nanoscale devices: measurable quantities typically consist of ionic current under various external stimuli (voltage, pressure, concentration drop), but these macroscopic fluxes do not give precise information on the underlying transport processes. As a result, inferring the properties of a nanofluidic channel from current-voltage (or current-pressure) characteristics requires disentangling the effects of surface charges, hydrodynamic slippage, diffusioosmosis, surface adsorption, etc. Even when individual phenomena are well-understood on their own, existing models generally include multiple fitting parameters, making interpretation and comparison with experiments rather complex [11; 18].
In addition, nanometric confinement has been reported to affect the properties of water itself, with strong modification of its thermodynamic [19; 20; 21] and electrodynamic [22; 23] properties. Likewise, nanoscale water transport, and in particular solid-liquid friction [24; 13; 25], is also still poorly understood. Finer understanding of the structure of confined water and ions would have a far-reaching impact on water desalination [26] and osmotic energy harvesting [27].
Consequently, the field of nanofluidics would greatly benefit from novel techniques that would allow a more direct access to the microscopic properties of nanochannels through new sets of observables. In particular, finding ways of imaging nanoscale transport would constitute a quantum leap for nanofluidics.
Notably, such techniques would be useful to make better use of possible couplings between nanoscale flows and solid walls. Recent advances have opened the avenue to the design of channels with carefully engineered electronic properties [28; 25; 29]. Fine understanding of interactions between liquids and surrounding solid surfaces - beyond the description of the interface as a boundary condition for water flows and electrostatic potential - can only be achieved with specific techniques developed for problems in condensed matter. Such advances are possible by, for example, coupling nanofluidic measurement cells to a microscopy or spectroscopy apparatus. |
2309.02298 | Possible Extragalactic Origins of Five LMC Globular Clusters: Proper
Motion Deviations in Gaia DR3 | We use kinematic data of proper motions from Gaia of forty-two globular and
open clusters from Large Magellanic Cloud (LMC) to explore the possibility of
them having extragalactic origins. We find the difference between the proper
motions of cluster stars and a surrounding patch of young LMC stars in each
case. We find five globular clusters towards the north-east showing a high
difference (> 0.11 mas/yr, or > 25 km/s). We also examine the statistical
significance of this difference taking into account both measurement errors of
cluster and surrounding stars as well as inherent dispersion of stellar motions
in the local galactic environment. The five globular clusters (NGC 2005, NGC
2210, NGC 1978, Hodge 3 and Hodge 11) have mean proper motions that lie outside
the 85% confidence interval of the mean of surrounding young stars, with a
clear outlier (NGC 1978 outside 99.96% confidence) whose difference cannot be
accounted for by statistical noise. A young cluster (NGC 2100) also fitting the
criteria is ruled out owing to contrary evidence from literature. This
indicates a possible interaction with a dwarf galaxy resulting in the
accretion/disruption in path of the five globular clusters, or possibly one or
more past merger(s) of smaller galaxy/galaxies with LMC from its north-eastern
region. This direction also coincides with the location of Tarantula Nebula,
suggesting the possibility of the interaction event or merger having triggered
its star formation activity. | Tamojeet Roychowdhury, Navdha Bhalla | 2023-09-05T15:03:00Z | http://arxiv.org/abs/2309.02298v1 | Possible Extragalactic Origins of Five LMC Globular Clusters: Proper Motion Deviations in _Gaia_ Dr3
###### Abstract
We use kinematic data of proper motions from _Gaia_ of forty-two globular and open clusters from Large Magellanic Cloud (LMC) to explore the possibility of them having extragalactic origins. We find the difference between the proper motions of cluster stars and a surrounding patch of young LMC stars in each case. We find five globular clusters towards the north-east showing a high difference (\(>0.11\) mas/yr, or \(>25\) km/s). We also examine the statistical significance of this difference taking into account both measurement errors of cluster and surrounding stars as well as inherent dispersion of stellar motions in the local galactic environment. The five globular clusters (NGC 2005, NGC 2210, NGC 1978, Hodge 3 and Hodge 11) have mean proper motions that lie outside the 85% confidence interval of the mean of surrounding young stars, with a clear outlier (NGC 1978 outside 99.96% confidence) whose difference cannot be accounted for by statistical noise. A young cluster (NGC 2100) also fitting the criteria is ruled out owing to contrary evidence from literature. This indicates a possible interaction with a dwarf galaxy resulting in the accretion/disruption in path of the five globular clusters, or possibly one or more past merger(s) of smaller galaxy/galaxies with LMC from its north-eastern region. This direction also coincides with the location of Tarantula Nebula, suggesting the possibility of the interaction event or merger having triggered its star formation activity.
keywords: galaxies: globular clusters - galaxies: Magellanic Clouds - galaxy: kinematics and dynamics
## 1 Introduction
The Large Magellanic Cloud (LMC) is the Milky Way's largest satellite galaxy and has several smaller satellite galaxies itself, as shown in Erkal and Belokurov (2020) and Patel et al. (2020) - but so far the only known sign of interaction between LMC and any of its satellites is with the Small Magellanic Cloud.
Mucciarelli et al. (2021) showed via chemical analysis that the globular cluster NGC 2005 most likely did not originate within LMC, indicating a past merger event in the galaxy. These mergers can be traced by kinematic signatures, specifically by using the difference between the observed and the expected stellar motions with respect to the galaxy. Accurate kinematic properties obtained from _Gaia_ have further been used to find tidal tails of clusters - for instance in Sollima (2020) and in the characterization of the Sagittarius stream in Ramos et al. (2022) - and have helped map nuances of the Milky Way galaxy in general. Phase space information has also been used to uncover the merger of _Gaia_-Enceladus into the Milky Way by Helmi et al. (2018). Several globular clusters of the Milky Way have been shown to originate from past merger events and interactions with the closely orbiting Sagittarius Dwarf in Massari et al. (2019). These instances demonstrate the potential of _Gaia_'s accurate astrometric measurements in determining galactic structure and history, as well as of globular clusters to carry signatures of the galaxy's past in these measurements.
Moreover, Pagnini et al. (2023) showed that globular clusters from accreted galaxies can dwell in locations populated by stars from a different progenitor including _in-situ_ formation. We thus use this to argue that young blue stars in a region around the globular cluster must have formed _in-situ_, and if the globular cluster is accreted or perturbed by interactions, it will have a slightly different motion from the young blue population.
Since _Gaia_ DR3 does not have radial velocity measurements for stars as distant as the LMC, we chose to use only the proper motions of the stars. A relatively large number of clusters were studied in order to analyze the effects of considering only the proper motion (instead of the full velocity).
Mean proper motions for clusters alone within the Milky Way have been derived to a high accuracy in Helmi et al. (2018), and later in Vasiliev (2019). The latter also details the presence of spatially correlated systematic errors in _Gaia_'s proper motions, as well as the effect of random errors. These need to be accounted for before we analyze the differences in mean proper motion. Kinematic properties of several globular clusters of LMC were analyzed to a high accuracy by Bennet et al. (2022) with combined data from _Gaia_ and Hubble. They also derived the galaxy rotation dynamics using the data of _cluster motions alone_, and found a relation using it. What we propose to do is to obtain a measure of the _difference_ between the proper motion of the globular clusters and that of the surrounding stars. To the best of our knowledge, such an analysis has not been conducted in earlier works.
We derive the mean proper motion of each cluster ourselves and compare it against the values reported in the above paper. Bennet et al. (2022) also reported NGC 2210 as a likely extragalactic member
in LMC, apart from NGC 2005 reported in Mucciarelli et al. (2021). Our analysis finds them both to be outsiders too, apart from three more clusters. One of these is NGC 1978, whose peculiar elliptical shape was ascertained by Mucciarelli et al. (2007) suggesting the possibility of it having been accreted from outside the galaxy followed by tidal distortion. The two other clusters, Hodge 3 and Hodge 11, are old globular clusters, taken from Olszewski et al. (1996) and if not accreted from outside, their kinematic properties may have been influenced by past interactions with the SMC or other dwarf galaxies. Multiple stellar populations have been detected in Hodge 11 and NGC 2210 in Gilligan et al. (2019). One possible explanation for these, given by Helmi (2008) is that these globular clusters are remnants of cores of accreted dwarf galaxies, which was also explored for Messier 54 globular cluster in Sagittarius Dwarf by Carretta et al. (2010). This would then be consistent with our claim of their extragalactic origin. The last member, NGC 2100, is itself a young cluster located in the Tarantula Nebula but presents a significant difference of proper motion w.r.t. its neighbourhood. However, evidence from prior literature points against the likelihood of its accretion.
In the following sections, we present our data selection criteria, the method for quantifying the proper motion differences, checks for the effects of errors, and the results for the chosen sample of clusters.
## 2 Data Selection
Most clusters are chosen from the New General Catalog of Dreyer (1888). ESO 57-30 is from the catalog in Bica et al. (1999). Hodge 3 and 11 were originally listed in Hodge & Wright (1967). We specifically looked for NGC clusters within a 5\({}^{\circ}\) circle around the LMC centre coordinates taken as (RA, Dec) = (80.8942, -69.756) by performing a SIMBAD search for the same, and supplemented it with clusters taken from the list of Olszewski et al. (1996) (which has old globular clusters) and Bennet et al. (2022) (which does a similar proper motion analysis).
We use high-quality proper motion data from _Gaia_ DR3, described in Gaia Collaboration (2016) and Gaia Collaboration (2022). We first retrieved the bulk of LMC stars using the following criteria (a sketch of a query implementing these cuts is given after the list):
* Stars must lie within a 5\({}^{\circ}\) circle around the LMC centre coordinates taken as (RA, Dec) = (80.8942, -69.756). The 5\({}^{\circ}\) radius was chosen as a rough measure of the distance out to which LMC stars still dominate over the field star population.
* Stars with no proper motion measurement in DR3 are removed
* Stars with poor astrometric data are removed, by selecting only those stars with Renormalized Unit Weight Error (RUWE) \(<1.4\). This gives the optimal selection of stars with accurate astrometric solutions, as shown in Lindegren et al. (2018)
* Stars with well-known (RUWE \(<1.4\)) parallaxes above 0.1 mas (corresponding to distances less than 10 kpc) are also removed, as these correspond to line-of-sight contamination by LMC non-members
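A query consistent with the cuts above can be issued against the _Gaia_ archive. The sketch below uses the astroquery interface and is only meant to illustrate the selection; the exact column list and the way the data were actually retrieved for this work are assumptions on our part.

```python
from astroquery.gaia import Gaia

# 5 degree cone around the adopted LMC centre, keeping only stars with a
# proper motion measurement, RUWE < 1.4, and no well-measured parallax
# above 0.1 mas (i.e. no foreground star within ~10 kpc).
query = """
SELECT source_id, ra, dec, pmra, pmdec, pmra_error, pmdec_error,
       parallax, ruwe, bp_rp
FROM gaiadr3.gaia_source
WHERE CONTAINS(POINT('ICRS', ra, dec),
               CIRCLE('ICRS', 80.8942, -69.756, 5.0)) = 1
  AND pmra IS NOT NULL AND pmdec IS NOT NULL
  AND ruwe < 1.4
  AND (parallax IS NULL OR parallax < 0.1)
"""
lmc_stars = Gaia.launch_job_async(query).get_results()
```

The `bp_rp` column carries the \(G_{BP}-G_{RP}\) colour used later to separate blue and red surrounding stars.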
We now obtain similar data of both open and globular clusters and their surrounding patches. The following approach was adopted:
* Only stars with RUWE \(<1.4\) and parallax \(<0.1\) mas are retained as in the previous case
* For cluster stars, the cluster centre coordinates are obtained from SIMBAD, and a region of 0.03 degree radius centred on those coordinates is queried. The choice of 0.03 degree is explained in subsection 2.2
* Any cluster with less than 85 stars in the 0.03 degree disc is removed, to reduce estimation errors discussed in Section 3
* For surrounding stars, all stars with a radius \(>0.05\) degrees and \(<0.25\) degrees (again explained in subsection 2.2), centred at the cluster centre are queried for
The reason for picking both globular and open clusters is to use as large a sample as possible, to demonstrate that high differences are not properties of typical clusters in LMC (details in Section 3).
The outer annulus radius is slightly varied, at 0.18 degrees for dense regions near the centre for seven clusters (to reduce the size of the local environment where the mean proper motion is being measured, since central regions have larger velocity variations on a similar spatial scale), 0.28 degrees for NGC 2210 and 0.45 degrees for NGC 2203 (which lie in sparser regions of the galaxy and did not have a large enough sample of outer blue stars to analyze). The radius was optimized for these cases to include sufficient number of stars to return a good Gaussian fit (curve-fitting error in mean \(<1\%\)).
Since we are observing LMC from outside and at a significant distance, correction for solar reflex proper motion, which was necessary for the proper motions of Milky Way stars in Helmi et al. (2018a), is not needed in this work.
Additionally, we observe that due to _Gaia_'s limiting magnitude of around 21, only the red giant stars and blue giant stars of the Hertzsprung-Russell (HR) diagrams are retrieved, and any lower main sequence and white dwarfs remain hidden. Since we wish to find the difference of a cluster's stars' proper motion with the galaxy's own stars that formed _in-situ_, we chose to retain only the blue giant branch members of the surrounding stars, since these are young stars that are less likely to have been accreted in an interaction/merger event from a different galaxy. For these stars we have the difference between the blue and red passband magnitudes, \(G_{BP}-G_{RP}<0.75\) (obtained empirically by observing the HR diagrams for the samples obtained). We hence use this criteria as a filter for a cluster's surrounding blue stars. We now discuss possible sources of errors inherent in the data and analyze their effects on our results.
### Measurement Errors
_Gaia_'s proper motions are expected to have a naturally arising error in measurement. A spatially correlated systematic error is outlined in Lindegren et al. (2018), and was later shown to be the dominant cause of error (at least for clusters with \(\gtrsim 100\) members, which was imposed as part of our data selection criteria) in Milky Way globular clusters in the kinematic analysis of Vasiliev (2019), at a value of about 0.08 mas/yr. These systematic errors are spatially correlated and can effectively be considered as zero-point offsets within 1\({}^{\circ}\) regions - as used in the analysis of ultra-faint dwarfs in Battaglia et al. (2022), based on the Gaia EDR3 astrometry outlined in Lindegren et al. (2021) - implying that this offset error for each globular cluster and its surrounding LMC star field, within a circle of 0.25\({}^{\circ}\) radius, would have identical values. Since we are attempting to quantify only the difference in proper motion between the cluster stars and the neighbouring LMC stars (and not their absolute values), the systematic offset cancels out. Hence we chose not to correct for this offset ourselves.
The effects and treatment of random (statistical) errors are described in Section 3.
### Contamination between Cluster and Surrounding Stars
Our choice of setting the boundary of cluster as 0.03\({}^{\circ}\), and surroundings starting at 0.05\({}^{\circ}\) selects nearly the whole cluster in almost all
cases as cluster stars, and excludes any cluster stars from our surrounding stars sample. We still qualitatively discuss the effects of one sample contaminating the other.
We observe that our hard boundary of \(0.03^{\circ}\) does not give the exact boundary of each cluster, though it is a fairly good estimate. Adopting an LMC distance of 50 kpc from Pietrzynski et al. (2013) this corresponds to a radius of about 26 pc. Half-light radii of most globular clusters are about 10 pc or less shown in van den Bergh (2008) while \(r_{90}\) values are on the scale of 20 pc in Werchan & Zaritsky (2011). Tidal radii are on the scale of 20-50 pc for some representative clusters derived in Piatti & Mackey (2018).
Stars of the surrounding star field may get included in the cluster sample and (with a low possibility) vice-versa. This will affect our calculation of mean proper motions. If we assume the proper motions to be random samples drawn from two Gaussians with different means, then we wish to quantify the difference in their means. Contamination will imply that the samples get mixed up, and we end up under-estimating the difference in their means. So with the hard cut, the difference of means that we obtain will be a _lower bound_ on the difference of actual mean proper motions rather than the exact difference.
## 3 The method
Our query for all LMC stars returns about 1.97 million members. We first plot the histogram for proper motion distribution of all the LMC stars with 1000 bins and obtain a smooth Gaussian. The best fit parameters for this Gaussian, obtained by least-squares fitting are \(\mu=1.92\pm 0.0015\) mas/year, \(\sigma=0.109\pm 0.001\) mas/year. Separating into two coordinates gives us for proper motion along RA, \(\mu_{a}*=\mu_{a}\cos\delta=1.8338\pm 0.001\) mas/yr, and \(\mu_{\delta}=0.3118\pm 0.001\) mas/yr. This value of \(\mu\) includes the systematic offset which we did not correct for, so has a few percent difference with the values reported in earlier literature in Kallivayalil et al. (2006).
Our aim is to find the difference between the proper motion of each cluster as a whole and that of the surrounding stars. Under the simplified assumption that all stars formed in-situ in the galaxy, we expect the cluster stars to have the same mean proper motion as the annulus of surrounding blue stars, albeit with small differences that can be attributed to a statistical artifact of our finite sample size in each case. These means are obtained from a Gaussian profile fit.
We thus plot histograms for cluster stars and surrounding blue stars (extracted as outlined above) with 100 bins (number of bins is slightly varied to reduce bins that have zero members and can lead to bad fits) and obtain Gaussian features for both cases. Number of cluster stars in our retrieved samples ranged from 85 to 300, and number of surrounding blue stars ranged from about 350 to 5000.
Least-squares fitting with a Gaussian profile is done for both distributions individually using Python's scipy.optimize.curve_fit function, and most clusters return an excellent fit, with \(<2\%\) relative error in the estimated mean for cluster stars and \(<1\%\) for surrounding stars. Clusters that do not satisfy this error bound are not considered for further analysis, as the later work involves differences of a scale that would be rendered insignificant if fitting errors alone were any higher. We found eight clusters that did not pass this criterion, all taken from the list in Bennet et al. (2022). These clusters shared a common characteristic of having low numbers of both cluster and surrounding stars (i.e. they lie in sparse surroundings), resulting from a large distance from the LMC centre (these distances being reported in the same paper). After noticing this trend, we restricted ourselves to clusters closer to the centre, all of which returned good fits.
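A routine of the following form reproduces the per-sample fit described above; the Gaussian parameterization and bin count follow the text, while the variable names and initial guesses are ours, and we assume the total proper motion per star is taken as \(\sqrt{\mu_{\alpha}^{2}\cos^{2}\delta+\mu_{\delta}^{2}}\).

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_proper_motions(pmra, pmdec, bins=100):
    """Histogram the total proper motions (mas/yr) of one star sample and
    least-squares fit a Gaussian profile; returns the fitted mean, the
    curve-fit error on the mean, and the fitted standard deviation."""
    pm = np.hypot(pmra, pmdec)                      # total proper motion per star
    counts, edges = np.histogram(pm, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), np.median(pm), np.std(pm)]  # initial guess
    popt, pcov = curve_fit(gaussian, centres, counts, p0=p0)
    _, mu, sigma = popt
    return mu, np.sqrt(pcov[1, 1]), abs(sigma)
```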
The Gaussian profiles also indicate that the lack of information of the complete velocity vector (i.e. the line-of-sight velocity) does not significantly hamper our attempt to calculate differences in motion through the galaxy.
From here on, we thus assume that the proper motions of the stars in any of our samples are obtained from a perfectly Gaussian distribution, and any deviation is due to our finite sample size and random errors.
### Statistical Errors
Since we are probing a low-difference regime of proper motions, a proper quantification of the errors in the individual means of cluster and surrounding blue stars is essential. Let \(p_{i}\) denote the proper motion of \(i\)-th star, and \(m_{p}\) denote the mean proper motion of a particular set of stars (which can either be a cluster or blue stars in its surroundings).
For the surrounding stars three sources of errors in the mean are considered:
* The measurement error of proper motion obtained directly from \(Gaia\), for each star, denoted as \(\delta p_{i}\)
* The error margin obtained by the curve-fitting (returned by scipy), denoted as \(\delta m_{\rm fit}\)
* Since the surrounding blue stars are representative of the local galactic environment and span the LMC disk, the standard deviation of the Gaussian curve-fit is a measure of the randomness in the distribution of the stars' proper motions in that local environment. This randomness, \(\sigma_{p}\), must also be taken into account.
For the total error and confidence interval estimation in the mean \(\Delta m_{p}\), we take the root mean square error (RMSE) of measurement errors, the fitting error as it is, and the Gaussian standard deviation scaled down by the sample size of the star set \(N\). Mathematically,
\[\Delta m_{p}=\sqrt{\frac{\sum_{i=1}^{N}\delta p_{i}^{2}}{N^{2}}\ +\ \delta m_{\rm fit}^{2}\ +\ \frac{\sigma_{p}^{2}}{N}}\]
For the globular cluster stars, the Gaussian fit standard deviation \(\sigma_{p}\) is a measure of the velocity dispersion, which is related to the mass and other physical parameters of the cluster itself - used widely, for instance, in Hilker et al. (2019) - and not related to the stellar motion of the local galactic environment. Therefore, for the error in the cluster mean proper motion, only the first two terms are considered in the above sum.

Figure 1: Histogram in sea green for the entire sample of LMC stars retrieved, with the best-fit Gaussian overlaid on top in purple
For getting the final confidence interval, we use the following number to quantify the difference of mean proper motions and its significance relative to the errors in means:
\[Q=\frac{\mid m_{p,cluster}-m_{p,surr}\mid}{\Delta m_{p,cluster}+\Delta m_{p,surr}}\]
Conversion from \(Q\) to the confidence interval is done using a normal statistic (\(Q\geq 1\) meaning outside the 68% confidence interval, \(Q\geq 2\) meaning outside the 95% confidence interval, and so on).
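Putting these terms together, a minimal helper implementing \(\Delta m_{p}\), \(Q\), and the corresponding two-sided normal confidence level could look as follows; it is written directly from the formulas above and is not the authors' code.

```python
import numpy as np
from math import erf, sqrt

def mean_error(pm_errors, fit_err, sigma_fit=None):
    """Total error on a fitted mean proper motion.  Pass sigma_fit (the
    Gaussian standard deviation of the fit) only for the surrounding blue
    stars, where it measures the local dispersion of stellar motions."""
    n = len(pm_errors)
    total = np.sum(np.asarray(pm_errors) ** 2) / n ** 2 + fit_err ** 2
    if sigma_fit is not None:
        total += sigma_fit ** 2 / n
    return float(np.sqrt(total))

def q_statistic(m_cluster, dm_cluster, m_surr, dm_surr):
    q = abs(m_cluster - m_surr) / (dm_cluster + dm_surr)
    confidence = erf(q / sqrt(2.0))  # Q >= 1 -> 68%, Q >= 2 -> 95%, ...
    return q, confidence
```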
For several clusters, we also have the proper motions obtained with a very low error in Table 2 of Bennet et al. (2022), and comparing our obtained mean cluster proper motions with the values reported in the aforementioned paper, we find only a systematic offset ranging between 0.04 mas/yr to 0.09 mas/yr, as we did not correct our values for that. This value of systematic offset is consistent with the representative value of 0.07 mas/yr reported in Vasiliev (2019) for Milky Way's globular clusters.
For finding the absolute difference, we used _our_ values of mean
Figure 2: Histogram for four clusters from our sample, including their best-fit Gaussians and normalized density plots. For NGC 1978 and Hodge 3 the difference in the peaks of Gaussians i.e. their means, is clearly apparent. For NGC 1756 and NGC 1866 that have smaller difference of means, the difference is minuscule - and can be attributed to shifts due to finite sample size
and not those of Bennet et al. (2022), since our means are both uncorrected for the identical systematic error and will hence give the true difference.
### Studying Arbitrary Groups of Stars
Globular/open clusters are clearly discernible as point-like overdensities in the outer parts of LMC. However, in the central regions the stellar density appears uniform and we need to ensure that the differences for these clusters aren't a characteristic feature of all stars near the centre. For this we picked arbitrary (RA, Dec) pairs in the central region, and ran the algorithm as described earlier on these patches. The difference increases near the centre to about 0.06 mas/yr but all with \(Q\) less than 1, still small enough to clearly distinguish between the normal and possibly extragalactic globular clusters.
## 4 Discussion
### Effect of Errors
The difference in means for the larger fraction of our sample of clusters can be explained by the errors alone, as is evident from the value of \(Q\) being less than 1. We still choose a higher confidence interval of 85%, corresponding to \(Q\geq 1.43\), and segregate the clusters above this value into a separate class. We also calculate another number for the significance of the mean difference. Since the standard deviation of the Gaussian fit (\(\sigma_{p,surr}\)) to the surrounding blue stars is a measure of the randomness of stellar motions in that area of the galaxy, we also calculate for all our clusters:
\[D=\frac{|\ m_{p,cluster}-m_{p,surr}|}{\sigma_{p,surr}}\]
### Results for full cluster distribution
We analyzed results for a total of forty-two clusters, spread throughout the 5\({}^{\circ}\) circle around the centre. A majority of the clusters show low values of \(Q\), implying that they are following the same path as the LMC's interstellar clouds.
There is a marked gap in the values of \(D\), between \(D=0.23\) (equivalent to 18% confidence interval) and \(D=0.31\) (equivalent to 25% confidence interval). All clusters with \(D\geq 0.31\) also belong to the \(Q\geq 1.43\) group, indicating that the difference of means which is more than 85% significant w.r.t. measurement errors is also on the higher side w.r.t. the dispersion of proper motions in the same environment. One cluster, NGC 2108 has \(D>0.31\) but \(Q<1.43\), though \(Q>1\) for it might signify it as a significant member too.
It is also interesting to note that, except for NGC 2100, the clusters in the \(Q\geq 1.43\) group are precisely the ones with absolute differences of proper motion \(\geq 0.11\) mas/yr, which at the LMC distance translates to a plane-of-sky velocity difference of about 25 km/s.
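For reference, the quoted conversion uses the standard relation between proper motion and transverse velocity, \(v\simeq 4.74\,\mu\,d\) with \(\mu\) in mas/yr and \(d\) in kpc, so at the adopted LMC distance of 50 kpc,

\[v\;\simeq\;4.74\times 0.11\ \mathrm{mas\,yr^{-1}}\times 50\ \mathrm{kpc}\;\approx\;26\ \mathrm{km\,s^{-1}},\]

consistent with the value of about 25 km/s quoted above.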
### Outlier Clusters
The histogram for \(Q\) as shown in the figure follows a clear normal distribution, which means the differences observed are consistent with statistical noise. There is one single outlier at \(Q=3.5376\), and its extreme position cannot be explained by noise patterns/distributions alone.
We however chose to study further all the clusters in the \(Q\geq 1.43\) group. These are NGC 2005, NGC 2210, NGC 1978, Hodge 11, Hodge 3 and NGC 2100. We look at each of these clusters and review existing literature to argue whether they are truly possible extragalactic clusters in LMC. The most spectacular difference, as pointed out earlier, is that of NGC 1978, which lies outside the 99% confidence interval in \(Q\), and outside the 75% confidence interval of \(D\). Of these six clusters, all except NGC 2100 are globular clusters whose kinematic difference can possibly be ascribed to accretion/disruption events.
We now try to find more properties of our six designated outlier clusters from earlier literature, first reviewing if the only young cluster, NGC 2100, can actually be considered extragalactic in source or not.
**NGC 2100:** This was studied as a young massive cluster in Patrick et al. (2016), and assigned a rough age of 20 Myr therein. It is therefore doubtful whether the kinematic difference can be ascribed to an accretion event. Its properties have been shown to be similar to its Milky Way counterpart Perseus OB-1, indicating a higher likelihood of it having formed within a galaxy such as LMC. Its age is also consistent with other young clusters of LMC as shown in Niederhofer et al. (2015), further decreasing the probability of it having formed elsewhere. Finally, we note that its absolute difference in proper motion is lower than 0.11 mas/yr, which is the lower limit for our five other globular clusters. It is entirely possible that the higher \(Q\) value for this cluster arose solely from the low measurement errors in _Gaia_, which is reflected in the error values in the table too. We also note that the surrounding sample for this cluster includes a significant portion of the Tarantula Nebula, whose own proper motion dispersion is likely to be low, masking the actual dispersion in the local environment by lowering the value of \(\sigma_{p,surr}\) and thereby increasing \(D\).
**NGC 2005:** This cluster has the most well-known evidence for the possibility of an extragalactic origin, as highlighted by the chemical analysis in Mucciarelli et al. (2021), although an _in-situ_ origin was also proposed by Piatti & Hirai (2023). It is also an old cluster, as used in Olszewski et al. (1996), and has a low metallicity of -1.54 from Johnson et al. (2006). _In-situ_ formation is often associated with higher metallicities, as shown by simulation in Renaud et al. (2017), making it a more probable extragalactic candidate.
**NGC 2210:** Earlier evidence for it being an outlier in 3D kinematic space is given by Bennet et al. (2022), who also pointed to its possible extragalactic origin; it is also an old cluster studied in Olszewski et al. (1996). Gilligan et al. (2019), and later Gilligan et al. (2020), showed that NGC 2210 and Hodge 11 are the only two clusters to show multiple populations in the horizontal branch. Multiple populations can be seen as cores of accreted dwarf galaxies, as shown in Helmi (2008), though we acknowledge the presence of multiple populations in several globular clusters, and hence this is not a unique property of the subset we have.

Figure 3: Distribution of values of \(Q\) for our cluster sample, as well as a Gaussian profile fit assuming a zero mean. The best-fit standard deviation for this Gaussian is 0.96, close to 1 i.e. the normal distribution. The histogram is consistent with the normal distribution except for a single point at \(Q=3.5376\), corresponding to NGC 1978
**Hodge 3:** No significant properties were found in earlier literature that can serve as evidence for it to be a cluster of extragalactic origin.
**Hodge 11:** As with NGC 2210, Gilligan et al. (2019) showed that it has multiple stellar populations, including in the horizontal branch. Both of these clusters also have the most heavily populated blue-straggler branches among the old clusters studied by Wagner-Kaiser et al. (2017), as seen in their HR diagrams. This, combined with their spatial closeness, suggests a common pathway of evolution and possibly a common origin for both, which may be attributed to an accretion event.
**NGC 1978:** This has been known to be a highly elliptical globular cluster, quite unlike most Milky Way GCs, as shown in Mucciarelli et al. (2007). The high ellipticity of LMC clusters in general was attributed to the weak tidal field of LMC compared to the Milky Way in Goodwin (1997), which preserves the original triaxial structure of clusters. We propose that the even higher ellipticity of NGC 1978 may be a result of it having formed originally in a dwarf galaxy and being accreted into the LMC much later, thus giving tidal forces less time to make it closer to spherical. Ferraro et al. (2006) also showed that the ellipticity is not a result of the merging of two clusters/gas clouds. Metallicities and ages of several clusters were studied by Narloch et al. (2022), who commented that NGC 1978, despite being significantly older than the surrounding field population, has a slightly greater metallicity (-0.38 for the cluster vs -0.44 for the field), which should have been lower if it was born in the same region in an earlier epoch. This, combined with the large deviation in mean proper motion, suggests that it formed in a different environment.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline Name of Cluster & Cluster Mean Proper Motion \(m_{p,cluster}\) & Cluster Error \(\Delta m_{p,cluster}\) & Surrounding Mean Proper Motion \(m_{p,surr}\) & Surrounding Error \(\Delta m_{p,surr}\) & Proper Motion Difference & \(Q\) & \(D\) \\ \hline
NGC 2005 & 2.0051 & 0.0374 & 1.8906 & 0.017 & 0.1145 & 2.1015 & 0.3194 \\
NGC 1978 & 1.8432 & 0.0446 & 1.6445 & 0.0116 & 0.1988 & 3.5376 & 1.1522 \\
Hodge 11 & 1.8445 & 0.0579 & 2.0264 & 0.0546 & 0.1819 & 1.6181 & 0.3567 \\
NGC 2210 & 2.0199 & 0.0493 & 1.8961 & 0.0376 & 0.1238 & 1.4353 & 0.314 \\
NGC 2100 & 1.9267 & 0.0275 & 1.868 & 0.0114 & 0.0587 & 1.5087 & 0.3174 \\
Hodge 3 & 1.7872 & 0.0571 & 1.6652 & 0.0231 & 0.122 & 1.5204 & 0.3904 \\ \hline
NGC 1754 & 2.0216 & 0.049 & 1.9827 & 0.0335 & 0.0389 & 0.4719 & 0.1553 \\
NGC 1786 & 1.8771 & 0.0438 & 1.8067 & 0.0151 & 0.0704 & 1.1952 & 0.2229 \\
NGC 1835 & 1.9395 & 0.0273 & 1.9677 & 0.0156 & 0.0282 & 0.6574 & 0.0711 \\
NGC 1916 & 1.9592 & 0.0464 & 1.8901 & 0.0126 & 0.0691 & 1.1703 & 0.2126 \\
ESO 57-30 & 2.0589 & 0.0397 & 2.1227 & 0.0203 & 0.0638 & 1.0654 & 0.1397 \\
NGC 2173 & 2.1672 & 0.0482 & 2.2154 & 0.0678 & 0.0482 & 0.4156 & 0.0948 \\
NGC 2203 & 2.1231 & 0.0615 & 2.0808 & 0.0617 & 0.0424 & 0.3439 & 0.087 \\
NGC 1928 & 1.9146 & 0.035 & 1.8748 & 0.0113 & 0.0397 & 0.8582 & 0.1181 \\
NGC 1939 & 2.0451 & 0.0396 & 2.0763 & 0.017 & 0.0311 & 0.5496 & 0.084 \\
NGC 2108 & 1.8156 & 0.0424 & 1.8799 & 0.0133 & 0.0643 & 1.1533 & 0.3166 \\
NGC 1806 & 1.8837 & 0.0361 & 1.8365 & 0.0169 & 0.0472 & 0.8911 & 0.1362 \\
NGC 1903 & 1.9435 & 0.0281 & 1.8927 & 0.0111 & 0.0508 & 1.2934 & 0.1599 \\
NGC 2214 & 1.816 & 0.0413 & 1.8668 & 0.0316 & 0.0508 & 0.6969 & 0.1367 \\
NGC 1935 & 1.733 & 0.0323 & 1.6883 & 0.0156 & 0.0448 & 0.9341 & 0.2244 \\
NGC 1870 & 1.8999 & 0.0359 & 1.8888 & 0.0122 & 0.0111 & 0.2306 & 0.0295 \\
NGC 2157 & 1.8418 & 0.0409 & 1.8386 & 0.0304 & 0.0032 & 0.0452 & 0.0118 \\
NGC 2019 & 2.0085 & 0.0352 & 2.0557 & 0.0138 & 0.0472 & 0.9634 & 0.1204 \\
NGC 1783 & 1.6578 & 0.0461 & 1.6542 & 0.0143 & 0.0036 & 0.0588 & 0.0165 \\
NGC 1756 & 1.9014 & 0.0416 & 1.9054 & 0.0102 & 0.004 & 0.0774 & 0.0172 \\
NGC 1805 & 1.6057 & 0.0419 & 1.6227 & 0.0134 & 0.017 & 0.3067 & 0.0765 \\
NGC 1818 & 1.6276 & 0.0407 & 1.6768 & 0.0171 & 0.0492 & 0.8514 & 0.1669 \\
NGC 1831 & 1.7276 & 0.0315 & 1.6574 & 0.0266 & 0.0702 & 1.2085 & 0.228 \\
NGC 1866 & 1.579 & 0.0266 & 1.6217 & 0.0166 & 0.0427 & 0.9906 & 0.1627 \\
NGC 1898 & 2.0416 & 0.0314 & 1.9745 & 0.0207 & 0.0671 & 1.2873 & 0.1998 \\
NGC 1987 & 2.1418 & 0.0353 & 2.1886 & 0.0196 & 0.0468 & 0.8529 & 0.1214 \\
NGC 2031 & 2.1675 & 0.0336 & 2.1738 & 0.0168 & 0.0063 & 0.1248 & 0.0226 \\
NGC 2156 & 1.8027 & 0.0438 & 1.8059 & 0.018 & 0.0033 & 0.0527 & 0.0134 \\
NGC 2159 & 1.8064 & 0.0417 & 1.8042 & 0.0167 & 0.0023 & 0.0386 & 0.0098 \\
NGC 1712 & 1.9255 & 0.0337 & 1.9147 & 0.016 & 0.0109 & 0.2189 & 0.0435 \\
NGC 1755 & 1.8586 & 0.0343 & 1.8472 & 0.0182 & 0.0114 & 0.2164 & 0.0375 \\
NGC 1763 & 1.7089 & 0.0388 & 1.7286 & 0.0152 & 0.0197 & 0.3652 & 0.1014 \\
NGC 1846 & 1.777 & 0.0369 & 1.7753 & 0.0208 & 0.0017 & 0.0289 & 0.005 \\
NGC 1847 & 1.8405 & 0.0382 & 1.8722 & 0.0165 & 0.0318 & 0.5808 & 0.0919 \\
NGC 1854 & 1.9173 & 0.0316 & 1.9112 & 0.0134 & 0.0061 & 0.1354 & 0.0191 \\
NGC 1711 & 1.9518 & 0.0324 & 1.9538 & 0.0127 & 0.002 & 0.0445 & 0.0085 \\
NGC 1944 & 2.1388 & 0.0472 & 2.1791 & 0.0313 & 0.0402 & 0.5125 & 0.1142 \\
NGC 2121 & 2.0127 & 0.0384 & 2.0787 & 0.0197 & 0.066 & 1.1367 & 0.171 \\ \hline
\end{tabular}
\end{table}
Table 1: Mean total proper motions (in mas/yr) of cluster stars and of the surrounding blue stars, their errors, the absolute difference, and the significance metrics \(Q\) and \(D\) (the difference divided by \(\sigma_{p,surr}\)) for the clusters analyzed; the top block lists the six clusters with \(Q\geq 1.43\).
### Spatial Position
The rotational period of LMC stars around the galaxy's centre, as determined by van der Marel & Kallivayalil (2014), is 250 million years. This renders it impossible for us to determine the exact direction from which the clusters may have been accreted or disrupted in their paths, though simulations and further data refinement might make it possible. We still see our outlier clusters all roughly lying in the North-Eastern part of LMC, which also contains the Tarantula Nebula. We calculate the position angle of each of the five clusters w.r.t. the centre of LMC at (RA, Dec) = (80.8942, -69.756) using the standard convention: northwards is the 0\({}^{\circ}\) line and the angle is measured positive to the East of North. The values of the position angles (in degrees) for each cluster are given in Table 2.
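A small helper following this convention (angle measured from North towards East, in the flat-sky approximation valid for the few-degree separations involved) could be written as below; it is our own sketch, not necessarily the routine used for Table 2, and the same expression applied to the proper motion difference components \((\Delta\mu_{\alpha}\cos\delta,\Delta\mu_{\delta})\) gives the difference angles discussed in the next subsection.

```python
import numpy as np

def position_angle(ra, dec, ra0=80.8942, dec0=-69.756):
    """Position angle (degrees East of North) of (ra, dec) with respect to
    the LMC centre (ra0, dec0); all inputs in degrees, flat-sky approximation."""
    d_ra = (ra - ra0) * np.cos(np.radians(dec0))  # RA offset projected on the sky
    d_dec = dec - dec0                            # Dec offset
    return np.degrees(np.arctan2(d_ra, d_dec)) % 360.0
```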
In comparison, the position angle of the Tarantula Nebula is \(63.46\arcdeg\), which is quite close to the average PA of the clusters at \(60.1\arcdeg\). All of them lie in the same quadrant as the Nebula. We suggest that the same activity that resulted in the disruption of the cluster motions may also have accelerated its star formation activity.
### Direction of Proper Motion Differences
We resolve the proper motions of the six clusters with \(Q\geq 1.43\), as well as those of their surrounding blue stars, into the two components \(\mu_{\alpha}\cos\delta\) and \(\mu_{\delta}\). The relative motions w.r.t. the mean motion of LMC (found earlier in Section 3) are taken, and these values are used to calculate the difference in each component of the proper motion. A representative image is shown in Figure 5.
The general trend of the rotational velocity of LMC stars is evident from the direction of the arrows. The direction of the difference (black arrows) can be computed as its position angle (standard convention of northwards being the 0\({}^{\circ}\) line and the angle measured positive to the East of North). The angles are shown in Table 3.
Four of the five globular clusters lying in the \(Q\geq 1.43\) group have their difference position angles lying inside an approximately 80\({}^{\circ}\) interval from 12\({}^{\circ}\) to 92\({}^{\circ}\).
\begin{table}
\begin{tabular}{|c|c|} \hline Cluster Name & Position Angle \\ \hline NGC 2005 & 89.8\(\arcdeg\) \\ NGC 1978 & 8.43\(\arcdeg\) \\ Hodge 3 & 29.2\(\arcdeg\) \\ NGC 2210 & 81.55\(\arcdeg\) \\ Hodge 11 & 91.2\(\arcdeg\) \\ \hline \end{tabular}
\end{table}
Table 2: Position Angles of the five outlier clusters w.r.t. LMC centre
Figure 4: Distribution of the clusters studied in this work in the Large Magellanic Cloud, with the colour intensity equal to the \(Q\) value whose calculation is given in 3.1. Colour value is clipped at 1.75 for better contrast. Names of the five outlier clusters and coordinate grids are also present. We notice our five clusters lying preferentially to the North-East.
\begin{table}
\begin{tabular}{|c|c|} \hline Cluster Name & Angle of Proper Motion Difference \\ \hline NGC 2005 & 33.56\(\arcdeg\) \\ NGC 2210 & 13.9\(\arcdeg\) \\ Hodge 3 & 70.83\(\arcdeg\) \\ NGC 1978 & 90.42\(\arcdeg\) \\ Hodge 11 & 231.47\(\arcdeg\) \\ NGC 2100 & 118.33\(\arcdeg\) \\ \hline \end{tabular}
\end{table}
Table 3: Position angles of the proper motion difference vectors for the six clusters with \(Q\geq 1.43\)
Figure 5: Relative proper motions of each of the clusters and their surrounding blue stars w.r.t. the mean motion of LMC, represented as arrows. Violet arrows show the cluster proper motion, brown ones show that of the surrounding stars and the black arrows show the difference. Lengths of arrows are proportional to the values of proper motions, with black arrows all scaled up by 33% for better visibility
Comparing with the earlier derived values of the clusters' position angles, we find that both NGC 1978 and Hodge 3 (lying to the North) have a slower rotation speed than the stellar disk and a difference PA close to 80\({}^{\circ}\). NGC 2100 loosely follows this trend as well; however, the nature of its proper motion difference with the stellar disk is more questionable given previous literature.
NGC 2005 and NGC 2210 both lie Eastward and have a faster rotation speed and a difference PA close to 20\({}^{\circ}\). The trend cannot be robustly ascertained given we have only two data points for each of them. However, if it is of physical origin, it might point to two separate events to explain their nature. Hodge 11 has its difference pointed nearly antiparallel to that of its closest neighbour NGC 2210, which again may or may not have an underlying physical reason.
### Comparison with Surrounding Red Stars
For our six clusters with \(Q\geq 1.43\), we also compare the cluster proper motion with that of the surrounding red (\(G_{BP}-G_{RP}>0.75\)) stars (i.e. the older stellar population of the local LMC disk). In Table 4 we report the values of \(Q\) - the significance of the difference in mean proper motions - for the cluster versus the blue stars, the cluster versus the red stars, and the red stars versus the blue stars, respectively.
We see that for two clusters, NGC 2210 and NGC 2100, \(Q\) increases when the cluster motion is compared to the red stars instead of the blue ones. Since NGC 2100 is a young cluster, this result is expected. For NGC 2210, the error in the mean proper motion of the red stars was much smaller than that for the blue stars, which is reflected in the high \(Q\). The absolute values of the differences are nearly equal, and the similarity of the proper motions of the blue and red surrounding stars is reflected in the very low \(Q_{red,blue}=0.09\).
For the four other clusters, \(Q\) decreases when we compare the cluster stars with the red disk stars instead of the blue disk stars. This may be explained by a possible accretion/disruption event bringing in, apart from the cluster itself, a smaller population of old stars that mixes with the existing old population of the LMC disk. The most marked reduction of \(Q\) occurs for Hodge 3.
\(Q_{red,blue}\) in general takes high values (\(>1\)), even though the absolute values of the proper motion differences are all \(<0.06\) mas/yr (except for Hodge 3, where it is 0.12 mas/yr), mainly because the disk star samples are much larger, which brings down the effect of random errors significantly.
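For reference, the sketch below gives a schematic version of a difference-significance metric of this kind; the exact error terms entering \(Q\) are defined in Section 3.1, and the quadrature combination of the two standard errors of the mean used here is a simplifying assumption for illustration only.

```python
import numpy as np

def significance_Q(mu1, err1, mu2, err2):
    """Schematic significance of the difference between two mean proper motions.

    mu1, mu2   : (mu_alpha*cos(delta), mu_delta) means in mas/yr
    err1, err2 : corresponding errors on the means in mas/yr
    """
    dmu = np.asarray(mu1) - np.asarray(mu2)
    diff = np.hypot(dmu[0], dmu[1])
    # simplifying assumption: combine all error terms in quadrature
    err = np.sqrt(np.sum(np.square(err1)) + np.sum(np.square(err2)))
    return diff / err

# Illustrative values only (mas/yr)
print(significance_Q([1.90, 0.35], [0.02, 0.02], [1.78, 0.30], [0.01, 0.01]))
```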
## 5 Conclusions
We examined the proper motions of several clusters, quantified the differences between the cluster proper motions and those of the young, blue stars in their surroundings, and assessed their significance w.r.t. the errors in the determined means. The major results can be summarized as follows:
1. The majority of clusters have very small differences, with a value of \(Q\leq 1\) which can be accounted for by errors alone.
2. Systematic errors cancel out when taking a difference. Statistical errors in the mean are quantified accounting for the measurement errors in _Gaia_, the Gaussian fitting error, as well as the Gaussian standard deviation of proper motions in the surrounding star sample. The values of the metric \(Q\), which represents the significance of the mean proper motion difference over the sum of errors, follow a roughly normal Gaussian distribution, except for NGC 1978, which is a clear outlier and whose difference cannot be accounted for by statistical error distributions alone.
3. We use a cut of 85% confidence interval of \(Q\) to separate clusters with significantly different mean proper motions from their surroundings.
4. This group contains five globular clusters with relatively high proper motion differences \(>0.11\) mas/yr (equivalent to a physical plane-of-sky velocity difference \(>25\) km/s), and a sixth young cluster. A literature review for these six clusters provides evidence against an extragalactic origin for the young cluster, but moderate corroboration for our hypothesis that the old globulars may either have been accreted or have been affected by some significant disruption event caused by a nearby interacting galaxy. Correlations between the positions and the directions of the differences may point to two distinct events.
We mainly intend this work to serve as observational evidence of the five clusters having a different proper motion from the rest of their surroundings, although the normal distribution of \(Q\) shows that this difference may or may not have a physical origin. Only in the case of NGC 1978 can the presence of a physically distinct proper motion be clearly ascertained. A direct cause of this could be a galaxy accretion/disruption event; however, further work, such as simulations and more precise astrometric and spectral analyses, is essential to confirm the true reason for the difference.
## Acknowledgements
We extend our sincerest gratitude to the referee for their patient scrutiny and detailed comments that immensely helped to improve the analysis of errors and statistical significance of differences. We would also like to thank Himanshu Verma, PhD in Physics, IIT Bombay and Krititka, the astronomy club of IIT Bombay for giving us the first exposure to analyzing data from _Gaia_, without which this project would have been impossible. This work has used data from the European Space Agency (ESA) mission _Gaia_ ([https://sci.esa.int/web/gaia](https://sci.esa.int/web/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
## Data Availability
This work used public data available via astroquery.Gaia, the Python API to access _Gaia_ DR3 data. Processing of spacecraft data is carried out by DPAC.
|
2307.13639 | Fake It Without Making It: Conditioned Face Generation for Accurate 3D
Face Reconstruction | Accurate 3D face reconstruction from 2D images is an enabling technology with
applications in healthcare, security, and creative industries. However, current
state-of-the-art methods either rely on supervised training with very limited
3D data or self-supervised training with 2D image data. To bridge this gap, we
present a method to generate a large-scale synthesised dataset of 250K
photorealistic images and their corresponding shape parameters and depth maps,
which we call SynthFace. Our synthesis method conditions Stable Diffusion on
depth maps sampled from the FLAME 3D Morphable Model (3DMM) of the human face,
allowing us to generate a diverse set of shape-consistent facial images that is
designed to be balanced in race and gender. We further propose ControlFace, a
deep neural network, trained on SynthFace, which achieves competitive
performance on the NoW benchmark, without requiring 3D supervision or manual 3D
asset creation. The complete SynthFace dataset will be made publicly available
upon publication. | Will Rowan, Patrik Huber, Nick Pears, Andrew Keeling | 2023-07-25T16:42:06Z | http://arxiv.org/abs/2307.13639v2 | # Fake It Without Making It: Conditioned Face Generation for Accurate 3D Face Shape Estimation
###### Abstract
Accurate 3D face shape estimation is an enabling technology with applications in healthcare, security, and creative industries, yet current state-of-the-art methods either rely on self-supervised training with 2D image data or supervised training with very limited 3D data. To bridge this gap, we present a novel approach which uses a conditioned stable diffusion model for face image generation, leveraging the abundance of 2D facial information to inform 3D space. By conditioning stable diffusion on depth maps sampled from a 3D Morphable Model (3DMM) of the human face, we generate diverse and shape-consistent images, forming the basis of SynthFace. We introduce this large-scale synthesised dataset of 250K photorealistic images and corresponding 3DMM parameters. We further propose ControlFace, a deep neural network, trained on SynthFace, which achieves competitive performance on the NoW benchmark, without requiring 3D supervision or manual 3D asset creation.
## 1 Introduction
Supervised approaches for 3D face shape estimation are limited by a lack of 3D data; 3D capture is costly and time consuming, making large-scale 3D datasets infeasible. This has led to the wide use of self-supervised approaches [32][31][37][27][4]. However, these approaches have been shown to perform poorly in metric reconstruction [25].
Another approach is synthesising 3D face datasets using computer graphics. Wood et al. [35] render a large-scale dataset using a parametric face model and library of hand-crafted assets to train an accurate 2D landmark regressor. They are then able to fit a 3D face model to the predicted landmarks [36]. This leads to robust performance but there remains a large domain gap; the images are not photorealistic, the process requires crafted assets, and it is computationally expensive. They propose to 'fake it till you make it' with crafted 'fake' data enabling them to 'make it' with strong performance in the real world. We 'fake' it without having to make any assets at all.
Zielonka et al. [39] annotate and unify existing 3D face datasets to enable supervised training of their MICA (MetrIC fAce) network. This is the current state-of-the-art in metric 3D face reconstruction from a single image on the NoW benchmark [25]. 2D knowledge of the human face is incorporated through their use of ArcFace as a feature extractor. However, this approach does not take advantage of the wealth of 2D face data to increase the size of the dataset itself. We use 2D data in both dataset construction and within our network for accurate 3D face reconstruction from a large-scale 3D dataset.
We propose that image generation has reached a critical point where it can be used to bridge the gap between 2D and 3D, helping us solve 3D face reconstruction in a supervised manner. We use the generative capabilities of a 3D Morphable Model (3DMM) as conditioning for a stable diffusion image generation model. Hence, we combine known 3D shape information about the human face with 2D data on the appearance of human faces. This results in a photo-realistic 2D face image with known 3D shape, as shown in
Figure 1: SynthFace, our dataset of photorealistic faces and corresponding 3DMM parameters, is generated using conditioned stable diffusion and rendered depth maps from the FLAME head model. The first example is shown alongside the conditioning depth map.
Figure 1.
The unification of existing 3D datasets through MICA has shown promising results in 3D face shape estimation, but it represents an upper bound on a dataset for supervised 3DMM regression unless more 3D data is collected. We overcome this limitation by devising a dataset generation pipeline which combines 2D and 3D generative models. This is achieved using ControlNet [38], which adds conditional control to Stable Diffusion [24]. We use ControlNet to condition stable diffusion 1.5 on depth maps of our generated 3D faces. In doing so, we produce a large-scale dataset of known 3DMM parameters and generated 2D face images.
The primary contributions of our work are twofold: First, we introduce SynthFace, the first large-scale synthesised dataset of 250K photorealistic faces, depth maps, and corresponding 3DMM parameters, which significantly expands the available data for training and evaluating 3D face shape estimation models. Second, we introduce ControlFace, a network trained on this dataset. With ControlFace, we demonstrate competitive performance on the NoW benchmark, demonstrating that knowledge from 2D generative image models can be integrated to improve 3D face shape estimation.
In summary, our work presents a novel approach to bridge the gap between the limited availability of 3D data and the abundance of 2D data for face shape estimation. Our method is simple to implement, easily extensible, and computationally inexpensive. Future improvements to image generation models, conditioning methods, and 3D face models can all be easily exploited using our method. Through introducing SynthFace and demonstrating the effectiveness of ControlFace, we reveal a promising new direction for improving 3D face shape estimation.
## 2 Related Work
3D face shape estimation from a single image represents a significant challenge in the field of computer vision. It is an ill-posed problem due to the effects of perspective and scaling, which can result in similar 2D representations from different faces. To tackle this, 3D Morphable Models (3DMMs) have been extensively used since their introduction by Blanz and Vetter [2] as they offer prior knowledge of human facial structure and help resolve ambiguities.
3DMMs provide a compact representation of the human face, allow additional constraints to be placed on reconstructions, and facilitate morphing between faces. Furthermore, their generative capabilities enable the sampling of realistic, geometrically consistent faces from within the model's space [9].
However, despite the widespread success of supervised learning across computer vision tasks, it has been severely limited in 3D face reconstruction due to a lack of training data. In this context, supervised learning involves the use of paired 2D-to-3D data, whether real or synthetic, which formally comprises a set of face images and their corresponding 3D model representations [25].
To navigate the scarcity of 3D supervision, many recent approaches have considered optimisation-based and self-supervised methods, but these have shown poor performance on metric benchmarks [25]. Consequently, there is a need to explore supervised approaches to reconstruction and the collection of large-scale 3D training data to simplify the task.
In this work, we explore how the analytical and generative applications of 3DMMs can be combined to achieve accurate 3D face reconstruction. To achieve this, we examine current supervised methods for reconstruction, photorealistic face generation in both 2D and 3D, and how these approaches can be integrated to enable accurate 3D face reconstruction.
### Supervised Reconstruction
One of the earliest notable approaches to supervised reconstruction using deep learning is by Tran et al. [33]. They create surrogate ground truth parameters using pooled multi-image 3DMM estimation. This process involves optimisation-based reconstructions for each image of an individual, with final shape and texture parameters being a weighted average of individual estimations. This is a clever observation: taking advantage of existing 2D multi-image data to improve 3D reconstruction from a single image. This dataset is then used for supervised training with a deep CNN. Despite its novelty in leveraging existing 2D multi-image data for improved 3D reconstruction, this approach is inherently limited by the initial reconstruction method used to generate the training data; at best, it can learn to be as good as this method.
Richardson et al. [23] generate face geometries directly from a 3DMM, rendering the face as an image under randomised lighting conditions. This results in a dataset of images with known 3DMM parameters; however these images are far from photorealistic. This points to a wider problem in synthesised approaches: a domain gap between synthesised and real data that makes generalisation difficult and task performance poor [13].
In contrast, Wood et al. [35] render highly realistic 3D face models for landmark localisation, demonstrating that synthesised data can be used to solve real world problems in the wild. Wood et al. [36] build upon this work to train a dense landmark regressor for 702 facial points. A morphable model is fitted to these dense landmarks, leading to state-of-the-art results in 3D face reconstruction.
The success of this approach affirms the potential of network-based methods in advancing 3D shape estimation. However, this approach requires the manual creation of 3D
assets with associated time, financial, and computational costs. Furthermore, the rendered images fall short of photorealism which limits their uses for direct 3DMM regression.
Other approaches have considered using the 3D data we have rather than relying on synthesised datasets. Zielonka et al. [39] achieve state-of-the-art performance on the NoW benchmark through unifying existing 3D face datasets. This demonstrates the importance of supervision for reconstruction performance even when supervised with minimal available data. However, this approach already represents the upper bound for supervised learning using 3D data, unless further data is collected. In combining 8 existing datasets, they reach just 2315 individuals; this remains a small dataset for supervised learning techniques. Hence, a generative approach similar to Wood et al. [35] is required for unconstrained dataset generation.
Other significant works in this field include exploring a hybrid loss function for weakly-supervised learning [7], generating surrogate ground truth data via multi-image 3DMM fitting using joint optimisation [16], and learning an image-to-image translation network using known depth and feature maps generated from a 3DMM [26].
In our work, we build upon these existing supervised learning methods, combining 2D generative image models and 3D face models. This approach allows us to develop a dataset larger than that proposed by Wood et al. [35] but without the extensive effort required to create 3D assets. We 'fake it' without making it. By leveraging state-of-the-art generative image models, we generate photorealistic images comparable to those used to train MICA [39] while being able to scale dataset size to orders of magnitude above theirs. By taking this novel approach, we aim to significantly advance the field of 3D face reconstruction, introducing a new methodology to achieve 3D face reconstruction using supervised learning.
### Optimising Identity Vectors
The loss function used for supervised 3D reconstruction requires careful consideration. Tran et al. [33] introduce an asymmetric Euclidean loss for minimising errors between predicted and actual parameter vectors; this decouples overestimation errors from under-estimation errors. A standard Euclidean loss favours estimates close to 0 due to 3DMM parameters following a multivariate Gaussian distribution centred at zero by construction. They report more realistic face reconstructions using their asymmetric Euclidean loss.
However, these losses minimise distance in the vector space of 3DMM parameters rather than minimising reconstruction error directly. Richardson et al. [23] directly calculate the Mean Squared Error (MSE) between generated 3D mesh representations. This ensures the loss takes into account how the parameter values affect the reconstructed geometry. Zielonka et al. [39] similarly follow a mesh-based loss but introduce a region dependent weight mask to weigh the facial region much higher than the rest of the head. We seek accurate 3D face shape estimation so we will optimise directly in 3D space using a mesh loss.
### Realistic Parameterised Faces
Automating the tedious manual work behind photorealistic face generation remains an open challenge and long term goal of 3D face representations [9]. 3DMMs provide parametric control but generate unrealistic images; Generative Adversarial Networks (GANs) generate photorealistic images but lack explicit control [11]. Combining the parametric control of a 3DMM with the expressive power of generative models for faces has the potential to create large-scale datasets for supervised 3D face reconstruction.
Recent work has sought to harness the best of both worlds. StyleRig [30] was the first approach to offer explicit control over a pretrained StyleGAN through a 3DMM, allowing for parametric editing of generated images. Building on this, Ghosh et al. [11] condition StyleGAN2 [14] on rendered FLAME [15] geometry and photometric details to add parametric control to GAN-based face generation, facilitating full control over the image generation process. Sun et al. [29] propose a NeRF-based 3D face synthesis network which enforces similarity with a mesh generated by a 3DMM. However, in all these cases, the resulting images fall short of photorealism.
In the field of image synthesis, probabilistic diffusion models now represent the state-of-the-art, surpassing the capabilities of GANs [8]. These models, which have developed significantly since their proposal [28], have been further improved by concurrent advances in transformer-based architectures [34] and text-image embedding spaces [21]. Publicly available text-image embedding spaces such as CLIP [18] have further diversified and enhanced these models [20].
Stable Diffusion is a powerful text-to-image diffusion model, synthesising high resolution images from textual prompts using a Latent Diffusion architecture [24]. ControlNet [38], a HyperNetwork that influences the weights of a larger paired network [12], enables a variety of input modalities to be used to condition the output of Stable Diffusion. Implementations include depth maps, user sketches, and normal map conditioning networks, among others. We use their depth version of ControlNet that utilises MiDaS [22] to obtain 3M depth-image-caption pairs for training.
Unlike previous methods, this depth-conditioned ControlNet approach enables photorealistic image generation with strong shape control. For our use case, it enables us to create our own large-scale dataset of photorealistic images and known 3DMM parameters, with the conditioning depth maps being generated from an existing model of 3D face shape.
## 3 SynthFace: Fake It Without Making It
We present SynthFace, a comprehensive training dataset for 3D face shape estimation, comprising 250K photorealistic faces with 10K distinct 3D facial shapes. These \((512,512)\) resolution images were rendered in 30 hours utilising 12 GTX 1080 GPUs, which demonstrates a significantly lower resource requirement compared to similar work [35].
To create SynthFace, we first sample 10K faces from the FLAME head model. For each of these faces, we render five depth maps under different perspective projections; this is achieved by setting a constant 72.4\({}^{\circ}\) field of view and varying the distance between camera and subject. This gives us 50K depth maps. Each depth map captures a different perceived shape due to the effects of perspective projection. This is designed to enable networks trained on SynthFace to disentangle identity and perspective effects from the underlying 3D shape. We then use ControlNet to condition Stable Diffusion 1.5 to produce photorealistic faces that adhere to the shape of these depth maps. This is performed five times for each depth map, resulting in 250K photorealistic images with the corresponding 3DMM parameters that were used to render the conditioning depth maps. Figure 2 shows this pipeline.
In contrast to other 3D face datasets, we include a large number of different identities for the same face shape. An identity here is an individual recognisable person in 2D image space; a shape is the 3D mesh as parameterised by the 3DMM. We produce 25 images per distinct 3D shape, each capturing a different visual identity, but with the same underlying shape. Figure 3 shows how different identities are included within SynthFace for the same shape. We believe we are the first to incorporate this approach into a dataset for 3D face shape estimation by design. Hence, SynthFace enables disentanglement of shape and identity through supervised learning.
### 3D Face Model
We use the FLAME head model [15] as a generative model for face shape. FLAME is a linear 3DMM with both identity and expression parameters. Linear blend skinning (LBS) and pose-determined corrective blendshapes are used to model the neck, jaw, and eyeballs around joints. This results in a head model containing N = 5023 vertices and K = 4 joints.
FLAME takes coefficients for shape \(\vec{\beta}\in\mathbb{R}^{|\beta|}\), pose \(\vec{\theta}\in\mathbb{R}^{|\theta|}\), and expression \(\vec{\psi}\in\mathbb{R}^{|\psi|}\). These are modelled as vertex displacements from a template mesh \(\overline{\mathbf{T}}\). A skinning
Figure 3: SynthFace includes different visual identities for the same 3D shape. The first column shows two rendered depth maps of the same 3D shape under different perspective projections. The following images in each row are conditioned on that depth map.
Figure 2: SynthFace generation pipeline. We sample a 300 dimensional shape vector and use the FLAME decoder to produce a 3D mesh. We extract a depth map from this mesh which is used alongside a textual prompt as conditioning to generate a photorealistic face.
function \(W\) rotates the vertices of \(T\) around joints \(J\in\mathbb{R}^{3K}\). This is linearly smoothed by blendweights \(\mathcal{W}\in\mathbb{R}^{K\times N}\). The model is formally defined as:
\[M(\vec{\beta},\vec{\theta},\vec{\psi})=W(T_{P}(\vec{\beta},\vec{\theta},\vec{ \psi}),\mathbf{J}(\vec{\beta}),\vec{\theta},\mathcal{W}) \tag{1}\]
where
\[T_{P}(\vec{\beta},\vec{\theta},\vec{\psi})=\overline{\mathbf{T}}+B_{S}(\vec{ \beta};S)+B_{P}(\vec{\theta};P)+B_{E}(\vec{\psi};E) \tag{2}\]
Due to different face shapes requiring different joint locations, joints are defined as a function of \(\vec{\beta}\). Equation 2 includes shape, pose, and expression blendshapes. We sample shape coefficients and set pose and expression coefficients to 0. We use equation 1 to generate a complete 3D mesh of the head from these coefficients.
This approach enables us to create an arbitrary number of human head shapes, each compactly represented by a set of 3DMM parameters. Approaches which directly render textured versions of meshes to 2D suffer from low-fidelity, unrealistic outputs. Instead, we extract the depth map of each mesh to pass to ControlNet, generating realistic faces in the 2D domain.
### Depth Map Generation
In building SynthFace, we use all 300 FLAME shape parameters (\(\beta\)). We later use ArcFace as a feature extractor [6]. This network has been trained to extract discriminative facial features with invariance to rotation and expression of the face. ArcFace uses a novel additive angular margin loss to increase inter-class distance while reducing intra-class distance. Hence, we chose not to model these variations within our dataset. We believe this learning is better performed in the 2D domain with pre-trained networks specialised for these tasks.
We sample identity parameters, \(\vec{\beta}\), individually from a Gaussian distribution with mean 0 and s.d. 0.8. This enables a wide variation of face shape within our dataset. Expression coefficients, \(\vec{\psi}\), are set to 0. We further set pose coefficients, \(\vec{\theta}\), to 0. This results in a fixed frontal pose, which is suitable as input for identity descriptor networks such as ArcFace [6]. We use a perspective camera with a 72.4\({}^{\circ}\) field of view. We vary the distance between the camera and subject from 100 to 400 world units using uniform sampling. This leads to perspective projection effects which model changes observed in real life, enabling a network to learn to deal with these effects.
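A minimal sketch of this sampling step is given below; `flame_decoder` and `render_depth` are hypothetical placeholders for the FLAME forward pass and the depth renderer, and the pose/expression vector sizes are implementation-dependent assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SHAPES, N_VIEWS = 10_000, 5
SHAPE_STD, FOV_DEG = 0.8, 72.4

for i in range(N_SHAPES):
    beta = rng.normal(0.0, SHAPE_STD, size=300)   # identity coefficients
    theta = np.zeros(15)                          # pose coefficients, fixed to zero
    psi = np.zeros(100)                           # expression coefficients, fixed to zero
    # vertices = flame_decoder(beta, theta, psi)  # placeholder: FLAME forward pass
    for j in range(N_VIEWS):
        distance = rng.uniform(100.0, 400.0)      # camera-subject distance (world units)
        # depth = render_depth(vertices, fov_deg=FOV_DEG, distance=distance)
        # save (depth, beta) as one SynthFace conditioning pair
```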
### Conditioned Face Generation
We use the depth version of ControlNet to modulate the output of Stable Diffusion 1.5. It takes a depth map and textual prompt (positive and negative prompt) as input to produce an image. We produce 5 images per prompt. The inference procedure is set to run for 15 steps. The following prompt was used: Prompt:'studio portrait, profile picture, dslr', Negative Prompt:'monochrome, illustration, painting, unrealistic, artefacts, low quality, plain background'.
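A sketch of this generation step using the `diffusers` library is shown below; the specific checkpoint names are assumptions for illustration and may differ from the exact models used in this work.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Assumed checkpoints: a depth-conditioned ControlNet paired with Stable Diffusion 1.5
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

depth_map = Image.open("depth_0001.png")  # rendered FLAME depth map
prompt = "studio portrait, profile picture, dslr"
negative = ("monochrome, illustration, painting, unrealistic, artefacts, "
            "low quality, plain background")

images = pipe(prompt, image=depth_map, negative_prompt=negative,
              num_inference_steps=15, num_images_per_prompt=5).images
for k, im in enumerate(images):
    im.save(f"face_0001_{k}.png")
```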
A systematic method was undertaken to iteratively refine our prompt to generate realistic human faces. This process involved starting with a single text prompt,'studio portrait', and iteratively adding single phrases, both to positive and negative prompts, to build an improved prompt. The impact of these additional phrases was qualitatively evaluated in each case with only phrases that produced more visually lifelike outputs kept.
### Dataset Demographics
We use FaceLib [1] to estimate age and gender information from all generated faces within SynthFace. SynthFace is estimated to be 83.1% male and 16.9% female; this binary is reductive but useful as a diagnostic. Figure 4 details the estimated distribution of ages in SynthFace. It is important to document the demographic data of a proposed dataset, as performance can be expected to be worse on those outside of the data distribution. Each generated face reflects the data distributions within FLAME and Stable Diffusion, and how these are linked through ControlNet.
## 4 ControlFace: Accurate 3D Face Shape Estimation
We introduce ControlFace, a deep neural network trained on SynthFace. This network aims to disentangle shape from identity and perspective through supervised training on a large dataset which contains multiple identities for the same shape. It accepts an image as input and outputs a shape vector \(x\in\mathbb{R}^{300}\) for the FLAME decoder. All architectures, training, and evaluation are implemented using PyTorch [17]. Figure 5 shows the training process in full. Figure 6 shows ControlFace at inference time.
Figure 4: SynthFace age distribution.
### Training Data
We use the entirety of SynthFace as our training data. SynthFace contains 250K images of 10K unique shape identities. A unique shape identity is defined as a unique set of 3DMM parameters. For each of these unique shape identities, we render five depth maps under different perspective projections and five images for each of these depth maps.
### Pre-processing
First, faces are detected in each image using RetinaFace [5]. This provides a bounding box used to crop each image and warp it to a frontal pose. The images in SynthFace share a common frontal pose by design. However, this detection and warping step remains crucial. In-the-wild images have various poses which our approach must be able to handle. Next, we use the pretrained ArcFace network as a feature extractor for face description. ArcFace's 512-dimensional output embedding is used as input for a mapping network.
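A sketch of this pre-processing using the `insightface` package (one possible implementation of RetinaFace-style detection plus ArcFace embedding; the package, model name, and file path are assumptions, not necessarily what was used here) could look as follows.

```python
import cv2
from insightface.app import FaceAnalysis

# Detection + alignment + ArcFace-style recognition in one pipeline
app = FaceAnalysis(name="buffalo_l")
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("face_0001_0.png")
faces = app.get(img)                       # detect, align, and embed
embedding = faces[0].normed_embedding      # 512-d identity descriptor
print(embedding.shape)                     # (512,)
```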
### Mapping Network
We use the mapping network presented by Zielonka et al. [39]. This network consists of three fully-connected layers followed by a linear output layer. We train this network to regress a shape vector \(x\in\mathbb{R}^{300}\). This vector contains coefficients for all 300 identity bases in the FLAME head model. Weights are randomly initialised.
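A minimal PyTorch sketch of such a mapping network is given below; the hidden width (here 300) is an assumption, since only the layer count and the input/output sizes are stated.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps a 512-d ArcFace identity embedding to 300 FLAME shape coefficients."""
    def __init__(self, in_dim=512, hidden=300, out_dim=300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),          # linear output layer
        )

    def forward(self, z):
        return self.net(z)

model = MappingNetwork()
shape = model(torch.randn(8, 512))   # batch of ArcFace embeddings
print(shape.shape)                   # torch.Size([8, 300])
```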
### Training Strategy
We split SynthFace into training and validation sets, following an 80/20 split. We train our mapping network on the
Figure 5: ControlFace training. We train the mapping network within ControlFace on the SynthFace dataset. It is trained to minimise the mesh reconstruction error between a predicted 3D mesh and known 3D mesh for each image in SynthFace.
Figure 6: ControlFace at inference. ControlFace accepts an image as input, aligns it, and calculates an ArcFace embedding from this aligned detected face. A mapping network converts this ArcFace embedding to 3DMM parameters. The FLAME decoder generates a full head mesh from these parameters.
training set and select the best performing model based on the validation loss; we use early stopping with a patience of 20 to achieve this and run for 100 epochs.
We optimise with the AdamW optimizer, using learning rate \(\eta=1\times 10^{-5}\) and weight decay \(\lambda=2\times 10^{-4}\). We use the same optimisation strategy and loss function as originally presented in [39]. We use their masked mesh loss, which puts emphasis on the inner facial regions in reconstruction. This loss is detailed here:
\[L=\sum_{(I,G)}|\kappa_{\text{mask}}(G_{\text{3DMM}}(M(\text{ArcFace}(I)))-G)|, \tag{3}\]
This loss is calculated for all pairs of input images, \(I\), and known meshes, \(G\), within SynthFace. \(G_{\text{3DMM}}(M(\text{ArcFace}(I)))\) is the predicted mesh after the image is passed through ArcFace, the mapping network \(M\), and then the FLAME decoder \(G_{\text{3DMM}}\). \(\kappa_{\text{mask}}\) is a region-dependent weight mask with values: 150 for the face region, 1 for the back of the head, and 0.1 for the eyes and ears.
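A sketch of this masked mesh loss in PyTorch is given below; the per-vertex weight vector is a placeholder, since the actual region masks come from the FLAME topology.

```python
import torch

def masked_mesh_loss(pred_vertices, gt_vertices, vertex_weights):
    """Weighted L1 loss between predicted and ground-truth meshes.

    pred_vertices, gt_vertices : (B, N, 3) mesh vertices
    vertex_weights : (N,) weights, e.g. 150 (face), 1 (back of head), 0.1 (eyes/ears)
    """
    diff = (pred_vertices - gt_vertices).abs()                    # (B, N, 3)
    return (vertex_weights[None, :, None] * diff).sum(dim=(1, 2)).mean()

# Optimiser as described in the text
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=2e-4)
```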
## 5 Experiments and Evaluation
We test our proposed method against the NoW benchmark [25]. The NoW benchmark consists of 2054 images for 100 identities. It has become the standard benchmark for evaluating 3D face shape estimation from 2D images. These are split into validation and test sets consisting of 20 and 80 identities respectively. For each individual, the dataset includes images under different poses, occlusions, and expressions. We use the publicly available validation set of NoW for evaluation. First, a rigid alignment of the predicted meshes to the scans is performed using key facial landmarks. Then the scan-to-mesh distance between the predicted mesh and the scan is computed for each vertex. The mean, median, and standard deviation of these distances are computed across all images in the given set. Table 1 shows a comparison of ControlFace with current state-of-the-art methods.
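A simplified sketch of this evaluation step is shown below; it approximates the scan-to-mesh distance by nearest-vertex distances after a rigid (Kabsch) landmark alignment, which is a simplification of the benchmark's point-to-surface distance.

```python
import numpy as np
from scipy.spatial import cKDTree

def kabsch(src, dst):
    """Rigid transform (R, t) aligning src landmarks onto dst landmarks."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - src_mean).T @ (dst - dst_mean))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_mean - R @ src_mean

def scan_to_mesh_errors(scan_points, pred_vertices, pred_lmk, scan_lmk):
    R, t = kabsch(pred_lmk, scan_lmk)                 # align via key facial landmarks
    aligned = pred_vertices @ R.T + t
    dists, _ = cKDTree(aligned).query(scan_points)    # nearest-vertex approximation
    return dists

# errors = scan_to_mesh_errors(scan, verts, lmk_pred, lmk_scan)
# print(np.mean(errors), np.median(errors), np.std(errors))
```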
Our results are competitive with the current state-of-the-art in 3D face shape estimation without requiring any ground truth data. We achieve this by introducing a novel method for large dataset generation for 3D face shape estimation. Our work with ControlFace demonstrates that supervised training on this dataset leads to accurate 3D face shape estimation.
Crucially, our work is easily extensible. A longer generation time can lead to a larger dataset and improvements in 2D and 3D generative model capabilities can directly feed into future work. We believe this will enable future versions of SynthFace to close the performance gap with methods such as MICA and AlbedoGAN. Datasets for specific use cases, be that large pose variations or expressions, can be created by updating parameters in our generation code.
In unifying existing 3D face datasets, MICA reaches a natural limit in supervised learning on existing data sources. This is where the opportunity for synthesised approaches such as SynthFace lies. SynthFace can scale beyond this natural limit in real paired data.
## 6 Limitations and Future Work
The current iteration of SynthFace exclusively models variations in shape, leaving out expressive variations. Consequently, ControlFace solely focuses on shape prediction. It may be beneficial for future research to include varying expressions within the dataset or to devise a separate network to model these variations independently.
Our method employs ArcFace to generate a facial identity descriptor, which serves as the input to our mapping network. Importantly, this is an identity embedding and not a shape embedding. We make the assumption that the ArcFace-learned identity encompasses shape and that our mapping network can extract shape from this. Future research should explore retraining ArcFace or similar networks to more specifically extract shape information. Furthermore, the embedding network could be removed entirely, replacing it with a single network that learns to map images to 3DMM parameters in a supervised manner.
We utilise individual depth maps derived from a 3D face model to condition stable diffusion. Our knowledge of the full 3D geometry could be utilised further to improve the conditioned image. This could involve multi-image or even multi-modal conditioning to allow for even greater shape consistency between the 3D model and the generated 2D image.
We must also consider the ethical implications of our work. Our conditioned Stable Diffusion approach is shown to generate a dataset of predominantly younger men. This is a clear limitation. We also recognise that the deep-learning-based age and gender estimator used for this analysis will itself be biased. Commercial gender classification systems have been found to exhibit large variations in performance based on an individual's skin tone; they misclassified dark-skinned females more than any other group [3].
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Median & Mean & Std \\ \hline Deep3D [7] & 1.286 & 1.864 & 2.361 \\ DECA (detail) & 1.19 & 1.469 & 1.249 \\ DECA [10] & 1.178 & 1.464 & 1.253 \\ AlbedoGAN (detail) & 0.95 & 1.173 & 0.987 \\ MICA [39] & 0.913 & 1.130 & 0.948 \\ AlbedoGAN [19] & 0.903 & 1.122 & 0.957 \\ ControlFace (ours) & 1.192 & 1.472 & 1.222 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Reconstruction error (mm) on the validation set of the NoW benchmark [25] in non-metrical reconstruction. Comparison results are presented from [19].
Buolamwini et al. found that these systems were trained on datasets including predominantly light-skinned training subjects.
We agree with their proposals for intersectional error analysis using a benchmark balanced by gender and skin colour [3]. This does not currently exist for 3D face shape estimation. Further work should consider the creation of such a benchmark to enable the accuracy of approaches for specific subgroups of individuals to be analysed.
Generative models like stable diffusion require extensive datasets for training, which typically rely on publicly available data. Consequently, there's a likelihood that individuals' data has been used without their explicit consent. This raises clear ethical and legal concerns, particularly for models deployed in the real world.
Accurate 3D face shape estimation finds application in areas such as prosthesis design, yet it can also be utilised for malevolent purposes, including deepfake creation and mass surveillance. These potential misuses must be considered during model development and deployment and weighed against potential benefits.
## 7 Conclusion
In this work, we have addressed a key challenge in 3D face shape estimation by proposing a method for generating a large-scale dataset for supervised training. Our method combines existing 2D and 3D generative models to produce photorealistic images with corresponding 3DMM parameters. The resulting dataset, SynthFace, is the largest dataset of its kind and offers unique opportunities to disentangle shape from identity for accurate 3D face shape estimation.
Our results prove competitive with the existing state-of-the-art in the field of 3D face shape estimation. Notably, our technique does not rely on any ground truth data. Unlike previous methods, ours is easily extensible, computationally inexpensive, and produces photorealistic face images. We see this approach to solving 3D problems by using conditioned 2D diffusion models to hold great potential, particularly as existing 3D face datasets reach their limit for supervised learning.
We expect improvements in image generation, 3D face models, and conditioning networks to all improve the accuracy of this method for 3D face reconstruction; our work provides a clear path for continuous improvement. We believe this work will form the basis of a number of exciting developments in the future of this domain.
|
2310.10927 | Dissipative Effects as New Observables for Cosmological Phase
Transitions | We show that dissipative effects during cosmological first order phase
transitions lead to a frequency-dependent suppression for the usually dominant
gravitational wave production from sound waves, through an analytical modelling
of the source based on the sound shell model. This damping effect is more
pronounced for high frequencies or small scales, and modifies the spectral
shape and possibly the peak frequency. These modifications can be used to
reveal more information about the underlying particle interactions, serving as
a way of breaking the parameter degeneracy that plagues particle physics
studies based on the perfect fluid approximation. | Huai-Ke Guo | 2023-10-17T01:55:05Z | http://arxiv.org/abs/2310.10927v1 | # Dissipative Effects as New Observables for Cosmological Phase Transitions
###### Abstract
We show that dissipative effects during cosmological first order phase transitions lead to a frequency-dependent suppression for the usually dominant gravitational wave production from sound waves, through an analytical modelling of the source based on the sound shell model. This damping effect is more pronounced for high frequencies or small scales, and modifies the spectral shape and possibly the peak frequency. These modifications can be used to reveal more information about the underlying particle interactions, serving as a way of breaking the parameter degeneracy that plagues particle physics studies based on the perfect fluid approximation.
**Introduction.--** The direct detection of gravitational waves from compact object coalescences by the LIGO and Virgo collaboration [1] has revived the interest in the searches and theoretical studies of a stochastic background of gravitational waves of cosmological origin (see, e.g., [2; 3; 4; 5] for reviews), which if discovered would be another milestone and can play a role for fundamental physics [5; 6; 7] similar to what the cosmic microwave background radiation means for modern cosmology. One important class of this kind is that from cosmological first order phase transitions in the early universe [8; 9; 10; 11; 12], which is directly connected to the underlying particle physics and thus potentially to new physics. Gravitational waves from phase transitions have already been searched for at LIGO [13], corresponding to new physics at the scale of \(\mathcal{O}(10^{3}-10^{6})\) TeV [14], well beyond what current high energy colliders can reach, and at pulsar timing array experiments [15; 16], corresponding to the QCD scale. More recently, the pulsar timing array experiments have reported evidence for a stochastic background of gravitational waves [17; 18; 19; 20], with phase transition gravitational waves being one of the potential sources [21; 22]. At the electroweak scale, where current colliders operate, phase transition gravitational waves play a complementary role to direct searches for new particles at colliders (see, e.g., [5] for a review) and can help to pin down the origin of the electroweak symmetry breaking as well as the origin of the baryon asymmetry in the universe [23], thus making them an important goal for future space-based gravitational wave detectors LISA [8; 24], Taiji [25; 26; 27] and Tianqin [28; 29; 30]. Cosmological first order phase transitions can also provide new formation mechanisms for primordial black hole dark matter [31; 32; 33; 34; 35], generate new curvature perturbations [36], and produce primordial magnetic fields that can potentially explain the observed magnetic field in the voids [37; 38], among others.
The above goals hinge on a precise understanding of the dynamics and kinematics of the phase transition, and significant progress has been witnessed in recent years. It is now generally accepted that there are mainly three mechanisms for gravitational wave production during such a transition: bubble collisions [39; 40; 41], sound waves [42; 43] and magnetohydrodynamic (MHD) turbulence [8; 44; 45; 46; 47; 48; 49]. For transitions in a thermal plasma, the acoustic production from sound waves is generally believed to be the dominant one, while that from the MHD turbulence is subdominant and quite uncertain as of now. This makes a precise prediction of the acoustic production of gravitational waves especially important. Previous studies, however, rely on the perfect fluid approximation to the plasma, with dissipative effects largely neglected, though of course they are naturally included in studies of turbulence (see, e.g., [47; 50]). This results in, among others, the problem of parameter degeneracy, namely that the spectrum depends only on a set of bulk fluid parameters, which in turn translates into a dependence on the set of phase transition parameters: the dimensionless energy release normalized by the radiation energy density \(\alpha\), the typical inverse time duration \(\beta\) or the mean bubble separation \(R_{*}\), the wall velocity \(v_{w}\) and the transition temperature \(T_{*}\), while many particle physics models can lead to the same parameter values. Thus it is highly desirable to find ways that can break the parameter degeneracy.
In this work, we show that dissipative effects, which have largely been neglected in previous studies, can serve as one way of breaking the degeneracy, and make possible the probing of very weak particle interactions in cases where the effect of dissipation is strong.
**Dissipative effects in an imperfect fluid.--** For the generally dominant gravitational wave production from sound waves, when the scalar field driving the phase transition no longer plays a significant role and where possible electromagnetic fields are neglected, the matter content in the universe consists generally of a plasma of relativistic particles, possible non-relativistic particles, or others. The energy momentum tensor of such a system is described in previous studies by the well known perfect fluid form \(pg^{\mu\nu}+(\rho+p)U^{\mu}U^{\nu}\), where \(U^{\mu}\) is the velocity four-vector of the fluid, and \(p\) and \(\rho\) are the pressure and energy density both measured in a locally comoving Lorentz frame (i.e., \({\bf U}=0\)) at a certain instant
of time. However, in the presence of dissipative effects, which drive the system to a new equilibrium state, the energy momentum tensor needs to be modified by including a new term \(\Delta T^{\mu\nu}\) [51], which is described by three positive parameters: the shear viscosity \(\mu\), the bulk viscosity \(\zeta\) and the thermal conduction coefficient \(\chi\). Dissipative effects of these kinds are well known in Newtonian fluid mechanics and also in relativistic hydrodynamics (see e.g., [52]). In a Lorentz frame comoving with the fluid at a spatial point \(x^{i}\) and at a specific time \(x^{0}=t\), it takes the following form [51]
\[\Delta T^{ij} =-\mu\left(\frac{\partial U_{i}}{\partial x^{j}}+\frac{\partial U_{j}}{\partial x^{i}}-\frac{2}{3}\delta_{ij}\nabla\cdot\mathbf{U}\right)-\zeta\,\delta_{ij}\nabla\cdot\mathbf{U},\] \[\Delta T^{i0} =-\chi\left(\frac{\partial T}{\partial x^{i}}+T\dot{U}_{i}\right). \tag{1}\]
Going back to a generic frame, the equations driving the evolution of the system, more specifically of \(p\), \(\rho\), \(\mathbf{v}\), etc, can then be obtained by the conservation of energy and momentum, and also by the conservation of conserved quantum numbers present in the system. The part of the energy momentum tensor that generates gravitational waves is \(a^{2}(p+\rho)\gamma^{2}v^{i}v^{j}\), where \(v^{i}\equiv dx^{i}/d\eta\) with \(\eta\) the comoving time. To facilitate an analytical insight into the underlying physics, we assume in the following that the perturbations caused by the phase transition are small such that in calculating the energy momentum tensor we neglect the fluctuations of \(p\) and \(\rho\). So the key task is on the calculation of the stochastic velocity field \(\mathbf{v}(\eta,\mathbf{x})\). In the absence of dissipation, the sound equation leads to the following solution for the velocity field in an expanding universe [53]
\[v^{i}(\eta,\mathbf{x})=\int\frac{d^{3}q}{(2\pi)^{3}}\left[v^{i}_{\mathbf{q}}e^ {-i\omega\eta+i\mathbf{q}\cdot\mathbf{x}}+c.c.\right], \tag{2}\]
where \(\omega=qc_{s}\), with \(\mathbf{q}\) the comoving wavenumber and \(c_{s}\) the speed of sound, which is the dispersion relation in the absence of dissipation. The presence of dissipation has the effect of converting kinetic energy of the fluid into heat, leading then to a damping of sound waves, thus making \(v^{i}_{\mathbf{q}}\) dependent on \(\eta\). For the longitudinal sound waves excited by the expanding bubbles with wavenumber \(\mathbf{q}\), the amplitude of the velocity Fourier component is damped exponentially in the following way [51]
\[v^{i}_{\mathbf{q}}(\eta)\propto\exp\left[-\int\Gamma(\mu,\zeta,\chi)\,d\eta\right]. \tag{3}\]
The detailed form of \(\Gamma\) was derived in [51], with the key property that \(\Gamma\propto q^{2}\), a result well known from Newtonian fluid mechanics (see e.g., [52]). Thus perturbations of smaller scales or larger frequencies are more damped by the presence of dissipation.
**Velocity field and power spectrum.--** With the equations of motion set up for the system and the effect of dissipation included, we need the initial conditions to obtain the velocity field. Here the non-zero velocity field is excited by the interaction between the plasma and the expanding bubbles: as each bubble expands, the plasma surrounding it is stirred. Because there are many bubbles in a Hubble volume, a precise determination of the velocity field would require solving the fluid equations together with those governing the expansion and destruction of the bubbles, i.e., the evolution of the scalar field responsible for the transition. However, in the case where the velocity is small, one can linearly add the contributions from all bubbles, which is the essence of the sound shell model [54; 55] (see [53; 56; 57] for generalizations), such that an analytical determination of the velocity power spectrum can be achieved. Then in the sound shell model, the coefficient of each Fourier component \(v^{i}_{\mathbf{q}}(\eta)\) is obtained by linearly superposing the perturbations from a total of, say, \(N_{b}\) bubbles ever nucleated and destroyed upon collision with another bubble
\[v^{i}_{\mathbf{q}}(\eta)=\sum_{n=1}^{N_{b}}v^{i(n)}_{\mathbf{q}}\mathrm{exp} \left[-\int_{\eta^{(n)}_{d}}^{\eta}\Gamma d\bar{\eta}\right]\theta(\eta-\eta^ {(n)}_{d}), \tag{4}\]
where \(\eta^{(n)}_{d}\) denotes the destruction time of the \(n\)'th bubble, at which the stirred velocity starts contributing to sound waves, neglecting for simplicity possible forced motion of the sound shells [56]. The \(\eta\)-independent \(v^{i(n)}_{\mathbf{q}}\) can be calculated by solving the velocity profile [58] for the \(n\)'th bubble, nucleated at time \(\eta^{(n)}_{s}\) and at location \(\mathbf{x}^{(n)}\) [53], giving \(v^{i(n)}_{\mathbf{q}}=i\hat{q}^{i}\left(\eta^{(n)}_{lt}\right)^{3}e^{i\omega\eta^{(n)}_{d}-i\mathbf{q}\cdot\mathbf{x}^{(n)}}A\left(q\eta^{(n)}_{lt}\right)\), where \(\eta^{(n)}_{lt}=\eta^{(n)}_{d}-\eta^{(n)}_{s}\) is the conformal lifetime of the \(n\)'th bubble, and \(A\) is a function whose absolute value peaks at \(q\eta_{lt}\sim\mathcal{O}(1)\). The physical meaning of this equation is quite clear: at time \(\eta^{(n)}_{d}\), when the \(n\)'th bubble is destroyed, the initial velocity perturbation is matched onto freely propagating sound waves, giving its contribution to the Fourier component with an amplitude that is damped over the following time due to dissipation. Since different bubbles contribute at different times, the corresponding components of the sound waves get damped at different times accordingly.
The summation over the \(N_{b}\) bubbles leads to a velocity \(v^{i}_{\bf q}(\eta)\) that has a stochastic nature: the bubbles are nucleated and destroyed at different random times and at different random locations, thus resulting in a random velocity field. This stochastic nature, which is also classical, is similar to that encountered in the standard cosmological perturbation theory, where, however, the randomness originates from the quantum fluctuations of the inflaton (see, e.g., [59]). For such stochastic fields, meaningful quantities are their averages. Since the number of bubbles \(N_{b}\) is generally large, and according to the central limit theorem, the velocity is Gaussian to a good approximation, in which case the fundamental average is the two-point correlator. With the Fourier transform defined as \(\tilde{v}^{i}_{\mathbf{q}}(\eta)=\int d^{3}\mathbf{x}\ e^{-i\mathbf{q}\cdot\mathbf{x}}v^{i}(\eta,\mathbf{x})\), the fundamental two-point correlator takes a form that is a generalized
version of that in the absence of dissipations [53; 55]
\[\langle\tilde{v}^{i}_{\bf q}(\eta_{1})\tilde{v}^{j*}_{\bf k}(\eta_{2})\rangle=2\pi^{2}q^{-3}\delta^{3}({\bf q}-{\bf k})\hat{q}^{i}\hat{k}^{j}\]
where the delta function comes from the overall homogeneity of the universe and the velocity power spectrum \({\cal P}_{v}\) can be shown to be
\[{\cal P}_{v}(q,\eta_{1},\eta_{2})=\frac{q^{3}}{\pi^{2}}\int d\eta_{\rm lt}\int d\eta_{d}\left[P(\eta_{\rm lt},\eta_{d})\frac{N_{b}}{V}\right]\] \[\times\eta_{\rm lt}^{6}|A(q\eta_{\rm lt})|^{2}\exp\left[-\int_{\eta_{d}}^{\eta_{1}}\Gamma dt-\int_{\eta_{d}}^{\eta_{2}}\Gamma dt\right]. \tag{6}\]
Here \(P(\eta_{\rm lt},\eta_{d})\) is the probability density function of bubbles with lifetime \(\eta_{\rm lt}\) and destruction time \(\eta_{d}\). Marginalizing over \(\eta_{d}\) gives the lifetime distribution \(P(\eta_{\rm lt})\) that can be derived analytically [53; 55] or numerically from a simulation of bubble nucleation [60; 61], while the full distribution \(P(\eta_{\rm lt},\eta_{d})\) can be obtained straightforwardly from results of these numerical simulations if not possible analytically. In the absence of dissipations, \({\cal P}_{v}\) is independent of \(\eta_{1}\) and \(\eta_{2}\) and the right hand side of the velocity correlator depends on \(\eta_{1}\) and \(\eta_{2}\) through only the combination \((\eta_{1}-\eta_{2})\), meaning that the velocity field is stationary. Inclusion of dissipations, however, makes \({\cal P}_{v}\) dependent on \(\eta_{1}\) and \(\eta_{2}\) separately, and thus renders the velocity field non-stationary. Physically, the presence of this nonstationarity is apparent, as the damping caused by dissipation accumulates over time and thus depends on the absolute time when each velocity perturbation kicks in.
In light of the \(q^{2}\) dependence of the decay rate \(\Gamma\), it is convenient to define an effective damping length \(\int_{\eta_{d}}^{\eta_{1}}\Gamma dt=q^{2}d_{D}^{2}(\eta_{d},\eta_{1})\) and then the exponent becomes \(q^{2}[d_{D}^{2}(\eta_{d},\eta_{1})+d_{D}^{2}(\eta_{d},\eta_{2})]\equiv q^{2 }d_{D}^{2}(\eta_{d},\eta_{1},\eta_{2})\). In cases where the bubbles all disappeared within a short time range, we can neglect the variation among the bubble destruction times, and simply use a typical time \(\eta_{*}\). This leads to a result similar to that obtained without dissipations \({\cal P}_{v}(q)\),
\[{\cal P}_{v}(q,\eta_{1},\eta_{2})={\rm exp}\left[-q^{2}d_{D}^{2}(\eta_{*}, \eta_{1},\eta_{2})\right]{\cal P}_{v}(q). \tag{7}\]
The effect of dissipations on the velocity power spectrum is shown in the left panel of Fig. 1, for the simplified case of a constant \(d_{D}(\eta_{*},\eta_{1},\eta_{2})\) in units of the typical scale of the phase transition, the mean bubble separation \(R_{*}\), for illustration. One can clearly see the damping of the high frequency tail for \(d_{D}/R_{*}=10^{-3}\), and for larger values of \(d_{D}\) the damping effect extends more broadly to lower frequencies, and can even shift the peak to lower values.
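A minimal numerical sketch of this damping is given below; the double-broken power law used for \({\cal P}_{v}(q)\) is only a toy stand-in for the actual sound shell model spectrum, used to illustrate how the factor \(\exp[-q^{2}d_{D}^{2}]\) suppresses the high-frequency tail and can shift the peak as \(d_{D}/R_{*}\) grows.

```python
import numpy as np

q_Rstar = np.logspace(-1, 2, 400)        # dimensionless wavenumber q*R_*

def toy_Pv(x, x_peak=3.0):
    """Toy velocity power spectrum: rises below the peak, falls above it."""
    return (x / x_peak) ** 5 / (1.0 + (x / x_peak) ** 6)

for dD_over_Rstar in [0.0, 1e-3, 1e-2, 1e-1, 1.0]:
    damped = np.exp(-(q_Rstar * dD_over_Rstar) ** 2) * toy_Pv(q_Rstar)
    peak = q_Rstar[np.argmax(damped)]
    print(f"d_D/R_* = {dD_over_Rstar:7.0e}: peak at q R_* ~ {peak:5.2f}, "
          f"max ~ {damped.max():.3e}")
```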
**Damping of gravitational waves.--** The power spectrum of the stochastic gravitational waves generated by the stochastic velocity field can be obtained by first solving for the gravitational wave amplitude in terms of the source and then calculating its correlator, which is reduced to sums of products of the velocity correlators, and eventually becomes [53]:
\[{\cal P}_{\rm GW}(\eta,k) = \frac{32G^{2}[(\tilde{\rho}+\tilde{p})\,\tilde{U}_{f}^{2}]^{2}}{3 a^{2}H^{2}}(kR_{*})^{3}\int_{\tilde{y}_{s}}^{\tilde{y}}d\tilde{y}_{1}\int_{ \tilde{y}_{s}}^{\tilde{y}}d\tilde{y}_{2} \tag{8}\] \[\times\left(\frac{\partial\tilde{y}}{\partial\tilde{\eta}}\right) ^{2}\frac{\partial G(\tilde{y},\tilde{y}_{1})}{\partial\tilde{y}}\frac{ \partial G(\tilde{y},\tilde{y}_{2})}{\partial\tilde{y}}\frac{a(\eta_{s})^{8}}{ a^{2}(\eta_{1})a^{2}(\eta_{2})}\] \[\times\frac{\tilde{\Pi}^{2}(kR_{*},k\eta_{1},k\eta_{2})}{k^{2}},\]
where \(G(\tilde{y},\tilde{y}_{1/2})\) is the Green's function; \(\tilde{y}=f(k)a/a(\eta_{s})\), with \(f(k)\) a factor depending on the expansion rate of the universe; \(\tilde{U}_{f}\) is the root-mean-square fluid velocity. The key quantity in this equation is the dimensionless source auto-correlator
\[\tilde{\Pi}^{2}=\frac{\pi}{2}\frac{1}{\tilde{U}_{f}^{4}}\int d^{3}\bar{q}\,{\cal P}_{v}(\bar{q})\,{\cal P}_{v}(\bar{\tilde{q}})\frac{(1-\mu^{2})^{2}}{\bar{q}\,\bar{\tilde{q}}^{5}}e^{-(q^{2}+\tilde{q}^{2})d_{D}^{2}}\] \[\times\cos\left[c_{s}\bar{q}\frac{\beta_{c}(\eta_{1}-\eta_{2})}{\beta_{c}R_{*}}\right]\cos\left[c_{s}\bar{\tilde{q}}\frac{\beta_{c}(\eta_{1}-\eta_{2})}{\beta_{c}R_{*}}\right], \tag{9}\]
where \(\tilde{q}=|{\bf q}-{\bf k}|\), \(\mu=\hat{\bf q}\cdot\hat{\bf k}\), barred wavenumbers denote the dimensionless combinations \(\bar{q}=qR_{*}\) and \(\bar{\tilde{q}}=\tilde{q}R_{*}\), and \(\beta_{c}\) is the comoving version of \(\beta\) [53]. The integral in \({\cal P}_{\rm GW}\) can equivalently be transformed into one over \((\eta_{1}-\eta_{2})\) and another linear combination [53], which is more useful for a stationary source. The nonstationarity in the velocity power spectrum propagates into \(\tilde{\Pi}^{2}\), making it also non-stationary and making the integral more complicated. Despite this, qualitative physical insights can be gained: the highly oscillatory trigonometric functions force \(\eta_{1}\) to be close to \(\eta_{2}\), the same as in the absence of the smoothly varying dissipation term, and the exponential factor leads to a reduced auto-correlator. In the right panel of Fig. 1, we show the source auto-correlator for several constant values of the ratio \(d_{D}/R_{*}\). We can again see the reduced overall amplitude of the auto-correlator due to dissipation. Note that the assumption of a constant \(d_{D}\) in the above example results in a sta
Figure 1: Left: velocity power spectrum as a function of the dimensionless wavenumber \(qR_{*}\). Right: source auto-correlation as a function of the dimensionless conformal time difference \(\beta_{c}|\eta_{1}-\eta_{2}|\) for \(qR_{*}=10\). The numbers in the plots denote the ratio \(d_{D}/R_{*}\). In both plots, we fix \(\alpha=0.0046\), \(v_{w}=0.92\) as an example.
tionary source correlator, while generically the correlator depends on \(\eta_{1}\) and \(\eta_{2}\) in a more complicated way.
With the above dampings found for the velocity power spectrum and the source auto-correlator, it is natural that similar dampings show up in the eventual gravitational wave spectrum. This is shown in Fig. 2 for the same parameter choices as in the previous plots, with \(d_{D}/R_{*}=0,10^{-2},10^{-1}\) and \(1\). Here the quantity plotted captures all the \(k\)-dependence of \({\cal P}_{\rm GW}(\eta,k)\), while unrelated overall factors are neglected. The message carried by this plot is the central result of this work: dissipations lead to a sharp suppression of the gravitational wave spectrum, usually acting at the high frequency tail and possibly extending to the lower frequency regime. While dampings at the high frequency tail affect the UV spectral shape, those at low frequencies can even shift the peak frequency to smaller values. Exactly at what frequency the damping starts to appear and how much the spectrum is damped depend strongly on the values of \(\mu,\zeta\) and \(\chi\) and their possible time variations as the universe expands. For the simplified example where \(d_{D}\) is constant, the source becomes stationary and the accumulation of gravitational wave power over time is mostly decoupled from the integral over the short auto-correlation time (\(\eta_{1}-\eta_{2}\)), which then results in the factorized form of the spectrum, i.e., \({\cal P}_{\rm GW}\propto(kR_{*})^{3}\tilde{P}_{\rm SW}\Upsilon(\tau_{\rm sw})\)[53] with \(\tau_{\rm sw}\) the lifetime of sound waves. For more realistic scenarios where \(d_{D}\) is a function of both \(\eta_{1}\) and \(\eta_{2}\), the situation is more complicated, as the spectrum generated at a later time has a different shape from that generated at an earlier time, and the above factorized form of the spectrum does not generally exist.
**Lifetime of sound waves and time scales.**-- The interplay between the suppressed production of gravitational waves due to the expansion of the universe, as captured by the function \(\Upsilon\), and the damping due to dissipation in realistic cases where \(d_{D}\) varies with time leaves interesting imprints on the shape as well as on the amplitude of the gravitational wave spectrum. The damped spectral shape for the affected frequency range varies over time, and thus the eventual shape is a combined result in which the details of the dissipation are recorded. In addition, for cases where the effective damping length \(d_{D}\) is large, the damping, which now reduces the amplitude across a larger frequency range, together with the dilution effect can together determine a new effective lifetime of the sound waves, beyond which negligible gravitational waves are produced. This is in contrast to the widely adopted lifetime of sound waves, \(R_{*}/\bar{U}_{f}\)[62; 63], which corresponds to the onset of MHD turbulence. More interestingly, in cases where dissipation is strong and thus the Reynolds number is significantly reduced, the turbulence might never have been generated, and it is solely dissipation that determines the lifetime of sound waves.
From the above analysis, one can gain analytical insight without performing a dedicated numerical integration by identifying the following time scales: (1) the auto-correlation time, which is typically very short and thus decoupled from the others; (2) the mean free time of the particles responsible for the dissipation, whose value and time evolution eventually determine the value and the time variation of the effective damping length \(d_{D}\); (3) the onset time of the MHD turbulence, which marks the end of the acoustic production of gravitational waves; (4) the Hubble time, or the expansion rate of the universe when the sound waves are active. The final gravitational wave spectrum from sound waves is thus a complicated product of the intertwining of these time scales and the underlying effects, which is model dependent and also depends on the cosmological context. While these may seem like messy complications, they are really a blessing, as they break the parameter degeneracy that plagues studies based on the perfect fluid approximation.
**Experimental detection and microscopic origin.**-- The modified spectral shape and amplitude, as parametrized by \(\mu\), \(\zeta\) and \(\chi\), can be readily searched for at gravitational wave detectors, extending previous searches at LIGO, Virgo and KAGRA [13], and at PTA experiments [21]. For future space-based interferometers such as LISA, Taiji and Tianqin, which target the most important electroweak phase transition, further correlation and complementarity between experimental detection [64; 65] and traditional particle physics studies, such as direct searches at high energy colliders, can be readily explored. Measurements of this kind open new portals into studies of the underlying microscopic particle interactions. The particle information gained from measuring \(\mu\), \(\zeta\) and \(\chi\) differs from that of studies based on the perfect fluid approximation: the former is caused by particles with long mean free paths in the plasma, and thus with very weak interactions, while the latter comes from the bulk fluid parameters of the perfect fluid approximation. This singles out potentially important applications to probing dark matter and other weakly interacting particles, which are abundant in physics beyond the standard model. Studies of particle interactions through these dissipative observables are thus complementary to traditional direct searches, which are effective when the interactions are strong.

Figure 2: The dimensionless gravitational wave spectrum before redshift as a function of the dimensionless wavenumber \(kR_{*}\), for several constant values of \(d_{D}/R_{*}\).
Aside from gravitational waves, also known as tensor mode perturbations in cosmological perturbation theory, dissipation also suppresses scalar perturbations, such as density perturbations. Indeed, the same kind of damping is known in cosmological perturbation theory as Silk damping [66], which suppresses the power spectrum of the CMB temperature anisotropy at large multipole moment \(l\) due to the increasing diffusion length of the photons at recombination. Studies of dampings to gravitational waves and scalar perturbations in the context of cosmological first order phase transitions can thus help reveal equally important information about the underlying particle interactions.
**Conclusion.--** In this work, we have studied the effect of dissipation on the gravitational wave production from the usually dominant sound waves. We observe damping of the gravitational wave spectrum at the high frequency tail for short damping lengths, and over a broader frequency range for longer ones, where the peak frequency can also be shifted to lower values. These modifications serve as a set of new observables for probing cosmic phase transitions and break the parameter degeneracy that plagues previous particle physics studies, thus opening up new portals to studies of the underlying particle physics models. This also implies that updated searches are desired at LIGO, by the PTA experiments and others.
**Acknowledgements.--** We would like to thank Ligong Bian, Wei Chao, and Yvonne Wong for helpful discussions. This work is supported by the startup fund provided by the University of Chinese Academy of Sciences and by the National Science Foundation of China (NSFC) under Grant No. 12147103.
|
2305.05622 | Multilinear Hyperquiver Representations | We count singular vector tuples of a system of tensors assigned to the edges
of a directed hypergraph. To do so, we study the generalisation of quivers to
directed hypergraphs. Assigning vector spaces to the nodes of a hypergraph and
multilinear maps to its hyperedges gives a hyperquiver representation.
Hyperquiver representations generalise quiver representations (where all
hyperedges are edges) and tensors (where there is only one multilinear map).
The singular vectors of a hyperquiver representation are a compatible
assignment of vectors to the nodes. We compute the dimension and degree of the
variety of singular vectors of a sufficiently generic hyperquiver
representation. Our formula specialises to known results that count the
singular vectors and eigenvectors of a generic tensor. | Tommi Muller, Vidit Nanda, Anna Seigal | 2023-05-09T17:07:59Z | http://arxiv.org/abs/2305.05622v2 | # Multilinear Hyperquiver Representations
###### Abstract.
We count singular vector tuples of a system of tensors assigned to the edges of a directed hypergraph. To do so, we study the generalisation of quivers to directed hypergraphs. Assigning vector spaces to the nodes of a hypergraph and multilinear maps to its hyperedges gives a hyperquiver representation. Hyperquiver representations generalise quiver representations (where all hyperedges are edges) and tensors (where there is only one multilinear map). The singular vectors of a hyperquiver representation are a compatible assignment of vectors to the nodes. We compute the dimension and degree of the variety of singular vectors of a sufficiently generic hyperquiver representation. Our formula specialises to known results that count the singular vectors and eigenvectors of a generic tensor.
## 1. Introduction
The theory of quiver representations provides a unifying framework for some fundamental concepts in linear algebra [7]. In this paper, we introduce and study a natural generalisation of quiver representations, designed to analogously serve the needs of multilinear algebra.
###### Quiver Representations and Matrix Spectra
A _quiver_\(Q\) consists of finite sets \(V\) and \(E\), whose elements are called vertices and edges respectively, along with two functions \(s,t:E\to V\) sending each edge to its source and target vertex. It is customary to write \(e:i\to j\) for the edge \(e\) with \(s(e)=i\) and \(t(e)=j\). The definition does not prohibit self-loops \(s(e)=t(e)\) nor parallel edges \(e_{1},e_{2}:i\to j\). A _representation_\((U,\alpha)\) of \(Q\) assigns a finite-dimensional vector space \(U_{i}\) to each \(i\in V\) and a linear map \(\alpha_{e}:U_{i}\to U_{j}\) to each \(e:i\to j\) in \(E\). Originally introduced by Gabriel to study finite-dimensional algebras [22], quiver representations have since become ubiquitous in mathematics. They appear prominently in disparate fields ranging from representation theory and algebraic geometry [12] to topological data analysis [32]. In most of these appearances, the crucial task is to classify the representations of a given quiver up to isomorphism. This amounts in practice to cataloguing the _indecomposable_ representations; i.e., those that cannot be expressed as direct sums of smaller nontrivial representations.
For all but a handful of quivers, the set of indecomposables (up to isomorphism) is complicated, and such a classification is hopeless [30, Theorem 7.5]. Nevertheless, one may follow the spirit of [37] and use quivers to encode compatibility constraints with spectral interpretations. We work with representations that assign vector spaces \(U_{i}=\mathbb{C}^{d_{i}}\) to each vertex and matrices \(A_{e}:\mathbb{C}^{d_{i}}\rightarrow\mathbb{C}^{d_{j}}\) to each edge. We denote the quiver representation by \((\boldsymbol{d},A)\), where \(\boldsymbol{d}:=(d_{1},\ldots,d_{n})\) is the dimension vector. Let \([\boldsymbol{x}]\in\mathbb{P}(\mathbb{C}^{d})\) denote the projectivisation of \(\boldsymbol{x}\in\mathbb{C}^{d}\). We define the **singular vectors** of a quiver representation \((\boldsymbol{d},A)\) to consist of tuples \(\big{(}[\boldsymbol{x}_{i}]\in\mathbb{P}(\mathbb{C}^{d_{i}})\mid i\in V\big{)}\) for which there exist scalars \((\lambda_{e}\mid e\in E)\)
so that the compatibility constraint \(A_{e}\boldsymbol{x}_{i}=\lambda_{e}\boldsymbol{x}_{j}\) holds across each edge \(e:i\to j\). Standard notions from linear algebra arise as special cases of such singular vectors, see also Figure 1:
1. The eigenvectors of a matrix \(A:\mathbb{C}^{d}\to\mathbb{C}^{d}\) are the singular vectors of the representation of the Jordan quiver that assigns \(\mathbb{C}^{d}\) to the unique node and \(A\) to the unique edge.
2. The singular vectors of a matrix \(A:\mathbb{C}^{d_{1}}\to\mathbb{C}^{d_{2}}\) arise from the representation of the directed cycle of length \(2\), with \(A\) assigned to one edge and \(A^{\top}\) assigned to the other.
3. The generalised eigenvectors of a pair of matrices \(A,B:\mathbb{C}^{d}\to\mathbb{C}^{d}\) - i.e., non-zero solutions \(\boldsymbol{x}\) to \(A\boldsymbol{x}=\lambda\cdot B\boldsymbol{x}\) for some \(\lambda\in\mathbb{C}\) - are the singular vectors of the representation of the Kronecker quiver with \(A\) on one edge and \(B\) on the other.
For \(d=d_{1}=d_{2}\), a generic instance of any of these three quiver representations has \(d\) singular vectors.
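As a quick numerical sanity check of case (2), assuming Python with numpy, the singular value decomposition of a generic square matrix yields exactly \(d\) tuples satisfying the compatibility constraints across the two edges of the directed cycle:

```python
import numpy as np

# Sanity check for case (2): the singular vectors of a matrix A are the
# singular vector tuples of the directed 2-cycle carrying A and A^T, and a
# generic d x d matrix has exactly d of them.
rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d))

U, S, Vt = np.linalg.svd(A)               # A = U @ diag(S) @ Vt
for k in range(d):
    u, v, s = U[:, k], Vt[k, :], S[k]
    assert np.allclose(A @ v, s * u)      # edge 1:  A v   = lambda_1 u
    assert np.allclose(A.T @ u, s * v)    # edge 2:  A^T u = lambda_2 v
print(f"{d} compatible singular vector pairs for a generic {d}x{d} matrix")
```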
**Hyperquiver Representations and Tensors.** This century has witnessed progress towards extending the spectral theory of matrices to the multilinear setting of tensors [34]. Given a tensor \(T\in\mathbb{C}^{d_{1}}\otimes\cdots\otimes\mathbb{C}^{d_{n}}\), we write \(T(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{j-1}\,,\cdot\,,\boldsymbol{x}_{j+ 1},\ldots,\boldsymbol{x}_{n})\in\mathbb{C}^{d_{j}}\) for the vector with \(i\)-th coordinate
\[\sum_{i_{1}=1}^{d_{1}}\ldots\sum_{i_{j-1}=1}^{d_{j-1}}\sum_{i_{j+1}=1}^{d_{j+1 }}\ldots\sum_{i_{n}=1}^{d_{n}}T_{i_{1},\ldots,i_{j-1},i,i_{j+1},\ldots,i_{n}}( \boldsymbol{x}_{1})_{i_{1}}\cdots(\boldsymbol{x}_{j-1})_{i_{j-1}}(\boldsymbol {x}_{j+1})_{i_{j+1}}\cdots(\boldsymbol{x}_{n})_{i_{n}}.\]
Eigenvectors and singular vectors of tensors were introduced in [31, 33]. The eigenvectors of \(T\in(\mathbb{C}^{d})^{\otimes n}\) are all non-zero \(\boldsymbol{x}\in\mathbb{C}^{d}\) satisfying
\[T(\,\cdot\,,\boldsymbol{x},\ldots,\boldsymbol{x})=\lambda\cdot\boldsymbol{x},\]
for some scalar \(\lambda\in\mathbb{C}\). In the special case of matrices, this reduces to the familiar formula \(A\boldsymbol{x}=\lambda\boldsymbol{x}\). Similarly, the singular vectors of a tensor \(T\in\mathbb{C}^{d_{1}}\otimes\cdots\otimes\mathbb{C}^{d_{n}}\) are the tuples of non-zero vectors \((\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{n})\in\mathbb{C}^{d_{1}}\times \cdots\times\mathbb{C}^{d_{n}}\) satisfying
\[T(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{j-1},\,\cdot\,,\boldsymbol{x}_{j +1},\ldots,\boldsymbol{x}_{n})=\mu_{j}\boldsymbol{x}_{j}\]
for all \(j\). This specialises for matrices to the familiar pair of conditions \(A\boldsymbol{x}_{2}=\mu_{1}\boldsymbol{x}_{1}\) and \(A^{\top}\boldsymbol{x}_{1}=\mu_{2}\boldsymbol{x}_{2}\).
Eigenvectors and singular vectors have been computed for special classes of tensors in [35, 36]; they have been used to study hypergraphs [5, 34] and to learn parameters in latent variable models [3, 4]. For a symmetric tensor \(T\in(\mathbb{C}^{d})^{\otimes n}\) the eigenvectors are, equivalently, all non-zero \(\boldsymbol{x}\in\mathbb{C}^{d}\) for which a scalar multiple \(\lambda\cdot\boldsymbol{x}^{\otimes n}\) constitutes a critical point to the best rank-one approximation problem for \(T\). Similarly, the singular vectors of \(T\in\mathbb{C}^{d_{1}}\otimes\cdots\otimes\mathbb{C}^{d_{n}}\) are all tuples of non-zero vectors \((\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{n})\in\mathbb{C}^{d_{1}}\times \cdots\times\mathbb{C}^{d_{n}}\) for which \(\lambda\cdot\boldsymbol{x}_{1}\otimes\cdots\otimes\boldsymbol{x}_{n}\) is a critical point to the best rank one approximation for \(T\)[31].

Figure 1. Quiver representations corresponding to (a) the eigenvectors of a matrix, (b) the singular vectors of a matrix, (c) the generalised eigenvectors of a pair of matrices.
In order to create the appropriate generalisation of quiver singular vectors to subsume these notions from the spectral theory of tensors, we generalise from quivers to _hyperquivers_. In general, hyperquivers are obtained from quivers by allowing the source and target maps \(s,t:E\to V\) to be multivalued. For our purposes, it suffices to consider hyperquivers where each edge \(e\in E\) has a single target vertex. The hyperedge \(e\) now has a tuple of sources \((s_{1}(e),s_{2}(e),\ldots,s_{\mu}(e))\in V^{\mu}\) for some \(e\)-dependent integer \(\mu\geq 1\). A representation \(\boldsymbol{R}=(\boldsymbol{d},T)\) of such a hyperquiver assigns to each vertex \(i\) a vector space \(\mathbb{C}^{d_{i}}\) and to each edge \(e\) a tensor
\[T_{e}\in\mathbb{C}^{d_{t(e)}}\otimes\mathbb{C}^{d_{s_{1}(e)}}\otimes\ldots \otimes\mathbb{C}^{d_{s_{\mu}(e)}}.\]
We identify a vector space \(\mathbb{C}^{d}\) with its dual \((\mathbb{C}^{d})^{*}\), allowing us to view the tensor \(T_{e}\) as a multilinear map
\[T_{e}:\prod_{j=1}^{\mu}\mathbb{C}^{d_{s_{j}(e)}} \to\mathbb{C}^{d_{t(e)}}\] \[(\boldsymbol{x}_{s_{1}(e)},\ldots,\boldsymbol{x}_{s_{\mu}(e)}) \mapsto T_{e}(\,\cdot\,,\boldsymbol{x}_{s_{1}(e)},\ldots, \boldsymbol{x}_{s_{\mu}(e)}).\]
Our hyperquiver representations reduce to usual quiver representations when each edge has \(\mu=1\).
The set of singular vectors of a hyperquiver representation \(\boldsymbol{R}\), denoted \(\mathcal{S}(\boldsymbol{R})\), consists of all tuples \(\big{(}[\boldsymbol{x}_{i}]\in\mathbb{P}(\mathbb{C}^{d_{i}})\mid i\in V\big{)}\) that satisfy
\[T_{e}(\,\cdot\,,\boldsymbol{x}_{i_{1}},\ldots,\boldsymbol{x}_{i_{\mu}})= \lambda_{e}\cdot\boldsymbol{x}_{j}, \tag{1.1}\]
for some scalar \(\lambda_{e}\), across every edge \(e\in E\) of the form \((i_{1},\ldots,i_{\mu})\to j\). We work with vectors in a product of projective spaces, since we require the vectors to be non-zero (as for the singular vectors of a matrix) and moreover because the equation (1.1) still holds after non-zero rescaling of each \(\boldsymbol{x}_{i}\), albeit for different scalars \(\lambda_{e}\).
Perhaps the simplest nontrivial family of examples is furnished by starting with the quiver with a single vertex and a single hyperedge with \(m-1\) source vertices -- we call this the \(m\)-Jordan hyperquiver. Consider the representation that assigns, to the vertex, the vector space \(\mathbb{C}^{d}\) for some dimension \(d\geq 0\), and to the edge, a tensor \(T\in(\mathbb{C}^{d})^{\otimes m}\), seen as a multilinear map \(T:(\mathbb{C}^{d})^{(m-1)}\to\mathbb{C}^{d}\) that contracts vectors along the last \(m-1\) modes of \(T\); see Figure 1(a) for the case \(m=3\). The singular vectors of this representation are all \([\boldsymbol{x}]\in\mathbb{P}(\mathbb{C}^{d})\) satisfying \(T(\,\cdot\,,\boldsymbol{x},\boldsymbol{x},\ldots,\boldsymbol{x})=\lambda\cdot \boldsymbol{x}\) for some scalar \(\lambda\in\mathbb{C}\). The singular vectors of the representation are therefore the eigenvectors of the tensor \(T\).
The compatibility conditions that define singular vectors can be reframed in terms of the vanishing of minors of suitable \(d_{i}\times 2\) matrices. Hence \(\mathcal{S}(\boldsymbol{R})\) is a multiprojective variety in \(\prod_{i\in V}\mathbb{P}(\mathbb{C}^{d_{i}})\). This variety simultaneously forms a multilinear (and projective) generalisation of the linear _space of sections_ of a quiver representation from [37], and a multi-tensor generalisation of the set of singular vectors of a single tensor from [20]. The property that a point lies in \(\mathcal{S}(\boldsymbol{R})\) is equivariant under an orthogonal change of basis on each vector space, as is true for the singular vectors of a matrix, as follows. Let \(([x_{1}],\ldots,[x_{n}])\in\prod_{i\in V}\mathbb{P}(\mathbb{C}^{d_{i}})\) be a singular vector tuple of a hyperquiver representation with tensors \(T_{e}\in\mathbb{C}^{d_{t(e)}}\otimes\mathbb{C}^{d_{s_{1}(e)}}\otimes\ldots \otimes\mathbb{C}^{d_{s_{\mu}(e)}}\)
and let \(Q_{1},\ldots,Q_{n}\) be a tuple of complex orthogonal matrices; i.e., \(Q_{i}^{\top}Q_{i}=I_{d_{i}}\). Then \(([Q_{1}x_{1}],\ldots,[Q_{n}x_{n}])\) is a singular vector tuple of the hyperquiver representation where each \(T_{e}\) has its source components multiplied by \(Q_{s_{j}(e)}^{\top}\) and target component multiplied by \(Q_{t(e)}\). We expect the topology of this variety, particularly its (co)homology groups, to provide rich and interesting isomorphism invariants for hyperquiver representations.
**Main Result.** We derive exact and explicit formulas for the _dimension_ and _degree_ of \(\mathcal{S}(\boldsymbol{R})\) when \(\boldsymbol{R}\) is a sufficiently generic representation of a given hyperquiver. Here is a simplified version, in the special case when all vector spaces are of the same dimension.
**Theorem**.: _Let \(\boldsymbol{R}=(\boldsymbol{d},T)\) be a generic representation of a hyperquiver \(H=(V,E)\) with \(\boldsymbol{d}=(d,d,\ldots,d)\). Let \(N=(d-1)(|V|-|E|)\) and \(D\) be the coefficient of \(\left(\prod_{i\in V}h_{i}\right)^{d-1}\) in the polynomial_
\[\left(\sum_{i\in V}h_{i}\right)^{N}\cdot\prod_{e\in E}\left(\sum_{k=1}^{d}h_{ t(e)}^{k-1}\cdot h_{s(e)}^{d-k}\right),\quad\text{where}\quad h_{s(e)}:=\sum_{j=1}^{ \mu(e)}h_{s_{j}(e)}.\]
_Then \(\mathcal{S}(\boldsymbol{R})=\varnothing\) if and only if \(D=0\). Otherwise, \(\mathcal{S}(\boldsymbol{R})\) has dimension \(N\) and degree \(D\). Moreover, if \(\dim\mathcal{S}(\boldsymbol{R})=0\), then each singular vector tuple occurs with multiplicity \(1\)._
**Example 1.1**.: Let \(\boldsymbol{R}\) be the hyperquiver representation in Figure 3, with \(T\in\mathbb{C}^{3}\otimes\mathbb{C}^{3}\otimes\mathbb{C}^{3}\) a generic tensor. We have \(N=(3-1)(2-2)=0\). We seek the coefficient \(D\) of the monomial \(h_{1}^{2}h_{2}^{2}\) in the polynomial
\[\left((h_{1}+h_{2})^{2}+h_{1}(h_{1}+h_{2})+h_{1}^{2}\right)^{2}=9h_{1}^{4}+18 h_{1}^{3}h_{2}+15h_{1}^{2}h_{2}^{2}+6h_{1}h_{2}^{3}+h_{2}^{4}.\]
We see that \(D=15\). Hence the singular vector variety \(\mathcal{S}(\boldsymbol{R})\) has dimension \(N=0\) and consists of \(15\) singular vector tuples, each occurring with multiplicity \(1\).
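The coefficient extraction in Example 1.1 can be verified symbolically; a minimal sketch, assuming sympy:

```python
import sympy as sp

# Symbolic check of Example 1.1: the coefficient of h1^2 h2^2 in
# ((h1+h2)^2 + h1*(h1+h2) + h1^2)^2 is the degree of S(R) for Figure 3.
h1, h2 = sp.symbols('h1 h2')
poly = sp.expand(((h1 + h2)**2 + h1*(h1 + h2) + h1**2)**2)
D = poly.coeff(h1, 2).coeff(h2, 2)
print(D)   # 15, matching Example 1.1
```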
Our argument follows the work of Friedland and Ottaviani from [20] -- we first construct a vector bundle whose generic global sections have the singular vectors of \(\boldsymbol{R}\) as their zero set, and then apply a variant of Bertini's theorem to count singular vectors by computing the top Chern class of the bundle. The authors of [20] compute the number of singular vectors of a single generic tensor -- this corresponds to counting the singular vectors of the hyperquiver representation depicted in Figure 2(b). Here we derive general formulas to describe the algebraic variety of singular vectors for an arbitrary network of (sufficiently generic) tensors.

Figure 2. Examples of hyperquiver representations corresponding to (a) the eigenvectors of a tensor, (b) the singular vectors of a tensor, and (c) the generalised eigenvectors of a pair of tensors.
**Related Work.** Special cases of our degree formula, all in the case \(\dim\mathcal{S}(\boldsymbol{R})=0\), recover existing results from the literature -- see [9] and [19, Corollary 3.2] for eigenvector counts, [20] for singular vector counts, and [13, 20] for generalised eigenvector counts. Other recent work that builds upon the approach in [20] includes [15, 38] which study the span of the singular vector tuples, [40] which studies tensors determined by their singular vectors, and the current work [2] which uses a related setup to count totally mixed Nash equilibria. The eigenscheme of a matrix [1] and ternary tensor [6] is a scheme-theoretic version of \(\mathcal{S}(\boldsymbol{R})\) for the Jordan quiver in Figure 1a and the hyperquiver in Figure 2a.
**Outline.** The rest of this paper is organised as follows. In Section 2 we introduce hyperquiver representations and their singular vector varieties. We state our main result, Theorem 3.1, in Section 3 and describe a few of its applications. The construction of the vector bundle corresponding to a hyperquiver representation is given in Section 4, and our Bertini-type theorem - which we hope will be of independent interest - is proved in Section 5. We show that for generic \(\boldsymbol{R}\) the hypotheses of the Bertini theorem are satisfied by the associated vector bundle in Section 6, and compute its top Chern class in Section 7. For completeness, we have collected relevant results from intersection theory in Appendix A.
## 2. The singular vector variety
We establish notation for hyperquiver representations, define their singular vector varieties, and highlight the genericity condition which plays a key role in the sequel. Without loss of generality, we henceforth assume \(V=[n]\), where \([n]:=\{1,\ldots,n\}\) for \(n\in\mathbb{N}\).
**Definition 2.1**.: A _hyperquiver_\(H=(V,E)\) consists of a finite set of _vertices_\(V\) of size \(|V|=n\) and a finite set of _hyperedges_\(E\). For each hyperedge \(e\in E\) we have
* an integer \(\mu(e)\geq 1\) called the _index_ of \(e\)
* a tuple of vertices \(v(e)\in V^{m}\) called the _vertices_ of \(e\), where \(m:=\mu(e)+1\).
Figure 3. The light-green hyperedge is the contraction \(T(\,\cdot\,,\boldsymbol{x},\boldsymbol{y})\) and the dark-green hyperedge is the contraction \(T(\boldsymbol{x},\,\cdot\,,\boldsymbol{y})\), where \(\boldsymbol{x},\boldsymbol{y}\in\mathbb{C}^{3}\) are on the left and right vertices respectively.
For brevity, we may refer to a hyperedge as an edge and write \(\mu\) as a shorthand for \(\mu(e)\). The \(j\)-th entry of tuple \(v(e)\) is denoted \(s_{j}(e)\in V\). The tuple \(s(e):=(s_{1}(e),\ldots,s_{\mu}(e))\) are the _sources_ of \(e\), and the vertex \(t(e):=s_{m}(e)\) is the _target_ of \(e\).
**Remark 2.2**.: _Usual quivers are the special case with \(m=2\) for all \(e\in E\). Definition 2.1 does not exclude entries of \(s(e)\) being equal to \(t(e)\), nor does it exclude multiple hyperedges with the same tuple \(v(e)\)._
We now define representations of hyperquivers. The definition works for vector spaces over any field, but we focus on \(\mathbb{C}\).
**Definition 2.3**.: _Fix a hyperquiver \(H=(V,E)\). Let \(\boldsymbol{d}=(d_{1},\ldots,d_{n})\) be a dimension vector. A representation \(\boldsymbol{R}=(\boldsymbol{d},T)\) of \(H\) assigns_
* A vector space \(\mathbb{C}^{d_{i}}\) to each vertex \(i\in V\).
* A tensor \(T_{e}\in\mathbb{C}^{e}\) to each hyperedge \(e\in E\), where \(\mathbb{C}^{e}:=\mathbb{C}^{d_{t(e)}}\otimes\mathbb{C}^{d_{s_{1}(e)}}\otimes \cdots\otimes\mathbb{C}^{d_{s_{\mu}(e)}}\), which is viewed as a multilinear map \(\prod_{j=1}^{\mu}\mathbb{C}^{d_{s_{j}(e)}}\to\mathbb{C}^{d_{t(e)}}\).
We define for brevity
\[T_{e}(\boldsymbol{x}_{s(e)}):=T_{e}(\,\cdot\,,\boldsymbol{x}_{s_{1}(e)}, \ldots,\boldsymbol{x}_{s_{\mu}(e)}). \tag{2.1}\]
We say that two tensors \(T_{e}\) and \(T_{e^{\prime}}\)_agree up to permutation_ if the tuples \(v(e)\) and \(v(e^{\prime})\) agree up to a permutation \(\sigma\) and
\[(T_{e})_{i_{m},i_{1},\ldots,i_{m-1}}=(T_{e^{\prime}})_{i_{\sigma(m)},i_{\sigma (1)},\ldots,i_{\sigma(m-1)}}.\]
**Definition 2.4**.: _The singular vector variety \(\mathcal{S}(\boldsymbol{R})\) of a representation \(\boldsymbol{R}\) consists of tuples \(\boldsymbol{\chi}=([\boldsymbol{x}_{1}],\ldots,[\boldsymbol{x}_{n}])\in\prod _{i=1}^{n}\mathbb{P}(\mathbb{C}^{d_{i}})\) such that_
\[T_{e}(\boldsymbol{x}_{s(e)})=\lambda_{e}\boldsymbol{x}_{t(e)}, \tag{2.2}\]
_for some scalar \(\lambda_{e}\in\mathbb{C}\), for every edge \(e\in E\). The points of the variety are called the singular vector tuples of \(\boldsymbol{R}\)._
**Remark 2.5**.: _The scalars \(\lambda_{e}\) in (2.2) can be thought of as the singular values of the singular vector tuple \((\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{n})\). However, the non-homogeneity of (2.2) means that rescaling vectors in the tuple can change the singular values. We say that a singular vector tuple has a singular value zero if \(\lambda_{e}=0\) for some edge \(e\in E\)._
The singular vector variety is a subvariety of the multiprojective space \(X=\prod_{i=1}^{n}\mathbb{P}(\mathbb{C}^{d_{i}})\). Its defining equations are as follows. The singular vector tuples \(\boldsymbol{\chi}=([\boldsymbol{x}_{1}],\ldots,[\boldsymbol{x}_{n}])\) are the tuples whose \(d_{t(e)}\times 2\) matrix
\[M_{e}(\boldsymbol{x}):=\begin{pmatrix}|&|\\ T_{e}(\boldsymbol{x}_{s(e)})&\boldsymbol{x}_{t(e)}\\ |&|\end{pmatrix}\]
has rank \(\leq 1\) for all \(e\in E\). The rank of this matrix depends only on the points \([\boldsymbol{x}_{i}]\in\mathbb{P}(\mathbb{C}^{d_{i}})\), and not on the vectors \(\boldsymbol{x}_{i}\in\mathbb{C}^{d_{i}}\). Equations for the multiprojective variety \(\mathcal{S}(\boldsymbol{R})\) are the \(2\times 2\) minors of all matrices \(M_{e}(\boldsymbol{x})\) for \(e\in E\). When we speak of the degree of \(\mathcal{S}(\boldsymbol{R})\), we refer to the degree of its image under the Segre embedding \(s:X\hookrightarrow\mathbb{P}^{D}\), for \(D=\prod_{i=1}^{n}d_{i}-1\).
Our main result finds the dimension and degree of the singular vector variety for a hyperquiver representation with sufficiently general tensors on the hyperedges. We say that a property \(P\) holds for a generic point of an affine variety \(V\) if there exists a Zariski-open
set \(U\) in \(V\) such that \(P\) holds for all points in \(U\). We call any point of such a \(U\) a _generic point_ of \(V\). One way that a hyperquiver representation can be sufficiently generic is for the tuple of tensors \((T_{e}\mid e\in E)\) assigned to its edges to be generic; that is, a generic point of \(\prod_{e\in E}\otimes_{i=1}^{m}\mathbb{C}^{d_{s_{i}(e)}}\). This holds, for example, in Figure 1(a) and 1(c). But our notion of genericity allows tensors on different hyperedges to coincide, as in Figure 1(b). Our genericity condition is encoded by a partition of the hyperedges.
**Definition 2.6** (Genericity of a hyperquiver representation).:
1. A _partition_ of a hyperquiver \(H=(V,E)\) is a partition of its hyperedges \(E=\coprod_{r=1}^{M}E_{r}\) such that for any hyperedges \(e,e^{\prime},e^{\prime\prime}\in E_{r}\), (a) the indices \(\mu(e)\) and \(\mu(e^{\prime})\) equal the same number \(\mu\); (b) the tuples \(v(e)\) and \(v(e^{\prime})\) coincide up to a permutation \(\sigma\) of \([\mu+1]\); (c) if \(\sigma\) and \(\sigma^{\prime}\) are the permutations in (b) for the pairs \(v(e)\),\(v(e^{\prime})\) and \(v(e^{\prime})\),\(v(e^{\prime\prime})\) respectively, where \(e\neq e^{\prime}\) and \(e^{\prime}\neq e^{\prime\prime}\), then \(\sigma(\mu+1)\neq\sigma^{\prime}(\mu+1)\).
2. The _partition_ of a representation \(\boldsymbol{R}=(\boldsymbol{d},T)\) is the unique partition of hyperedges such that for any \(e,e^{\prime}\in E_{r}\), the tensors on \(e\) and \(e^{\prime}\) agree up to a permutation \(\sigma\).
3. The representation \(\boldsymbol{R}=(\boldsymbol{d},T)\) is _generic_ if given hyperedges \(e_{r}\in E_{r}\) for \(r\in[M]\), the tuple of tensors \((T_{e_{1}},T_{e_{2}},\ldots,T_{e_{M}})\) is a generic point in \(\prod_{r=1}^{M}\mathbb{C}^{e_{r}}\)
**Example 2.7**.: We fix a basis on each vector space \(U_{i}\cong\mathbb{C}^{d_{i}}\) in Definition 2.3 because being a singular vector tuple is not invariant under an arbitrary change of basis. For example, the quiver in Figure 1(b) with a generic square matrix \(A:\mathbb{C}^{d}\rightarrow\mathbb{C}^{d}\) has \(d\) singular vector pairs \(([\boldsymbol{x}],[\boldsymbol{y}])\). However, there exist change of basis matrices \(M_{1},M_{2}\in GL(d,\mathbb{C})\) such that \(M_{2}AM_{1}^{-1}=I_{d}\), and the identity matrix \(I_{d}\) has infinitely many singular vector pairs: all pairs \(([\boldsymbol{z}],[\boldsymbol{z}])\). The property of being a singular vector tuple is preserved, however, by an orthogonal change of basis, cf. the discussion in the introduction and [6, Remark 1.1].
**Remark 2.8**.: A (usual) quiver representation may be defined as assigning (abstract) vector spaces to vertices and linear maps to edges. Similarly, we could define a hyperquiver representation as assigning vector spaces \(U_{i}\) to each vertex \(i\) and multilinear maps \(\mathcal{T}_{e}:\prod_{j=1}^{\mu}U_{s_{j}(e)}\to U_{t(e)}\) to each edge \(e\in E\). The dimension of the linear space of sections of a quiver representation [37] and the dimension and degree of the singular vector variety of a hyperquiver representation are invariant under the action of \(GL(U_{i})\) on each vertex and edge. Since there is no given choice of a basis, or more generally no inner product on each vector space, the notion of a transpose of a linear map or permutation of a multilinear map does not make sense. Therefore, a generic representation in the sense of Definition 2.6 can only apply when each \(E_{r}\) is a singleton and we assign a distinct generic matrix or tensor to each edge. With a choice of basis, our genericity conditions allow permutations of tensors along the edges, via coarser partitions. The space of sections and the singular vector variety are then \(O(d_{i})\)-invariant but not \(GL(d_{i})\)-invariant.
## 3. Main theorem and its consequences
In this section, we present our main result in full generality and study its consequences.
**Theorem 3.1**.: _Let \(\boldsymbol{R}=(\boldsymbol{d},T)\) be a generic hyperquiver representation and \(\mathcal{S}(\boldsymbol{R})\) be the singular vector variety of \(\boldsymbol{R}\). Let \(N=\sum_{i\in V}(d_{i}-1)-\sum_{e\in E}(d_{t(e)}-1)\) and \(D\) be the coefficient
of the monomial \(h_{1}^{d_{1}-1}\cdots h_{n}^{d_{n}-1}\) in the polynomial_
\[\left(\sum_{i\in V}h_{i}\right)^{N}\cdot\prod_{e\in E}\left(\sum_{k=1}^{d_{t(e)} }h_{t(e)}^{k-1}h_{s(e)}^{d_{t(e)}-k}\right),\quad\text{where}\quad h_{s(e)}:= \sum_{j=1}^{\mu(e)}h_{s_{j}(e)}. \tag{3.1}\]
_Then \(\mathcal{S}(\boldsymbol{R})=\varnothing\) if and only if \(D=0\). Otherwise, \(\mathcal{S}(\boldsymbol{R})\) is of pure dimension \(N\) and has degree \(D\). If \(\boldsymbol{R}\) has finitely many singular vector tuples, then each singular vector tuple is of multiplicity 1, is not isotropic, and has no singular value equal to 0._
Note that the partition from Definition 2.6 does not appear in the statement of Theorem 3.1: the partition provides a genericity condition for the result to hold, but the dimension and degree of the singular vector variety do not depend on the partition. Next we give a sufficient condition for the singular vector variety of a hyperquiver representation to consist of finitely many points. This condition applies to Figure 1(a) and Figure 1(b), but not to Figure 1(c).
**Corollary 3.2**.: _The hyperquivers with finitely many singular vector tuples for any choice of generic representation are those whose vertices each have exactly one incoming hyperedge._
Proof.: If \(\dim\mathcal{S}(\boldsymbol{R})=N=\sum_{i\in V}(d_{i}-1)-\sum_{e\in E}(d_{t(e )}-1)=0\) for all dimensions \(d_{i}\), then \(\sum_{i\in V}(d_{i}-1)=\sum_{e\in E}(d_{t(e)}-1)\) as polynomials in the variables \(d_{i}\). Each \(d_{i}\) appears exactly once on the left hand side of the equation. Hence it must also appear exactly once on the right hand side. Therefore \(|V|=|E|\) and every \(i\in V\) has exactly one \(e\in E\) with \(i=t(e)\).
We show how Theorem 3.1 specialises to count the eigenvectors and singular vectors of a generic tensor, as well as to count the solutions to the generalised eigenproblem from [13].
**Example 3.3** (Eigenvectors of a tensor).: We continue our discussion from the introduction. The representation of the \(m\)-Jordan hyperquiver with a generic tensor \(T\in(\mathbb{C}^{d})^{\otimes m}\) on its hyperedge is generic in the sense of Definition 2.6, since we have only one hyperedge. There are finitely many eigenvectors, by Corollary 3.2. The polynomial (3.1) is
\[\sum_{k=1}^{d}h^{k-1}((m-1)h)^{d-k}=\left(\sum_{k=1}^{d}(m-1)^{d-k}\right)h^{ d-1}=\frac{(m-1)^{d}-1}{m-2}h^{d-1}.\]
The coefficient of \(h^{d-1}\) is \(\frac{(m-1)^{d}-1}{m-2}\). This agrees with the count for the number of eigenvectors of a generic tensor from [9, Theorem 1.2] and [19, Corollary 3.2].
We now consider singular vectors. A result of Friedland and Ottaviani [20] is:
**Theorem 3.4** (Friedland and Ottaviani [20, Theorem 1]).: _The number of singular vectors of a generic tensor \(T\in\mathbb{C}^{d_{1}}\otimes\cdots\otimes\mathbb{C}^{d_{n}}\) is the coefficient of the monomial \(h_{1}^{d_{1}-1}\ldots h_{n}^{d_{n}-1}\) in the polynomial_
\[\prod_{i\in[n]}\frac{\widehat{h_{i}}^{d_{i}}-h_{i}^{d_{i}}}{\widehat{h_{i}}-h _{i}},\qquad\text{where}\quad\widehat{h_{i}}:=\sum_{j\in[n]\setminus\{i\}}h_{ j},\ i\in[n]. \tag{3.2}\]
_Each singular vector tuple is of multiplicity 1, is not isotropic, and does not have singular value 0._
We explain how the above result follows from Theorem 3.1.
**Example 3.5** (Singular vectors of a tensor).: Consider the hyperquiver with \(n\) vertices \(V=[n]\) and \(n\) hyperedges. For every vertex \(i\in V\), there is a hyperedge \(e_{i}\) with \(s(e_{i})=(1,\ldots,i-1,i+1,\ldots,n)\) and target \(t(e)=i\). Consider the representation that assigns the vector space \(\mathbb{C}^{d_{i}}\) to each vertex and the same generic tensor \(T\in\mathbb{C}^{d_{1}}\otimes\cdots\otimes\mathbb{C}^{d_{n}}\) to each hyperedge. On each edge \(e_{i}\), the tensor \(T\) is seen as a multilinear map
\[T:\mathbb{C}^{d_{1}}\times\ldots\times\mathbb{C}^{d_{i-1}}\times \mathbb{C}^{d_{i+1}}\times\cdots\times\mathbb{C}^{d_{n}} \to\mathbb{C}^{d_{i}}\] \[(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{i-1},\boldsymbol{x}_{ i+1},\ldots,\boldsymbol{x}_{n}) \mapsto T(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{i-1},\, \cdot\,,\boldsymbol{x}_{i+1},\ldots,\boldsymbol{x}_{n}).\]
This representation is generic in the sense of Definition 2.6, where the partition of the edge set \(E\) has size \(M=1\) and the permutation \(\sigma\) sending \(v(e_{i})\) to \(v(e_{j})\) is the one that swaps \(i\) and \(j\) and keeps all other entries fixed. Figure 2(b) illustrates this representation for \(n=3\). The singular vector variety consists of all non-zero vectors \(\boldsymbol{x}_{i}\in\mathbb{C}^{d_{i}}\) such that \(T(\boldsymbol{x}_{s(e)})=\lambda_{e}\boldsymbol{x}_{t(e)}\) for some \(\lambda_{e}\in\mathbb{C}\) and all \(e\in E\), where \(T(\boldsymbol{x}_{s(e)})\) is defined in (2.1). That is, the singular vector variety consists of all singular vector tuples of \(T\). Corollary 3.2 shows that there are finitely many singular vector tuples. The polynomial (3.1) specialises to
\[\prod_{i\in[n]}\left(\sum_{k=1}^{d_{i}}h_{i}^{k-1}\widehat{h_{i}}^{d_{i}-k} \right),\quad\text{where}\quad\widehat{h_{i}}:=\sum_{j\in[n]\setminus\{i\}}h_ {j},\ i\in[n].\]
This is equivalent to (3.2) via the identity \(\frac{x^{n}-y^{n}}{x-y}=\sum_{k=1}^{n}x^{k-1}y^{n-k}\).
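As a concrete check, a generic tensor \(T\in\mathbb{C}^{2}\otimes\mathbb{C}^{2}\otimes\mathbb{C}^{2}\) has \(6\) singular vector triples, and the coefficient extraction from this specialisation reproduces that count; a small symbolic sketch, assuming sympy:

```python
import sympy as sp

# Cross-check of Example 3.5 against Theorem 3.4 for d = (2, 2, 2): the
# coefficient of h1*h2*h3 in the product of the edge factors should give the
# number of singular vector triples of a generic 2 x 2 x 2 tensor, namely 6.
h = sp.symbols('h1:4')
d = (2, 2, 2)
poly = sp.Integer(1)
for i in range(3):
    hhat = sum(h[j] for j in range(3) if j != i)
    poly *= sum(h[i]**(k - 1) * hhat**(d[i] - k) for k in range(1, d[i] + 1))
coeff = sp.expand(poly)
for i in range(3):
    coeff = coeff.coeff(h[i], d[i] - 1)
print(coeff)   # 6
```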
**Example 3.6** (The generalised tensor eigenvalue problem).: Consider a generic representation of the _Kronecker hyperquiver_ with a generic pair of tensors \(A,B\in\mathbb{C}^{d_{2}}\otimes(\mathbb{C}^{d_{1}})^{\otimes(m-1)}\), see Figure 2(c) with \(m=3\) and \(d=d_{1}=d_{2}\). The edge set \(E\) has a partition with \(M=2\). We remark that Corollary 3.2 implies that there will not be finitely many singular vector tuples for _all_ representations of this hyperquiver. There will be a non-zero finite number of singular vectors if and only if \(d:=d_{1}=d_{2}\) since this is when \(N=0\) in Theorem 3.1. The singular vector tuples are the non-zero pairs \(\boldsymbol{x},\boldsymbol{y}\in\mathbb{C}^{d}\) such that \(A(\,\cdot\,,\boldsymbol{x},\ldots,\boldsymbol{x})=\lambda^{\prime}\boldsymbol{y}\) and \(B(\,\cdot\,,\boldsymbol{x},\ldots,\boldsymbol{x})=\lambda^{\prime\prime} \boldsymbol{y}\), for some \(\lambda^{\prime},\lambda^{\prime\prime}\in\mathbb{C}\). This reduces to the single equation
\[A(\,\cdot\,,\boldsymbol{x},\ldots,\boldsymbol{x})=\lambda B(\,\cdot\,, \boldsymbol{x},\ldots,\boldsymbol{x})\]
for some \(\lambda\in\mathbb{C}\). This is a tensor-analogue of the generalised eigenvalue problem for two matrices. It was shown in [20, Corollary 16] and [13, Theorem 2.1] that there are \(d(m-1)^{d-1}\) generalised tensor eigenvalue pairs \(\boldsymbol{x}\) and \(\boldsymbol{y}\) for the tensors \(A\) and \(B\). Our general formula in Theorem 3.1 also recovers this number, as follows. The polynomial (3.1) is
\[\left(\sum_{k=1}^{d}h_{2}^{k-1}((m-1)h_{1})^{d-k}\right)\left(\sum_{\ell=1}^{d} h_{2}^{\ell-1}((m-1)h_{1})^{d-\ell}\right). \tag{3.3}\]
A monomial \(h_{1}^{d-1}h_{2}^{d-1}\) is obtained from the product of a \(k\)-th summand and an \(\ell\)-th summand such that \(k+\ell=d+1\). There are \(d\) such pairs of summands \(k,\ell\in\{1,\ldots,d\}\). Each such monomial will have a coefficient of \((m-1)^{d-1}\). Hence the coefficient of \(h_{1}^{d-1}h_{2}^{d-1}\) in (3.3) is \(d(m-1)^{d-1}\).
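For \(m=2\) the count \(d(m-1)^{d-1}=d\) is the familiar number of generalised eigenpairs of two generic matrices. A quick numerical check, assuming numpy and using that a generic \(B\) is invertible, so the problem reduces to an ordinary eigenproblem for \(B^{-1}A\):

```python
import numpy as np

# Matrix case (m = 2) of the count d*(m-1)^(d-1) = d: two generic d x d
# matrices have exactly d generalised eigenpairs A x = lambda B x. Since a
# generic B is invertible, this is the ordinary eigenproblem for B^{-1} A.
rng = np.random.default_rng(2)
d = 5
A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))

lams, X = np.linalg.eig(np.linalg.solve(B, A))
for lam, x in zip(lams, X.T):
    assert np.allclose(A @ x, lam * (B @ x))
print(f"{len(lams)} generalised eigenpairs for generic {d}x{d} matrices")
```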
Now we find the dimension and degree of the singular vector variety \(\mathcal{S}(\boldsymbol{R})\) for a generic representation \(\boldsymbol{R}\) of a hyperquiver with a single hyperedge, as shown in Figure 4.
**Corollary 3.7**.: _Let \(H\) be a hyperquiver with one hyperedge with all entries of its tuple of vertices distinct. Let \(\boldsymbol{R}\) be the representation that assigns the vector space \(\mathbb{C}^{d_{i}}\) to each vertex \(i\) and a generic tensor to the hyperedge. Then:_
1. _The dimension of_ \(\mathcal{S}(\boldsymbol{R})\) _is_ \(N=\sum_{i=1}^{n-1}d_{i}-n+1\)__
2. _The degree of_ \(\mathcal{S}(\boldsymbol{R})\) _is_ \[\sum_{k=1}^{d_{n}}\sum_{\begin{subarray}{c}k_{1}+\cdots+k_{n-1}\\ =d_{n}-k\end{subarray}}\binom{d_{n}-k}{k_{1},\ldots,k_{n-1}}\binom{N}{d_{1}-1- k_{1},\ldots,d_{n-1}-1-k_{n-1},d_{n}-k}.\] (3.4)
Proof.: The dimension of \(\mathcal{S}(\boldsymbol{R})\) is \(N=(\sum_{i=1}^{n}d_{i}-n)-(d_{n}-1)=\sum_{i=1}^{n-1}d_{i}-n+1\), by Theorem 3.1. The degree of \(\mathcal{S}(\boldsymbol{R})\) is the coefficient of \(h_{1}^{d_{1}-1}\cdots h_{n}^{d_{n}-1}\) in the product
\[\underbrace{\left(\sum_{i=1}^{n}h_{i}\right)^{N}}_{(1)}\underbrace{\left( \sum_{k=1}^{d_{n}}\left(\sum_{i=1}^{n-1}h_{i}\right)^{d_{n}-k}h_{n}^{k-1} \right)}_{(2)}.\]
For each \(k\in\{1,\ldots,d_{n}\}\), the monomial \(h_{1}^{k_{1}}\cdots h_{n-1}^{k_{n-1}}h_{n}^{k-1}\) in the expansion of (2) for some \(k_{1},\ldots,k_{n-1}\) such that \(\sum_{i=1}^{n-1}k_{i}=d_{n}-k\) has coefficient \(\binom{d_{n}-k}{k_{1},\ldots,k_{n-1}}\). This is combined with the monomial \(h_{1}^{d_{1}-1-k_{1}}\cdots h_{n-1}^{d_{n-1}-1-k_{n-1}}h_{n}^{d_{n}-k}\) from the expansion of (1), which has coefficient \(\binom{N}{d_{1}-1-k_{1},\ldots,d_{n-1}-1-k_{n-1},d_{n}-k}\). Multiplying these coefficients and summing over those \(k_{1},\ldots,k_{n-1}\) with \(\sum_{i=1}^{n-1}k_{i}=d_{n}-k\), we obtain
\[\sum_{\begin{subarray}{c}k_{1}+\cdots+k_{n-1}\\ =d_{n}-k\end{subarray}}\binom{d_{n}-k}{k_{1},\ldots,k_{n-1}}\binom{N}{d_{1}- 1-k_{1},\ldots,d_{n-1}-1-k_{n-1},d_{n}-k}.\]
Summing over \(k=1,\ldots,d_{n}\) gives the result.
Figure 4. A hyperquiver with a single hyperedge and a representation

When \(d:=d_{1}=\cdots=d_{n}\), we can use Corollary 3.7 to find the degree of \(\mathcal{S}(\boldsymbol{R})\), which is displayed in Table 1 for \(d=1,\ldots,6\) and \(n=2,\ldots,6\). Observe that: (i) the degree row of \(d=2\) consists of the factorial numbers; and (ii) the degree column of \(n=2\) consists of powers of \(2\). We explain these observations. To see (i), if \(d=2\), then \(N=n-1\) and (3.4) becomes

\[\sum_{k=1}^{2}\sum_{\begin{subarray}{c}k_{1}+\cdots+k_{n-1}\\ =2-k\end{subarray}}\binom{2-k}{k_{1},\ldots,k_{n-1}}\binom{n-1}{1-k_{1},\ldots,1-k_{n-1},2-k}. \tag{3.5}\]

When \(k=2\), the only summand satisfying \(k_{1}+\cdots+k_{n-1}=2-k\) is \(k_{1}=\cdots=k_{n-1}=0\), which gives \(1\) for the first factor and \((n-1)!\) for the second factor in (3.5). When \(k=1\), the only allowed indices are of the form \(k_{i}=1\) and \(k_{j}=0\) for all \(j\neq i\), from which we get \(1\) for the first factor and \((n-1)!\) for the second factor in (3.5). Since there are \(n-1\) such allowed indices, (3.5) evaluates to \((n-1)!+(n-1)(n-1)!=n!\). For (ii), when \(n=2\), we have
\[\sum_{k=1}^{d}\sum_{k_{1}=d-k}\binom{d-k}{k_{1}}\binom{d-1}{d-1-k _{1},d-k} =\sum_{k=1}^{d}\binom{d-k}{d-k}\binom{d-1}{k-1,d-k}\] \[=\sum_{k=0}^{d-1}\binom{d-1}{k,d-1-k}=\sum_{k=0}^{d-1}\binom{d-1} {k}=2^{d-1}.\]
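The entries of Table 1 can be regenerated directly from (3.4). The following sketch assumes Python; the helper `multinomial` (our naming) returns zero on negative parts, which is how compositions with some \(k_{i}>d-1\) drop out of the sum:

```python
from itertools import product
from math import factorial

def multinomial(n, parts):
    # multinomial coefficient n!/(parts_1!...parts_r!); zero on negative parts
    if any(p < 0 for p in parts) or sum(parts) != n:
        return 0
    out = factorial(n)
    for p in parts:
        out //= factorial(p)
    return out

def degree(d, n):
    # equation (3.4) with d_1 = ... = d_n = d, so that N = (d-1)(n-1)
    N = (d - 1) * (n - 1)
    total = 0
    for k in range(1, d + 1):
        for ks in product(range(d), repeat=n - 1):   # candidates (k_1,...,k_{n-1})
            if sum(ks) != d - k:
                continue
            total += (multinomial(d - k, ks)
                      * multinomial(N, [d - 1 - ki for ki in ks] + [d - k]))
    return total

print([[degree(d, n) for n in range(2, 5)] for d in range(1, 4)])
# [[1, 1, 1], [2, 6, 24], [4, 66, 1980]], matching Table 1
```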
**Example 3.8** (Periodic orbits of order \(n\)).: Consider the hyperquiver representation in Figure 5 with a generic tensor \(T\in(\mathbb{C}^{d})^{\otimes m}\). The singular vector tuples are the non-zero vectors \(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{n}\in\mathbb{C}^{d}\) such that
\[T(\,\cdot\,,\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{1}) =\lambda_{1}\boldsymbol{x}_{2}\] \[T(\,\cdot\,,\boldsymbol{x}_{2},\ldots,\boldsymbol{x}_{2}) =\lambda_{2}\boldsymbol{x}_{3}\] \[\vdots\] \[T(\,\cdot\,,\boldsymbol{x}_{n},\ldots,\boldsymbol{x}_{n}) =\lambda_{n}\boldsymbol{x}_{1}\]
for some \(\lambda_{i}\in\mathbb{C}\). In other words, each \(\boldsymbol{x}_{i}\) is a periodic point of order \(n\).
The hyperquiver representation is not generic in the sense of Definition 2.6, as edges with different tuples \(v(e)\) up to permutation are assigned the same tensor \(T\). Hence Theorem 3.1 does not apply. Nonetheless, we predict the dimension and degree using Theorem 3.1. The result predicts finitely many \(n\)-periodic points, by Corollary 3.2. Their count is predicted to be the coefficient of the monomial \(h_{1}^{d-1}\ldots h_{n}^{d-1}\) in the polynomial
\[\left(\sum_{k=1}^{d}h_{2}^{k-1}(\mu h_{1})^{d-k}\right)\left(\sum_{k=1}^{d}h_ {3}^{k-1}(\mu h_{2})^{d-k}\right)\ldots\left(\sum_{k=1}^{d}h_{1}^{k-1}(\mu h _{n})^{d-k}\right), \tag{3.6}\]
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline \(d\,\backslash\,n\) & \(2\) & \(3\) & \(4\) & \(5\) & \(6\) \\ \hline \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) \\ \hline \(2\) & \(2\) & \(6\) & \(24\) & \(120\) & \(720\) \\ \hline \(3\) & \(4\) & \(66\) & \(1980\) & \(93240\) & \(6350400\) \\ \hline \(4\) & \(8\) & \(840\) & \(218400\) & \(110510000\) & \(96864800000\) \\ \hline \(5\) & \(16\) & \(11410\) & \(27512100\) & \(1.5873\times 10^{11}\) & \(1.89313\times 10^{15}\) \\ \hline \(6\) & \(32\) & \(160776\) & \(3741400000\) & \(2.54601\times 10^{14}\) & \(4.26416\times 10^{19}\) \\ \hline \end{tabular}
\end{table}
Table 1. The degree of the singular vector variety \(\mathcal{S}(\boldsymbol{R})\) of the hyperquiver in Figure 4 with \(d_{1}=...=d_{n}=d\) and generic tensor \(T\). The dimension of \(\mathcal{S}(\boldsymbol{R})\) is \(N=(d-1)(n-1)\). In particular, \(\mathcal{S}(\boldsymbol{R})\) is positive-dimensional except in the first row.
by Theorem 3.1. This monomial is obtained from the product of terms
\[(h_{2}^{k-1}(\mu h_{1})^{d-k})(h_{3}^{k-1}(\mu h_{2})^{d-k})\ldots(h_{1}^{k-1}( \mu h_{n})^{d-k})\]
coming from each of the respective factors in (3.6), for each \(k\in[d]\). The coefficient of this product is \(\mu^{n(d-k)}\). Thus, the coefficient of \(h_{1}^{d-1}\ldots h_{n}^{d-1}\) in (3.6) is
\[\sum_{k=1}^{d}\mu^{n(d-k)}=\frac{\mu^{nd}-1}{\mu^{n}-1}=\frac{(m-1)^{nd}-1}{(m- 1)^{n}-1}.\]
This turns out to be the correct number of period-\(n\) fixed points, as proved in [18, Corollary 3.2]. The number of eigenvectors of a generic tensor is the special case \(n=1\) (Example 3.3).
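The coefficient extraction behind this count is easy to check symbolically for small \(n\), \(d\) and \(m\); a sketch assuming sympy:

```python
import sympy as sp

# Symbolic check of the period-n count: for small (n, d, m), the coefficient
# of h1^(d-1)...hn^(d-1) in (3.6) equals ((m-1)^(n*d) - 1)/((m-1)^n - 1).
def periodic_count(n, d, m):
    h = sp.symbols(f'h1:{n + 1}')
    mu = m - 1
    poly = sp.Integer(1)
    for i in range(n):
        tgt, src = h[(i + 1) % n], h[i]
        poly *= sum(tgt**(k - 1) * (mu * src)**(d - k) for k in range(1, d + 1))
    poly = sp.expand(poly)
    for hi in h:
        poly = poly.coeff(hi, d - 1)
    return poly

for (n, d, m) in [(2, 2, 3), (3, 2, 3), (2, 3, 4)]:
    closed_form = ((m - 1)**(n * d) - 1) // ((m - 1)**n - 1)
    print(n, d, m, periodic_count(n, d, m), closed_form)   # the two counts agree
```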
**Example 3.9** (Empty singular vector variety).: Consider the quiver in Figure 6, where the vertices are assigned vector spaces of dimension \(d>1\), and the two edges are assigned generic matrices \(A,B\in\mathbb{C}^{d\times d}\). Any singular vector would need to be an eigenvector of both matrices \(A\) and \(B\), but a pair of generic matrices \(A\) and \(B\) does not share an eigenvector. We see how the emptiness of the singular vector variety is captured by Theorem 1: the polynomial is \((dh_{1}^{d-1})^{2}\), in which the coefficient of \((h_{1}h_{2})^{d-1}\) is zero.
**Example 3.10** (Insufficiently generic representations).: The quiver representations in Figure 7 with \(d>1\) and generic matrix \(A\in\mathbb{C}^{d\times d}\) do not satisfy the genericity conditions in Definition 2.6. In Figure 7(a), the only permutations \(\sigma,\sigma^{\prime}\) on \(\{1,2\}\) sending the matrix \(A\) on one edge to the matrix \(A\) on the other edge and vice versa are the identity permutations, which fail to satisfy the condition \(\sigma(2)\neq\sigma^{\prime}(2)\), causing one of the edges to be redundant. The resulting singular vector variety has dimension \(d-1\) and degree \(2^{d-1}\) by Corollary 3.7, rather than the expected dimension \(0\) and degree \(d\) in Example 3.6. In Figure 7(b), the singular vectors are the non-zero points \(\boldsymbol{x}\in\mathbb{C}^{d}\) such that \(A^{2}\boldsymbol{x}=\lambda A\boldsymbol{x}\) for some \(\lambda\in\mathbb{C}\), of which there are \(d\) solutions, rather than the expected \(0\) solutions in Theorem 3.1.
Figure 5. A hyperquiver representing a period-\(n\) orbit
Figure 6. A quiver representation with empty singular vector variety
In the remainder of this section, we explore connections to dynamical systems and message passing.
**Example 3.11** (Fixed Homology Classes).: A _parameterised dynamical system_ is a continuous map \(f:X\times P\to X\), where \(X\) and \(P\) are compact triangulable topological spaces, respectively called the _state_ and _parameter_ space of \(f\). Taking homology with complex coefficients, we obtain a \(\mathbb{C}\)-linear map
\[H_{k}f:H_{k}(X\times P)\to H_{k}(X)\]
in each dimension \(k\geq 0\). We know from the Kunneth formula [39, Section 5.3] that the domain of \(H_{k}f\) is naturally isomorphic to the direct sum \(\bigoplus_{i+j=k}H_{i}(X)\otimes H_{j}(P)\). Therefore, each \(H_{k}f\) admits a component of the form
\[T_{k}:H_{k}(X)\otimes H_{0}(P)\to H_{k}(X),\]
We say that a non-zero homology class \(\xi\in H_{k}(X)\) is _fixed_ by \(f\) at a non-zero homology class \(\eta\in H_{0}(P)\) whenever there exists a scalar \(\lambda\in\mathbb{C}\) satisfying \(T_{k}(\xi\otimes\eta)=\lambda\cdot\xi\). The set of all such fixed homology classes (up to scaling) is the singular vector variety of the hyperquiver representation in Figure 8.
Let \(k:=\dim H_{k}(X)\) and suppose \(P\) has \(d\) connected components; i.e., \(\dim H_{0}(P)=d\). Then the singular vector variety has dimension \(d-1\) and degree equal to the coefficient of \(h_{1}^{k-1}h_{2}^{d-1}\) in the polynomial \((h_{1}+h_{2})^{d-1}\sum_{j=1}^{k}(h_{1}+h_{2})^{k-j}h_{2}^{j-1}\), by Theorem 3.1. The monomial \(h_{1}^{k-1}h_{2}^{d-1}\) arises by pairing a term \(\binom{k-j}{i}h_{1}^{i}h_{2}^{k-j-i}h_{2}^{j-1}=\binom{k-j}{i}h_{1}^{i}h_{2}^{k -i-1}\) in the expanded sum with the term \(\binom{d-1}{k-i-1}h_{1}^{k-i-1}h_{2}^{(d-1)-(k-i-1)}\) in the expanded parentheses, for all \(0\leq i\leq k-j\) and \(1\leq j\leq k\). Thus, its coefficient is
\[\sum_{j=1}^{k}\sum_{i=0}^{k-j}\binom{k-j}{i}\binom{d-1}{k-i-1}\]
In particular, if \(P\) is connected (i.e., \(d=1\)), then there is exactly one non-zero homology class in \(H_{k}(X)\) fixed by \(f\).
**Example 3.12** (Message Passing).: Our framework counts the fixed points of certain multilinear message passing operations, as we now describe. Assign vectors \(\boldsymbol{x}_{i}^{(0)}:=\boldsymbol{x}_{i}\in\mathbb{C}^{d_{i}}\) to each \(i\in V\). Apply the multilinear map \(T_{e}\) to the vectors \((\boldsymbol{x}_{s_{1}(e)}^{(k)},\ldots,\boldsymbol{x}_{s_{\mu}(e)}^{(k)})\) at nodes in \(s(e)\). Then, update the vector at the target vertex \(t(e)\) to
\[\boldsymbol{x}_{t(e)}^{(k+1)}:=T_{e}(\boldsymbol{x}_{s(e)}^{(k)})\in\mathbb{C }^{d_{t(e)}}. \tag{3.7}\]
Figure 7. Insufficiently generic quiver representations
In the limit, one converges to a fixed point of the update steps. The singular vector variety consists of tuples of directions in \(\mathbb{C}^{d_{i}}\) that are fixed under these operations, for any order of update steps.
We compare the update (3.7) to message passing graph neural networks; see e.g. [23, 25]. The vector at each vertex represents the features of that vertex. The vectors typically lie in vector spaces of the same dimension, as in Theorem 1. Message passing operations take the form
\[\mathbf{x}_{i}^{(k+1)}=f(\{\mathbf{x}_{i}^{(k)}\}\cup\{\mathbf{x}_{j}^{(k)}:j\in\mathcal{N }(i)\}), \tag{3.8}\]
where \(\mathcal{N}(i)\) is the neighbourhood of vertex \(i\). That is, the vector of features at node \(i\) in the \((k+1)\)-th step depends on the features of node \(i\) and its neighbours at the \(k\)-th step. Our update step in (3.7) is a special case of (3.8). We relate (3.7) to operations in the literature.
The function \(f\) in (3.8) often involves a non-linearity, applied pointwise. In comparison, we focus on the (multi)linear setting, as discussed for example in [11]. There, the authors study the optimisation landscapes of linear update steps, relating them to power iteration algorithms. Our approach of counting the locus of fixed points sheds light on the global structure of this optimisation landscape, in the spirit of [10, 14]. Studying such fixed point conditions directly is the starting point of implicit deep learning [17, 24].
The neighbourhood \(\mathcal{N}(i)\), for us, consists of nodes \(j\) that appear in a tuple \(s(e)\) for some edge \(e\) with \(t(e)=i\). Update steps are usually over a graph rather than a hypergraph. The tensor multiplications from (3.7) incorporate higher-order interactions. Such higher-order structure also appears in tensorised graph neural networks [27] and message passing simplicial networks [8].
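A minimal sketch of the update (3.7) on the 3-Jordan hyperquiver, assuming Python with numpy. The orthogonally decomposable tensor below is our choice, made only so that the normalised iteration converges to a fixed direction; for a general tensor the iteration need not converge, but any limiting direction still satisfies the singular vector condition.

```python
import numpy as np

# The update (3.7) on the 3-Jordan hyperquiver, with renormalisation. We use
# an orthogonally decomposable tensor T = sum_r lam_r v_r (x) v_r (x) v_r so
# that the iteration converges; the limiting direction satisfies the
# fixed-point (singular vector) condition T( . , x, x) = lambda x.
rng = np.random.default_rng(3)
d = 4
V, _ = np.linalg.qr(rng.standard_normal((d, d)))     # orthonormal v_1,...,v_d
lam = rng.uniform(1.0, 2.0, size=d)
T = np.einsum('r,ir,jr,kr->ijk', lam, V, V, V)

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
for _ in range(50):
    y = np.einsum('ijk,j,k->i', T, x, x)             # the update step (3.7)
    x = y / np.linalg.norm(y)

Tx = np.einsum('ijk,j,k->i', T, x, x)
lam_est = x @ Tx
print(f"fixed direction found: ||T(.,x,x) - lambda x|| = {np.linalg.norm(Tx - lam_est * x):.1e}")
```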
## 4. The singular vector bundle
In this section, we define the singular vector bundle. It is a vector bundle on \(X=\prod_{i=1}^{n}\mathbb{P}(\mathbb{C}^{d_{i}})\) whose global sections are associated to hyperquiver representations. The zeros of a section are the singular vectors of the corresponding representation.
Following [20, Section 2], for each integer \(d>0\) we consider four vector bundles over \(\mathbb{P}(\mathbb{C}^{d})\): the _free_ bundle \(\mathscr{F}(d)\), the _tautological_ bundle \(\mathscr{T}(d)\), the _quotient_ bundle \(\mathscr{Q}(d)\), and the _hyperplane_ bundle \(\mathscr{H}(d)\). Their fibres at each \([\mathbf{x}]\in\mathbb{P}(\mathbb{C}^{d})\) are
\[\mathscr{F}(d)_{[\mathbf{x}]} =\mathbb{C}^{d}\] \[\mathscr{T}(d)_{[\mathbf{x}]} =\operatorname{span}(\mathbf{x}) \mathscr{H}(d)_{[\mathbf{x}]} =\operatorname{span}(\mathbf{x})^{\vee}.\]
Figure 8. Fixed points in homology
Here if \(V\) is a vector space or vector bundle, then \(V^{\vee}\) denotes its dual. We have a short exact sequence of vector bundles
\[0\to\mathscr{T}(d_{i})\to\mathscr{F}(d_{i})\to\mathscr{Q}(d_{i})\to 0. \tag{4.1}\]
There are projection maps \(\pi_{i}:X\to\mathbb{P}(\mathbb{C}^{d_{i}})\) with \(\pi_{i}(\boldsymbol{\chi})=[\boldsymbol{x}_{i}]\), where \(\boldsymbol{\chi}=([\boldsymbol{x}_{1}],\ldots,[\boldsymbol{x}_{n}])\). We pull back a vector bundle \(\mathscr{B}\) over \(\mathbb{P}(\mathbb{C}^{d_{i}})\) to a bundle \(\pi_{i}^{*}\mathscr{B}\) over \(X\), whose fiber at \(\boldsymbol{\chi}\in X\) equals \(\mathscr{B}_{[\boldsymbol{x}_{i}]}\). There is an exact sequence \(0\to\mathscr{T}(d_{i})_{[\boldsymbol{x}_{i}]}\to\mathscr{F}(d_{i})_{[ \boldsymbol{x}_{i}]}\to\mathscr{Q}(d_{i})_{[\boldsymbol{x}_{i}]}\to 0\) of vector spaces at every \([\boldsymbol{x}_{i}]\in\mathbb{P}(\mathbb{C}^{d_{i}})\). Hence there is an exact sequence of vector bundles
\[0\to\pi_{i}^{*}\mathscr{T}(d_{i})\to\pi_{i}^{*}\mathscr{F}(d_{i})\to\pi_{i}^{* }\mathscr{Q}(d_{i})\to 0. \tag{4.2}\]
**Definition 4.1**.: Let \(\boldsymbol{R}=(\boldsymbol{d},T)\) be a hyperquiver representation and let \(X=\prod_{i=1}^{n}\mathbb{P}(\mathbb{C}^{d_{i}})\). For each hyperedge \(e\in E\), we consider the following vector bundles over \(X\).
\[\mathscr{T}(e):=\bigotimes_{j=1}^{\mu(e)}\pi_{s_{j}(e)}^{*}\mathscr{T}(d_{s_{ j}(e)}),\qquad\mathscr{B}(e):=\operatorname{Hom}\left(\mathscr{T}(e),\pi_{t(e)}^{* }\mathscr{Q}(d_{t(e)})\right).\]
We define the _singular vector bundle_ of \(\boldsymbol{R}\) over \(X\) to be \(\mathscr{B}(\boldsymbol{R}):=\bigoplus_{e\in E}\mathscr{B}(e)\).
The vector bundle \(\mathscr{B}(\boldsymbol{R})\) depends on the hypergraph \(H\) and the assigned vector spaces \(U\), but not on the multilinear maps \(T\). It can be written in terms of a partition of edges as \(\mathscr{B}(\boldsymbol{R})=\bigoplus_{r=1}^{M}\bigoplus_{e\in E_{r}}\mathscr{ B}(e)\). We will see that when \(\boldsymbol{R}\) is a generic hyperquiver representation, the zero locus of a generic section of \(\mathscr{B}(\boldsymbol{R})\) is the singular vector variety \(\mathcal{S}(\boldsymbol{R})\). We make the following observations about its summands \(\mathscr{B}(e)\).
**Proposition 4.2**.: _Let \(\mathscr{B}(e)=\operatorname{Hom}\left(\mathscr{T}(e),\pi_{t(e)}^{*}\mathscr{ Q}(d_{t(e)})\right)\). Then the following hold._
1. _The fibre of_ \(\mathscr{B}(e)\) _at_ \(\boldsymbol{\chi}\) _is_ \(\operatorname{Hom}\left(\operatorname{span}\left(\otimes_{j=1}^{\mu(e)} \boldsymbol{x}_{s_{j}(e)}\right),\mathbb{C}^{d_{t(e)}}/\operatorname{span}( \boldsymbol{x}_{t(e)})\right)\)_._
2. _The bundle_ \(\mathscr{B}(e)\) _has rank_ \(d_{t(e)}-1\)_._
3. _We have the isomorphism_ \(\mathscr{B}(e)=\left(\bigotimes_{j=1}^{\mu(e)}\pi_{s_{j}(e)}^{*}\mathscr{H}(d _{s_{j}(e)})\right)\otimes\pi_{t(e)}^{*}\mathscr{Q}(d_{t(e)})\)_._
Proof.: The bundle \(\mathscr{T}(e)\) has fibres
\[\mathscr{T}(e)_{\boldsymbol{\chi}} =\bigotimes_{j=1}^{\mu(e)}\pi_{s_{j}(e)}^{*}\mathscr{T}(d_{s_{j}( e)})_{\boldsymbol{\chi}}=\bigotimes_{j=1}^{\mu(e)}\mathscr{T}(d_{s_{j}(e)})_{[ \boldsymbol{x}_{s_{j}(e)}]}\] \[=\bigotimes_{j=1}^{\mu(e)}\operatorname{span}(\boldsymbol{x}_{s_ {j}(e)})=\operatorname{span}\left(\otimes_{j=1}^{\mu(e)}\boldsymbol{x}_{s_{j} (e)}\right).\]
The bundle \(\pi_{t(e)}^{*}\mathscr{Q}(d_{t(e)})\) has fibre \(\pi_{t(e)}^{*}\mathscr{Q}(d_{t(e)})_{\boldsymbol{\chi}}=\mathbb{C}^{d_{t(e)}} /\operatorname{span}(\boldsymbol{x}_{t(e)})\). This proves (a). Then (b) follows, since the dimension of the fibre is \(d_{t(e)}-1\). To prove (c), observe that \(\mathscr{B}(e)\simeq\mathscr{T}(e)^{\vee}\otimes\pi_{t(e)}^{*}\mathscr{Q}(d _{t(e)})\) and that
\[\mathscr{T}(e)^{\vee}=\left(\bigotimes_{j=1}^{\mu(e)}\pi_{s_{j} (e)}^{*}\mathscr{T}(d_{s_{j}(e)})\right)^{\vee} \simeq\bigotimes_{j=1}^{\mu(e)}\left(\pi_{s_{j}(e)}^{*}\mathscr{T}( d_{s_{j}(e)})\right)^{\vee}\] \[\simeq\bigotimes_{j=1}^{\mu(e)}\pi_{s_{j}(e)}^{*}\mathscr{T}(d_{s _{j}(e)})^{\vee}=\bigotimes_{j=1}^{\mu(e)}\pi_{s_{j}(e)}^{*}\mathscr{H}(d_{s_{j }(e)}).\qed\]
We relate the singular vector variety to the singular vector bundle. The global sections of a vector bundle \(\mathscr{B}\) are denoted by \(\varGamma(\mathscr{B})\). They are the holomorphic maps \(\sigma:X\to\mathscr{B}\) that send each \(\boldsymbol{\chi}\in X\) to a point in \(\mathscr{B}_{\boldsymbol{\chi}}\). A global section of \(\mathscr{B}(e)\) is a map sending each \(\boldsymbol{\chi}\in X\) to an element in
\[\operatorname{Hom}\left(\operatorname{span}\left(\otimes_{j=1}^{\mu(e)} \boldsymbol{x}_{s_{j}(e)}\right),\mathbb{C}^{d_{t(e)}}/\operatorname{span}( \boldsymbol{x}_{t(e)})\right),\]
by Proposition 4.2(a). Definition 2.6(ii) of a partition gives an equivalence relation between tensors assigned to \(E_{r}\) via permutation of the modes. Following the notation of Definition 2.6(iii), we denote by \(T_{r}\in\mathbb{C}^{e_{r}}\) a representative for the class corresponding to \(E_{r}\), for some \(e_{r}\in E_{r}\), and we define \(T_{r}(\boldsymbol{x}_{s(e)}):=T_{e}(\boldsymbol{x}_{s(e)})\) for all \(e\in E_{r}\), where \(T_{e}(\boldsymbol{x}_{s(e)})\) is defined in (2.1). A tensor \(T\in\mathbb{C}^{e_{r}}\) determines a global section of \(\mathscr{B}(e)\) for every \(e\in E_{r}\), which we denote by \(L_{e}(T)\). The map \(L_{e}(T)\) sends \(\boldsymbol{\chi}\) to the map
\[\otimes_{j=1}^{\mu(e)}\boldsymbol{x}_{s_{j}(e)}\,\mapsto\,\overline{T( \boldsymbol{x}_{s(e)})}\in\mathbb{C}^{d_{t(e)}}/\operatorname{span}(\boldsymbol {x}_{t(e)}).\]
where \(\overline{T(\boldsymbol{x}_{s(e)})}\) is the image of \(T(\boldsymbol{x}_{s(e)})\) in the quotient vector space \(\mathbb{C}^{d_{t(e)}}/\operatorname{span}(\boldsymbol{x}_{t(e)})\). In other words, following [20, Lemma 9], we define the map
\[L_{e}:\mathbb{C}^{e_{r}} \longrightarrow\varGamma(\mathscr{B}(e))\] \[T \longmapsto L_{e}(T).\]
We form the composite map
\[L:\bigoplus_{r=1}^{M}\mathbb{C}^{e_{r}} \longrightarrow\varGamma(\mathscr{B}(\boldsymbol{R})) \tag{4.3}\] \[(T_{1},\dots,T_{M}) \longmapsto\bigoplus_{r=1}^{M}\bigoplus_{e\in E_{r}}L_{e}(T_{r}).\]
We connect the global sections in the image of \(L\) to the singular vector tuples of a hyperquiver representation, generalizing [20, Lemma 11].
**Proposition 4.3**.: _Let \(\boldsymbol{R}=(\boldsymbol{d},T)\) be a hyperquiver representation. Let \(X=\prod_{i=1}^{n}\mathbb{P}(\mathbb{C}^{d_{i}})\) and let \(\mathscr{B}(\boldsymbol{R})\) be the singular vector bundle, with \(L:\bigoplus_{r=1}^{M}\mathbb{C}^{e_{r}}\to\varGamma(\mathscr{B}(\boldsymbol{R }))\) the map in (4.3). Then a point \(\boldsymbol{\chi}\in X\) lies in the zero locus of the section \(\sigma=L((T_{r})_{r=1}^{M})\) if and only if \(\boldsymbol{\chi}\) is a singular vector tuple of \(\boldsymbol{R}\)._
Proof.: \(L((T_{r})_{r=1}^{M})(\boldsymbol{\chi})\) is the \(|E|\)-tuple of zero maps each in \(\mathscr{B}(e)_{\boldsymbol{\chi}}\) if and only if for all \(e\in E_{r}\) and \(r\in[M]\), \(L_{e}(T_{r})(\boldsymbol{\chi})(\otimes_{j=1}^{\mu(e)}\boldsymbol{x}_{s_{j}( e)})=\overline{0}\), if and only if \(T_{r}(\boldsymbol{x}_{s(e)})=\lambda_{e}\boldsymbol{x}_{t(e)}\) for some \(\lambda_{e}\in\mathbb{C}\), if and only if \(\boldsymbol{\chi}\) is a singular vector tuple of the hyperquiver representation \(\boldsymbol{R}\).
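For orientation, consider the simplest situation where \(H\) has two vertices and two hyperedges carrying a matrix \(A\) and, after permuting modes, its transpose — the classical singular-pair setting of [20]. There the vanishing condition above reads \(A\boldsymbol{v}\in\operatorname{span}(\boldsymbol{u})\) and \(A^{\top}\boldsymbol{u}\in\operatorname{span}(\boldsymbol{v})\), which the pairs produced by the singular value decomposition satisfy. A small numerical illustration (NumPy; illustrative only, not part of the formal development, and all names below are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))          # a generic 3 x 4 matrix
U, S, Vt = np.linalg.svd(A)
for i in range(min(A.shape)):            # min(d1, d2) singular pairs, cf. [20]
    u, v, s = U[:, i], Vt[i, :], S[i]
    # vanishing of the section  <=>  A v is a multiple of u  and  A^T u is a multiple of v
    print(np.allclose(A @ v, s*u), np.allclose(A.T @ u, s*v))
```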
In light of the preceding result, it becomes necessary to determine the image of \(L\) within \(\varGamma(\mathscr{B}(\boldsymbol{R}))\). For this purpose, we make use of the following Kunneth formula for vector bundles. Note that \(H^{0}(X,\mathscr{B}):=\varGamma(\mathscr{B})\).
**Proposition 4.4** (Kunneth Formula, [29, Proposition 9.2.4]).: _Let \(X\) and \(Y\) be complex varieties and \(\pi_{X}:X\times Y\to X\) and \(\pi_{Y}:X\times Y\to Y\) be the projection maps. If \(\mathscr{F}\) and \(\mathscr{G}\) are vector bundles on \(X\) and \(Y\) respectively, then_
\[H^{n}(X\times Y,\pi_{X}^{*}\mathscr{F}\otimes\pi_{Y}^{*}\mathscr{G})\cong \bigoplus_{p+q=n}H^{p}(X,\mathscr{F})\otimes H^{q}(Y,\mathscr{G}).\]
The following result, which generalises [20, Lemma 9 parts (1) and (2)], characterises the image of \(L\).
**Proposition 4.5**.: _The linear map \(L:\bigoplus_{r=1}^{M}\mathbb{C}^{e_{r}}\to\varGamma(\mathscr{B}(\mathbf{R}))\) in (4.3) is bijective._
Proof.: By the definition of \(L\), it suffices to show for each \(e\in E\) that \(L_{e}\) is an injective linear map between vector spaces of the same dimension. First we show that \(L_{e}\) is injective. Consider \(e\in E_{r}\) and let \(T\in\mathbb{C}^{e_{r}}\). If \(T\neq 0\), then there exist \(\mathbf{x}_{s_{j}(e)}\in\mathbb{C}^{d_{s_{j}(e)}}\) for \(j\in[\mu(e)]\) with \(\mathbf{v}:=T(\mathbf{x}_{s(e)})\neq 0\). Let \(\mathbf{x}_{t(e)}\in\mathbb{C}^{d_{t(e)}}\setminus\operatorname{span}(\mathbf{v})\). Then \(L_{e}(T)(\mathbf{\chi})(\otimes_{j=1}^{\mu(e)}\mathbf{x}_{s_{j}(e)})\neq\overline{0}\). Hence, the global section \(L_{e}(T)\) is not the zero section.
We recursively apply the Kunneth formula in the case \(n=0\) to obtain
\[H^{0}(X,\mathscr{B}(e))=\bigotimes_{j=1}^{\mu(e)}H^{0}(X,\pi_{s_{j}(e)}^{*} \mathscr{H}(d_{s_{j}(e)}))\otimes H^{0}(X,\pi_{t(e)}^{*}\mathscr{Q}(d_{t(e)})).\]
It remains to compute the dimensions of the factors. We have \(\dim H^{0}(X,\pi_{i}^{*}\mathscr{H}(d_{i}))=d_{i}\) by results on the cohomology of line bundles over projective space [26, Theorem 5.1]. Finally, the short exact sequence (4.2) gives a long exact sequence in cohomology
\[0\to\underbrace{H^{0}(X,\pi_{i}^{*}\mathscr{T}(d_{i}))}_{=0}\to H^{0}(X,\pi_ {i}^{*}\mathscr{F}(d_{i}))\to H^{0}(X,\pi_{i}^{*}\mathscr{Q}(d_{i}))\to \underbrace{H^{1}(X,\pi_{i}^{*}\mathscr{T}(d_{i}))}_{=0}\to\ldots.\]
The underlined terms are \(0\), again by [26, Theorem 5.1]. Thus \(\dim H^{0}(X,\pi_{i}^{*}\mathscr{Q}(d_{i}))=d_{i}\), since \(\dim H^{0}(X,\pi_{i}^{*}\mathscr{F}(d_{i}))=d_{i}\). Hence \(\dim H^{0}(X,\mathscr{B}(e))=d_{t(e)}\prod_{j=1}^{\mu(e)}d_{s_{j}(e)}\). This is the dimension of \(\mathbb{C}^{e_{r}}\), so \(L_{e}\) is a bijection.
## 5. Bertini-type theorem
In this section, we relate the zeros of a generic section of a vector bundle to its top Chern class, cf. [20, Section 2.5]. This relation holds when the vector bundle is "almost generated", see Definition 5.2. We refer the reader to Appendix A for relevant background on Chern classes and Chow rings. In this section, \(X\) is any smooth complex projective variety. Recall that the global sections of \(\mathscr{B}\), denoted \(\varGamma(\mathscr{B})\), are the holomorphic maps \(\sigma:X\to\mathscr{B}\) that send each \(\mathbf{\chi}\in X\) to a point in the fibre \(\mathscr{B}_{\mathbf{\chi}}\).
**Definition 5.1**.: Let \(X\) be a smooth projective variety and \(\mathscr{B}\) a vector bundle over \(X\). The vector bundle \(\mathscr{B}\) is _globally generated_ if there exists a vector subspace \(\varLambda\subseteq\varGamma(\mathscr{B})\) such that for all \(\mathbf{\chi}\in X\), we have \(\varLambda(\mathbf{\chi})=\mathscr{B}_{\mathbf{\chi}}\), where \(\varLambda(\mathbf{\chi}):=\{\sigma(\mathbf{\chi})\mid\sigma\in\varLambda\}\).
**Definition 5.2**.: Let \(X\) be a smooth projective variety and \(\mathscr{B}\) a vector bundle over \(X\). The vector bundle \(\mathscr{B}\) is _almost generated_ if there exists a vector subspace \(\varLambda\subseteq\varGamma(\mathscr{B})\) such that either \(\mathscr{B}\) is globally generated, or there are \(k\geq 1\) smooth irreducible proper subvarieties \(Y_{1},\ldots,Y_{k}\) of \(X\), with \(Y_{0}=X\), such that:
* For all \(i\geq 0\), there is a vector bundle \(\mathscr{B}_{i}\) over \(Y_{i}\), and for any \(j\geq 0\), if \(Y_{i}\) is a subvariety of \(Y_{j}\), then \(\mathscr{B}_{i}\) is a subbundle of \(\mathscr{B}_{j}\big{|}_{Y_{i}}\)
* \(\varLambda(\mathbf{\chi})\subseteq(\mathscr{B}_{i})_{\mathbf{\chi}}\) for all \(\mathbf{\chi}\in Y_{i}\) and \(i\geq 0\)
* If \(\alpha_{i}\subseteq[k]\) is the set of all \(j\in[k]\) such that \(Y_{j}\) is a proper subvariety of \(Y_{i}\), then \(\varLambda(\mathbf{\chi})=(\mathscr{B}_{i})_{\mathbf{\chi}}\) for all \(\mathbf{\chi}\in Y_{i}\setminus(\cup_{j\in\alpha_{i}}Y_{j})\).
Now we state our Bertini-type theorem; cf. [20, Theorem 6]. The zero locus of a section \(\sigma\in\Gamma(\mathscr{B})\) is \(Z(\sigma):=\{\boldsymbol{\chi}\in X\mid\sigma(\boldsymbol{\chi})=0\}\). The top Chern class and top Chern number of \(\mathscr{B}\), see Definition A.5, are denoted \(c_{r}(\mathscr{B})\in A^{*}(X)\) and \(\nu(\mathscr{B})\in\mathbb{Z}\), respectively. We assume \(X\subseteq\mathbb{P}^{D}\) via some closed immersion \(s:X\hookrightarrow\mathbb{P}^{D}\) and regard \(c_{r}(\mathscr{B})=s_{*}(c_{r}(\mathscr{B}))\in A^{*}(\mathbb{P}^{D})\), see Remark A.3.
**Theorem 5.3** (Bertini-Type Theorem).: _Let \(X\subseteq\mathbb{P}^{D}\) be a smooth irreducible complex projective variety of dimension \(d\), and \(\mathscr{B}\) a vector bundle of rank \(r\) over \(X\), almost generated by a vector subspace \(\Lambda\subseteq\Gamma(\mathscr{B})\). Let \(\sigma\in\Lambda\) be a generic section with \(Z(\sigma)\subseteq X\) its zero locus._
1. _If_ \(r>d\)_, then_ \(Z(\sigma)\) _is empty_
2. _If_ \(r=d\)_, then_ \(Z(\sigma)\) _consists of_ \(\nu(\mathscr{B})\) _points. Furthermore, if_ \(\operatorname{rank}\mathscr{B}_{i}>\dim Y_{i}\) _for all_ \(i\geq 1\)_, then each point has multiplicity 1 and does not lie on_ \(\cup_{i=1}^{k}Y_{i}\)_._
3. _If_ \(r<d\)_, then_ \(Z(\sigma)\) _is empty or smooth of pure dimension_ \(d-r\)_. In the latter case, the degree of_ \(Z(\sigma)\) _is_ \(\nu\left(\mathscr{B}\big{|}_{L}\right)\)_, where_ \(L\subseteq\mathbb{P}^{D}\) _is the intersection of_ \(d-r\) _generic hyperplanes in_ \(\mathbb{P}^{D}\)_. If_ \(\nu\left(\mathscr{B}\big{|}_{L}\right)\neq 0\)_, then_ \(Z(\sigma)\) _is non-empty._
**Remark 5.4**.: The above theorem generalises [20, Theorem 6], where parts (a) and (b) appear. We add part (c). Compared to [20, Theorem 6], our extra assumption \(\operatorname{rank}\mathscr{B}_{i}>\dim Y_{i}\) for \(i>0\) in (b) appears because it is absent from Definition 5.2, whereas it appears in [20, Definition 5].
To prove Theorem 5.3, we use the following results.
**Theorem 5.5** (Fiber Dimension Theorem [28, Theorem 1.25]).: _Let \(f:X\to Y\) be a dominant morphism of irreducible varieties. Then there exists an open set \(U\subseteq Y\) such that for all \(y\in U\), \(\dim X=\dim Y+\dim(f^{-1}(y))\)._
**Theorem 5.6** (Generic Smoothness Theorem [26, Corollary III.10.7]).: _Let \(f:X\to Y\) be a morphism of irreducible complex varieties. If \(X\) is smooth, then there exists an open subset \(U\subseteq Y\) such that \(f|_{f^{-1}(U)}\) is smooth. Furthermore, if \(f\) is not dominant, then \(f^{-1}(U)=\varnothing\)._
Proof of Theorem 5.3.: Consider \(I=\{(\boldsymbol{\chi},\sigma)\in X\times\Lambda\mid\sigma(\boldsymbol{\chi})=0\}\) with projection maps \(p\colon I\to X\) and \(q\colon I\to\Lambda\).
Then \(I\) is a vector bundle over \(X\). Since the base space \(X\) is irreducible, so is the total space \(I\). We show that \(\dim I=\dim\Lambda+d-r\). The map \(p\) is surjective, and hence dominant, since the zero section lies in \(\Lambda\). There exists an open set \(U\subseteq X\) such that \(\dim I=d+\dim(p^{-1}(\boldsymbol{\chi}))\) for all \(\boldsymbol{\chi}\in U\), by Theorem 5.5. The fibre \(p^{-1}(\boldsymbol{\chi})\simeq\{\sigma\in\Lambda:\sigma(\boldsymbol{\chi})=0\}\) consists of sections in \(\Lambda\) that vanish at \(\boldsymbol{\chi}\). Consider the evaluation map \(\{\boldsymbol{\chi}\}\times\Lambda\to\mathscr{B}_{\boldsymbol{\chi}}\) that sends \((\boldsymbol{\chi},\sigma)\) to \(\sigma(\boldsymbol{\chi})\). This is a linear map of vector spaces and its kernel is isomorphic to \(p^{-1}(\boldsymbol{\chi})\). Let \(Y:=\cup_{i=1}^{k}Y_{i}\), where the \(Y_{i}\) are from Definition 5.2. The variety \(Y\) is a proper subvariety of \(X\). For each \(\boldsymbol{\chi}\in X\setminus Y\), the evaluation map is surjective, by Definition 5.2(iii). Thus, the evaluation map has rank \(r\) and nullity \(\dim\Lambda-r\). Hence \(\dim(p^{-1}(\boldsymbol{\chi}))=\dim\Lambda-r\) for all \(\boldsymbol{\chi}\in U\cap(X\setminus Y)\). Therefore \(\dim I=\dim\Lambda+d-r\).
The fiber \(q^{-1}(\sigma)\simeq\{\boldsymbol{\chi}\in X:\sigma(\boldsymbol{\chi})=0\}\) is the zero locus \(Z(\sigma)\). We show that the map \(q\) is dominant if and only if \(q^{-1}(\sigma)\neq\varnothing\) for generic \(\sigma\in\varLambda\). If \(q\) is dominant, then there exists an open set \(W\subseteq\varLambda\) such that \(q^{-1}(\sigma)\) is smooth of codimension \(\dim I-\dim\varLambda=d-r\) for all \(\sigma\in W\), by Theorems 5.5 and 5.6. In particular, \(q^{-1}(\sigma)\) is non-empty. Conversely if \(q\) is not dominant, then there is an open set \(W\subseteq\varLambda\) such that \(q^{-1}(\sigma)=\varnothing\) for all \(\sigma\in W\), by Theorem 5.6.
Now we show that \(Z(\sigma)\neq\varnothing\) for generic \(\sigma\in\varLambda\) if and only if \(c_{r}(\mathscr{B})\neq 0\). If \(Z(\sigma)=\varnothing\), then the existence of a nowhere vanishing section of \(\mathscr{B}\) implies that \(c_{r}(\mathscr{B})=0\)[21, Lemma 3.2]. Conversely, if \(Z(\sigma)\neq\varnothing\), then the map \(q\) is dominant, so \(Z(\sigma)\) is smooth of codimension \(d-r\). If \(c_{r}(\mathscr{B})=0\), then \(0=c_{r}(\mathscr{B})=[Z(\sigma)]\) by Definition A.5(ii), which is a contradiction since the degree of a non-empty projective variety is a positive integer [26, Proposition I.7.6.a]. In particular, if \(r=d\) and \(\nu(\mathscr{B})=0\), then \(Z(\sigma)=\varnothing\).
The map \(q\) is not dominant if \(\dim I<\dim\varLambda\); i.e., if \(r>d\). This proves (a) and the emptiness possibility in (c). It remains to consider the case \(r\leq d\) with the map \(q\) dominant and generic \(\sigma\in\varLambda\).
\(Z(\sigma)\subseteq\mathbb{P}^{D}\) is smooth of dimension \(d-r\). It is pure dimensional by [21, Example 3.2.16]. When \(r=d\), we have \([Z(\sigma)]=c_{r}(\mathscr{B})=\nu(\mathscr{B})[p]\) for some \(p\in X\), by Definition A.5(ii), so the zero locus consists of \(\nu(\mathscr{B})\) points. It remains to relate the degree to the top Chern class for \(r<d\). The degree of \(Z(\sigma)\) is the number of points in the intersection of \(Z(\sigma)\) with \(d-r\) generic hyperplanes in \(\mathbb{P}^{D}\). Denote the intersection of \(d-r\) such hyperplanes by \(L\). Let \(L\xrightarrow{j}\mathbb{P}^{D}\) be its inclusion. We have \([Z(\sigma)]=c_{r}(\mathscr{B})\) by Definition A.5(ii) and seek \([L]c_{r}(\mathscr{B})\). We compute in \(A^{*}(\mathbb{P}^{D})\):
\[[L]c_{r}(\mathscr{B}) =j_{*}([L])c_{r}(\mathscr{B}) \text{(definition of pushforward)}\] \[=j_{*}(j^{*}(c_{r}(\mathscr{B}))[L]) \text{(projection formula)}\] \[=j_{*}(c_{r}(j^{*}\mathscr{B})[L])=j_{*}(c_{r}\left(\mathscr{B} \big{|}_{L}\right)[L]) \text{(Definition A.5(iv))}\] \[=j_{*}(\nu\left(\mathscr{B}\big{|}_{L}\right)[p][L]) \text{(definition of top Chern number)}\] \[=\nu\left(\mathscr{B}\big{|}_{L}\right)j_{*}([p][L]) \text{(pushforward is a morphism)}\] \[=\nu\left(\mathscr{B}\big{|}_{L}\right)j_{*}([p])=\nu\left( \mathscr{B}\big{|}_{L}\right)[p] \text{(intersection with a point)} \tag{5.1}\]
for some point \(p\in L\). Thus, the degree of \(Z(\sigma)\) is \(\nu\left(\mathscr{B}\big{|}_{L}\right)\). As a corollary, we obtain that if \(\nu(\mathscr{B})\neq 0\) or \(\nu\left(\mathscr{B}\big{|}_{L}\right)\neq 0\), then \(Z(\sigma)\neq\varnothing\). This proves the dimension and degree statements in (b) and (c).
Lastly, we show that when \(r=d\) and the additional assumptions of (b) hold, the points in \(Z(\sigma)\) are generically of multiplicity \(1\) and do not lie on \(Y\). Smoothness in Theorem 5.6 shows that each of the finitely many points in \(q^{-1}(\sigma)\) has multiplicity \(1\). We have \(\operatorname{rank}\mathscr{B}_{i}>\dim Y_{i}\) for all \(i\geq 1\). Hence \(\dim(p^{-1}(Y_{i}))=\dim Y_{i}+\dim\varLambda-\operatorname{rank}\mathscr{B}_{i}<\dim\varLambda\). Thus, \(\dim(p^{-1}(Y))<\dim\varLambda\), and using the fact that the projection \(\mathbb{P}^{n}\times\mathbb{A}^{m}\to\mathbb{A}^{m}\) is a closed map, we deduce that \(q\) is a closed map. Hence \(q(p^{-1}(Y))\) is a proper subvariety of \(\varLambda\). For all \(\sigma\) in the open set \(W\cap W^{\prime}\), where \(W^{\prime}=\varLambda\setminus q(p^{-1}(Y))\), the fibre \(q^{-1}(\sigma)\) contains no points in \(Y\).
_Remark 5.7_.: Our proof of Theorem 5.3 is analogous to the proofs in [20] of their Theorems 4 and 6. Their proof uses [21, Example 3.2.16], which is equivalent to axiom (ii) in Definition A.5. Our proof adds the Chern number computation for case (c).
## 6. Generating the singular vector bundle
In this section we show that \(\mathscr{B}(\mathbf{R})\) is almost generated, so that Theorem 5.3 may be applied to it. We generalise the singular vector bundle to a bundle \(\mathscr{B}(\mathbf{R},F)\), for a subset of hyperedges \(F\subseteq E\). The zeros of a global section of \(\mathscr{B}(\mathbf{R},F)\) are singular vectors with singular value zero along the edges in \(F\). We show that \(\mathscr{B}(\mathbf{R},F)\) is almost generated. This will later yield not only the dimension and degree of the singular vector variety \(\mathcal{S}(\mathbf{R})\) in Theorem 3.1, but also the final statement about the non-existence of a zero singular value.
**Definition 6.1**.: Let \(\mathbf{R}=(\mathbf{d},T)\) be a hyperquiver representation and let \(X=\prod_{i=1}^{n}\mathbb{P}(\mathbb{C}^{d_{i}})\). Given \(F\subseteq E\), we define
\[\mathscr{B}(e,F)=\begin{cases}\operatorname{Hom}\left(\mathscr{T}(e),\pi_{t(e)}^{*}\mathscr{Q}(d_{t(e)})\right)&\text{if }e\notin F\\ \operatorname{Hom}\left(\mathscr{T}(e),\pi_{t(e)}^{*}\mathscr{F}(d_{t(e)})\right)&\text{if }e\in F.\end{cases}\]
It has fibres
\[\mathscr{B}(e,F)_{\mathbf{\chi}}=\begin{cases}\operatorname{Hom}\left(\operatorname {span}\left(\otimes_{j=1}^{\mu(e)}\mathbf{x}_{s_{j}(e)}\right),\mathbb{C}^{d_{t(e) }}/\operatorname{span}(\mathbf{x}_{t(e)})\right)&\text{if }e\notin F\\ \operatorname{Hom}\left(\operatorname{span}\left(\otimes_{j=1}^{\mu(e)}\mathbf{x} _{s_{j}(e)}\right),\mathbb{C}^{d_{t(e)}}\right)&\text{if }e\in F,\end{cases}\]
where \(\mathbf{\chi}=([\mathbf{x}_{1}],\ldots,[\mathbf{x}_{n}])\). The _singular vector bundle_ of \(\mathbf{R}\) over \(X\) with respect to \(F\) is \(\mathscr{B}(\mathbf{R},F)=\bigoplus_{e\in E}\mathscr{B}(e,F)\).
The singular vector bundle \(\mathscr{B}(\mathbf{R})\) from Definition 4.1 is \(\mathscr{B}(\mathbf{R},\varnothing)\).
**Proposition 6.2**.: _The bundle \(\mathscr{B}(\mathbf{R},F)\) has rank \(\sum_{e\in E}(d_{t(e)}-1)+|F|\)._
Proof.: The rank of \(\mathscr{B}(\mathbf{R},F)\) is \(\sum_{e\in E}\operatorname{rank}\mathscr{B}(e,F)\). For \(e\notin F\), \(\operatorname{rank}\mathscr{B}(e,F)=d_{t(e)}-1\), as in Proposition 4.2(b). For \(e\in F\), \(\operatorname{rank}\mathscr{B}(e,F)=\operatorname{rank}\operatorname{Hom}( \mathscr{T}(e),\pi_{t(e)}^{*}\mathscr{F}(d_{t(e)}))=d_{t(e)}\).
We construct global sections for \(\mathscr{B}(\mathbf{R},F)\) whose zero loci correspond to singular vectors with zero singular value along the edges in \(F\). Define the map
\[L_{e,F}:\mathbb{C}^{e_{r}} \longrightarrow\varGamma(\mathscr{B}(e,F)) \tag{6.1}\] \[L_{e,F}(T)(\mathbf{\chi})(\otimes_{j=1}^{\mu(e)}\mathbf{x}_{s_{j}(e)})= \begin{cases}\overline{T(\mathbf{x}_{s(e)})}\in\mathbb{C}^{d_{t(e)}}/\operatorname {span}(\mathbf{x}_{t(e)})&e\notin F\\ T(\mathbf{x}_{s(e)})\in\mathbb{C}^{d_{t(e)}}&e\in F.\end{cases}\]
We define the composite map
\[L_{F}:\bigoplus_{r=1}^{M}\mathbb{C}^{e_{r}} \rightarrow\varGamma(\mathscr{B}(\mathbf{R},F)) \tag{6.2}\] \[L_{F}= \bigoplus_{r=1}^{M}\bigoplus_{e\in E_{r}}L_{e,F}.\]
We connect the global sections in the image of \(L_{F}\) to the singular vector tuples of \(\mathbf{R}\), generalizing Proposition 4.3 and [20, Lemma 11].
**Proposition 6.3**.: _Let \(\mathscr{B}(\mathbf{R},F)\) be the singular vector bundle with respect to \(F\) and \(L_{F}:\bigoplus_{r=1}^{M}\mathbb{C}^{e_{r}}\rightarrow\varGamma(\mathscr{B}( \mathbf{R},F))\) the linear map in (6.2). A point \(\mathbf{\chi}=([\mathbf{x}_{1}],\ldots,[\mathbf{x}_{n}])\in X\) lies
_in the zero locus of the section \(\sigma=L_{F}((T_{r})_{r=1}^{M})\) if and only if \(\boldsymbol{\chi}\) is a singular vector tuple of \(\boldsymbol{R}\) with zero singular value along all edges in \(F\)._
Proof.: The image \(L_{F}((T_{r})_{r=1}^{M})(\boldsymbol{\chi})\) is the tuple of zero maps each in \(\mathscr{B}(e,F)_{\boldsymbol{\chi}}\) if and only if for all \(e\in E_{r}\) and \(r\in[M]\), \(L_{e,F}(T_{r})(\boldsymbol{\chi})(\otimes_{j=1}^{\mu(e)}\boldsymbol{x}_{s_{j} (e)})\) is the zero vector in the appropriate case of (6.1), if and only if \(T_{r}(\boldsymbol{x}_{s(e)})=\lambda_{e}\boldsymbol{x}_{t(e)}\) for some \(\lambda_{e}\in\mathbb{C}\) with \(\lambda_{e}=0\) if \(e\in F\), if and only if \(\boldsymbol{\chi}\) is a singular vector tuple of the hyperquiver representation \(\boldsymbol{R}\), with zero singular values along the edges of \(F\).
**Definition 6.4**.: The _isotropic quadric_ \(Q_{n}=\{\boldsymbol{v}\in\mathbb{C}^{n}:\boldsymbol{v}^{\top}\boldsymbol{v}=0\}\) is the quadric hypersurface in \(\mathbb{C}^{n}\) of isotropic vectors. The variety \(Q_{n}\) is defined by a homogeneous equation. We consider it as a subvariety \(\mathbb{P}(Q_{n})\) of \(\mathbb{P}(\mathbb{C}^{n})\).
**Definition 6.5**.: If \(T\in\mathbb{C}^{e}\) is a tensor and \(\boldsymbol{x}_{s_{j}(e)}\in\mathbb{C}^{d_{s_{j}(e)}}\) are vectors for \(j\in[m]\), then we denote by \(T(\boldsymbol{x}_{e}):=T_{e}(\boldsymbol{x}_{t(e)},\boldsymbol{x}_{s_{1}(e)},\ldots,\boldsymbol{x}_{s_{\mu}(e)})=\boldsymbol{x}_{t(e)}^{\top}T( \boldsymbol{x}_{s(e)})\in\mathbb{C}\) the contraction of the tensor \(T\) by the vectors \(\boldsymbol{x}_{s_{j}(e)}\), where \(T(\boldsymbol{x}_{s(e)})\) is the vector defined in (2.1).
We give a necessary and sufficient condition for when the maps in (6.1) generate the vector space \(\mathscr{B}(e)_{\boldsymbol{\chi}}\). This generalises [20, Lemma 8] from a single tensor to a hyperquiver representation. Later, in our proof that \(\mathscr{B}(\boldsymbol{R},F)\) is almost generated, we apply this condition to the vector subbundles \(\mathscr{B}_{i}\) in Definition 5.2. This will allow us to associate a single tensor to each piece of the partition.
**Lemma 6.6**.: _Let \(H=(V,E)\) be a hyperquiver, \(E=\coprod_{r=1}^{M}E_{r}\) be a partition, and assign vector spaces \(\mathbb{C}^{d_{i}}\) to each vertex \(i\in V\). Fix a collection of vectors \(\boldsymbol{x}_{i}\in\mathbb{C}^{d_{i}}\setminus\{0\}\) for \(i\in[n]\) and \(\boldsymbol{y}_{e}\in\mathbb{C}^{d_{t(e)}}\) for \(e\in E\). Fix \(F\subseteq E\) a subset of hyperedges. Let \(G_{r}\) be the hyperedges \(e\in E_{r}\setminus F\) such that \(\boldsymbol{x}_{t(e)}\) is isotropic. Then for all \(r\in[M]\), the following are equivalent:_
1. _There exist tensors_ \(T_{r}\in\mathbb{C}^{e_{r}}\) _for some_ \(e_{r}\in E_{r}\) _satisfying the equations_ \[\overline{T_{r}}(\boldsymbol{x}_{s(e)}) =\overline{\boldsymbol{y}_{e}}\in\mathbb{C}^{d_{t(e)}}/\operatorname {span}(\boldsymbol{x}_{t(e)}) e\in E_{r}\setminus F\] (6.3) \[T_{r}(\boldsymbol{x}_{s(e)}) =\boldsymbol{y}_{e}\in\mathbb{C}^{d_{t(e)}} e\in E_{r}\cap F.\] (6.4)
2. _Given any pair of edges_ \(e,e^{\prime}\in(F\cap E_{r})\cup G_{r}\)_, we have_ \[\boldsymbol{x}_{t(e)}^{\top}\boldsymbol{y}_{e}=\boldsymbol{x}_{t(e^{\prime})} ^{\top}\boldsymbol{y}_{e^{\prime}}.\] (6.5)
Proof.: \((a\Rightarrow b):\) There is a tensor \(T_{r}\) satisfying (6.3) if and only if there are scalars \(\lambda_{e}\in\mathbb{C}\) such that \(T_{r}(\boldsymbol{x}_{s(e)})=\boldsymbol{y}_{e}+\lambda_{e}\boldsymbol{x}_{t(e)}\) for all \(e\in E_{r}\setminus F\). Multiplying both sides by \(\boldsymbol{x}_{t(e)}^{\top}\) gives \(T_{r}(\boldsymbol{x}_{e})=\boldsymbol{x}_{t(e)}^{\top}\boldsymbol{y}_{e}+\lambda_{e}\boldsymbol{x}_{t(e)}^{\top}\boldsymbol{x}_{t(e)}\). Similarly, from (6.4) we obtain, for \(e\in F\cap E_{r}\), the condition \(T_{r}(\boldsymbol{x}_{e})=\boldsymbol{x}_{t(e)}^{\top}\boldsymbol{y}_{e}\). The scalar \(T_{r}(\boldsymbol{x}_{e})\) only depends on \(r\) via the set \(E_{r}\). Thus for any pair of edges \(e,e^{\prime}\in E_{r}\), we have
\[\boldsymbol{x}_{t(e)}^{\top}\boldsymbol{y}_{e}+\lambda_{e}\boldsymbol{x}_{t(e) }^{\top}\boldsymbol{x}_{t(e)}=\boldsymbol{x}_{t(e^{\prime})}^{\top} \boldsymbol{y}_{e^{\prime}}+\lambda_{e^{\prime}}\boldsymbol{x}_{t(e^{\prime}) }^{\top}\boldsymbol{x}_{t(e^{\prime})}\]
where \(\lambda_{e}=0\) for \(e\in F\cap E_{r}\). For the hyperedges in \(G_{r}\), the terms \(\boldsymbol{x}_{t(e)}^{\top}\boldsymbol{x}_{t(e)}\) vanish. Hence (6.5) holds for all \(e,e^{\prime}\in(F\cap E_{r})\cup G_{r}\).
\((b\Rightarrow a):\) Let \(\mu_{r}\in\mathbb{C}\) be the value of (6.5) if \((F\cap E_{r})\cup G_{r}\neq\varnothing\) and zero otherwise. Define
\[\lambda_{e}=\begin{cases}0&e\in(F\cap E_{r})\cup G_{r}\\ (\boldsymbol{x}_{t(e)}^{\top}\boldsymbol{x}_{t(e)})^{-1}(\mu_{r}-\boldsymbol {x}_{t(e)}^{\top}\boldsymbol{y}_{e})&\text{otherwise.}\end{cases}\]
Choose some \(e_{r}\in E_{r}\). We show that, for such a choice of \(\lambda_{e}\), there exists a tensor \(T_{r}\in\mathbb{C}^{e_{r}}\) that satisfies
\[T_{r}(\mathbf{x}_{s(e)})=\mathbf{y}_{e}+\lambda_{e}\mathbf{x}_{t(e)} \tag{6.6}\]
for all \(e\in E_{r}\), and hence there exists a tensor \(T_{r}\) that satisfies (6.3) and (6.4). A change of basis in each \(\mathbb{C}^{d_{i}}\) does not affect the existence or non-existence of solutions to (6.6). Consider the change of basis that sends each \(\mathbf{x}_{i}\) to the first standard basis vector in \(\mathbb{C}^{d_{i}}\), which we denote by \(\mathbf{e}_{i,1}=(1,0,\ldots,0)^{\top}\). For each \(e\in E_{r}\), there is a permutation \(\sigma\) of \([m]\) sending \(v(e)\) to \(v(e_{r})\) by Definition 2.6(i.b). Then (6.6) becomes the condition
\[\left(T_{r}\right)_{1,\ldots,1,\ell,1,\ldots,1}=(\mathbf{y}_{e})_{\ell}+\lambda_{ e}\delta_{1,\ell}\text{ for all }\ell\in[d_{t(e)}],\]
where \(\delta_{i,j}\) is the Kronecker delta and the \(\ell\) on the left hand side appears in position \(\sigma(m)\). We define \(T_{r}\) to be the tensor whose non-zero entries are given by the above equation. This is well-defined, since \(\sigma(m)\neq\sigma^{\prime}(m)\) for \(\sigma\neq\sigma^{\prime}\), by Definition 2.6(i.c). It remains to show that we do not attempt to assign different values to the same entry of \(T_{r}\). When \(\ell=1\), we assign the value \((\mathbf{y}_{e})_{1}+\lambda_{e}\), and by the choice of \(\lambda_{e}\) this value equals \(\mu_{r}\) for every edge, so the assignments agree.
To conclude this section, we show that \(\mathscr{B}:=\mathscr{B}(\mathbf{R},F)\) satisfies the conditions of Definition 5.2. This shows that \(\mathscr{B}\) is almost generated. First we define the subvarieties \(Y_{i}\) and the vector bundles \(\mathscr{B}_{i}\) over \(Y_{i}\) that appear in Definition 5.2.
We use the following notation. A linear functional \(\varphi:\mathbb{C}^{d_{t(e)}}/\operatorname{span}(\mathbf{x}_{t(e)})\to\mathbb{C}\) can be uniquely represented by a vector \(\mathbf{u}\in\mathbb{C}^{d_{t(e)}}\) such that \(\mathbf{u}^{\top}\mathbf{x}_{t(e)}=0\) and \(\varphi([\mathbf{y}])=\mathbf{u}^{\top}\mathbf{y}\), [20, Lemma 7]. In particular when \(\mathbf{x}_{t(e)}\in Q_{t(e)}\), we abbreviate \(\mathbf{x}_{t(e)}^{\top}[\mathbf{y}]\) to \(\mathbf{x}_{t(e)}^{\top}\mathbf{y}\).
For a subset \(\alpha\subseteq[n]\), define the smooth proper irreducible subvariety
\[Y_{\alpha}=X_{1}\times\cdots\times X_{n},\quad\text{where}\quad X_{i}=\begin{cases} \mathbb{P}(Q_{i})&i\in\alpha\\ \mathbb{P}(\mathbb{C}^{d_{i}})&i\notin\alpha.\end{cases}\]
In particular, \(Y_{\varnothing}=X\). Fix \(F\subseteq E\) and define \(F^{\prime}=\{t(e)\}_{e\in F}\). Fix \(\alpha\subseteq[n]\setminus F^{\prime}\). Let \(G_{r}\subseteq E_{r}\setminus F\) denote the edges whose target vertex lies in \(\alpha\). Define \(\mathscr{B}_{\alpha}\) to be the vector bundle over \(Y_{\alpha}\) whose fiber at \(\mathbf{\chi}=([\mathbf{x}_{1}],\ldots,[\mathbf{x}_{n}])\in Y_{\alpha}\) is the subspace \(U(\alpha,\mathbf{\chi})\) of linear maps \(\tau=(\tau_{e})_{e\in E}\in(\mathscr{B})_{\mathbf{\chi}}\) satisfying
\[\mathbf{x}_{t(e)}^{\top}\tau_{e}(\otimes_{j=1}^{\mu(e)}\mathbf{x}_{s_{j}(e)})=\mathbf{x}_{ t(e^{\prime})}^{\top}\tau_{e^{\prime}}(\otimes_{j=1}^{\mu(e^{\prime})}\mathbf{x}_{s_{j}(e^ {\prime})}), \tag{6.7}\]
for any edges \(e,e^{\prime}\in(F\cap E_{r})\cup G_{r}\), for every \(r\in[M]\).
**Proposition 6.7**.: _Let the map \(L_{F}\) be as in (6.2). For any subset of hyperedges \(F\subseteq E\), the vector subspace \(L_{F}\left(\bigoplus_{r=1}^{M}\mathbb{C}^{e_{r}}\right)\) almost generates \(\mathscr{B}(\mathbf{R},F)\)._
Proof.: We first show that the vector bundles \(\mathscr{B}_{\alpha}\) satisfy Definition 5.2(i). If \(\alpha,\beta\subseteq[n]\setminus F^{\prime}\), then \(\alpha\subsetneq\beta\) if and only if \(Y_{\beta}\) is a proper subvariety of \(Y_{\alpha}\). Furthermore, \(\mathscr{B}_{\beta}\) is a subbundle of \(\mathscr{B}_{\alpha}\big{|}_{Y_{\alpha}}\), since \(U(\beta,\mathbf{\chi})\) is a vector subspace of \(U(\alpha,\mathbf{\chi})\).
Next we prove that Definition 5.2(ii) holds. Recall that \(\varLambda(\mathbf{\chi}):=\{\sigma(\mathbf{\chi})\mid\sigma\in\varLambda\}\). We show that \(\varLambda(\mathbf{\chi})\subseteq(\mathscr{B}_{\alpha})_{\mathbf{\chi}}\). If \(\mathbf{\chi}\in Y_{\alpha}\), then an element of \(\varLambda(\mathbf{\chi})\) is an \(|E|\)-tuple of linear maps \(L_{e,F}(T_{r})(\mathbf{\chi})\) for some tensors \(T_{r}\in\mathbb{C}^{e_{r}}\), \(r\in[M]\). By the proof of (\(a\Rightarrow b\)) in Lemma 6.6, the maps \(\tau_{e}:=L_{e,F}(T_{r})(\mathbf{\chi})\) satisfy (6.7), so \(\varLambda(\mathbf{\chi})\subseteq(\mathscr{B}_{\alpha})_{\mathbf{\chi}}\).
Finally we show that Definition 5.2(iii) holds. If \(\mathbf{\chi}\) lies on \(Y_{\alpha}\) but not on any proper subvariety \(Y_{\beta}\), then every \((\tau_{e})_{e\in E}\in(\mathscr{B}_{\alpha})_{\mathbf{\chi}}\) satisfies (6.7) and no additional equations. Thus
there exist tensors \(T_{r}\) with \(L_{e,F}(T_{r})=\tau_{e}\) for \(e\in E_{r}\) and \(\tau\in\varLambda(\mathbf{\chi})\), by Lemma 6.6. Hence, \(\varLambda(\mathbf{\chi})=(\mathscr{B}_{\alpha})_{\mathbf{\chi}}\).
## 7. The top Chern class of the singular vector bundle
In this section we compute the top Chern class of the singular vector bundle \(\mathscr{B}(\mathbf{R})\), generalizing [20, Lemma 3]. Combining this computation with Theorem 5.3 and Proposition 6.7 finds the degree of the singular vector variety, completing the proof of Theorem 3.1.
**Proposition 7.1**.: _Let \(\mathbf{R}=(\mathbf{d},T)\) be a hyperquiver representation and \(\mathscr{B}(\mathbf{R})\) be the singular vector bundle over \(X=\prod_{i=1}^{n}\mathbb{P}(\mathbb{C}^{d_{i}})\). Then the top Chern class of \(\mathscr{B}(\mathbf{R})\) is_
\[\prod_{e\in E}\sum_{k=1}^{d_{t(e)}}h_{t(e)}^{k-1}h_{s(e)}^{d_{t(e)}-k},\quad\text{where}\quad h_{s(e)}=\sum_{j=1}^{\mu(e)}h_{s_{j}(e)},\]
_in the Chow ring \(A^{*}(X)\cong\mathbb{Z}[h_{1},\ldots,h_{n}]/(h_{1}^{d_{1}},\ldots,h_{n}^{d_{n}})\)._
Proof.: We seek the Chern polynomial \(C(t,\mathscr{B}(\mathbf{R}))\). The coefficient of its highest power of \(t\) is the top Chern class. The Chern polynomial is multiplicative over short exact sequences, see Definition A.5(iii). Hence
\[C(t,\mathscr{F}(d))=C(t,\mathscr{T}(d))C(t,\mathscr{Q}(d)), \tag{7.1}\]
by (4.1). We compute \(C(t,\mathscr{T}(d))\). Let \(h\in A^{*}(\mathbb{P}(\mathbb{C}^{d}))\cong\mathbb{Z}[h]/(h^{d})\) be the class of a hyperplane in \(\mathbb{P}(\mathbb{C}^{d})\). By Definition A.5(i)-(ii), \(h\) is the first Chern class \(c_{1}(\mathscr{H}(d))\) and the Chern polynomial of \(\mathscr{H}(d)\) is \(C(t,\mathscr{H}(d))=1+ht\). Thus \(C(t,\mathscr{T}(d))=C(t,\mathscr{H}(d)^{\vee})=C(-t,\mathscr{H}(d))=1-ht\), by Proposition A.8(b).
Next we compute \(C(t,\mathscr{Q}(d))\). We have \(C(t,\mathscr{F}(d))=1\), by Proposition A.8(a). The Chern polynomial of \(\mathscr{Q}(d)\) is the inverse of \((1-ht)\), by (7.1); since \(h^{d}=0\), this inverse is the truncated geometric series. Using the formal factorization \(1-x^{n}=\prod_{k=0}^{n-1}(1-\zeta_{n}^{k}x)\) over \(A^{*}(X)\otimes\mathbb{C}\), we therefore have
\[C(t,\mathscr{Q}(d))=\sum_{j=0}^{d-1}(ht)^{j}=\frac{1-(ht)^{d}}{1-ht}=\frac{\prod_{k=0}^{d-1}(1-\zeta_{d}^{k}ht)}{1-ht}=\prod_{k=1}^{d-1}(1-\zeta_{d}^{k}ht)\]
where \(\zeta_{d}\in\mathbb{C}\) is a primitive \(d\)-th root of unity.
We have \(c_{1}(\pi_{i}^{*}\mathscr{H}(d_{i}))=\pi_{i}^{*}c_{1}(\mathscr{H}(d_{i}))= \pi_{i}^{*}h_{i}=h_{i}\in A^{*}(X)\), by Definition A.5(iv) and Definition A.2(ii). Thus the Chern polynomials of \(\pi_{i}^{*}\mathscr{H}(d_{i})\), \(\pi_{i}^{*}\mathscr{T}(d_{i})\), and \(\pi_{i}^{*}\mathscr{Q}(d_{i})\) equal those of \(\mathscr{H}(d)\), \(\mathscr{T}(d)\), and \(\mathscr{Q}(d)\) respectively but with \(h\) replaced by \(h_{i}\in A^{*}(X)\), by (4.2).
We have found the Chern roots of \(\pi_{i}^{*}\mathscr{H}(d_{i})\) and \(\pi_{i}^{*}\mathscr{Q}(d_{i})\), so we obtain Chern characters \(\operatorname{ch}(\pi_{i}^{*}\mathscr{H}(d_{i}))=\exp(h_{i})\) and \(\operatorname{ch}(\pi_{i}^{*}\mathscr{Q}(d_{i}))=\sum_{k=1}^{d_{i}-1}\exp(- \zeta_{d_{i}}^{k}h_{i})\). By Propositions 4.2(c) and
A.8(c), the Chern character \(\operatorname{ch}(\mathscr{B}(e))\) equals
\[\operatorname{ch}\left(\bigotimes_{j=1}^{\mu(e)}\pi_{s_{j}(e)}^{*}\mathscr{H}(d_{s_{j}(e)})\otimes\pi_{t(e)}^{*}\mathscr{Q}(d_{t(e)})\right) =\operatorname{ch}\left(\bigotimes_{j=1}^{\mu(e)}\pi_{s_{j}(e)}^{*}\mathscr{H}(d_{s_{j}(e)})\right)\operatorname{ch}(\pi_{t(e)}^{*}\mathscr{Q}(d_{t(e)}))\] \[=\left(\prod_{j=1}^{\mu(e)}\exp(h_{s_{j}(e)})\right)\left(\sum_{k=1}^{d_{t(e)}-1}\exp(-\zeta_{d_{t(e)}}^{k}h_{t(e)})\right)\] \[=\sum_{k=1}^{d_{t(e)}-1}\exp\left(\sum_{j=1}^{\mu(e)}h_{s_{j}(e)}-\zeta_{d_{t(e)}}^{k}h_{t(e)}\right).\]
Switching to Chern polynomial form, we obtain
\[C(t,\mathscr{B}(e))=\prod_{k=1}^{d_{t(e)}-1}\left(1+\left(\sum_{j=1}^{\mu(e)}h _{s_{j}(e)}-\zeta_{d_{t(e)}}^{k}h_{t(e)}\right)t\right).\]
This product has degree \((d_{t(e)}-1)\) in \(t\), with top coefficient
\[\prod_{k=1}^{d_{t(e)}-1}\left(\sum_{j=1}^{\mu(e)}h_{s_{j}(e)}-\zeta_{d_{t(e)}}^{k}h_{t(e)}\right).\]
It follows from Definition A.5(iii) that \(C(t,\mathscr{B}(\boldsymbol{R}))=\prod_{e\in E}C(t,\mathscr{B}(e))\). The product has degree \((\sum_{e\in E}d_{t(e)}-|E|)\) in \(t\), with top coefficient (i.e., top Chern class of \(\mathscr{B}(\boldsymbol{R})\)) equal to
\[\prod_{e\in E}\prod_{k=1}^{d_{t(e)}-1}\left(\sum_{j=1}^{\mu(e)}h_{s_{j}(e)}-\zeta_{d_{t(e)}}^{k}h_{t(e)}\right).\]
Finally, the formal identity \(x^{n}-y^{n}=\prod_{k=0}^{n-1}(x-\zeta_{n}^{k}y)\) gives
\[\prod_{e\in E}\prod_{k=1}^{d_{t(e)}-1}\left(\sum_{j=1}^{\mu(e)}h_{s_{j}(e)}-\zeta_{d_{t(e)}}^{k}h_{t(e)}\right) =\prod_{e\in E}\frac{\left(\sum_{j=1}^{\mu(e)}h_{s_{j}(e)}\right)^{d_{t(e)}}-h_{t(e)}^{d_{t(e)}}}{\sum_{j=1}^{\mu(e)}h_{s_{j}(e)}-h_{t(e)}}\] \[=\prod_{e\in E}\sum_{k=0}^{d_{t(e)}-1}\left(\sum_{j=1}^{\mu(e)}h_{s_{j}(e)}\right)^{d_{t(e)}-1-k}h_{t(e)}^{k}\in A^{*}(X).\qed\]
To conclude the paper, we now prove our main theorem.
Proof of Theorem 3.1.: The zero locus of a generic global section of \(\mathscr{B}:=\mathscr{B}(\boldsymbol{R},F)\) is the singular vector variety \(\mathcal{S}(\boldsymbol{R})\), with zero singular values along the edges in \(F\), by Propositions 4.3 and 6.3. The singular vector bundle \(\mathscr{B}\) from Definition 6.1 is almost generated, by Proposition 6.7. Hence our Bertini-type theorem Theorem 5.3 applies to it, to characterise the zeros of a generic section. It remains to derive the polynomial (3.1), prove the emptiness statement for \(\mathcal{S}(\boldsymbol{R})\) as well as its dimension and degree, and prove the statement regarding finitely many singular vector tuples.
We first consider the case \(F=\varnothing\). The top Chern class \(c_{r}(\mathscr{B})\) is given by Proposition 7.1. If \(N=d-r=0\), then \(\mathcal{S}(\boldsymbol{R})\) has the claimed number of points by Theorem 5.3(b). Suppose
\(r<d\). Let \(s:X\hookrightarrow\mathbb{P}^{D}\) be the Segre embedding and let \([l]\in A^{*}(\mathbb{P}^{D})\) be the class of a hyperplane. Continuing (5.1), we have
\[\nu\left(\mathscr{B}\big{|}_{L}\right)[p] =[L]c_{r}(\mathscr{B})=[L]s_{*}(c_{r}(\mathscr{B}))=[l]^{N}s_{*}( c_{r}(\mathscr{B}))\] (definition of pushforward) \[=s_{*}(s^{*}([l]^{N})c_{r}(\mathscr{B}))=s^{*}([l]^{N})c_{r}( \mathscr{B})\] (projection formula) \[=s^{*}([l])^{N}c_{r}(\mathscr{B})=(\sum_{i=1}^{n}h_{i})^{N}c_{r}( \mathscr{B})\] ( [21, Example 8.4.3] ) (7.2)
where \(A^{*}(X)\cong\mathbb{Z}[h_{1},\ldots,h_{n}]/(h_{1}^{d_{1}},\ldots,h_{n}^{d_{n}})\), giving us the polynomial (3.1).
We prove the emptiness statement by showing that \(\nu\left(\mathscr{B}\big{|}_{L}\right)=0\) if and only if \(c_{r}(\mathscr{B})=0\). By the proof of Theorem 5.3, \(c_{r}(\mathscr{B})=0\) if and only if \(\mathcal{S}(\boldsymbol{R})=\varnothing\). If \(c_{r}(\mathscr{B})=0\), then \(\nu\left(\mathscr{B}\big{|}_{L}\right)=0\) by (7.2). Conversely, if \(c_{r}(\mathscr{B})\neq 0\), then there exists a monomial \(h_{1}^{a_{1}}\ldots h_{n}^{a_{n}}\) appearing in \(c_{r}(\mathscr{B})\) with \(a_{i}<d_{i}\) and \(\sum_{i=1}^{n}a_{i}=r\). The complementary monomial \(h_{1}^{d_{1}-1-a_{1}}\ldots h_{n}^{d_{n}-1-a_{n}}\) appears with positive coefficient in \(\left(\sum_{i=1}^{n}h_{i}\right)^{d-r}\), since \(\sum_{i=1}^{n}(d_{i}-1-a_{i})=d-r\). These monomials pair in the product \([L]c_{r}(\mathscr{B})\) to form the monomial \([p]=h_{1}^{d_{1}-1}\ldots h_{n}^{d_{n}-1}\); as all coefficients involved are non-negative, no cancellation occurs, so the coefficient of \([p]\), which is \(\nu\left(\mathscr{B}\big{|}_{L}\right)\), is non-zero. Therefore if \(\nu\left(\mathscr{B}\big{|}_{L}\right)\neq 0\), then \(\mathcal{S}(\boldsymbol{R})\) has the claimed dimension and degree by Theorem 5.3.
It remains to prove the last sentence of the theorem, which pertains to the case \(N=0\). Fix \(\varnothing\neq\alpha\subseteq[n]\) and define \(\mathscr{B}_{\alpha}\) as in the proof of Proposition 6.7. Then \(\operatorname{rank}\mathscr{B}_{\alpha}=\operatorname{rank}\mathscr{B}-(|\alpha|-1)>\operatorname{rank}\mathscr{B}-|\alpha|=\dim(X)-|\alpha|=\dim(Y_{\alpha})\) as the fibers of \(\mathscr{B}_{\alpha}\) are vector subspaces of the fibers of \(\mathscr{B}\) cut down by \(|\alpha|-1\) linearly independent equations (6.7). Thus, every singular vector has multiplicity \(1\) and is non-isotropic by Theorem 5.3(b). Finally, if \(F\neq\varnothing\) then \(\operatorname{rank}\mathscr{B}>\dim(X)\) by Proposition 6.2, so \(\boldsymbol{R}\) has no singular values equal to \(0\), by Theorem 5.3(a).
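As a sanity check on the count, the polynomial recipe above is easy to evaluate by machine in the case \(N=0\). The sketch below (SymPy; illustrative only, not part of the formal argument, and the helper name is ad hoc) assumes the standard encodings from [20]: a single vertex with a loop for the eigenvectors of a generic \(d\times d\) matrix, and three vertices with one hyperedge per target for the singular vector triples of a generic \(2\times 2\times 2\) tensor. The expected outputs are the classical counts \(4\) and \(6\).

```python
import sympy as sp

def n_points(dims, edges):
    """Count from Theorem 3.1 when N = dim X - rank B(R) = 0: the coefficient of
    prod_i h_i^(d_i - 1) in the top Chern class of Proposition 7.1.
    dims  -- list of vertex dimensions d_i
    edges -- list of (sources, target), with sources a list of vertex indices"""
    h = sp.symbols(f'h0:{len(dims)}')
    c = sp.Integer(1)
    for sources, tgt in edges:
        hs = sum(h[j] for j in sources)
        dt = dims[tgt]
        c *= sum(h[tgt]**(k - 1) * hs**(dt - k) for k in range(1, dt + 1))
    # monomials containing h_i^{d_i} vanish in A^*(X) and do not contribute to the
    # point class prod_i h_i^{d_i - 1}, so its coefficient can be read off directly
    point = sp.Integer(1)
    for i, d in enumerate(dims):
        point *= h[i]**(d - 1)
    return sp.Poly(sp.expand(c), *h).coeff_monomial(point)

# eigenvectors of a generic 4 x 4 matrix (one vertex, one loop): expect 4
print(n_points([4], [([0], 0)]))
# singular vector triples of a generic 2 x 2 x 2 tensor: expect 6
print(n_points([2, 2, 2], [([1, 2], 0), ([0, 2], 1), ([0, 1], 2)]))
```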
**Acknowledgements.** We thank Giorgio Ottaviani and Shmuel Friedland for helpful discussions. AS was supported by the Society of Fellows at Harvard University. VN is supported by the EPSRC grant EP/R018472/1 and US AFOSR grant FA9550-22-1-0462.
|
2304.12077 | Extensions of the symmetry algebra and Lax representations for the
two-dimensional Euler equation | We find the twisted extensions of the symmetry algebra of the 2D Euler
equation in the vorticity form and use them to construct new Lax representation
for this equation. Then we generalize this result by considering the
transformation Lie--Rinehart algebras generated by finite-dimensional
subalgebras of the symmetry algebra and derive a family of Lax representations
for the Euler equation. The family depends on functional parameters and
contains a non-removable spectral parameter. | Oleg I. Morozov | 2023-04-24T13:16:24Z | http://arxiv.org/abs/2304.12077v5 | # Extensions of the symmetry algebra and Lax representations for the two-dimensional Euler equation
###### Abstract.
We find the twisted extensions of the symmetry algebra of the 2D Euler equation in the vorticity form and use them to construct new Lax representations for this equation. One of the Lax representations is shown to contain a non-removable spectral parameter.
Key words and phrases:Two-dimensional Euler equation in the vorticity form; symmetry algebra; twisted extension; Lax representation; non-removable parameter 2020 Mathematics Subject Classification: 58H05, 58J70, 35A30, 37K05, 37K10 _Dedicated to Peter Olver on the occasion of his 70th birthday_.
## 1. Introduction
In this paper, we consider the two dimensional Euler equation in the vorticity form, [13, SS10],
\[\Delta u_{t}=[u,\Delta u], \tag{1.1}\]
where \([u,v]=u_{x}\,v_{y}-u_{y}\,v_{x}\) and \(\Delta u=u_{xx}+u_{yy}\). This equation is one of the fundamental models of hydrodynamics and the subject of intensive study, see [2] and references therein. Our research is focused on the Lax representations for equation (1.1). Lax representations provide the basic construction that allows applications of a number of techniques for studying nonlinear partial differential equations, whence they are considered as the key feature indicating integrability thereof, see [37, 39, 35, 31, 9, 1, 21, 32, 5] and references therein. From the viewpoint of geometry of differential equations, Lax representations and other nonlocal structures of integrable nonlinear systems are naturally formulated in the language of differential coverings, [11, 12, 36].
The Lax representation
\[\left\{\begin{array}{rcl}r_{t}&=&[u,r],\\ &[\Delta u,r]&=&\lambda\,r\end{array}\right. \tag{1.2}\]
for equation (1.1) was found in [14], see also references therein and [15]. The parameter \(\lambda\) in system (1.2) is removable. Indeed, when \(\lambda\neq 0\), the change of the pseudopotential \(r=\exp(\lambda\,s)\) transforms (1.2) to the form
\[\left\{\begin{array}{rcl}s_{t}&=&[u,s],\\ &[\Delta u,s]&=&1.\end{array}\right. \tag{1.3}\]
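As a quick symbolic sanity check of this substitution — illustrative only, not taken from the original papers, and all names below are ad hoc — one can verify with SymPy that \(r=\exp(\lambda\,s)\) turns (1.2) into (1.3):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
lam = sp.Symbol('lambda', nonzero=True)
u = sp.Function('u')(t, x, y)
s = sp.Function('s')(t, x, y)
r = sp.exp(lam*s)

bracket = lambda f, g: f.diff(x)*g.diff(y) - f.diff(y)*g.diff(x)   # [f, g]
lap = lambda f: f.diff(x, 2) + f.diff(y, 2)                        # Delta f

# the two equations of (1.2) for r = exp(lambda*s), divided by lambda*r
eq1 = sp.simplify((r.diff(t) - bracket(u, r))/(lam*r))
eq2 = sp.simplify((bracket(lap(u), r) - lam*r)/(lam*r))
print(sp.simplify(eq1 - (s.diff(t) - bracket(u, s))))   # expect 0: first equation of (1.3)
print(sp.simplify(eq2 - (bracket(lap(u), s) - 1)))      # expect 0: second equation of (1.3)
```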
The Lax representation (1.2) with \(\lambda=0\) was used in [16] to find a weak Darboux transformation for equation (1.1). Further generalizations of this Darboux transformation were presented and applied to find exact solutions for (1.1) in [19, 18]. In [17], the Lax representation (1.2) with \(\lambda=0\) was used to define a special Backlund transformation for (1.1), and the transformation was utilized to obtain a number of exact solutions. A dressing method for constructing solutions to equation (1.1) was proposed in [38].
In the series of recent papers [22] - [27] we have developed the method for finding Lax representations of nonlinear pdes. The method is based on the twisted extensions of the Lie symmetry algebras of the pdes under the study. In [28] the method has been extended to the Lie-Rinehart algebras, in particular, we have shown that extensions of Lie algebras by appending an integral of a non-trivial 1-cocycle are useful in constructing Lax representations.
In the present paper we apply the approach of [22] - [28] to equation (1.1). We find the twisted extensions of the symmetry algebra thereof. The linear combination (4.1) of the Maurer-Cartan forms of the extension provides a Lax representation (4.3) without a non-removable parameter. This Lax representation differs from (1.3). When we allow the extension by appending an integral of a non-trivial 1-cocycle, we find the Lax representation (4.6) with the non-removable parameter. For the special value of the parameter this Lax representation takes the form (1.3).
## 2. Preliminaries and notation
### Symmetries and differential coverings
The presentation in this subsection closely follows [11, 12, 36]. Let \(\pi\colon\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n}\), \(\pi\colon(x^{1},\ldots,x^{n},u^{1},\ldots,u^{m})\mapsto(x^{1},\ldots,x^{n})\), be a trivial bundle, and \(J^{\infty}(\pi)\) be the bundle of its jets of infinite order. The local coordinates on \(J^{\infty}(\pi)\) are \((x^{i},u^{\alpha},u^{\alpha}_{I})\), where \(I=(i_{1},\ldots,i_{n})\) are multi-indices, and for every local section \(f\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\times\mathbb{R}^{m}\) of \(\pi\) the corresponding infinite jet \(j_{\infty}(f)\) is a section \(j_{\infty}(f)\colon\mathbb{R}^{n}\to J^{\infty}(\pi)\) such that \(u^{\alpha}_{I}(j_{\infty}(f))=\dfrac{\partial^{\#I}f^{\alpha}}{\partial x^{I} }=\dfrac{\partial^{i_{1}+\cdots+i_{n}}f^{\alpha}}{(\partial x^{1})^{i_{1}} \ldots(\partial x^{n})^{i_{n}}}\). We put \(u^{\alpha}=u^{\alpha}_{(0,\ldots,0)}\). Also, we will simplify notation in the following way: e.g., for \(n=3\), \(m=1\) we denote \(x^{1}=t\), \(x^{2}=x\), \(x^{3}=y\), and \(u^{1}_{(i,j,k)}=u_{t\ldots tx\ldots xy\ldots y}\) with \(i\) times \(t\), \(j\) times \(x\), and \(k\) times \(y\).
The vector fields
\[D_{x^{k}}=\dfrac{\partial}{\partial x^{k}}+\sum_{\#I\geq 0}\sum_{\alpha=1}^{m }u^{\alpha}_{I+1_{k}}\,\dfrac{\partial}{\partial u^{\alpha}_{I}},\qquad k\in \{1,\ldots,n\},\]
\((i_{1},\ldots,i_{k},\ldots,i_{n})+1_{k}=(i_{1},\ldots,i_{k}+1,\ldots,i_{n})\), are called _total derivatives_. They commute everywhere on \(J^{\infty}(\pi)\).
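For readers who want to experiment with this formalism, here is a minimal truncated implementation of total derivatives (Python/SymPy; illustrative only, all names ad hoc) for the toy case of two independent variables and one dependent variable; it also checks that the truncated operators commute on a sample jet function:

```python
import sympy as sp

order = 4                                     # truncation order of the jet space
x, y = sp.symbols('x y')
u = {(i, j): sp.Symbol(f'u_{i}{j}')           # jet coordinates u_{(i,j)}
     for i in range(order + 1) for j in range(order + 1)}

def D(expr, k):
    """Total derivative D_x (k=0) or D_y (k=1) on the truncated jet space."""
    var = (x, y)[k]
    out = sp.diff(expr, var)
    for (i, j), uij in u.items():
        shifted = (i + 1, j) if k == 0 else (i, j + 1)
        if shifted in u:
            out += sp.diff(expr, uij) * u[shifted]
    return out

f = u[(1, 0)]*u[(0, 1)] + x*u[(0, 0)]         # a sample function on the jet space
print(sp.simplify(D(D(f, 0), 1) - D(D(f, 1), 0)))   # expect 0: total derivatives commute
```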
The _evolutionary vector field_ associated to an arbitrary vector-valued smooth function \(\varphi\colon J^{\infty}(\pi)\to\mathbb{R}^{m}\) is the vector field
\[\mathbf{E}_{\varphi}=\sum_{\#I\geq 0}\sum_{\alpha=1}^{m}D_{I}(\varphi^{ \alpha})\,\dfrac{\partial}{\partial u^{\alpha}_{I}}\]
with \(D_{I}=D_{(i_{1},\ldots\,i_{n})}=D^{i_{1}}_{x^{1}}\circ\cdots\circ D^{i_{n}}_{x ^{n}}\).
A system of pdes \(F_{r}(x^{i},u^{\alpha}_{I})=0\) of order \(s\geq 1\) with \(\#I\leq s\), \(r\in\{1,\ldots,R\}\) for some \(R\geq 1\), defines the submanifold \(\mathcal{E}=\{(x^{i},u^{\alpha}_{I})\in J^{\infty}(\pi)\ |\ D_{K}(F_{r}(x^{i},u^{\alpha}_{I}))=0,\ \#K\geq 0\}\) in \(J^{\infty}(\pi)\).
A function \(\varphi\colon J^{\infty}(\pi)\to\mathbb{R}^{m}\) is called a _(generator of an infinitesimal) symmetry_ of equation \(\mathcal{E}\) when \(\mathbf{E}_{\varphi}(F)=0\) on \(\mathcal{E}\). The symmetry \(\varphi\) is a solution to the _defining system_
\[\ell_{\mathcal{E}}(\varphi)=0 \tag{2.1}\]
of equation \(\mathcal{E}\), where \(\ell_{\mathcal{E}}=\ell_{F}|_{\mathcal{E}}\) with the matrix differential operator
\[\ell_{F}=\left(\sum_{\#I\geq 0}\frac{\partial F_{r}}{\partial u_{I}^{\alpha}} \,D_{I}\right).\]
The _symmetry algebra_\(\mathrm{Sym}(\mathcal{E})\) of equation \(\mathcal{E}\) is the linear space of solutions to (2.1) endowed with the structure of a Lie algebra over \(\mathbb{R}\) by the _Jacobi bracket_\(\{\varphi,\psi\}=\mathbf{E}_{\varphi}(\psi)-\mathbf{E}_{\psi}(\varphi)\). The _algebra of contact symmetries_\(\mathrm{Sym}_{0}(\mathcal{E})\) is the Lie subalgebra of \(\mathrm{Sym}(\mathcal{E})\) defined as \(\mathrm{Sym}(\mathcal{E})\cap C^{\infty}(J^{1}(\pi))\).
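As an illustration of the defining system (2.1) — not part of the original text — one can check with SymPy that the Galilean-type generator \(\varphi=-A(t)\,u_{x}+A^{\prime}(t)\,y\) (cf. \(\varphi_{1}\) in Section 3 below) is a symmetry of equation (1.1). In this representation \(u\) is treated as a function of \((t,x,y)\), so total derivatives reduce to ordinary partial derivatives, and \(\ell_{\mathcal{E}}(\varphi)\) turns out to equal \(-A\,D_{x}\) applied to the equation, hence it vanishes on solutions:

```python
import sympy as sp

t, x, y, eps = sp.symbols('t x y epsilon')
u = sp.Function('u')(t, x, y)
A = sp.Function('A')(t)

def F(w):                     # left-hand side of (1.1): Delta(w_t) - [w, Delta(w)]
    lap = w.diff(x, 2) + w.diff(y, 2)
    return lap.diff(t) - (w.diff(x)*lap.diff(y) - w.diff(y)*lap.diff(x))

phi = -A*u.diff(x) + A.diff(t)*y                   # candidate symmetry generator
lin = sp.diff(F(u + eps*phi), eps).subs(eps, 0)    # the linearization ell_F(phi)
# ell_F(phi) equals -A * D_x F identically, hence it vanishes on solutions of (1.1)
print(sp.expand(lin + A*F(u).diff(x)))             # expect 0
```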
Let the linear space \(\mathcal{W}\) be either \(\mathbb{R}^{N}\) for some \(N\geq 1\) or \(\mathbb{R}^{\infty}\) endowed with local coordinates \(w^{a}\), \(a\in\{1,\ldots,N\}\) or \(a\in\mathbb{N}\), respectively. The variables \(w^{a}\) are called _pseudopotentials_[37]. Locally, a _differential covering_ of \(\mathcal{E}\) is a trivial bundle \(\varpi\colon J^{\infty}(\pi)\times\mathcal{W}\to J^{\infty}(\pi)\) equipped with the _extended total derivatives_
\[\widetilde{D}_{x^{k}}=D_{x^{k}}+\sum_{s=0}^{\infty}T_{k}^{s}(x^{i},u_{I}^{ \alpha},w^{j})\,\frac{\partial}{\partial w^{s}}\]
such that \([\widetilde{D}_{x^{i}},\widetilde{D}_{x^{j}}]=0\) for all \(i\neq j\) on \(\mathcal{E}\). Define the partial derivatives of \(w^{s}\) by \(w^{s}_{x^{k}}=\widetilde{D}_{x^{k}}(w^{s})\). This gives the over-determined system of pdes
\[w^{s}_{x^{k}}=T_{k}^{s}(x^{i},u_{I}^{\alpha},w^{j}) \tag{2.2}\]
which is compatible whenever \((x^{i},u_{I}^{\alpha})\in\mathcal{E}\). System (2.2) is referred to as the _covering equations_ or the _Lax representation_ of equation \(\mathcal{E}\).
Dually, the differential covering is defined by the _Wahlquist-Estabrook forms_
\[dw^{s}-\sum_{k=1}^{m}T_{k}^{s}(x^{i},u_{I}^{\alpha},w^{j})\,dx^{k} \tag{2.3}\]
as follows: when \(w^{s}\) and \(u^{\alpha}\) are considered to be functions of \(x^{1}\),..., \(x^{n}\), forms (2.3) are equal to zero iff system (2.2) holds.
Two differential coverings \(\varpi_{1}\) and \(\varpi_{2}\) of equation \(\mathcal{E}\) with the extended total derivatives \(\widetilde{D}_{x^{k}}^{(1)}\) and \(\widetilde{D}_{x^{k}}^{(2)}\) are called _equivalent_ if there exists a diffeomorphism \(\Phi\colon J^{\infty}(\pi)\times\mathcal{W}\to J^{\infty}(\pi)\times\mathcal{W}\) such that \(\varpi_{2}\circ\Phi=\varpi_{1}\) and \(\Phi_{*}(\widetilde{D}_{x^{k}}^{(1)})=\widetilde{D}_{x^{k}}^{(2)}\).
### Twisted extensions of Lie algebras
Consider a Lie algebra \(\mathfrak{g}\) over \(\mathbb{R}\) with the non-trivial first cohomology group \(H^{1}(\mathfrak{g})\) and take a non-trivial closed \(1\)-form \(\alpha\) on \(\mathfrak{g}\). Then for any \(c\in\mathbb{R}\) define new differential \(d_{c\alpha}\colon C^{k}(\mathfrak{g},\mathbb{R})\to C^{k+1}(\mathfrak{g}, \mathbb{R})\) by the formula \(d_{c\alpha}\theta=d\theta-c\,\alpha\wedge\theta\). From \(d\alpha=0\) it follows that \(d_{c\alpha}^{2}=0\). The cohomology groups of the complex
\[C^{1}(\mathfrak{g},\mathbb{R})\stackrel{{ d_{c\alpha}}}{{ \longrightarrow}}\ldots\stackrel{{ d_{c\alpha}}}{{\longrightarrow}}C^{k}( \mathfrak{g},\mathbb{R})\stackrel{{ d_{c\alpha}}}{{ \longrightarrow}}C^{k+1}(\mathfrak{g},\mathbb{R})\stackrel{{ d_{c \alpha}}}{{\longrightarrow}}\ldots\]
are referred to as the _twisted cohomology groups_[29, 30] of \(\mathfrak{g}\) and denoted by \(H^{*}_{c\alpha}(\mathfrak{g})\).
Suppose that for a Lie algebra \(\mathfrak{g}\) with the Maurer-Cartan forms \(\{\omega^{i}\mid i\in\mathbb{N}\}\) and the structure equations
\[d\omega^{i}=\frac{1}{2}\,a^{i}_{jk}\,\omega^{j}\wedge\omega^{k} \tag{2.4}\]
there hold \(H^{1}(\mathfrak{g})\neq\{0\}\) and there is \(c_{0}\in\mathbb{R}\) such that \(H^{2}_{c_{0}\alpha}(\mathfrak{g})\neq\{[0]\}\). Then for a non-trivial twisted \(2\)-cocycle \(\Omega\) such that \([\Omega]\in H^{2}_{c_{0}\alpha}(\mathfrak{g})\setminus\{[0]\}\) the equation
\[d\sigma=c_{0}\,\alpha\wedge\sigma+\Omega \tag{2.5}\]
is compatible with equations (2.4). The Lie algebra \(\hat{\mathfrak{g}}\) with the Maurer-Cartan forms \(\{\omega^{i},\sigma\}\) and the structure equations (2.4), (2.5) is referred to as the _twisted extension of \(\mathfrak{g}\) generated by the cocycle \(\Omega\)_.
## 3. The symmetry algebra and the twisted extensions
### The change of coordinates
To simplify computations in subsequent sections we perform the following change of variables: we write equation (1.1) as \(\tilde{u}_{\tilde{t}\tilde{x}\tilde{x}}+\tilde{u}_{\tilde{t}\tilde{y}\tilde{y }}=\tilde{u}_{\tilde{x}}\left(\tilde{u}_{\tilde{x}\tilde{x}\tilde{y}}+\tilde{ u}_{\tilde{y}\tilde{y}\tilde{y}}\right)-\tilde{u}_{\tilde{y}}\left(\tilde{u}_{ \tilde{x}\tilde{x}\tilde{x}}+\tilde{u}_{\tilde{x}\tilde{y}\tilde{y}}\right)\) and then put \(\tilde{t}=t,\ \tilde{x}=\frac{1}{2}\left(1+\mathrm{i}\right)(x+y)\), \(\tilde{y}=-\frac{1}{2}\left(1-\mathrm{i}\right)(x-y)\), and \(\tilde{u}=u\). This yields the transformed Euler equation
\[u_{txy}=u_{x}\,u_{xyy}-u_{y}\,u_{xxy}. \tag{3.1}\]
We can write this equation similarly to (1.1) as \(\mathrm{D}(u_{t})=[u,\mathrm{D}(u)]\) with \(\mathrm{D}(u)=u_{xy}\).
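The constants are chosen so that \(\partial_{x}\partial_{y}=\tfrac{\mathrm{i}}{2}\,(\partial_{\tilde{x}}^{2}+\partial_{\tilde{y}}^{2})\) and \([f,g]_{xy}=[f,g]_{\tilde{x}\tilde{y}}\) under the chain rule, which is why (3.1) is equivalent to (1.1). A minimal SymPy check of these three constants (illustrative only; \(a\) and \(b\) below denote the coefficients in the substitution above):

```python
import sympy as sp

a = (1 + sp.I)/2            # x~ = a*(x + y)
b = -(1 - sp.I)/2           # y~ = b*(x - y)
# chain rule with constant coefficients: d/dx = a*d/dx~ + b*d/dy~,  d/dy = a*d/dx~ - b*d/dy~
print(sp.simplify(a**2 - sp.I/2))     # expect 0:  d2/dxdy = a^2 d2/dx~2 - b^2 d2/dy~2
print(sp.simplify(-b**2 - sp.I/2))    # expect 0:  so u_xy = (i/2)*(u_x~x~ + u_y~y~)
print(sp.simplify(-2*a*b - 1))        # expect 0:  [f,g]_{xy} = -2ab*[f,g]_{x~y~} = [f,g]_{x~y~}
```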
### The generators, the Maurer-Cartan forms, and the structure equations of the contact symmetry algebra
The Lie algebra \(\mathfrak{s}\) of the contact symmetries of equation (3.1) has generators
\[\begin{array}{rcl}\psi_{0}&=&-t\,u_{t}-u,\\ \psi_{1}&=&-u_{t},\\ \psi_{2}&=&t\,x\,u_{x}-t\,y\,u_{y}-x\,y,\\ \psi_{3}&=&t\,u_{t}-x\,u_{x}+y\,u_{y}+u,\\ \psi_{4}&=&t\,u_{t}-y\,u_{y}+2\,u,\\ \varphi_{1}(A_{1})&=&-A_{1}\,u_{x}+A_{1}^{\prime}\,y,\\ \varphi_{2}(A_{2})&=&-A_{2}\,u_{y}-A_{2}^{\prime}\,x,\\ \varphi_{3}(A_{3})&=&A_{3},\end{array}\]
where \(A_{i}=A_{i}(t)\) are arbitrary smooth functions of \(t\). For the purposes of the present study we restrict the attention to the polynomial functions \(A_{i}\), that is, we consider the subalgebra \(\mathfrak{s}_{0}\) generated by \(\psi_{i}\) and \(\varphi_{j}(t^{k})\) with \(i\in\{0,...,4\}\), \(j\in\{1,2,3\}\), and \(k\in\mathbb{N}\cup\{0\}\). Then the Maurer-Cartan forms \(\alpha_{i}\), \(\omega_{jk}\) of the Lie algebra \(\mathfrak{s}_{0}\) can be defined as the dual \(1\)-forms to the generators, that is, by imposing the equations \(\alpha_{i}(\psi_{i^{\prime}})=\delta_{ii^{\prime}}\), \(\alpha_{i}(\varphi_{j}(t^{k}))=0\), \(\omega_{jk}(\psi_{i})=0\), and \(\omega_{jk}(\varphi_{j^{\prime}}(t^{k^{\prime}}))=\delta_{jj^{\prime}}\,\delta_{kk^{\prime}}\). Using the technique of moving frames [33, 4, 34], we can write the structure equations of \(\mathfrak{s}_{0}\) in the form
\[\left\{\begin{array}{lll}d\alpha_{0}&=&0,\\ d\alpha_{1}&=&\alpha_{0}\wedge\alpha_{1},\\ d\alpha_{2}&=&-\alpha_{0}\wedge\alpha_{2},\\ d\alpha_{3}&=&\alpha_{1}\wedge\alpha_{2},\\ d\alpha_{4}&=&0,\\ d\Omega_{1}&=&\Omega_{1,h}\wedge(h\,\alpha_{0}+\alpha_{1})+\Omega_{1}\wedge( h\,\alpha_{2}+\alpha_{0}-\alpha_{3}),\\ d\Omega_{2}&=&\Omega_{2,h}\wedge(h\,\alpha_{0}+\alpha_{1})+\Omega_{2}\wedge(h \,\alpha_{2}-\alpha_{0}-\alpha_{3}+\alpha_{4}),\\ d\Omega_{3}&=&\Omega_{3,h}\wedge(h\,\alpha_{0}+\alpha_{1})+(\alpha_{4}-3\, \alpha_{0})\wedge\Omega_{3}+\Omega_{1,h}\wedge\Omega_{2}\\ &&+\Omega_{1}\wedge\Omega_{2,h},\end{array}\right. \tag{3.2}\]
where
\[\Omega_{j}=\sum_{k=0}^{\infty}\frac{h^{k}}{k!}\,\omega_{jk}\]
are the formal series with the formal parameter \(h\) such that \(dh=0\) and \(\Omega_{j,h}=\partial_{h}\Omega_{j}\) are the formal derivatives of \(\Omega_{j}\) with respect to \(h\). The Lie algebra \(\mathfrak{s}_{0}\) is not of the Kac-Moody type, see [27, §3]. We can find the Maurer-Cartan forms by integrating the structure equations. In the subsequent sections we need the explicit expressions for the following forms:
\[\alpha_{0}=\frac{dq}{q},\quad\alpha_{1}=q\,dt,\quad\alpha_{2}=\frac{du_{xy}}{ q},\quad\alpha_{3}=\frac{du_{xyy}}{u_{xyy}}-u_{xy}\,dt,\]
\[\alpha_{4}=\frac{du_{xxy}}{u_{xxy}}+\frac{du_{xyy}}{u_{xyy}},\quad\omega_{10}= \frac{u_{xyy}}{q}\,(dy+u_{x}\,dt),\quad\omega_{20}=\frac{u_{xxy}}{q}\,(dx-u_{y }\,dt),\]
\[\omega_{30}=\frac{u_{xxy}u_{xyy}}{q^{3}}\,(du-u_{t}\,dt-u_{x}\,dx-u_{y}\,dy),\]
In these forms \(q\) is a non-zero parameter.
### The twisted extensions
From the structure equations (3.2) it follows that \(H^{1}(\mathfrak{s}_{0})=\langle\alpha_{0},\alpha_{4}\rangle\). The vector field \(U=t\,\partial_{t}+x\,\partial_{x}+y\,\partial_{y}+u\,\partial_{u}\) associated to the symmetry \(4\,\psi_{0}+\psi_{3}+2\,\psi_{4}=-t\,u_{t}-x\,u_{x}-y\,u_{y}+u\) defines an inner grading [6, §1.5.2] on the Lie algebra \(\mathfrak{s}_{0}\). Computations similar to those used in the proof of Thm. 2 from [23] give
\[H^{2}_{c_{1}\alpha_{0}+c_{2}\alpha_{4}}(\mathfrak{s}_{0})=\left\{\begin{array}[ ]{ll}\langle[\alpha_{1}\wedge\omega_{30}+\omega_{10}\wedge\omega_{20}]\rangle,&c_{1}=-2,\ c_{2}=1,\\ \langle[\alpha_{0}\wedge\alpha_{2}],[\alpha_{2}\wedge\alpha_{3}],[\alpha_{2} \wedge\alpha_{4}]\rangle,&c_{1}=-1,\ c_{2}=0,\\ \langle[\alpha_{0}\wedge\alpha_{1}],[\alpha_{1}\wedge\alpha_{3}],[\alpha_{1} \wedge\alpha_{4}]\rangle,&c_{1}=1,\ c_{2}=0,\\ \langle[\alpha_{0}\wedge\alpha_{4}]\rangle,&c_{1}=0,\ c_{2}=0,\\ \{[0]\},&\text{otherwise}.\end{array}\right.\]
The non-trivial twisted 2-cocycles define the eight-dimensional twisted extension of the Lie algebra \(\mathfrak{s}_{0}\) obtained by appending the structure equations
\[\left\{\begin{array}{rcl}d\sigma_{1}&=&(\alpha_{4}-2\,\alpha_{0})\wedge\sigma _{1}+\alpha_{1}\wedge\omega_{30}+\omega_{10}\wedge\omega_{20},\\ d\sigma_{2}&=&-\alpha_{0}\wedge\sigma_{2}+\alpha_{0}\wedge\alpha_{2},\\ d\sigma_{3}&=&-\alpha_{0}\wedge\sigma_{3}+\alpha_{2}\wedge\alpha_{3},\\ d\sigma_{4}&=&-\alpha_{0}\wedge\sigma_{4}+\alpha_{2}\wedge\alpha_{4},\\ d\sigma_{5}&=&\alpha_{0}\wedge\sigma_{5}+\alpha_{0}\wedge\alpha_{1},\\ d\sigma_{6}&=&\alpha_{0}\wedge\sigma_{6}+\alpha_{1}\wedge\alpha_{3},\\ d\sigma_{7}&=&\alpha_{0}\wedge\sigma_{7}+\alpha_{1}\wedge\alpha_{4},\\ d\sigma_{8}&=&\alpha_{0}\wedge\alpha_{4}\end{array}\right. \tag{3.3}\]
to equations (3.2). In what follows we need the explicit expression for the 1-form
\[\sigma_{1}=\frac{u_{xxy}u_{xyy}}{q^{2}}(dv-u\,dt+\tfrac{1}{2}\,(y\,dx-x\,dy))\]
obtained by integration of the first equation in (3.3) with the known 1-forms \(\alpha_{0}\), \(\alpha_{4}\), and \(\omega_{j0}\).
## 4. Lax representations
### The Wahlquist-Estabrook forms and the Lax representations
To find a Lax representation for equation (3.1) we consider the linear combination
\[\tau_{1}=\sigma_{1}-\omega_{10}-\omega_{20}\\ =\frac{u_{xxy}u_{xyy}}{q^{2}}\,\left(dv-\frac{1}{2\,u_{xyy}}\,(2 \,q-y\,u_{xyy})\,dx-\frac{1}{2\,u_{xxy}}\,(2\,q+x\,u_{xyy})\,dy\right.\\ \left.-\left(\frac{u_{x}u_{xyy}-u_{y}u_{xxy}}{u_{xxy}u_{xyy}}\,q+u \right)\,dt\right). \tag{4.1}\]
We rename
\[q=(v_{x}+\tfrac{1}{2}y)\,u_{xyy}\]
to make the coefficient at \(dx\) inside the outer parentheses equal to \(v_{x}\). This yields the Wahlquist-Estabrook form
\[\tau_{1}=\frac{u_{xxy}}{(v_{x}+\tfrac{1}{2}y)^{2}\,u_{xyy}}\left(dv-v_{x}\,dx- \left(\frac{u_{x}\,u_{xyy}-u_{y}\,u_{xxy}}{u_{xxy}}\,\left(v_{x}+\tfrac{1}{2} y\right)+u\right)\,dt\right.\\ \left.-\frac{1}{u_{xxy}}\,\left(u_{xyy}\,v_{x}+\frac{1}{2}\,(x\,u_{ xxy}+y\,u_{xyy})\right)\,dy\right)\]
of the Lax representation
\[\left\{\begin{array}{rcl}v_{t}&=&\frac{u_{x}\,u_{xyy}-u_{y}\,u_{xxy}}{u_{xxy} }\,\left(v_{x}+\tfrac{1}{2}y\right)+u,\\ v_{y}&=&\frac{1}{u_{xxy}}\,\left(u_{xyy}\,v_{x}+\frac{1}{2}\,(x\,u_{xxy}+y\,u_{ xyy})\right)\end{array}\right.\]
for equation (3.1). We can write this system in the form
\[\left\{\begin{array}{rcl}v_{t}&=&[u,v]+u-\tfrac{1}{2}\,\mathrm{E}(u),\\ [\mathrm{D}(u),v]&=&\frac{1}{2}\,\mathrm{E}(\mathrm{D}(u)),\end{array}\right. \tag{4.2}\]
where \(\operatorname{E}(u)=x\,u_{x}+y\,u_{y}\). Direct computations show that the compatibility conditions for system (4.2) follow from equation (3.1). The inverse transformation to the change of variables from § 3.1 gives the Lax representation
\[\left\{\begin{array}{rcl}v_{t}&=&[u,v]+u-\frac{1}{2}\operatorname{E}(u),\\ \left[\Delta u,v\right]&=&\frac{1}{2}\operatorname{E}(\Delta(u))\end{array}\right. \tag{4.3}\]
for equation (1.1).
To construct another Lax representation for equation (3.1) we apply the procedure of extension of a Lie algebra by appending an integral of a non-trivial \(1\)-cocycle, see Remark 2 in [28]. To this end we consider the linear combination
\[\tau_{2}=\sigma_{1}-\omega_{10}-\left(1-\frac{1}{q}\right)\,\omega_{20}.\]
Now the coefficient at \(\omega_{20}\) depends on the integral \(q\) of the \(1\)-cocycle \(\alpha_{0}\). We put
\[q=1+\left(v_{x}+\tfrac{1}{2}y\right)u_{xyy}\]
and obtain
\[\tau_{2}=\frac{u_{xxy}u_{xyy}}{\left(1+\left(v_{x}+\tfrac{1}{2}y\right)u_{xyy }\right)^{2}}\left(dv-v_{x}\,dx-\left(\frac{u_{x}\,u_{xyy}-u_{y}\,u_{xxy}}{u_{ xxy}}\,\left(v_{x}+\tfrac{1}{2}y\right)+u+\frac{u_{x}}{u_{xxy}}\right)\,dt\right.\]
\[\left.-\frac{1}{u_{xxy}}\,\left(u_{xyy}\,v_{x}+\frac{1}{2}\left(x\,u_{xxy}+y\, u_{xyy}\right)+1\right)\,dy\right).\]
This Wahlquist-Estabrook form defines the Lax representation
\[\left\{\begin{array}{rcl}v_{t}&=&\frac{u_{x}\,u_{xyy}-u_{y}\,u_{xxy}}{u_{ xxy}}\,\left(v_{x}+\tfrac{1}{2}y\right)+u+\frac{u_{x}}{u_{xxy}},\\ v_{y}&=&\frac{1}{u_{xxy}}\,\left(u_{xyy}\,v_{x}+\frac{1}{2}\left(x\,u_{xxy}+y\, u_{xyy}\right)+1\right),\end{array}\right.\]
or
\[\left\{\begin{array}{rcl}v_{t}&=&[u,v]+u-\frac{1}{2}\operatorname{E}(u),\\ \left[\operatorname{D}(u),v\right]&=&1+\frac{1}{2}\operatorname{E}(\operatorname {D}(u)).\end{array}\right. \tag{4.4}\]
The compatibility conditions of this system are consequences of equation (3.1).
### The non-removable parameter
The symmetry \(\psi_{0}=-t\,u_{t}-u\) does not admit a lift to a symmetry of equations (4.4). For the third prolongation \(\operatorname{pr}_{3}V\) of the associated vector field \(V=t\,\partial_{t}-u\,\partial_{u}\) we have
\[\exp(\varepsilon\operatorname{pr}_{3}V)^{*}\,\tau_{2}=\] \[\frac{u_{xxy}}{\left(v_{x}+\tfrac{1}{2}y\right)^{2}u_{xyy}}\left( dv-v_{x}\,dx-\left(\frac{u_{x}\,u_{xyy}-u_{y}\,u_{xxy}}{u_{xxy}}\,\left(v_{x}+ \tfrac{1}{2}y\right)+u+\frac{\operatorname{e}^{\varepsilon}u_{x}}{u_{xxy}} \right)\,dt\right.\] \[\left.-\frac{1}{u_{xxy}}\,\left(u_{xyy}\,v_{x}+\frac{1}{2}\left(x \,u_{xxy}+y\,u_{xyy}\right)+\operatorname{e}^{\varepsilon}\right)\,dy\right).\]
In accordance with [12, §§ 3.2, 3.6] and [10, 7, 20, 8], the parameter \(\varepsilon\) in the corresponding Lax representation
\[\left\{\begin{array}{rcl}v_{t}&=&[u,v]+u-\frac{1}{2}\operatorname{E}(u),\\ \left[\operatorname{D}(u),v\right]&=&\operatorname{e}^{\varepsilon}+\frac{1}{2 }\operatorname{E}(\operatorname{D}(u))\end{array}\right. \tag{4.5}\]
is non-removable, that is, differential coverings defined by system (4.5) with different constant values of \(\varepsilon\) are not equivalent. We put \(\lambda=\mathrm{e}^{-\varepsilon}\) and \(w=\lambda\,v\) in (4.5). This gives the Lax representation
\[\left\{\begin{array}{rcl}w_{t}&=&[u,w]+\lambda\,(u-\frac{1}{2}\,\mathrm{E}(u )),\\ \left[\mathrm{D}(u),w\right]&=&1+\frac{1}{2}\,\lambda\,\mathrm{E}(\mathrm{D}( u))\end{array}\right.\]
with the non-removable parameter \(\lambda\) for equation (3.1). The inverse transformation to the change of variables from § 3.1 gives the Lax representation
\[\left\{\begin{array}{rcl}w_{t}&=&[u,w]+\lambda\,(u-\frac{1}{2}\,\mathrm{E}(u )),\\ \left[\Delta u,w\right]&=&1+\frac{1}{2}\,\lambda\,\mathrm{E}(\Delta(u))\end{array}\right. \tag{4.6}\]
with the non-removable parameter for equation (1.1). The last system takes the form (1.3) as \(\lambda\to 0\).
## 5. Concluding remarks
We have shown that the technique of twisted extensions of Lie algebras proposed in [22]-[28] is useful for constructing new Lax representations of the 2D Euler equation, including a Lax representation with a non-removable parameter. We note that this result provides a new example of twisted extensions of Lie algebras that are not of the Kac-Moody type. We hope that our method will be applicable to other equations whose symmetry algebras have non-trivial second twisted cohomology groups and that are of importance in hydrodynamics, climate modelling, and magnetohydrodynamics. It is also interesting to check whether the obtained Lax representations can be employed to study the Euler equation, in particular, to find Darboux transformations or special Bäcklund transformations and to generate new interesting solutions. We intend to address these issues in future work.
## Acknowledgments
I am very grateful to I.S. Krasil\({}^{\prime}\)shchik for important discussions. I thank M.V. Pavlov for useful remarks.
Computations of the generators of the symmetry algebra of equation (3.1) were done using the Jets software [3].
|
2307.04345 | Continual Learning as Computationally Constrained Reinforcement Learning | An agent that efficiently accumulates knowledge to develop increasingly
sophisticated skills over a long lifetime could advance the frontier of
artificial intelligence capabilities. The design of such agents, which remains
a long-standing challenge of artificial intelligence, is addressed by the
subject of continual learning. This monograph clarifies and formalizes concepts
of continual learning, introducing a framework and set of tools to stimulate
further research. | Saurabh Kumar, Henrik Marklund, Ashish Rao, Yifan Zhu, Hong Jun Jeon, Yueyang Liu, Benjamin Van Roy | 2023-07-10T05:06:41Z | http://arxiv.org/abs/2307.04345v2 | # Continual Learning as
###### Abstract
An agent that accumulates knowledge to develop increasingly sophisticated skills over a long lifetime could advance the frontier of artificial intelligence capabilities. The design of such agents, which remains a long-standing challenge, is addressed by the subject of continual learning. This monograph clarifies and formalizes concepts of continual learning, introducing a framework and tools to stimulate further research. We also present a range of empirical case studies to illustrate the roles of forgetting, relearning, exploration, and auxiliary learning.
Metrics presented in previous literature for evaluating continual learning agents tend to focus on particular behaviors that are deemed desirable, such as avoiding catastrophic forgetting, retaining plasticity, relearning quickly, and maintaining low memory or compute footprints. In order to systematically reason about design and compare agents, a coherent, holistic objective that encompasses all such requirements would be helpful. To provide such an objective, we cast continual learning as reinforcement learning with limited compute resources. In particular, we pose the continual learning objective to be the maximization of infinite-horizon average reward subject to a computational constraint. Continual supervised learning, for example, is a special case of our general formulation where the reward is taken to be negative log-loss or accuracy. Among implications of maximizing average reward are that remembering all information from the past is unnecessary, forgetting non-recurring information is not "catastrophic," and learning about how an environment changes over time is useful.
Computational constraints give rise to informational constraints in the sense that they limit the amount of information used to make decisions. A consequence is that, unlike more traditional framings of machine learning in which per-timestep regret vanishes as an agent accumulates information, the regret experienced in continual learning typically persists. Related to this is that a stationary environment can appear nonstationary due to informational constraints, creating a need for perpetual adaptation. Informational constraints also give rise to the familiar stability-plasticity dilemma, which we formalize in information-theoretic terms.
###### Contents
* 1 Introduction
* 2 An Objective for Continual Learning
* 2.1 Continual Interaction
* 2.2 Average Reward
* 2.3 Continual Learning Agents
* 2.3.1 Tracking
* 2.3.2 Exploration
* 2.3.3 Delayed Consequences
* 2.3.4 Supervised Learning
* 2.4 Computational Constraints
* 2.5 Learning Complex Skills over a Long Lifetime
* Summary
* 3 Agent State and Information Capacity
* 3.1 Agent State
* 3.2 Information Content
* 3.3 Information Capacity
* 3.4 Performance versus Information Capacity
* 3.4.1 Prediction Error
* 3.4.2 Informational Error Quantifies Absent Information
* 3.4.3 Information Capacity Constrains Performance
* 4 Vanishing-Regret Versus Continual Learning
* 4.1 Vanishing-Regret Learning
* 4.1.1 Learning Targets, Target Policies, and Vanishing Regret
* 4.1.2 Regret Analysis
* 4.1.3 The Cosmic Learning Target
* 4.2 Continual Learning
* 4.2.1 Constraints Induce Persistent Regret
* 4.2.2 Nonstationary Learning Targets
* 4.3 On Learning About an Unknown Environment
* Summary
* 5 Stability Versus Plasticity
* 5.1 Stability-Plasticity Decomposition
* 5.2 A Didactic Example
* 5.2.1 LMS with an AR(1) Process
* 5.2.2 Constraining Information Content
* 5.2.3 Analysis
* 5.2.4 Stepsize Adaptation
* Summary
* 6 Case Studies
* 6.1 Continual Supervised Learning
* 6.1.1 Environment
* 6.1.2 Agents
* 6.1.3 Evaluation Protocol
* 6.1.4 Results
* 6.2 Continual Exploration
* 6.2.1 Exploration in Stationary Environments
* 6.2.2 Exploration in Nonstationary Environments
* 6.2.3 Coin Swapping Games
* 6.2.4 Experiments in AR(1) Bandits
* 6.3 Continual Learning from Delayed Consequences
* 6.4 Continual Auxiliary Learning
* 6.4.1 Methods
* 6.4.2 Results
* Summary
* 7 Conclusion
* A MDP Exchangeability
* B Capacity-Constrained LMS with an AR(1) Process
* B.1 Reparameterizing the Agent
* B.2 Plasticity and Forgetting Error
* B.3 The Optimal Learning Rate is Independent of Information Capacity
* B.4 Modifying IDBD to Account for Quantization Noise
* C Case Studies
* C.1 Continual Supervised Learning
* C.2 Continual Learning with Delayed Consequences
* C.3 Meta-Gradient Derivation for Continual Auxiliary Learning
## 1 Introduction
Continual learning remains a long-standing challenge. An agent that efficiently accumulates knowledge to develop increasingly sophisticated skills over a long lifetime could advance the frontier of artificial intelligence capabilities (Hadsell et al., 2020; Khetarpal et al., 2022; Ring, 2005; Thrun and Pratt, 1998). Success requires continuously ingesting new knowledge while retaining old knowledge that remains useful. Existing incremental machine learning techniques have failed to demonstrate this capability as they do not judiciously control what information they ingest, retain, or forget. Indeed, catastrophic forgetting (ejecting useful information from memory) and implasticity (forgoing useful new information) are recognized as obstacles to effective continual learning.
Traditional framings of machine learning view agents as acquiring knowledge about a fixed unknown latent variable. The aim is to develop methods that quickly learn about the latent variable, which we will refer to as the _learning target_, as data accumulates. For instance, in supervised learning, the learning target could be an unknown function mapping inputs to labels. In the reinforcement learning literature, the learning target is often taken to be the unknown transition matrix of a Markov decision process. With such traditional framings, an agent can be viewed as driving per-timestep regret - the performance shortfall relative to what could have been if the agent began with perfect knowledge of the learning target - to zero. If the agent is effective, regret vanishes as the agent accumulates knowledge. When regret becomes negligible, the agent is viewed as "done" with learning.
In contrast, continual learning addresses environments in which there may be no natural fixed learning target and an agent ought to never stop acquiring new knowledge. The need for continual learning can arise, for example, if properties of the agent's data stream appear to change over time. To perform well, an agent must constantly adapt its behavior in response to these evolving patterns.
There is a large gap between the state of the art in continual learning and what may be possible, making the subject ripe for innovation. This difference becomes evident when examining an approach in common use, which entails periodically training a new model from scratch on buffered data. To crystalize this, consider as a hypothetical example an agent designed to trade stocks. At the end of each month, this agent trains a new neural network model from scratch on data observed over the preceding twelve months. Replacing the old, this new model then governs decisions over the next month. This agent serves as a simple baseline that affords opportunity for improvement. For example, training each month's model from scratch is likely wasteful since it does not benefit from computation invested over previous months. Further, by limiting knowledge ingested by each model to that available from data acquired over the preceding twelve months, the agent forgoes the opportunity to acquire complex skills that might only be developed over a much longer duration.
While it ought to be possible to design more effective continual learning agents, how to go about that or even how to assess improvement remains unclear. Work on deep learning suggests that agent performance improves with increasing sizes of models, data sets, and inputs. However, computational resource requirements scale along with these and become prohibitive. A practically useful objective must account for computation. The primary goals of this monograph are to propose such an objective for continual learning and to understand key factors to consider in designing a performant continual learning agent. Rather than offer definitive methods, we aim to stimulate research toward identifying them.
Metrics presented in previous literature for evaluating continual learning agents tend to focus on particular behaviors that are deemed desirable, such as avoiding catastrophic forgetting, retaining plasticity, relearning quickly, and maintaining low memory or compute footprints (Ashley et al., 2021; Dohare et al., 2021; Fini et al., 2020; Kirkpatrick et al., 2017). For instance, the most common evaluation metric measures prediction accuracy on previously seen tasks to study how well an agent retains past information (Wang et al., 2023). However, the extent to which each of these behaviors matters is unclear. In order to systematically reason about design decisions and compare agents, a coherent, holistic objective that reflects and encompasses all such requirements would be helpful.
In this paper, we view continual learning through the lens of reinforcement learning (Agarwal et al., 2019; Bertsekas and Tsitsiklis, 1996; Meyn, 2022; Sutton and Barto, 2018; Szepesvari, 2010) to provide a formalism for what an agent is expected to accomplish. Specifically, we consider maximization of infinite-horizon average reward subject to a computational constraint. Average reward emphasizes long-term performance, which is suitable for the purpose of designing long-lived agents. The notion of maximizing average reward generalizes that of online average accuracy, as used in some literature on continual supervised learning (Cai et al., 2021; Ghunaim et al., 2023; Hammoud et al., 2023; Hu et al., 2022; Lin et al., 2021; Prabhu et al., 2023a; Xu et al., 2022).
As reflected by average reward, an agent should aim to perform well on an ongoing basis in the face of incoming
data it receives from the environment. Importantly, remembering all information from the past is unnecessary, and forgetting non-recurring information is not "catastrophic." An agent can perform well by remembering the subset that continues to remain useful. Although our objective relaxes the requirement of retaining all information to only retaining information useful in the future, even this remains difficult, or even impossible, in practice. Computational resources limit an agent's capacity to retain and process information. The computational constraint in our continual learning objective reflects this gating factor. This is in line with recent work highlighting the need to consider computational costs in continual learning (Prabhu et al., 2023b).
The remainder of this monograph is organized as follows. In Section 2, we introduce our framing of continual learning as reinforcement learning with an objective of maximizing average reward subject to a computational constraint. In Section 3, we introduce information-theoretic tools inspired by Jeon et al. (2023); Lu et al. (2021) to offer a lens for studying agent behavior and performance. In Section 4, we interpret in these information-theoretic terms what it means for an agent to perpetually learn rather than drive regret to zero and be "done" with learning. This line of thought draws inspiration from Abel et al. (2023), who define a notion of convergence and associates continual learning with non-convergence. In Section 5, we formalize the concepts of stability and plasticity to enable coherent analysis of trade-offs between these conflicting goals. Finally, in Section 6, to highlight the implications of our continual learning objective, we study simulation results from a set of case studies. In the first case study, on _continual supervised learning_, we study the role of forgetting in relation to our continual learning objective. In the second and third case studies, on _continual exploration_ and _continual learning with delayed consequences_, we study implications of nonstationarity and our objective on how an agent ought to explore and learn from delayed consequences.
## 2 An Objective for Continual Learning
Continual learning affords the never-ending acquisition of skills and knowledge (Ring, 1994). An agent operating over an infinite time horizon can develop increasingly sophisticated skills, steadily building on what it learned earlier. On the other hand, due to computational resource constraints, as such an agent observes an ever-growing volume of data, it must forgo some skills to prioritize others. Designing a performant continual learning agent requires carefully trading off between these considerations. A suitable mathematical formulation of the design problem must account for that. While many metrics have been proposed in the literature, they have tended to focus on particular behaviors that are deemed desirable. A coherent, holistic objective would help researchers to systematically reason about design decisions and compare agents. In this section, we formulate such an objective in terms of computationally constrained reinforcement learning (RL).
The subject of RL addresses the design of agents that learn to achieve goals through interacting with an environment (Sutton and Barto, 2018). As we will explain, the _general_ RL formulation subsumes the many perspectives that appear in the continual learning literature. We review RL and its relation to continual learning, illustrating with examples. We will then highlight the critical role of computational constraints in capturing salient trade-offs that arise in continual learning. Imposing a computational constraint on the general RL formulation gives rise to a coherent objective for continual learning. Aside from formulating this objective, we reflect on several implications of framing continual learning in these terms.
### Continual Interaction
We consider continual interaction across a general agent-environment interface as illustrated in Figure 1. At each time step \(t=0,1,2,\ldots\), an agent executes an action \(A_{t}\) and then observes a response \(O_{t+1}\) produced by the environment. Actions take values in an action set \(\mathcal{A}\). Observations take values in an observation set \(\mathcal{O}\). The agent's experience through time \(t\) forms a sequence \(H_{t}=(A_{0},O_{1},A_{1},O_{2},\ldots,A_{t-1},O_{t})\), which we refer to as its _history_. We denote the set of possible histories by \(\mathcal{H}=\cup_{t=0}^{\infty}(\mathcal{A}\times\mathcal{O})^{t}\).
An environment is characterized by a triple \(\mathcal{E}=(\mathcal{A},\mathcal{O},\rho)\), where \(\rho\) is an observation probability distribution, which satisfies \(\rho(\cdot|H_{t},A_{t})=\mathbb{P}(O_{t+1}\in\cdot|H_{t},A_{t})\). From the agent designer's perspective, it is as though the environment samples \(O_{t+1}\) from \(\rho(\cdot|H_{t},A_{t})\).
The agent generates each action \(A_{t}\) based on the previous history \(H_{t}\). This behavior is characterized by a policy \(\pi\), for which \(\pi(\cdot|H_{t})=\mathbb{P}_{\pi}(A_{t}\in\cdot|H_{t})\). The subscript \(\pi\) indicates the policy under which this probability is calculated. We refer to this policy, which characterizes the agent's behavior, as the _agent policy_. While the agent may carry
out sophisticated computations to determine each action, from the environment's perspective, it is as though the agent simply samples \(A_{t}\) from \(\pi(\cdot|H_{t})\).
Note that we take event probabilities to represent uncertainty from the agent designer's perspective. For example, \(\mathbb{P}_{\pi}((A_{t},O_{t+1})\in\tilde{\mathcal{A}}\times\tilde{\mathcal{O}}| H_{t})\) is the designer's subjective assessment of the chance that the next action-observation pair will fall in a set \(\tilde{\mathcal{A}}\times\tilde{\mathcal{O}}\) conditioned on \(H_{t}\), given that the agent implements \(\pi\). We take the function \(\rho\) to be deterministic, or equivalently, known to the designer. Note that the fact that \(\rho\) is deterministic does not mean observations are fully determined by history and action. Rather, \(\mathbb{P}(O_{t+1}\in\tilde{\mathcal{O}}|H_{t},A_{t})=\rho(\tilde{\mathcal{O }}|H_{t},A_{t})\) can lie between zero and one. With some abuse of notation, as shorthand, with singleton sets \(\tilde{\mathcal{A}}\) and \(\tilde{\mathcal{O}}\), we write \(\pi(a|h)\equiv\pi(\{a\}|h)\) and \(\rho(o|h,a)\equiv\rho(\{o\}|h,a)\). The following simple example offers a concrete instantiation of our formulation and notation.
**Example 1**.: **(coin tossing)** _Consider an environment with two actions \(\mathcal{A}=\{1,2\}\), each of which identifies a distinct coin. At each time \(t\), an action \(A_{t}\) selects a coin to toss next. Observations are binary, meaning \(\mathcal{O}=\{0,1\}\), with \(O_{t+1}\) indicating whether the selected coin lands heads. The coin biases \((p_{1},p_{2})\) are independent but initially unknown, and the designer's uncertainty prescribes prior distributions. These environment dynamics are characterized by a function \(\rho\) for which \(\rho(1|h,a)\) is the probability that coin \(a\) lands heads, regardless of the history \(h\)._
As a concrete special case, suppose the prior distribution over each coin's bias is uniform over the unit interval. Then, at each time \(t\), each bias \(p_{a}\) is distributed beta\((\alpha_{t,a},\beta_{t,a})\), with parameters initialized at \(\alpha_{0,a}=\beta_{0,a}=1\) and updated according to \((\alpha_{t+1,A_{t}},\beta_{t+1,A_{t}})=(\alpha_{t,A_{t}}+O_{t+1},\beta_{t,A_{ t}}+1-O_{t+1})\). Hence, \(\rho(1|H_{t},A_{t})=\alpha_{t,A_{t}}/(\alpha_{t,A_{t}}+\beta_{t,A_{t}})\).
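For concreteness, the following Python sketch implements this bookkeeping for the uniform-prior case (the class and variable names are illustrative, not taken from the text): it maintains \((\alpha_{t,a},\beta_{t,a})\) for each coin and exposes the posterior predictive probability \(\rho(1|H_{t},A_{t})=\alpha_{t,A_{t}}/(\alpha_{t,A_{t}}+\beta_{t,A_{t}})\).

```python
import random

class CoinTossingPredictive:
    """Posterior predictive for Example 1 with uniform (Beta(1, 1)) priors."""

    def __init__(self, num_coins=2):
        # (alpha, beta) parameters of the Beta posterior over each coin's bias.
        self.alpha = [1.0] * num_coins
        self.beta = [1.0] * num_coins

    def prob_heads(self, a):
        # rho(1 | history, a): posterior predictive probability that coin a lands heads.
        return self.alpha[a] / (self.alpha[a] + self.beta[a])

    def update(self, a, outcome):
        # Conjugate update after observing outcome in {0, 1} for coin a.
        self.alpha[a] += outcome
        self.beta[a] += 1 - outcome

# Usage: toss coin 0 repeatedly and watch the predictive probability adapt.
model, rng, true_bias = CoinTossingPredictive(), random.Random(0), 0.7
for _ in range(100):
    model.update(0, 1 if rng.random() < true_bias else 0)
print(round(model.prob_heads(0), 3))  # close to 0.7 after 100 tosses
```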
The form of interaction we consider is fully general; each observation can exhibit _any_ sort of dependence on history. Notably, we do not assume that observations identify the state of the environment. While such an assumption - that the "environment is MDP," so to speak - is common to much of the RL literature (Sutton and Barto, 2018), a number of researchers have advocated for the general action-observation interface, especially when treating design of generalist agents for complex environments (Daswani et al., 2013, 2014; Dong et al., 2022; Hutter, 2007; Lu et al., 2021; McCallum, 1995; Ring, 1994, 2005).
Note that we characterize the environment via a deterministic function \(\rho\). Hence, the environment is known to the agent designer. This is in contrast to common formulations that treat the environment as unknown. As we will explain in Section 4, it is often more natural to treat environments of the sort that call for continual learning as known. Further, characterizing the environment as known never sacrifices generality, even when it is natural to treat the environment as unknown. In particular, observations generated by an unknown (random) function \(\tilde{\rho}\) are indistinguishable from those generated by a known (deterministic) function \(\rho\) for which \(\rho(\cdot|H_{t},A_{t})=\mathbb{E}[\tilde{\rho}(\cdot|H_{t},A_{t})|H_{t},A_{t}]\). To make this concrete, consider the environment of Example 1, which is characterized by a random function \(\tilde{\rho}\), defined by \(\tilde{\rho}(1|h,a)=p_{a}\), for each \(h\) and \(a\), together with a prior distribution over \(\tilde{\rho}\). The prior distribution over \(\tilde{\rho}\) is induced by a prior distribution over the coin biases \((p_{1},p_{2})\). This environment can alternatively be characterized by the deterministic function \(\rho\) for which \(\rho(1|H_{t},A_{t})=\mathbb{E}[\tilde{\rho}(1|H_{t},A_{t})|H_{t},A_{t}]\). In this context, where we marginalize over an unknown latent variable \(\tilde{\rho}\), the deterministic function \(\rho(\cdot|H_{t},A_{t})\) is the _posterior predictive distribution_. If, for example, the prior distribution over coin biases is uniform, then \(\rho(1|h,a)=\frac{\alpha_{a,t}}{\alpha_{a,t}+\beta_{a,t}}=1-\rho(0|h,a)\) for each \(h\) and \(a\).
Figure 1: The agent-environment interface.
Finally, there is a possibly subtle point that the coin tossing example highlights. The observation probability function for the coin tossing example is \(\rho(1|h,a)=\frac{\alpha_{a,t}}{\alpha_{a,t}+\beta_{a,t}}\), which means that, from the perspective of the agent, the outcomes of the coin tosses are not iid. Indeed, if they were iid, there would be nothing to learn. Data generating processes that the literature refers to as iid are typically iid only if conditioned on a latent random variable. For instance, in supervised learning, the data is typically assumed to be iid conditioned on an unknown function, with the input distribution taken as known. This is why the observation probability function \(\rho\), which does not depend on any latent variable, prescribes a non-iid distribution over outcomes.
### Average Reward
The agent designer's preferences are expressed through a reward function \(r:\mathcal{H}\times\mathcal{A}\times\mathcal{O}\rightarrow\mathbb{R}\). The agent computes a reward \(R_{t+1}=r(H_{t},A_{t},O_{t+1})\) at each time. As is customary to the RL literature, this reward indicates whether the agent is achieving its purpose, or goals.
A coherent objective requires trading off between short and long term rewards because decisions expected to increase reward at one point in time may reduce reward at another. As a continual learning agent engages in a never-ending process, a suitable objective ought to emphasize _the long game_. In other words, a performant continual learning agent should attain high expected rewards over asymptotically long horizons. This behavior is incentivized by an average reward objective function:
\[\overline{r}_{\pi}=\liminf_{T\rightarrow\infty}\mathbb{E}_{\pi}\left[\frac{1 }{T}\sum_{t=0}^{T-1}R_{t+1}\right]. \tag{1}\]
Analogously with \(\mathbb{P}_{\pi}\), the subscript of \(\mathbb{E}_{\pi}\) indicates the policy under which this expectation is calculated. We will frame the goal of agent design to be maximizing average reward subject to a computation constraint that we will later introduce and motivate. The expectation integrates with respect to probability distributions prescribed by \(\pi\) and \(\rho\). In particular, \(\pi\) provides each next action distribution, conditioned on history, while \(\rho\) provides each next observation distribution, conditioned on history and action. The probability distribution of a history \(H_{t}\) is the product of these conditional probabilities across actions and observations through time \(t\), and \(H_{t}\) determines rewards received through that time.
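To make this operational, one can estimate \(\overline{r}_{\pi}\) by simulating the interaction loop for a large horizon \(T\). A minimal Python sketch (the function names and signatures are illustrative, not prescribed by the text):

```python
def average_reward(policy, rho, reward_fn, T):
    """Empirical average reward of `policy` over T interactions with `rho`.

    policy(history) -> action                samples A_t ~ pi(.|H_t)
    rho(history, action) -> observation      samples O_{t+1} ~ rho(.|H_t, A_t)
    reward_fn(history, action, observation) -> float
    """
    history = []   # H_t stored as a list of (action, observation) pairs
    total = 0.0
    for _ in range(T):
        action = policy(history)
        obs = rho(history, action)
        total += reward_fn(history, action, obs)
        history.append((action, obs))
    return total / T
```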
Average reward may seem like a strange choice of objective for assessing agent performance in an environment of the kind described in Example 1. In particular, many policies, some of which learn quickly and some of which learn extremely slowly, converge on the better coin and thus attain the maximal average reward. The fact that this objective cannot discriminate between these policies poses a limitation. However, in the realm of continual learning, such an environment is degenerate. Environments of interest are those in which an agent ought to continually acquire new useful information rather than effectively complete its learning after some time. Indeed, the environment of Example 1 is stationary, whereas the subject of continual learning aims to address nonstationarity. A simple modification of Example 1 gives rise to a nonstationary environment.
**Example 2**.: **(coin swapping)** _Recall the environment of Example 1, but suppose that, at each time \(t\), before action \(A_{t}\) is executed, each coin \(a\) is replaced by a new coin with some fixed probability \(q_{a}\). Coin replacement events are independent, and each new coin's bias is independently sampled from its prior distribution. With this change, biases of the two available coins can vary over time. Hence, we introduce time indices and denote biases by \((p_{t,1},p_{t,2})\)._
Agents that learn quickly attain higher average reward in nonstationary environments such as this one. In particular, there is benefit to quickly learning about coin biases and capitalizing on that knowledge for as long as possible before the coins are replaced.
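A minimal simulator of these dynamics may help fix ideas. In the Python sketch below, the prior over biases is taken to be uniform and, for illustration only, the reward is assumed to be the observed outcome; these choices are ours, not prescribed by the example.

```python
import random

def coin_swapping_step(biases, swap_probs, action, rng):
    """One step of Example 2: possibly replace each coin, then toss the chosen one."""
    for a, q in enumerate(swap_probs):
        if rng.random() < q:
            biases[a] = rng.random()  # new coin, bias drawn from the uniform prior
    return 1 if rng.random() < biases[action] else 0

rng = random.Random(0)
biases, swap_probs = [rng.random(), rng.random()], [0.01, 0.01]
total = 0
for t in range(10_000):
    total += coin_swapping_step(biases, swap_probs, action=0, rng=rng)
print(total / 10_000)  # empirical average reward of a fixed "always the first coin" policy
```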
While some work on continual learning has advocated for average reward as an objective (Chen et al., 2022; Sharma et al., 2021), discounted reward has attracted greater attention (Khetarpal et al., 2022; Ring, 2005). Perhaps this is due to the technical burden associated with design and analysis of agents that aim to maximize average reward. We find average reward to better suit the spirit of continual learning as it emphasizes long-term performance. Continual learning affords the possibility of learning very sophisticated skills that build on experience accumulated over a long lifetime, and emphasizing long-term behavior incentivizes design of agents that learn such skills if possible.
The continual learning literature has spawned a multitude of other metrics for assessing agent performance. Prominent criteria tend to focus on detecting particular behaviors rather than holistic evaluation. Examples include the ability to perform an old task well after learning a new task, the ability to more quickly learn to perform a new task, and better performance on a new task before gathering enough data to learn new skills (Vallabha
and Markowitz, 2022). Average reward subsumes such criteria when they are relevant to online operation over an infinite time horizon. For example, an agent's initial performance on a new task as well as the time required to learn to do better impact average reward. On the other hand, the ability to perform well on an old task that will never reappear is irrelevant, and average reward is appropriately insensitive to that.
Assessing performance in terms of average reward gives rise to additional intriguing implications. First, even if the agent forgets recurring information, the agent can do well if it can relearn that quickly when needed. This is intuitive: a competent software engineer who forgets a programming language can, when required for a new year-long project, quickly relearn it and successfully complete the project. Second, the agent can benefit from predicting changing patterns. Modeling dynamics can help the agent decide what skills to retain. For instance, certain recurrence is periodic and therefore predictable, like queries about ice cream during the summer. Something not recurring may also be predictable such as when an elected official finishes their term and steps out of the limelight. An agent could in principle learn to predict future events to prioritize skills. Our theoretical and empirical analyses will further elucidate on these implications of the average reward objective.
### Continual Learning Agents
In our abstract formulation, a continual learning agent is characterized by an agent policy \(\pi\), which specifies the conditional distribution of each action \(A_{t}\sim\pi(\cdot|H_{t})\). To crystalize this notion, in this section we describe a few specific instances as concrete examples of agents. In each instance we present the interface \((\mathcal{A},\mathcal{O})\) for which the agent is designed, the algorithm the agent implements to compute each action \(A_{t}\), and an environment in which the agent ought to be effective.
#### 2.3.1 Tracking
Suppose observations \(O_{1},O_{2},O_{3},\ldots\) are noisy measurements of a latent stochastic process. For example, each could be a thermostat reading, which is not exactly equal to the current temperature. A tracking agent generates estimates \(A_{t}\) of the latent process, each of which can be viewed as predictions of \(O_{t+1}\). The least mean squares (LMS) algorithm implements a simple tracking agent. While the algorithm more broadly applies to vector-valued observations, to start simple, we consider only the scalar case.
**Example 3**.: **(scalar LMS)** _This agent interacts with an environment through real-valued actions and observations: \(\mathcal{A}=\mathcal{O}=\mathbb{R}\). Each action \(A_{t}\) represents a prediction of the next observation \(O_{t+1}\), and to penalize errors, the reward function is taken to be negative squared error: \(r(H_{t},A_{t},O_{t+1})=-(O_{t+1}-A_{t})^{2}\). Initialized with \(\mu_{0}\in\mathbb{R}\), the agent updates this parameter according to_
\[\mu_{t+1}=\eta\mu_{t}+\alpha(O_{t+1}-\eta\mu_{t}),\]
_where \(\eta\in[0,1]\) and \(\alpha\in[0,1]\) are shrinkage and stepsize hyperparameters. The agent executes actions \(A_{t}=\eta\mu_{t}\)._
The LMS algorithm is designed to track a latent process that generates observations. Let us offer an example of an environment driven by such a process, for which the algorithm is ideally-suited. Consider a random sequence \((\theta_{t}:t\in\mathbb{Z}_{+})\), with the initial latent variable \(\theta_{0}\) distributed according to a prior \(\mathcal{N}(\mu_{0},\Sigma_{0})\) and updated according to \(\theta_{t+1}=\eta\theta_{t}+Z_{t+1}\), with each process perturbation \(Z_{t+1}\) independently sampled from \(\mathcal{N}(0,\zeta^{2})\). Further, suppose that \(O_{t+1}=\theta_{t+1}+W_{t+1}\), with each observation noise sample \(W_{t+1}\) drawn independently from \(\mathcal{N}(0,\sigma^{2})\).
What we have described provides a way of generating each observation \(O_{t+1}\). We have thus fully characterized an environment \(\mathcal{E}=(\mathcal{A},\mathcal{O},\rho)\). Note that we are using the parameter \(\eta\) for two purposes. First, it is a hyperparameter of the agent described in Example 3. Second, it serves in the specification of this hypothetical environment that we frame as one for which the agent is ideally suited. For this environment, with an optimal choice of stepsize, the LMS algorithm attains maximal average reward over all agent designs. Figure 2(a) plots average reward as a function of stepsize. The tracking behavior in Figure 2(b) is attained by the optimal stepsize.
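The following Python sketch simulates this agent-environment pair and estimates its average reward; the environment hyperparameters match the caption of Figure 2, while the stepsize value is illustrative rather than optimal.

```python
import random

def simulate_lms(T=10_000, eta=0.9, zeta=0.5, sigma=1.0, alpha=0.3, seed=0):
    """Run the scalar LMS agent of Example 3 against the AR(1) latent process."""
    rng = random.Random(seed)
    theta = rng.gauss(0.0, 1.0)  # latent state, prior N(mu_0 = 0, Sigma_0 = 1)
    mu = 0.0                     # agent parameter
    total_reward = 0.0
    for _ in range(T):
        prediction = eta * mu                       # action A_t
        theta = eta * theta + rng.gauss(0.0, zeta)  # latent process step
        obs = theta + rng.gauss(0.0, sigma)         # observation O_{t+1}
        total_reward += -(obs - prediction) ** 2    # reward: negative squared error
        mu = eta * mu + alpha * (obs - eta * mu)    # LMS update
    return total_reward / T

# Sweeping alpha over [0, 1] reproduces the qualitative shape of Figure 2(a).
print(simulate_lms(alpha=0.3))
```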
#### 2.3.2 Exploration
In Example 3, actions do not impact observations. The following agent updates parameters in a similar incremental manner but then uses them to select actions that determine what information is revealed through observations.
**Example 4**.: **(nonstationary Gaussian Thompson sampling)** _This agent interfaces with a finite action set \(\mathcal{A}\) and real-valued observations \(\mathcal{O}=\mathbb{R}\). The reward is taken to be the observation, so that
\(R_{t+1}=O_{t+1}\). Initialized with \(\mu_{0}\in\mathbb{R}^{\mathcal{A}}\), the agent updates parameters according to_
\[\mu_{t+1,a}=\left\{\begin{array}{ll}\eta\mu_{t,a}+\alpha_{t+1}(O_{t+1}-\eta \mu_{t,a})&\qquad\text{if }a=A_{t}\\ \eta\mu_{t,a}&\qquad\text{otherwise}.\end{array}\right.\]
_Note that each observation only impacts the component of \(\mu_{t}\) indexed by the executed action. The stepsize varies with time according to \(\alpha_{t+1}=\Sigma_{t+1,A_{t}}/\sigma^{2}\), for a sequence initialized with a vector \(\Sigma_{0}\in\mathbb{R}^{\mathcal{A}}_{+}\) and updated according to_
\[\Sigma_{t+1,a}=\left\{\begin{array}{ll}\frac{1}{\frac{1}{\eta^{2}\Sigma_{t,a}+\zeta^{2}}+\frac{1}{\sigma^{2}}}&\qquad\text{if }a=A_{t}\\ \eta^{2}\Sigma_{t,a}+\zeta^{2}&\qquad\text{otherwise},\end{array}\right.\]
_where \(\eta\in\mathbb{R}\), \(\zeta\in\mathbb{R}_{+}\), \(\sigma\in\mathbb{R}_{+}\) are scalar hyperparameters. Each \(\hat{\theta}_{t,a}\) is drawn independently from \(\mathcal{N}(\mu_{t,a},\Sigma_{t,a})\). Then, \(A_{t}\) is sampled uniformly from the set \(\operatorname*{arg\,max}_{a\in\mathcal{A}}\hat{\theta}_{t,a}\)._
This agent is especially suited for a particular environment, which is characterized by latent random variables \(\theta_{a}\), each independent and initially distributed \(\mathcal{N}(\mu_{0,a},\Sigma_{0,a})\). Rewards and observations are generated according to \(R_{t+1}=O_{t+1}=\theta_{A_{t}}+W_{t+1}\), with each \(W_{t+1}\) sampled independently from \(\mathcal{N}(0,\sigma^{2})\). It is natural to think of this as a stationary environment because the latent mean rewards \(\theta_{a}\) do not change over time.
The environment we have described constitutes a _bandit_; this is a somewhat antiquated reference to a slot machine, which tends to "rob" players of their money. With multiple arms, as illustrated in Figure 3, a player chooses at each time an arm \(A_{t}\) and receives a payout \(R_{t+1}\). Because payout distributions are not listed, the player can learn them only by experimenting. As the gambler learns about each arm's payouts, they face a dilemma: in the immediate future they expect to earn more by exploiting arms that yielded high payouts in the past, but by continuing to explore alternative arms they may learn how to earn higher payouts in the future.
Figure 3: A two-armed bandit.
Figure 2: Tracking a latent scalar process using the LMS algorithm. The plots are generated with \(\mu_{0}=0\), \(\Sigma_{0}=1\), \(\eta=0.9\), \(\zeta=0.5\), and \(\sigma=1\).
In the stationary environment we have described, the distribution for arm \(a\) is Gaussian with unknown mean \(\theta_{a}\), which can be learned by observing payouts. For this environment, if \(\eta=1\) and \(\zeta=0\), the agent can be interpreted as selecting actions via Thompson sampling (Russo et al., 2018; Thompson, 1933). In particular, at each time \(t\), the posterior distribution of \(\theta_{a}\) is \(\mathcal{N}(\mu_{t,a},\Sigma_{t,a})\), and the agent samples statistically plausible estimates \(\hat{\theta}_{t,a}\) from this distribution, then executes an action that maximizes among them.
If \(\eta\in(-1,1)\) and \(\zeta>0\), on the other hand, the environment is considered a nonstationary bandit because the arm payout distributions vary over time. In particular, the environment is characterized by latent random variables updated according to \(\theta_{t+1,a}=\eta\theta_{t,a}+Z_{t+1,a}\), with each \(Z_{t+1,a}\) independently sampled from \(\mathcal{N}(0,\zeta^{2})\). The agent maintains parameters of the posterior distribution of \(\theta_{t,a}\), which is \(\mathcal{N}(\mu_{t,a},\Sigma_{t,a})\). For each action \(a\), the agent samples \(\hat{\theta}_{t,a}\) from the posterior distribution. The agent then executes an action that maximizes among these samples.
This nonstationary Gaussian Thompson sampling agent may be applied in other environments as well, whether or not observations are driven by Gaussian processes. For example, with a suitable choice of \(\zeta\), it may exhibit reasonable behavior in the coin swapping environment of Example 2. However, the agent does not adequately address environments in which actions induce delayed consequences.
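For reference, a direct Python transcription of the updates in Example 4 might look as follows (a sketch; class and method names are ours, and ties in the argmax are broken by order rather than uniformly, which is immaterial for continuous samples).

```python
import random

class NonstationaryGaussianTS:
    """Nonstationary Gaussian Thompson sampling, following Example 4."""

    def __init__(self, num_actions, eta, zeta, sigma, mu0=0.0, var0=1.0, seed=0):
        self.eta, self.zeta, self.sigma = eta, zeta, sigma
        self.mu = [mu0] * num_actions    # posterior means mu_{t,a}
        self.var = [var0] * num_actions  # posterior variances Sigma_{t,a}
        self.rng = random.Random(seed)

    def act(self):
        # Sample a plausible mean reward for each action; act greedily on the samples.
        samples = [self.rng.gauss(m, v ** 0.5) for m, v in zip(self.mu, self.var)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, action, obs):
        for a in range(len(self.mu)):
            prior_var = self.eta ** 2 * self.var[a] + self.zeta ** 2
            if a == action:
                post_var = 1.0 / (1.0 / prior_var + 1.0 / self.sigma ** 2)
                step = post_var / self.sigma ** 2  # alpha_{t+1} = Sigma_{t+1,a} / sigma^2
                self.mu[a] = self.eta * self.mu[a] + step * (obs - self.eta * self.mu[a])
                self.var[a] = post_var
            else:
                self.mu[a] = self.eta * self.mu[a]
                self.var[a] = prior_var
```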
#### 2.3.3 Delayed Consequences
The agent of Example 4 maintains parameters \(\mu_{t}\) that serve the prediction \(\eta\mu_{t,A_{t}}\) of the expected immediate reward \(R_{t+1}\). These predictions can guide the agent to select actions that earn high immediate reward. To learn from and manage delayed rewards, the Q-learning algorithm (Watkins, 1989) instead maintains predictions \(Q(S_{t},A_{t})\) of the expected discounted return \(\sum_{k=0}^{\infty}\gamma^{k}R_{t+k+1}\). In addition to the action \(A_{t}\), such predictions are conditioned on a situational state \(S_{t}\), which provides context for the agent's decision. The following example elaborates.
**Example 5**.: **(optimistic Q-learning)** _This agent is designed to interface with any observation set \(\mathcal{O}\) and any finite action set \(\mathcal{A}\). It maintains an action value function \(Q_{t}\), which maps the situational state \(S_{t}\), which takes values in a finite set \(\mathcal{S}\), and action \(A_{t}\) to a real value \(Q_{t}(S_{t},A_{t})\). The situational state \(S_{t}\) represents features of the history \(H_{t}\) that are predictive of future value. It is determined by an update function \(f\), which forms part of the agent design, according to_
\[S_{t+1}=f(S_{t},A_{t},O_{t+1}),\]
_and substitutes for history in the computation of rewards, which take the form \(R_{t+1}=r(S_{t},A_{t},S_{t+1})\). We consider an optimistic version of Q-learning, which updates the action value function according to_
\[Q_{t+1}(s,a)=\left\{\begin{array}{ll}Q_{t}(S_{t},A_{t})+\alpha\left(R_{t+1} +\gamma\max_{a\in\mathcal{A}}Q_{t}(S_{t+1},a)-Q_{t}(S_{t},A_{t})\right)+ \zeta&\text{if }(s,a)=(S_{t},A_{t})\\ Q_{t}(s,a)+\zeta&\text{otherwise.}\end{array}\right.\]
_Hyperparameters include a discount factor \(\gamma\in(0,1)\), a stepsize \(\alpha\in(0,1)\), and an optimistic boost \(\zeta\in\mathbb{R}_{+}\). Each action \(A_{t}\) is sampled uniformly from \(\arg\max_{a\in\mathcal{A}}Q_{t}(S_{t},a)\)._
To interpret this algorithm, consider situational state dynamics generated by a latent MDP, identified by a tuple \((\mathcal{S},\mathcal{A},r,P,S_{0})\). Here, \(P\) is a random variable that specifies transition probabilities \(P_{a,s,s^{\prime}}\) for actions \(a\in\mathcal{A}\) and situational states \(s,s^{\prime}\in\mathcal{S}\). In particular, \(\mathbb{P}(S_{t+1}|P,A_{t},S_{t})=P_{A_{t},S_{t},S_{t+1}}\) for all \(t\). If the hyperparameters \(\gamma\), \(\alpha\), and \(\zeta\) anneal over time at suitable rates to 1, 0, and 0, along the lines of a more sophisticated version of Q-learning analyzed by Dong et al. (2022), the sequence \((Q_{t}:t\in\mathbb{Z}_{+})\) should converge to the optimal action value function of the MDP. By the same token, the agent would attain optimal average reward.
Versions of Q-learning permeate the RL literature. The one we have described presents some features that are worth discussing in relation to continual learning. First, action values are boosted by exploration bonuses, as considered by Sutton (1990), but in a manner that incentivizes visits to state-action pairs that have not been visited for a long time even if they have already been visited many times. Second, unlike common treatments that assume the aforementioned Markov property, our algorithm applies more broadly. It suffices for \(S_{t}\) to enable useful predictions of value that guide effective decisions rather than approximate the state of the environment, which could be far more complex. This suits the spirit of continual learning, since the subject is oriented toward addressing very complex environments. Another noteworthy feature is that the hyperparameters \(\gamma\), \(\alpha\), and \(\zeta\) are fixed. If the situational state dynamics were stationary, annealing hyperparameters over time would be beneficial. However, as we will further discuss in Section 6.3, fixed parameters fare better in the face of nonstationarity.
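A Python sketch of the optimistic Q-learning update of Example 5, assuming finite sets of situational states and actions supplied by the designer (names and default hyperparameter values are illustrative):

```python
import random

class OptimisticQLearning:
    """Optimistic Q-learning (Example 5) with fixed gamma, alpha, and boost zeta."""

    def __init__(self, states, actions, gamma=0.95, alpha=0.1, zeta=1e-3, seed=0):
        self.states, self.actions = list(states), list(actions)
        self.gamma, self.alpha, self.zeta = gamma, alpha, zeta
        self.q = {(s, a): 0.0 for s in self.states for a in self.actions}
        self.rng = random.Random(seed)

    def act(self, state):
        # Sample uniformly among actions that maximize Q_t(state, .).
        best = max(self.q[(state, a)] for a in self.actions)
        return self.rng.choice([a for a in self.actions if self.q[(state, a)] == best])

    def update(self, state, action, reward, next_state):
        # TD update for the executed state-action pair ...
        target = reward + self.gamma * max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
        # ... then add the optimistic boost zeta to every entry, which favours
        # pairs that have not been visited (and hence not corrected) recently.
        for key in self.q:
            self.q[key] += self.zeta
```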
#### 2.3.4 Supervised Learning
Much of the literature on continual learning focuses on supervised learning (SL). As noted by Khetarpal et al. (2022), continual SL is a special case of reinforcement learning. In particular, take each observation to be a data pair \(O_{t}=(Y_{t},X_{t})\), consisting of a label \(Y_{t}\) assigned to the previous input \(X_{t-1}\) and a next input \(X_{t}\). Labels take values in a finite set \(\mathcal{Y}\) and inputs take values in a set \(\mathcal{X}\), which could be finite, countable, or uncountable. The set \(\mathcal{O}\) of observations is a product \(\mathcal{Y}\times\mathcal{X}\). Take each action \(A_{t}\) to be a predictive distribution \(P_{t}\), which assigns a probability \(P_{t}(y)\) to each label \(y\in\mathcal{Y}\). Hence, \(P_{t}\) takes values in a unit simplex \(\Delta_{\mathcal{Y}}\). We view \(P_{t}\) as a prediction of the label \(Y_{t+1}\) that will be assigned to the input \(X_{t}\). The observation probability function \(\rho\) samples the next data pair \(O_{t+1}=(Y_{t+1},X_{t+1})\) in a manner that depends on history only through past observations, not past actions. Finally, take the reward function to be \(r(H_{t},A_{t},O_{t+1})=\ln P_{t}(Y_{t+1})\). With this formulation, average reward is equivalent to average negative log-loss:
\[\overline{r}_{\pi}=\liminf_{T\rightarrow\infty}\mathbb{E}_{\pi}\left[\frac{1 }{T}\sum_{t=0}^{T-1}\ln P_{t}(Y_{t+1})\right]. \tag{2}\]
This is a common objective used in online supervised learning. The following agent is designed to minimize log-loss. Recall that \(\Delta_{\mathcal{Y}}\) denotes the unit simplex or, equivalently, the set of probability vectors with one component per element of \(\mathcal{Y}\).
**Example 6**.: **(incremental deep learning)** _Consider an input space \(\mathcal{X}\) and finite set of labels \(\mathcal{Y}\). This agent is designed to interface with actions \(\mathcal{A}=\Delta_{\mathcal{Y}}\) and observations \(\mathcal{O}=\mathcal{Y}\times\mathcal{X}\). Consider a deep neural network with a softmax output node. The inference process maps an input \(X_{t}\) to a predictive distribution \(P_{t}(\cdot)=f_{\theta_{t}}(\cdot|X_{t})\). Here, \(f\) is an abstract representation of the neural network and \(\theta_{t}\) is the vector of parameters (weights and biases) at time \(t\). Trained online via stochastic gradient descent (SGD) with a fixed stepsize to reduce log-loss, these parameters evolve according to_
\[\theta_{t+1}=\theta_{t}+\alpha\nabla\ln f_{\theta_{t}}(Y_{t+1}|X_{t}).\]
_This is a special case of our general reinforcement learning formulation, with action \(A_{t}=P_{t}\), observation \(O_{t+1}=(Y_{t+1},X_{t+1})\), and reward \(r(H_{t},A_{t},O_{t+1})=\ln P_{t}(Y_{t+1})\)._
Note that we could alternatively consider average accuracy as an objective by taking the action to be a label \(A_{t}=\hat{Y}_{t+1}\). This label could be generated, for example, by sampling uniformly from \(\arg\max_{y\in\mathcal{Y}}P_{t}(y)\). A reward function \(r(H_{t},A_{t},O_{t+1})=\mathbb{1}(Y_{t+1}=\hat{Y}_{t+1})\) can then be used to express accuracy. This is perhaps the objective most commonly used by applied deep learning researchers.
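To keep a runnable sketch self-contained, the neural network of Example 6 can be replaced by a linear softmax model; the online SGD-on-log-loss structure is identical (all names are illustrative, and the linear model is our simplification, not the example's architecture).

```python
import numpy as np

class OnlineSoftmaxLearner:
    """Online SGD on log-loss, mirroring Example 6 with a linear softmax model
    standing in for the neural network f_theta."""

    def __init__(self, input_dim, num_labels, alpha=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w = 0.01 * rng.standard_normal((num_labels, input_dim))
        self.alpha = alpha

    def predict(self, x):
        # Action A_t = P_t: predictive distribution over labels for input x.
        logits = self.w @ x
        p = np.exp(logits - logits.max())
        return p / p.sum()

    def update(self, x, y):
        # One SGD step increasing ln P_t(y); the reward at this step is ln P_t(y).
        p = self.predict(x)
        grad = -np.outer(p, x)   # gradient of ln softmax_y(Wx) with respect to W
        grad[y] += x
        self.w += self.alpha * grad
```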
The incremental deep learning agent is designed for a prototypical supervised learning environment, where the relationship between inputs and labels is characterized by a random latent function \(F\). In particular, \(F\perp X_{0:t}\) and \(\mathbb{P}(Y_{t+1}\in\cdot|F,H_{t})=F(\cdot|X_{t})\).
The agent can also be applied to a nonstationary supervised learning environment, where the latent function varies over time, taking the form of a stochastic process \((F_{t}:t\in\mathbb{Z}_{+})\). With this variation, \(F_{t}\perp X_{0:t}\) and \(\mathbb{P}(Y_{t+1}\in\cdot|F_{t},H_{t})=F_{t}(\cdot|X_{t})\). However, due to loss of plasticity, incremental deep learning does not perform as well in such an environment as one would hope (Dohare et al., 2021). In particular, while the nonstationarity makes it important for the agent to continually learn, its ability to learn from new data degrades over time. A simple alternative addresses this limitation by periodically replacing the model under use with a new one, trained from scratch on recent data.
**Example 7**.: **(model replacement)** _Given a neural network architecture and algorithm that trains the model on a fixed batch of \(N\) data pairs, one can design a continual supervised learning agent as follows. At each time \(t=0,\tau,2\tau,\ldots\), reinitialize the neural network parameters and train on \(N\) most recent data pairs \((X_{t-n},Y_{t+1-n}:n=1,\ldots,N)\). No further training occurs until time \(t+\tau\), when the model is reinitialized and retrained. Each prediction \(A_{t}=P_{t}\) is given by \(P_{t}(\cdot)=f_{\theta_{t}}(\cdot|X_{t})\), where \(\theta_{t}\) is the most recent model. In particular, \(\theta_{t+1}=\theta_{t}\) unless \(t\) is a multiple of \(\tau\). The hyperparameters \(\tau\) and \(N\) specify the replacement period and number of data pairs in each training batch._
This approach to continual learning is commonly used in production systems. For example, consider an agent that at the end of each \(t\)th day predicts the next day's average electricity spot price \(Y_{t+1}\). A prototypical system might, at the end of each month, initialize a neural network and train it on the preceding twelve months of data. Then, this model could be used over the subsequent month, at the end of which the next replacement arrives. The reason
for periodically replacing the model is that very recent data is most representative of future price patterns, which evolve with the changing electricity market.
The reason for not replacing the model more frequently is the cost of training. There are a couple reasons for training only on recent history, in this case over the past twelve months. One is that recent data tends to best represent patterns that will recur in the future. However, this does not in itself prevent use of more data; given sufficient computation, it may be beneficial to train on all history, with data pairs suitably weighted to prioritize based on recency. The binding constraint is on computation, which scales with the amount of training data.
While model replacement is a common approach to continual learning, it is wasteful in and limited by its use of computational resources. In particular, each new model does not leverage computation invested in past models because it is trained from scratch. Developing an incremental training approach that affords a model benefits from all computation carried out since inception remains an important challenge to the field. Dohare et al. (2021) propose one approach. The subject remains an active area of research, motivated by limitations of the model replacement approach. These limitations also highlight the need for computational constraints in formulating a coherent objective for continual learning that incentivizes better agent designs.
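The model replacement scheme of Example 7 is easy to express as a wrapper around any batch training routine; in the Python sketch below, `train` and `initial_model` are placeholders supplied by the designer rather than components specified in the text.

```python
class ModelReplacementAgent:
    """Example 7: retrain from scratch on the N most recent pairs every tau steps."""

    def __init__(self, train, initial_model, tau=1000, N=10_000):
        self.train = train          # train(data) -> freshly initialized, fitted model
        self.model = initial_model  # callable: model(x) -> predictive distribution
        self.tau, self.N = tau, N
        self.buffer = []            # recent (input, label) pairs
        self.t = 0

    def predict(self, x):
        return self.model(x)        # P_t(.) = f_{theta_t}(. | x)

    def observe(self, x, y):
        self.buffer.append((x, y))
        self.buffer = self.buffer[-self.N:]  # keep only the N most recent pairs
        self.t += 1
        if self.t % self.tau == 0:
            # Periodic replacement: new parameters, trained from scratch on recent data.
            self.model = self.train(self.buffer)
```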
### Computational Constraints
Traditional machine learning paradigms such as supervised learning make use of fixed training datasets. A continual learning agent, on the other hand, processes an endless data stream. An agent with bounded computational resources, regardless of scale, cannot afford to query every data point in its history \(H_{t}\) at each time step because this dataset grows indefinitely. Instead, such an agent must act based on a more concise representation of historical information. In complex environments, this representation will generally forgo knowledge, some of which could have been used given greater computational resources. The notion that more compute ought to always be helpful in complex environments may be intuitively obvious. Nevertheless, it is worth pointing out corroboration by extensive empirical evidence from training large models on text corpora, where performance improves steadily along the range of feasible compute budgets (Brown et al., 2020; Hoffmann et al., 2022; Rae et al., 2022; Smith et al., 2022; Thoppilan et al., 2022).
In order to reflect the gating nature of computational resources in continual learning, we introduce a per-timestep computation constraint, as considered, for example, by Bagus et al. (2022); Lesort (2020); Prabhu et al. (2023). This specializes the continual learning problem formulation of Abel et al. (2023), which recognizes that constraints on the set of feasible agent policies give rise to continual learning behavior but does not focus on computation as the gating resource. As an objective for continual learning we propose maximization of average reward subject to this constraint on per-timestep computation. We believe that such a constraint is what gives rise to salient challenges of continual learning, such as catastrophic forgetting and loss of plasticity. Our theoretical analysis and empirical case studies will serve to motivate and justify this view.
Constraining per time step compute gives rise to our formulation of computation-constrained reinforcement learning:
\[\begin{split}\max_{\pi}&\quad\overline{r}_{\pi}\\ \text{s.t.}&\text{computational constraint.}\end{split} \tag{3}\]
The nature of practical computational constraints, and which of them is binding, varies with prevailing technology and agent designs. For example, if calculations are carried out in parallel across an ample number of processors, it could be that the channels for communication among them pose the binding constraint on overall computation. Or if agents are designed to use a very large amount of computer memory, that can become the binding constraint. Rather than study the capabilities of contemporary computer technologies to accurately identify current constraints, our above formulation intentionally leaves the constraint ambiguous. For the purposes of theoretical analysis and case studies presented in the remainder of the paper, we will usually assume all computation is carried out on a single processor that can execute a fixed number of serial floating point operations per time step, and that this is the binding constraint. We believe that insights generated under this assumption will largely carry over to formulations involving other forms of computational constraints.
While maximizing average reward subject to a computation constraint offers a coherent objective, an exact solution even for simple, let alone complex, environments is likely to be intractable. Nevertheless, a coherent objective is valuable for assessing and comparing alternative agent designs. Indeed, algorithmic ingredients often embedded in
RL agents, such as Q-learning, neural networks, SGD, and exploration schemes such as optimism and Thompson sampling are helpful because they enable a favorable tradeoff between average reward and computation. Further, these ideas are scalable in the sense that they can leverage greater computational resources when available. As Sutton (2019) argues, with steady advances in computer technology, agent designs that naturally improve due to these advances will stand the test of time, while those that do not will phase out.
### Learning Complex Skills over a Long Lifetime
An aspiration of continual learning is to design agents that exhibit ever more complex skills, building on skills already developed (Ring, 1994). As opposed to paradigms that learn from a fixed data set, this aspiration is motivated by the continual growth of the agent's historical data set, and thus, information available to the agent. With this perspective, continual learning researchers often ask whether specific design ingredients are really needed or if they should be supplanted by superior skills that the agent can eventually learn. For example, should an agent implement a hard-coded exploration scheme or learn to explore? Or ought an agent apply SGD to update its parameters rather than learn its own adaptation algorithm?
More concretely, consider the agent of Example 6, which implements SGD with a fixed stepsize. The performance of such an agent may improve with a stepsize adaptation scheme such as incremental delta-bar-delta (IDBD), which produces stepsizes that vary across parameters and time (Sutton, 1992). With such a stepsize adaptation scheme, the agent can be seen as _learning how to learn_ more effectively than fixed stepsize SGD. However, IDBD requires maintaining and updating three times as many parameters, and thus, increased compute. If compute were not a constraint, one could take things even further and design a more complex update procedure that requires not only computing gradients but also Hessians. For neural networks of practical scale, the associated computational requirements would be egregious.
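To make the tradeoff concrete, here is a minimal sketch (our own illustration, only loosely following Sutton (1992); hyperparameters are arbitrary) contrasting a fixed-stepsize LMS update with an IDBD-style update. The IDBD variant maintains roughly three times as much per-parameter state (weights, log-stepsizes, and traces), which is exactly the extra per-timestep compute and memory discussed above.

```python
import numpy as np

def lms_step(w, x, y, alpha=0.01):
    """One fixed-stepsize LMS update; per-step state is just the weights w."""
    delta = y - w @ x
    return w + alpha * delta * x

def idbd_step(w, log_alpha, h, x, y, meta=0.01):
    """One IDBD-style update (loosely after Sutton, 1992): per-parameter
    stepsizes adapt, at the cost of ~3x the per-step state (w, log_alpha, h)."""
    delta = y - w @ x
    log_alpha = log_alpha + meta * delta * x * h   # adapt log-stepsizes
    alpha = np.exp(log_alpha)
    w = w + alpha * delta * x                      # update with per-parameter stepsizes
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
    return w, log_alpha, h

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 5
    theta = rng.normal(size=d)                     # slowly drifting latent weights
    w_lms = np.zeros(d)
    w_idbd, log_alpha, h = np.zeros(d), np.full(d, np.log(0.01)), np.zeros(d)
    for t in range(10_000):
        theta += 0.001 * rng.normal(size=d)
        x = rng.normal(size=d)
        y = theta @ x + 0.1 * rng.normal()
        w_lms = lms_step(w_lms, x, y)
        w_idbd, log_alpha, h = idbd_step(w_idbd, log_alpha, h, x, y)
    print("LMS tracking error: ", np.linalg.norm(w_lms - theta))
    print("IDBD tracking error:", np.linalg.norm(w_idbd - theta))
```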
As our discussion of two alternatives to fixed-stepsize SGD suggests, there is always room to improve average reward by designing the agent to learn more sophisticated skills. However, as we will discuss in the next section, this complexity is constrained by the agent's information capacity, which is gated by computational constraints. There are always multiple ways to invest this capacity. For example, instead of maintaining statistics required by a stepsize adaptation scheme, a designer could increase the size of the neural network, which might also increase average reward.
Related to this is the stability-plasticity dilemma (McCloskey and Cohen, 1989; Ratcliff, 1990), a prominent subject in the continual learning literature. Stability is the resilience to forgetting useful skills, while plasticity is the ability to acquire new skills. Empirical studies demonstrate that agents do forget and that, as skills accumulate, they become less effective at acquiring new skills (Dohare et al., 2021; Goodfellow et al., 2013; Kirkpatrick et al., 2017; Lesort, 2020). Researchers have worked toward agent designs that improve stability and plasticity. But limited information capacity poses a fundamental tradeoff. For example, Mirzadeh et al. (2022) and Dohare et al. (2021) demonstrate that larger neural networks forget less and maintain greater plasticity. And as we will further discuss in Section 5, in complex environments, constrained agents must forget and/or lose plasticity, with improvements along one dimension coming at a cost to the other.
## Summary
* An **environment** is characterized by a tuple \((\mathcal{A},\mathcal{O},\rho)\), comprised of a set of **actions**, a set of **observations**, and an **observation probability function**.
* The agent's experience through time \(t\) forms a **history**\(H_{t}=(A_{0},O_{1},\ldots,A_{t-1},O_{t})\).
* Observations are generated as though \[O_{t+1}\sim\rho(\cdot|H_{t},A_{t}).\]
* The behavior of an agent is characterized by an **agent policy**\(\pi\). Actions are generated as though \[A_{t}\sim\pi(\cdot|H_{t}).\]
* The designer's preferences are encoded in terms of a **reward function**\(r\), which generates **rewards** \[R_{t+1}=r(H_{t},A_{t},O_{t+1}).\]
* The **average reward** attained by an agent policy \(\pi\) in an environment \((\mathcal{A},\mathcal{O},\rho)\) is \[\overline{r}_{\pi}=\liminf_{T\to\infty}\mathbb{E}_{\pi}\left[\frac{1}{T}\sum_ {t=0}^{T-1}R_{t+1}\right].\]
* Design of a continual learning agent can be framed as **maximizing average reward subject to a per-timestep computational constraint**: \[\max_{\pi} \overline{r}_{\pi}\] s.t. computational constraint.
* **Continual supervised learning with log-loss** is a special case in which
* \(\mathcal{O}=\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) and \(\mathcal{Y}\) are input and label sets,
* \(\mathcal{A}=\Delta_{\mathcal{Y}}\) is the unit simplex of predictive distributions,
* each observation is a pair \(O_{t+1}=(Y_{t+1},X_{t+1})\) comprising a label assigned to the previous input \(X_{t}\) and the next input \(X_{t+1}\),
* the observation distribution \(\rho(\cdot|H_{t},A_{t})\) depends on \((H_{t},A_{t})\) only through past observations \(O_{1:t}\),
* the reward function expresses the negative log-loss \(R_{t+1}=\ln P_{t}(Y_{t+1})\).
Another common reward function used in supervised learning expresses the accuracy \(R_{t+1}=\mathbb{1}(Y_{t+1}=\hat{Y}_{t+1})\), where the action is a label \(A_{t}=\hat{Y}_{t+1}\).
## 3 Agent State and Information Capacity
Practical agent designs typically maintain a bounded summary of history, which we refer to as the _agent state_ and which is used to select actions. Information encoded in the agent state is constrained to regulate computational requirements. In this section, we formalize these concepts in information-theoretic terms, along the lines of Jeon et al. (2023); Lu et al. (2021), and explore their relation to agent performance. The tools we develop allow us to more clearly distinguish continual from convergent learning and define and analyze stability and plasticity, as we do in Sections 4 and 5.
### Agent State
Computational constraints prevent an agent from processing every element of history at each time step because the dataset grows indefinitely. To leverage more information than can be efficiently accessed from history, the agent needs to maintain a representation of knowledge that enables efficient computation of its next action. In particular, the agent must implement a policy \(\pi\) that depends on a statistic \(U_{t}\) derived from \(H_{t}\), rather than directly on \(H_{t}\) itself. Such a policy samples each action according to
\[A_{t}\sim\pi(\cdot|U_{t}).\]
The statistic \(U_{t}\) must itself be computed using budgeted resources. An agent that computes \(U_{t}\) directly from \(H_{t}\) would run into constraints of the same sort that motivated construction of agent state in the first place. In particular, the agent cannot access all history within a time step and must maintain an agent state that facilitates efficient computation of the next agent state. To this end, \(U_{t}\) serves two purposes: computation of \(A_{t}\) and \(U_{t+1}\). Specifically, there must be an update function \(\psi\) such that
\[U_{t+1}\sim\psi(\cdot|U_{t},A_{t},O_{t+1}).\]
This sort of incremental updating allows \(U_{t+1}\) to selectively encode historical information while amortizing computation across time. Since \(U_{t}\) includes all information about \(H_{t}\) that the agent will subsequently use, it can be thought of as a state; thus the term _agent state_.
Agents presented in Section 2.3 each use a highly compressed representation of history as agent state. In the scalar LMS algorithm of Example 3, the agent state \(U_{t}=\mu_{t}\) is an estimate of a latent variable \(\theta_{t}\). The agent state \(U_{t}=(\mu_{t,a},\Sigma_{t,a}:a\in\mathcal{A})\) of Example 4 includes mean and variance estimates for each action. The optimistic Q-learning agent (Example 5) maintains a situational state and action value function as its agent state \(U_{t}=(S_{t},Q_{t})\). Finally, with supervised deep learning, as described in Example 6, the agent state \(U_{t}=(X_{t},\theta_{t})\) includes the current input and a vector of neural network parameters. If the agent were also to maintain a replay buffer \(B_{t}\) of recent action-observation pairs for supplemental training, the replay buffer would also reside within the agent state \(U_{t}=(X_{t},\theta_{t},B_{t})\). In each of these examples, the agent state is updated incrementally according to \(U_{t+1}\sim\psi(\cdot|U_{t},A_{t},O_{t+1})\) for some function \(\psi\).
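The following schematic sketch (class and variable names are our own) shows the \((\psi,\pi)\) interface in code: the agent touches only its bounded state \(U_{t}\), never the full history, and per-step compute stays constant as \(t\) grows. The exponential-average state is just one simple choice of what to retain.

```python
import random

class IncrementalAgent:
    """Schematic agent that maintains a bounded agent state U_t rather than
    the growing history H_t.  Here the state is a single exponential average
    of past observations; psi and pi play the roles of the update and policy
    functions described in the text."""

    def __init__(self, step_size=0.1):
        self.step_size = step_size
        self.state = 0.0                               # U_0

    def pi(self, state):
        """Action depends on history only through the agent state."""
        return state                                   # predict the running average

    def psi(self, state, action, observation):
        """Incremental update: U_{t+1} is computed from (U_t, A_t, O_{t+1})
        alone, so per-step compute does not grow with t."""
        return state + self.step_size * (observation - state)

    def step(self, observation):
        action = self.pi(self.state)                   # A_t ~ pi(.|U_t)
        self.state = self.psi(self.state, action, observation)  # U_{t+1}
        return action

if __name__ == "__main__":
    agent = IncrementalAgent()
    for t in range(1000):                              # stand-in observation stream
        agent.step(random.gauss(1.0, 0.5))
    print("final agent state:", agent.state)
```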
### Information Content
Intuitively, the agent state retains information from history to select actions and update itself. But how ought this information be quantified? In this section, we offer a formal approach based on information theory (Shannon, 1948). Attributing precise meaning to _information_ affords coherent characterization and analysis of information retained by an agent as well as forgetting and plasticity, which are subjects we will discuss in subsequent sections.
The agent state \(U_{t}\) is a random variable since it depends on other random variables, including the agent's experience \(H_{t}\) through that time and possible algorithmic randomness arising in sampling from \(\psi(\cdot|U_{t-1},A_{t-1},O_{t})\). A simple way of quantifying the information content of \(U_{t}\) is in terms of the minimal number of bits that suffices for a lossless encoding. Or, using a unit of measurement more convenient for analysis of machine learning, the number of nats; there are \(\ln 2\) nats per bit.
For reasons we will discuss, at large scale in terms of environment complexity and model size, this number ought to be well-approximated by the entropy \(\mathbb{H}(U_{t})\). The entropy of a random variable \(B\) with countable range is defined by
\[\mathbb{H}(B)=\mathbb{E}[-\ln\mathbb{P}(B)].\]
More generally, if the range is uncountable then entropy is defined by \(\mathbb{H}(B)=\sup_{f\in\mathcal{F}_{\text{finite}}}\mathbb{H}(f(B))\), where \(\mathcal{F}_{\text{finite}}\) is the set of functions that map the range of \(B\) to a finite range.
Let us now discuss conditions under which, and the sense in which, entropy closely approximates the number of nats that suffices for a lossless encoding. Consider a fixed environment and an agent design that is indexed by a parameter \(M\in\mathbb{Z}_{++}\), with memory requirements that increase with \(M\). Suppose that, for each \(M\), the agent state \(U_{t,1:M}\) is a vector with \(M\) components. Further, suppose that these components are the first \(M\) elements of a stationary stochastic process \((U_{t,m}:m\in\mathbb{Z}_{++})\). Under this assumption, \(\mathbb{H}(U_{t,1:M})=\Theta(M)\). As a simple example, \((U_{t,m}:m\in\mathbb{Z}_{++})\) could be an iid sequence, with each \(U_{t,m}\) representing a feature extracted from the history \(H_{t}\). If these features are ordered from most to least important for selecting effective actions, it is natural to retain only the first \(M\) in the agent state if that exhausts the memory budget. A standard result in information theory implies that, for any \(\epsilon>0\), there is an \(\mathbb{H}(U_{t,1:M})+\epsilon M\) nat encoding that affords lossless recovery of \(U_{t,1:M}\) with a probability that approaches one as \(M\) grows (Cover and Thomas, 2012, Theorem 5.4.2). Hence, as \(M\) grows, the percentage difference between \(\mathbb{H}(U_{t,1:M})\) and the required number of nats vanishes.
The preceding argument assumed the agent state \(U_{t}\) to be a high-dimensional vector with components generated by a stationary stochastic process. While this is not typically the case, we expect the key insight - that \(\mathbb{H}(U_{t})\) closely approximates, in percentage terms, the number of nats that suffices for lossless encoding - to extend broadly to contexts where the agent state is a large object and the environment is more complex still. Intuitively and loosely speaking, this insight applies when the information encoded in \(U_{t}\) originates from many independent random sources. A complex environment ought to afford an enormous diversity of random sources, and a large agent state ought to encode information from a large number. It is exactly such contexts that motivate the subject of continual learning.
### Information Capacity
Recall our continual learning objective:
\[\max_{\pi} \overline{r}_{\pi}\] s.t. computational constraint.
The nature of the computational constraint was purposely left ambiguous. If computer memory is binding, that directly constrains information content. We will assume for the remainder of this paper that the binding constraint is the number of floating point operations that can be carried out per time step (FLOPS). This does not necessarily constrain information content. For example, even if we take the agent state to be \(U_{t}=H_{t}\) and the entropy \(\mathbb{H}(U_{t})\) grows indefinitely, an agent can efficiently select each action based on sparsely queried data, perhaps by randomly sampling a small number of action-observation pairs from history. However, as a practical matter, common agent designs apply computation in ways that limit the amount of information that the agent retains. We refer to the constraint on information content of agent state as the _information capacity_.
A constraint on FLOPS limits information content when an agent is designed so that computation grows with information capacity. Supervised deep learning, as described in Example 6, offers a concrete case. Recall that, over each \(t\)th time step, the agent carries out a single SGD step:
\[\theta_{t+1}=\theta_{t}+\alpha\nabla\ln f_{\theta_{t}}(Y_{t+1}|X_{t}).\]
Each data pair \((X_{t},Y_{t+1})\) is immediately processed when observed, then discarded. Compute per data pair grows proportionally with the number of model parameters. Hence, a computation constraint restricts the number of model parameters. Conversely, the number of model parameters determines per-timestep computation. If each parameter is encoded by \(K\) nats, a neural network model with \(N\) parameters can encode \(NK\) nats of information. We refer to this as the _physical capacity_ of the neural network.
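As a hypothetical numerical illustration (the figures below are our own, not from the text): a network with \(N=10^{8}\) parameters stored at 16-bit precision has \(K=16\ln 2\approx 11.1\) nats per parameter, so its physical capacity is

\[NK\approx 10^{8}\times 11.1\approx 1.1\times 10^{9}\ \text{nats}=1.6\times 10^{9}\ \text{bits}=200\ \text{MB}.\]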
While information content is constrained not to exceed the physical capacity, large neural networks trained via SGD actually use only a fraction of their physical capacity to retain information garnered from data. Much of the \(NK\) nats instead serves to facilitate computation. Therefore, the physical capacity constrains the information capacity to far fewer nats: \(\mathbb{H}(U_{t})\ll NK\).
Similar reasoning applies if the agent state is expanded to include a replay buffer. In this case, the physical capacity is \(NK+B\) nats, if \(B\) nats are used to store the replay buffer. Again, the information capacity is constrained to
far fewer nats than the physical capacity. As before, only a fraction of the neural network's physical capacity stores information content. But also, a replay buffer that stores raw data from history can typically be compressed losslessly to occupy a much smaller number of nats.
### Performance versus Information Capacity
If the agent makes efficient use of its information capacity, it is natural to expect agent performance to increase as this constraint is loosened. Information theory offers an elegant interpretation of this relation. To illustrate this, let us work through this interpretation for the case of continual SL.
#### 3.4.1 Prediction Error
Recall that, in our continual SL formulation, the agent's action is a predictive distribution \(A_{t}=P_{t}\) and the reward is taken to be \(r(H_{t},A_{t},O_{t+1})=\ln P_{t}(Y_{t+1})\). Hence, the objective is to minimize average log-loss. To enable an elegant analysis, we define a prediction
\[P_{t}^{*}=\mathbb{P}(Y_{t+1}=\cdot|H_{t})=\operatorname*{arg\,max}_{Q\in \Delta_{\mathcal{Y}}}\mathbb{E}[\ln Q(Y_{t+1})|H_{t}] \tag{4}\]
as a _gold standard_. The expected reward \(\mathbb{E}[\ln P_{t}^{*}(Y_{t+1})|H_{t}]\) represents the largest value that a computationally unconstrained agent can attain given the history \(H_{t}\). The difference between this gold standard value and the expected reward attained by the agent is expressed by the KL-divergence:
\[\mathbf{d}_{\text{KL}}(P_{t}^{*}||P_{t})=\sum_{y\in\mathcal{Y}}P_{t}^{*}(y)\ln \frac{P_{t}^{*}(y)}{P_{t}(y)}=\mathbb{E}[\ln P_{t}^{*}(Y_{t+1})-\ln P_{t}(Y_{t +1})|H_{t}].\]
This KL-divergence serves as a measure of error between the agent's prediction \(P_{t}\) and the gold standard \(P_{t}^{*}\). Maximizing average reward is equivalent to minimizing this prediction error, since
\[\underbrace{\mathbb{E}[\ln P_{t}(Y_{t+1})]}_{\text{reward}}=\underbrace{ \mathbb{E}[\ln P_{t}^{*}(Y_{t+1})]}_{\text{optimal reward}}-\underbrace{ \mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}||P_{t})]}_{\text{prediction error}} \tag{5}\]
and \(\mathbb{E}[\ln P_{t}^{*}(Y_{t+1})]\) does not depend on \(P_{t}\).
When making its prediction \(P_{t}\), the agent only has information supplied by its agent state \(U_{t}\). The best prediction that can be generated based on this information is
\[\tilde{P}_{t}=\mathbb{P}(Y_{t+1}=\cdot|U_{t})=\operatorname*{arg\,max}_{Q\in \Delta_{\mathcal{Y}}}\mathbb{E}[\ln Q(Y_{t+1})|U_{t}]. \tag{6}\]
If the agent's prediction \(P_{t}\) differs from \(\tilde{P}_{t}\), the error attained by the agent decomposes into informational versus inferential components, as established by the following result.
**Theorem 1**.: _For all \(t\),_
\[\underbrace{\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}||P_{t})]}_{\text{ prediction error}}=\underbrace{\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}||\tilde{P}_{t})]}_{\text{ informational error}}+\underbrace{\mathbb{E}[\mathbf{d}_{\text{KL}}(\tilde{P}_{t}||P_{t})]}_{\text{ inferential error}}.\]
Proof.: For all \(t\),
\[\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}||P_{t})]= \mathbb{E}[\ln P_{t}^{*}(Y_{t+1})-\ln P_{t}(Y_{t+1})]\] \[= \mathbb{E}[\mathbb{E}[\ln P_{t}^{*}(Y_{t+1})-\ln\tilde{P}_{t}(Y_ {t+1})|H_{t}]]+\mathbb{E}[\mathbb{E}[\ln\tilde{P}_{t}(Y_{t+1})-\ln P_{t}(Y_{t +1})|U_{t}]]\] \[= \mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}||\tilde{P}_{t})]+ \mathbb{E}[\mathbf{d}_{\text{KL}}(\tilde{P}_{t}||P_{t})].\]
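The decomposition can be checked numerically. Below is a small sketch with an arbitrary toy distribution of our own choosing: \(H\) is uniform on four values, the agent state keeps only \(U=H\bmod 2\), labels are binary, and the agent's prediction depends on \(U\) but is deliberately miscalibrated so that the inferential error is nonzero.

```python
import numpy as np

q = np.array([0.1, 0.6, 0.8, 0.9])    # P(Y=1 | H=h) for h in {0,1,2,3}
p_h = np.full(4, 0.25)                 # P(H=h), uniform

def kl(p, r):
    """KL divergence between Bernoulli(p) and Bernoulli(r), in nats."""
    p = np.array([1 - p, p]); r = np.array([1 - r, r])
    return float(np.sum(p * np.log(p / r)))

p_star = q                                                 # P*(Y=1 | H=h)
p_tilde = np.array([q[[0, 2]].mean(), q[[1, 3]].mean()])   # P~(Y=1 | U=u), u = h mod 2
p_agent = np.array([0.40, 0.55])                           # agent's prediction, a function of U only

pred_err  = sum(p_h[h] * kl(p_star[h],      p_agent[h % 2]) for h in range(4))
info_err  = sum(p_h[h] * kl(p_star[h],      p_tilde[h % 2]) for h in range(4))
infer_err = sum(p_h[h] * kl(p_tilde[h % 2], p_agent[h % 2]) for h in range(4))

print(f"prediction error:            {pred_err:.6f}")
print(f"informational + inferential: {info_err + infer_err:.6f}")   # matches the prediction error
```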
#### 3.4.2 Informational Error Quantifies Absent Information
The informational error can be interpreted as historical information absent from the agent state \(U_{t}\) but that would be useful for predicting \(Y_{t+1}\). This can be expressed in elegant information theoretic terms. To do so, we first review a few information measures for which Figure 4 illustrates intuitive relationships. Let \(B\) and \(C\) be random variables, and to simplify math, though the concepts extend to address uncountable ranges, let us assume for this discussion that each has countable range. The conditional entropy of \(B\) conditioned on \(C\) is defined by
\[\mathbb{H}(B|C)=\mathbb{E}\left[-\ln\mathbb{P}(B|C)\right].\]
It follows from the definition of conditional probability that \(\mathbb{H}(B|C)=\mathbb{E}\left[-\ln\mathbb{P}(B,C)+\ln\mathbb{P}(C)\right]= \mathbb{H}(B,C)-\mathbb{H}(C)\). This represents the expected number of nats that remain to be revealed by \(B\) after \(C\) is observed, or the union of the two discs in the Venn diagram minus the content of the blue disc. The mutual information between \(B\) and \(C\) is defined by
\[\mathbb{I}(B;C)=\mathbb{H}(B)-\mathbb{H}(B|C)=\mathbb{H}(C)-\mathbb{H}(C|B)= \mathbb{I}(C;B).\]
This represents the number of nats shared by \(B\) and \(C\), depicted as the intersection between the two discs. If the variables are independent then \(\mathbb{I}(B;C)=0\). Finally, the conditional mutual information between \(B\) and \(C\), conditioned on a third random variable \(D\), is defined by
\[\mathbb{I}(B;C|D)=\mathbb{I}(B;C,D)-\mathbb{I}(B;D).\]
This represents information remaining in the intersection of \(B\) and \(C\) after \(D\) is observed.
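These quantities are easy to compute for a small discrete joint distribution. The sketch below (an arbitrary toy distribution of our own choosing, values in nats) evaluates \(\mathbb{H}(B|C)\), \(\mathbb{I}(B;C)\), and \(\mathbb{I}(B;C|D)\) directly from the definitions above.

```python
import numpy as np

# Arbitrary joint distribution over three binary variables, p[b, c, d].
p = np.random.default_rng(1).dirichlet(np.ones(8)).reshape(2, 2, 2)

def H(pmf):
    """Entropy in nats of a probability mass function given as an array."""
    pmf = pmf[pmf > 0]
    return float(-np.sum(pmf * np.log(pmf)))

H_B, H_C, H_D = H(p.sum(axis=(1, 2))), H(p.sum(axis=(0, 2))), H(p.sum(axis=(0, 1)))
H_BC, H_BD, H_CD = H(p.sum(axis=2).ravel()), H(p.sum(axis=1).ravel()), H(p.sum(axis=0).ravel())
H_BCD = H(p.ravel())

H_B_given_C  = H_BC - H_C                 # H(B|C) = H(B,C) - H(C)
I_BC         = H_B - H_B_given_C          # I(B;C)
I_BD         = H_B + H_D - H_BD           # I(B;D)
I_B_CD       = H_B + H_CD - H_BCD         # I(B;(C,D))
I_BC_given_D = I_B_CD - I_BD              # I(B;C|D) = I(B;C,D) - I(B;D)

print(f"H(B|C)   = {H_B_given_C:.4f} nats")
print(f"I(B;C)   = {I_BC:.4f} nats")
print(f"I(B;C|D) = {I_BC_given_D:.4f} nats")
```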
The following result establishes that the informational error \(\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}\|\tilde{P}_{t})]\) equals the information \(\mathbb{I}(Y_{t+1};H_{t}|U_{t})\) that the history \(H_{t}\) presents about \(Y_{t+1}\) but that is absent from \(U_{t}\).
**Theorem 2**.: _For all \(t\),_
\[\underbrace{\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}\|\tilde{P}_{t})]}_{ \text{informational error}}=\underbrace{\mathbb{I}(Y_{t+1};H_{t}|U_{t})}_{ \text{absent info}}.\]
Proof.: From the definitions of conditional entropy, \(P_{t}^{*}\), and \(\tilde{P}_{t}\), we have \(\mathbb{H}(Y_{t+1}|H_{t})=\mathbb{E}[-\ln P_{t}^{*}(Y_{t+1})]\) and \(\mathbb{H}(Y_{t+1}|U_{t})=\mathbb{E}[-\ln\tilde{P}_{t}(Y_{t+1})]\). It follows that,
\[\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}\|\tilde{P}_{t})]= \mathbb{E}[\ln P_{t}^{*}(Y_{t+1})-\ln\tilde{P}_{t}(Y_{t+1})]\] \[= \mathbb{H}(Y_{t+1}|U_{t})-\mathbb{H}(Y_{t+1}|H_{t})\] \[= \mathbb{H}(Y_{t+1}|U_{t})-\mathbb{H}(Y_{t+1}|U_{t},H_{t})\] \[= \mathbb{I}(Y_{t+1};H_{t}|U_{t}).\]
The third equality follows from the fact that \(Y_{t+1}\perp U_{t}|H_{t}\).
#### 3.4.3 Information Capacity Constrains Performance
It is natural to think that information capacity can constrain performance. This relationship is formalized by the following result.
Figure 4: Venn diagram relating various information measures.
**Theorem 3**.: _For all \(t\),_
\[\underbrace{\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|P_{0}^{*})]}_{\text{ uninformed error}}-\underbrace{\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|\tilde{P}_{t})]}_{\text{ informational error}}=\underbrace{\mathbb{I}(Y_{t+1};U_{t})}_{\text{useful info}}\leq\underbrace{\mathbb{H}(U_{t})}_{\text{info}}.\]
Proof.: For all \(t\),
\[\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|\tilde{P}_{t})]= \mathbb{I}(Y_{t+1};H_{t}|U_{t})\] \[= \mathbb{I}(Y_{t+1};H_{t},U_{t})-\mathbb{I}(Y_{t+1};U_{t})\] \[= \mathbb{I}(Y_{t+1};H_{t})-\mathbb{I}(Y_{t+1};U_{t})\] \[= \mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|P_{0}^{*})]- \mathbb{I}(Y_{t+1};U_{t}).\]
We arrive at the equality in our result by rearranging terms. The inequality follows from the fact that mutual information between two random variables is bounded by the entropy of each.
Note that \(P_{0}^{*}=\mathbb{P}(Y_{t+1}=\cdot|H_{0})=\mathbb{P}(Y_{t+1}=\cdot)\) is an uninformed prediction, which is based on no data. The left-hand-side expression \(\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|P_{0}^{*})]-\mathbb{E}[\mathbf{ d}_{\mathrm{KL}}(P_{t}^{*}\|\tilde{P}_{t})]\) is the reduction in informational error relative to an uninformed prediction. The middle expression \(\mathbb{I}(Y_{t+1};U_{t})\) is the degree to which the agent state \(U_{t}\) informs the agent about \(Y_{t+1}\). The right-hand-side expression \(\mathbb{H}(U_{t})\) is the information content of the agent state. If the information capacity constrains this content to be small, that in turn constrains the reduction in error. When the constraint is binding, larger information capacity affords smaller errors.
## Summary
* An **agent state**\(U_{t}\) is a summary of the history \(H_{t}\) maintained to facilitate efficient computation of each action \[A_{t}\sim\pi(\cdot|U_{t}),\] and subsequent agent state \[U_{t+1}\sim\psi(\cdot|U_{t},A_{t},O_{t+1}).\] The agent policy of an agent designed in this way is characterized by the pair \((\psi,\pi)\).
* The **information content** of agent state is the number of nats required to encode it. At large scale and under plausible technical conditions, this is well-approximated by the **entropy**\(\mathbb{H}(U_{t})\).
* An agent's **information capacity** is a constraint on the information content and is typically limited by computational resources.
* Information capacity limits agent performance, which we measure in terms of average reward.
* In the special case of **continual supervised learning** with rewards \(R_{t+1}=\ln P_{t}(Y_{t+1})\), expected reward is determined by prediction error via \[\underbrace{\mathbb{E}[\ln P_{t}(Y_{t+1})]}_{\text{reward}}=\underbrace{ \mathbb{E}[\ln P_{t}^{*}(Y_{t+1})]}_{\text{optimal reward}}-\underbrace{ \mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|P_{t})]}_{\text{prediction error}},\] where \(P_{t}\) denotes the agent's prediction and \(P_{t}^{*}(\cdot)=\mathbb{P}(Y_{t+1}=\cdot|H_{t})\) is the optimal prediction. Prediction error decomposes into informational and inferential errors: \[\underbrace{\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|P_{t})]}_{\text{ prediction error}}=\underbrace{\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|\tilde{P}_{t})]}_{\text{ informational error}}+\underbrace{\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(\tilde{P}_{t}\|P_{t})]}_{\text{ inferential error}},\] where \(\tilde{P}_{t}(\cdot)=\mathbb{P}(Y_{t+1}=\cdot|U_{t})\) is the best prediction that can be produced based on the agent state. The informational error is equal to the information absent from agent state that would improve the prediction:
\[\underbrace{\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}\|\tilde{P}_{t})]}_{\text{ informational error}}=\underbrace{\mathbb{I}(Y_{t+1};H_{t}|U_{t})}_{\text{absent info}}.\]
Informational error is limited by the information capacity according to
\[\underbrace{\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}\|P_{0}^{*})]}_{\text{ uninformed error}}-\underbrace{\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}\|\tilde{P}_{t})]}_{\text{ informational error}}=\underbrace{\mathbb{I}(Y_{t+1};U_{t})}_{\text{ useful info}}\leq\underbrace{\mathbb{H}(U_{t})}_{\text{info}}.\]
## 4 Vanishing-Regret Versus Continual Learning
Traditional framings of machine learning view the agent as accumulating knowledge about a fixed latent variable, which we refer to as a _learning target_. For example, in supervised learning, the learning target is typically taken to be a mapping from input to label probabilities. As a sanity check on agent performance, researchers then study regret relative to a hypothetical agent with privileged knowledge of the learning target. By the same token, traditional treatments of reinforcement learning center around learning about a latent MDP, or a derivative, such as a value function or a policy. Any of these can serve as a learning target, and again, regret analysis serves as a sanity check on agent performance.
Traditional framings orient agent designs toward ensuring that average regret vanishes as the dataset grows. To highlight this emphasis, we use the term _vanishing-regret learning_. Agents designed in this vein aim to reduce per-timestep regret, and are viewed as essentially "done" with learning when that becomes negligible.
Continual learning agents, on the other hand, engage in a perpetual mode of knowledge acquisition. Unlike vanishing-regret learning, the pace of continual learning does not taper off. Instead, in the never-ending process, some information is retained and some forgotten as new information is ingested. The notion of vanishing regret gives way to understanding this non-transient behavior.
In this section, we elaborate on this distinction between continual and vanishing-regret learning. To do so, we formalize the notion of a learning target and what it means for regret to vanish. We then discuss how computational constraints incentivize different qualitative behavior. The contrast offers insight into how and why continual learning agents should be designed differently. This discussion also leads us to reflect on why we have taken the environment \(\mathcal{E}=(\mathcal{A},\mathcal{O},\rho)\) to be deterministic or, in other words, known, in contrast to work on vanishing-regret learning, which typically treats the environment as unknown.
### Vanishing-Regret Learning
We begin by motivating the notion of a learning target and then discuss the role of vanishing regret in traditional machine learning. We leverage information theory as a lens that affords general yet simple interpretations of these concepts.
#### 4.1.1 Learning Targets, Target Policies, and Vanishing Regret
Recall the coin tossing environment of Example 1. It is natural to think of an agent as learning about the coin biases from toss outcomes, with an eye toward settling on a simple policy that selects the coin with favorable bias. This framing can serve to guide comparisons among alternative agents and give rise to insight on how design decisions impact performance. For example, one could study whether each agent ultimately behaves as though it learned the coin biases. Such an analysis assesses agent performance relative to a benchmark, which is framed in terms of an agent with privileged knowledge of a _learning target_ comprised of coin biases and a _target policy_ that selects the favorable coin.
In the most abstract terms, a learning target is a random variable, which we will denote by \(\chi\). A target policy \(\tilde{\pi}\) assigns to each realization of \(\chi\) a random policy. In particular, \(\tilde{\pi}_{\chi}\) is a random variable that takes values in the set of policies. Intuitively, \(\chi\) represents an interpretation of what an agent aims to learn about, and \(\tilde{\pi}_{\chi}\) is the policy the agent would use if \(\chi\) were known. Note that the target policy selects actions with privileged knowledge of the learning target. We will denote the average reward of the target policy conditioned on the learning target by
\[\overline{r}_{\chi,\tilde{\pi}}=\liminf_{T\to\infty}\mathbb{E}_{\tilde{ \pi}_{\chi}}\left[\frac{1}{T}\sum_{t=0}^{T-1}R_{t+1}\middle|\chi\right], \tag{7}\]
where the subscript \(\tilde{\pi}_{\chi}\) indicates the policy under which the conditional expectation is evaluated. Note that \(\overline{r}_{\chi,\tilde{\pi}}\) is a random variable because it depends on \(\chi\). This level of reward may not be attainable by any viable agent policy. This is because it can rely on knowledge of \(\chi\), which the agent does not observe. An agent policy, on the other hand, generates each action \(A_{t}\) based only on the history \(H_{t}\). Rather than offer a viable agent policy, the learning target and target policy serve as conceptual tools that more loosely guide agent design and analysis.
For each duration \(T\in\mathbb{Z}_{++}\), let
\[\overline{r}_{\pi,T}=\mathbb{E}_{\pi}\left[\frac{1}{T}\sum_{t=0}^{T-1}R_{t+1} \right]\qquad\text{and}\qquad\overline{r}_{\chi,\tilde{\pi},T}=\mathbb{E}_{ \tilde{\pi}_{\chi}}\left[\frac{1}{T}\sum_{t=0}^{T-1}R_{t+1}\Big{|}\chi\right],\]
so that \(\overline{r}_{\pi}=\liminf_{T\to\infty}\overline{r}_{\pi,T}\) and \(\overline{r}_{\chi,\tilde{\pi}}=\liminf_{T\to\infty}\overline{r}_{\chi,\tilde{\pi},T}\). Like \(\overline{r}_{\chi,\tilde{\pi},T}\), the limit \(\overline{r}_{\chi,\tilde{\pi}}\) is a random variable. We define the _average regret_ over duration \(T\) incurred by \(\pi\) with respect to \((\chi,\tilde{\pi})\) to be
\[\overline{\operatorname{Regret}}_{\chi,\tilde{\pi}}(T|\pi)=\mathbb{E}[ \overline{r}_{\chi,\tilde{\pi},T}-\overline{r}_{\pi,T}]. \tag{8}\]
This represents the expected per-timestep shortfall of the agent policy \(\pi\) relative to the target policy \(\tilde{\pi}_{\chi}\), which is afforded the advantage of knowledge about the learning target \(\chi\).
We say \(\pi\) exhibits _vanishing regret_ with respect to \((\chi,\tilde{\pi})\) if \(\limsup_{T\to\infty}\overline{\operatorname{Regret}}_{\chi,\tilde{\pi}}(T| \pi)\leq 0\). If rewards are bounded and limits of \(\overline{r}_{\chi,\tilde{\pi},T}\) and \(\overline{r}_{\pi,T}\) exist then, by the dominated convergence theorem, \(\limsup_{T\to\infty}\mathbb{E}[\overline{r}_{\chi,\tilde{\pi},T}- \overline{r}_{\pi,T}]\leq\mathbb{E}[\overline{r}_{\chi,\tilde{\pi}}-\overline{ r}_{\pi}]\), and therefore, vanishing regret is implied by \(\mathbb{E}[\overline{r}_{\chi,\tilde{\pi}}-\overline{r}_{\pi}]\leq 0\). Intuitively, the notion of vanishing regret indicates that \(\pi\) eventually performs at least as well as the target policy \(\tilde{\pi}_{\chi}\) in spite of the latter's privileged knowledge of \(\chi\).
The choice of learning target and target policy are subjective in the sense that they are not uniquely determined by an agent-environment pair. Rather, they are chosen as means to interpret the agent's performance in the environment. In particular, given \((\chi,\tilde{\pi})\), we can analyze how the agent learns about \(\chi\) and uses that knowledge to make effective decisions. In this regard, three properties make for useful choices:
1. Given knowledge of the learning target \(\chi\), an agent can execute \(\tilde{\pi}_{\chi}\) in a computationally efficient manner.
2. The target policy \(\tilde{\pi}_{\chi}\) attains a desired level of average reward.
3. An agent can learn enough about \(\chi\) in reasonable time to perform about as well as \(\tilde{\pi}_{\chi}\).
The first property ensures that knowledge of \(\chi\) is actionable, and the second requires that resulting actions are performant to a desired degree. The third property ensures that acquiring useful knowledge about \(\chi\) is feasible. These properties afford analysis of performance in terms of whether and how quickly an agent learns about \(\chi\). We will further explore this sort of analysis in Section 4.1.2. But we close this section with a simple example of logit data and an agent that produces optimal predictions conditioned on history. This serves as an introduction to the notion of a learning target and what makes one useful.
**Example 8**.: **(learning targets for logit data)** _Consider a binary observation sequence \((O_{t}:t\in\mathbb{Z}_{++})\) that is iid conditioned on a latent variable \(\theta\). In particular, let \(\mathcal{O}=\{0,1\}\), \(\mathbb{P}(O_{t+1}=1|\theta,H_{t},A_{t})=e^{\theta}/(1+e^{\theta})\), and \(\mathbb{P}(\theta\in\cdot)\sim\mathcal{N}(0,1)\). Each action is a predictive distribution \(A_{t}=P_{t}\) and results in reward \(R_{t+1}=\ln P_{t}(O_{t+1})\). We consider an optimal agent policy \(\pi_{*}\), which generates predictive distributions \(P_{t}^{*}\sim\pi_{*}(\cdot|H_{t})\) that perfectly condition on history: \(P_{t}^{*}(\cdot)=\mathbb{P}(O_{t+1}=\cdot|H_{t})\). We will consider three different choices of learning target \(\chi\) together, in each case, with the target policy \(\tilde{\pi}_{\chi}\) that assigns all probability to \(P_{t}^{\chi}=\mathbb{P}(O_{t+1}=\cdot|\chi,H_{t})\). This target policy executes actions that are optimally conditioned on knowledge of \(\chi\) in addition to history._
_For the "obvious" learning target \(\chi=\theta\), \(\overline{\operatorname{Regret}}_{\chi,\tilde{\pi}}(T|\pi)\) vanishes at a reasonable rate, which we will characterize in the next section. Intuitively, this is because, as data accumulates, the agent is able to produce estimates of \(\theta\) that suffice for accurate predictions._
_For contrast, let us now consider two poor choices of learning targets, each of which represents a different extreme. One is the "uninformative" learning target \(\chi=\emptyset\), for which \(\overline{r}_{\chi,\tilde{\pi},T}=\overline{r}_{\pi,T}\) for all \(T\). Convergence is as fast as can be, and in fact, instant. However, the role of \(\chi\) is vacuous. At the other extreme, suppose the learning target \(\chi=(O_{t+1}:t\in\mathbb{Z}_{+})\) includes all observations from the past, present, and future. With this privileged knowledge, predictions \(P_{t}^{\chi}(\cdot)=\mathbbm{1}(O_{t+1}=\cdot)\) perfectly anticipate observations. However, with this "overinformative" learning target, \(\overline{\operatorname{Regret}}_{\chi,\tilde{\pi}}(T|\pi)\) does not vanish and instead converges to \(\mathbb{E}[\ln(1+e^{-\theta})]>0\). This is because the agent never learns enough to compete with the target policy, which has privileged access to future observations._
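For concreteness, here is a sketch (our own grid-based approximation; the grid range and resolution are arbitrary choices) of the optimal agent policy \(\pi_{*}\) for the "obvious" learning target: \(P_{t}^{*}(1)=\mathbb{P}(O_{t+1}=1|H_{t})\) is obtained by integrating \(e^{\theta}/(1+e^{\theta})\) against the posterior over \(\theta\) given the observed bits.

```python
import numpy as np

grid = np.linspace(-6, 6, 2001)              # grid over theta
log_prior = -grid**2 / 2                     # unnormalized N(0,1) log-density
sigmoid = 1 / (1 + np.exp(-grid))            # P(O=1 | theta) on the grid

def optimal_prediction(num_ones, num_zeros):
    """P*(O_{t+1}=1 | H_t) via numerical integration over the posterior of theta."""
    log_post = log_prior + num_ones * np.log(sigmoid) + num_zeros * np.log(1 - sigmoid)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return float(np.sum(post * sigmoid))

rng = np.random.default_rng(0)
theta = rng.normal()                         # latent variable
ones = zeros = 0
for t in range(20):
    p1 = optimal_prediction(ones, zeros)
    o = int(rng.random() < 1 / (1 + np.exp(-theta)))
    print(f"t={t:2d}  P*_t(O=1)={p1:.3f}  observed {o}")
    ones += o
    zeros += 1 - o
```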
#### 4.1.2 Regret Analysis
The machine learning literature offers a variety of mathematical tools for regret and sample complexity analysis. These tools typically study how agents accumulate information about a designated learning target and make decisions that become competitive with those that could be made with privileged knowledge of the learning target.
To more concretely illustrate the nature of this analysis and the role of learning targets, we will present in this section examples of results in this area. In order to keep the exposition simple and transparent, we will restrict attention to supervised learning.
Rather than cover results about specific agents, we will review results about an optimal agent policy, which produces predictions \(P_{t}^{*}(\cdot)=\mathbb{P}(Y_{t+1}=\cdot|H_{t})\). Such an agent maximizes \(\mathbb{E}_{\pi}[\frac{1}{T}\sum_{t=0}^{T-1}R_{t+1}]\) over every duration \(T\), with rewards given by \(R_{t+1}=\ln P_{t}^{*}(Y_{t+1})\). Hence, the results we review pertain to what is possible rather than what is attained by a particular algorithm.
For any learning target \(\chi\), we will take the target policy to be that which generates actions \(P_{t}^{\chi}(y)=\mathbb{P}(Y_{t+1}=y|\chi,H_{t})\). This represents an optimal prediction for an agent with privileged knowledge of \(\chi\) in addition to the history \(H_{t}\). For this target policy, average regret satisfies
\[\overline{\text{Regret}}_{\chi,\tilde{\pi}}(T|\pi)= \mathbb{E}[\overline{r}_{\chi,\tilde{\pi},T}-\overline{r}_{\pi,T}]\] \[= \mathbb{E}\left[\mathbb{E}\left[\frac{1}{T}\sum_{t=0}^{T-1}\ln P _{t}^{\chi}(Y_{t+1})\Big{|}\chi\right]\right]-\mathbb{E}\left[\frac{1}{T}\sum _{t=0}^{T-1}\ln P_{t}^{*}(Y_{t+1})\right]\] \[= \mathbb{E}\left[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\ln \frac{P_{t}^{\chi}(Y_{t+1})}{P_{t}^{*}(Y_{t+1})}\Big{|}\chi\right]\right]\] \[= \mathbb{E}\left[\frac{1}{T}\sum_{t=0}^{T-1}\mathbf{d}_{\text{KL} }(P_{t}^{\chi}\|P_{t}^{*})\right].\]
To characterize regret incurred by an optimal agent, we start with a basic result of (Jeon et al., 2023, Theorem 9):
\[\overline{\text{Regret}}_{\chi,\tilde{\pi}}(T|\pi)\leq\frac{\mathbb{H}(\chi)} {T}. \tag{9}\]
The right-hand side is the entropy of the learning target divided by the duration \(T\). If this entropy is finite then, as \(T\) grows, the right-hand side, and therefore regret, vanishes. That the bound increases with the entropy of the learning target is intuitive: if there is more to learn, regret ought to be larger for longer.
While the aforementioned regret bound offers useful insight, it becomes vacuous when \(\mathbb{H}(\chi)=\infty\). Entropy is typically infinite when \(\chi\) is continuous-valued. In order to develop tools that enable analysis of continuous variables and that lead to much tighter bounds, even when \(\mathbb{H}(\chi)<\infty\), let us introduce a new concept: the rate-distortion function. Let
\[\Theta_{\epsilon}=\left\{\tilde{\chi}:\mathbb{E}\left[\mathbf{d}_{\text{KL}} \left(P_{t}^{\chi}\|P_{t}^{\tilde{\chi}}\right)\right]\leq\epsilon\text{ for all }t\right\}\]
be the set of all random variables \(\tilde{\chi}\) that enable predictions close to those afforded by \(\chi\). The _rate_ of \(\chi\) with _distortion_ tolerance \(\epsilon\) is defined by the rate-distortion function:
\[\mathbb{H}_{\epsilon}(\chi)=\inf_{\tilde{\chi}\in\Theta_{\epsilon}}\mathbb{I}( \chi;\tilde{\chi}). \tag{10}\]
Loosely speaking, this is the number of nats required to identify a useful approximation to the learning target. The rate-distortion function \(\mathbb{H}_{\epsilon}\) serves as an alternative upper bound as well as a lower bound (Jeon et al., 2023, Theorem 12):
\[\sup_{\epsilon\geq 0}\min\left(\frac{\mathbb{H}_{\epsilon}(\chi)}{T},\epsilon \right)\leq\overline{\text{Regret}}_{\chi,\tilde{\pi}}(T|\pi)\leq\inf_{ \epsilon\geq 0}\left(\frac{\mathbb{H}_{\epsilon}(\chi)}{T}+\epsilon\right). \tag{11}\]
Even when \(\mathbb{H}(\chi)=\infty\), the rate \(\mathbb{H}_{\epsilon}(\chi)\) can increase at a modest pace as \(\epsilon\) vanishes. We build on Example 8 to offer a simple and concrete illustration. While that example was not framed as one of supervised learning, it can be viewed as such by taking each observation \(O_{t+1}\) to encode only a label \(Y_{t+1}\) resulting from a non-informative input \(X_{t}\). We will characterize the rate-distortion function and regret for each of the three learning targets considered in Example 8.
**Example 9**.: **(rate-distortion and convergence for logit data)** _Recall the environment of Example 8, in which binary labels are generated according to \(\mathbb{P}(O_{t+1}=1|\theta,H_{t},A_{t})=e^{\theta}/(1+e^{\theta})\) based on a latent variable \(\theta\sim\mathcal{N}(0,1)\)._
_For the "obvious" learning target \(\chi=\theta\) and all \(\tilde{\chi}\), \(\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{\chi}\|P_{t}^{\chi})]\leq\mathbb{E}[( \chi-\mathbb{E}[\chi|\tilde{\chi}])^{2}]\). As a result, \(\tilde{\chi}\sim\mathcal{N}(\chi,\epsilon)\) is an element of \(\Theta_{\epsilon}\), and therefore,_
\[\mathbb{H}_{\epsilon}(\chi)\leq\frac{1}{2}\ln\left(1+\frac{1}{\epsilon} \right).\]
_It follows from Equation 11 that_
\[\overline{\mathrm{Regret}}_{\chi,\tilde{\pi}}(T|\pi)\leq\ \inf_{\epsilon\geq 0} \left(\frac{1}{2T}\ln\left(1+\frac{1}{\epsilon}\right)+\epsilon\right)\ \leq\ \frac{\ln\left(1+2T\right)+1}{2T}.\]
_For the "uninformative" learning target \(\chi=\emptyset\), \(\mathbb{H}_{\epsilon}(\chi)=0\), and therefore \(\overline{\mathrm{Regret}}_{\chi,\tilde{\pi}}(T|\pi)=0\) for all \(T\). For the "overinformative" learning target \(\chi=(O_{t+1}:t\in\mathbb{Z}_{+})\), on the other hand, for \(\epsilon<\mathbb{E}[\ln(1+e^{\theta})/(1+e^{\theta})]\), \(\mathbb{H}_{\epsilon}(\chi)=\infty\) and \(\overline{r}_{\chi,T}-\overline{r}_{*,T}>\mathbb{E}[\ln(1+e^{\theta})/(1+e^{ \theta})]/2>0\)._
For this example, though the "obvious" choice of learning target \(\chi=\theta\) yields \(\mathbb{H}(\chi)=\infty\), the rate \(\mathbb{H}_{\epsilon}(\chi)\) is modest even for small positive values of \(\epsilon\). Because of this, an optimal agent converges quickly, as expressed by the \(O((\ln T)/T)\) bound. Further, knowledge of \(\theta\) enables efficient computation of predictions \(P_{t}(1)=\mathbb{P}(Y_{t+1}=1|\theta,H_{t})=e^{\theta}/(1+e^{\theta})\). On the other hand, the "uninformative" learning target \(\chi=\emptyset\) is not helpful, and with respect to the "overinformative" learning target, regret does not vanish.
#### 4.1.3 The Cosmic Learning Target
The universe can be identified by a finite number of bits. Consider a "cosmic" learning target \(\chi^{*}\) that expresses the entirety of this information. Lloyd [2002] derives an upper bound of \(\mathbb{H}(\chi^{*})\leq 10^{120}\). For an agent that retains all information about \(\chi^{*}\) that it observes, regret must vanish since, by Equation 9, \(\overline{\mathrm{Regret}}_{\chi^{*},\tilde{\pi}}(T|\pi)\leq\mathbb{H}(\chi^{*})/T\). However, this entropy is so large that the bound is effectively vacuous, establishing meaningful levels of error only for astronomically large \(T\). The following example illustrates how a simpler learning target can give rise to a more relevant analysis.
**Example 10**.: **(logit data from cosmic information)** _Consider again a supervised learning environment with binary labels, along the lines studied in Examples 8 and 9. In principle, given privileged knowledge of the cosmic learning target \(\chi^{*}\), an agent should be able to generate perfect predictions \(P_{t}^{\chi^{*}}\left(1\right)=\mathbb{P}(O_{t+1}=1|\chi^{*},H_{t},A_{t})= \mathbb{1}\left(O_{t+1}=1\right)\). This is because all uncertainty is resolved by \(\chi^{*}\). However, the rate at which \(\overline{\mathrm{Regret}}_{\chi^{*},\tilde{\pi}}(T|\pi)\) vanishes may be too slow to be relevant._
_Suppose that for some statistic \(\theta\), which is determined by \(\chi^{*}\), \(\mathbb{P}(O_{t+1}=1|\theta,H_{t},A_{t})\) is very closely approximated by \(e^{\theta}/(1+e^{\theta})\). Perhaps so closely that an agent with knowledge of \(\theta\) will not be able to tell the difference over any reasonable time frame. Taking \(\chi=\theta\) to be the learning target, we obtain the regret bound of Example 9._
A diligent reader may wonder how to reconcile the finiteness of cosmic information with infinite entropy of learning targets we sometimes consider, like in our treatment of logit data in Example 8. The continuous variable \(\chi=\theta\) of that example, for which \(\mathbb{H}(\theta)=\infty\), serves as an approximation. In particular, \(\theta\) is a random variable with respect to a probability space \((\Omega,\mathbb{F},\mathbb{P})\) and thus determined by an atomic outcome \(\omega\) in an infinite set \(\Omega\). The infinite set approximates the set of possible realizations of the cosmic learning target \(\chi^{*}\). In particular, the latter set is so large that, for practical purposes, we consider it infinite. This gives rise to a mathematical formalism that accommodates random variables with infinite entropy.
### Continual Learning
Unlike more traditional framings of machine learning, continual learning does not generally afford an obvious choice of learning target. In traditional framings, the agent is considered "done" with learning when it has gathered enough information about a learning target to make effective decisions. Performant continual learning agents perpetually ingest new information. This unending process is incentivized by constraints, as we now discuss.
#### 4.2.1 Constraints Induce Persistent Regret
Recall that practical agent designs typically maintain an agent state, with behavior characterized by functions \(\psi\) and \(\pi\). In particular, the agent state evolves according to \(U_{t+1}\sim\psi(\cdot|U_{t},A_{t},O_{t+1})\) and actions are sampled according to \(A_{t}\sim\pi(\cdot|U_{t})\). Hence, the agent policy is encoded in terms of the pair \((\psi,\pi)\). Let \(\overline{r}_{\psi,\pi}=\liminf_{T\to\infty}\mathbb{E}_{\psi,\pi}[\frac{1}{T}\sum_{t=0}^{T-1}R_{t+1}]\) denote the average reward attained by such an agent.
The information content of \(U_{t}\) is quantified by the entropy \(\mathbb{H}(U_{t})\), which is constrained by the agent's information capacity. As discussed in Section 3.3, for common scalable agent designs, computation grows with this information content. Hence, computational constraints limit information capacity. To understand implications of this restriction, in this section, we focus on the problem of agent design with fixed information capacity \(C\):
\[\begin{split}\max_{\psi,\pi}&\quad\overline{r}_{ \psi,\pi}\\ \text{s.t.}&\quad\sup_{t}\mathbb{H}(U_{t})\leq C. \end{split} \tag{12}\]
The following simple example, which is similar to one presented in (Sutton et al., 2007, Section 2), illustrates how such a constraint induces persistent regret.
**Example 11**.: **(bit flipping)** _Consider an environment that generates a sequence of binary observations \(\mathcal{O}=\{0,1\}\), initialized with \(O_{1}\) distributed \(\operatorname{Bernoulli}(1/2)\), and evolving according to \(\mathbb{P}(O_{t+1}=o|p,H_{t},A_{t})=p(1-o)+(1-p)o\), where \(p\) is a random variable. Note that \(p\) governs the probability of a bit flip. Each action is a binary prediction \(A_{t}\) of the next observation \(O_{t+1}\) and yields reward \(R_{t+1}=\mathbb{1}\left(A_{t}=O_{t+1}\right)\). In other words, the agent predicts the next bit and receives a unit of reward if its prediction is correct._
_It is natural to consider \(\chi=p\) as a learning target. In particular, with privileged knowledge of this learning target, an agent can act according to a target policy \(\tilde{\pi}\) that generates a prediction \(A_{t}=1\) if and only if \(\mathbb{P}(O_{t+1}=1|p,H_{t})>1/2\) or, equivalently, \(p(1-O_{t})+(1-p)O_{t}>1/2\). The optimal unconstrained agent policy \(\pi\) generates action \(A_{t}=1\) if and only if \(\mathbb{P}(O_{t+1}=1|H_{t})>1/2\) and satisfies \(\overline{\operatorname{Regret}}_{\chi,\tilde{\pi}}(T|\pi)\to 0\). This is because, if not hindered by constraints, an agent can identify \(p\) over time from observations._
_Now suppose the agent is constrained by an information capacity of \(C=\ln 2\ \mathrm{nats}=1\ \mathrm{bit}\). Encoding an accurate approximation of \(p\) in the agent state becomes infeasible. Instead, the optimal solution to Equation 12 is comprised of an agent state update function \(U_{t}=O_{t}\) and a policy function for which \(A_{t}=1\) if and only if \(\mathbb{E}[p](1-U_{t})+\mathbb{E}[1-p]U_{t}>1/2\). In other words, the agent simply retains its most recent observation as agent state and predicts a flip of that bit if and only if \(\mathbb{E}[p]>1/2\). This agent does not aim to identify \(\chi=p\) nor, for that matter, any nontrivial learning target. And the regret \(\overline{\operatorname{Regret}}_{\chi,\tilde{\pi}}(T|\pi)\) does not vanish._
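The effect of the capacity constraint can be seen in a quick simulation. The sketch below (our own illustration; the Beta(2,1) prior, under which \(\mathbb{E}[p]=2/3>1/2\), is an arbitrary choice) compares the unconstrained agent, which estimates \(p\) from all past transitions, with the 1-bit agent, which stores only the latest observation and always predicts a flip. When the realized \(p\) is below \(1/2\), the unconstrained agent learns to predict "no flip" while the 1-bit agent cannot, and the gap in average reward persists.

```python
import numpy as np

rng = np.random.default_rng(0)
a0, b0 = 2, 1                          # Beta(2,1) prior on p, so E[p] = 2/3
p = rng.beta(a0, b0)                   # realized flip probability

T = 100_000
o = int(rng.integers(2))               # O_1 ~ Bernoulli(1/2)
flips, steps = a0, a0 + b0             # Beta posterior pseudo-counts (unconstrained agent)
reward_full = reward_bit = 0

for t in range(T):
    # Unconstrained agent: predict a flip iff the posterior mean of p exceeds 1/2.
    pred_full = 1 - o if flips / steps > 0.5 else o
    # 1-bit agent: agent state is the last observation; always predicts a flip
    # because E[p] = 2/3 > 1/2 under the prior.
    pred_bit = 1 - o
    o_next = 1 - o if rng.random() < p else o
    reward_full += (pred_full == o_next)
    reward_bit += (pred_bit == o_next)
    flips += (o_next != o)
    steps += 1
    o = o_next

print(f"realized flip probability p = {p:.3f}")
print(f"average reward, unconstrained agent: {reward_full / T:.3f}")
print(f"average reward, 1-bit agent:         {reward_bit / T:.3f}")
```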
#### 4.2.2 Nonstationary Learning Targets
Continual learning is often characterized as addressing nonstationary environments. But whether an environment is nonstationary is a subjective matter. With respect to the cosmic learning target \(\chi^{*}\), _any_ environment is stationary. However, as illustrated by Example 10, it can be useful to view an environment as nonstationary with respect to a simpler latent stochastic process \((\theta_{t}:t\in\mathbb{Z}_{++})\). An agent that tracks this process can be viewed at each time \(t\) as trying to learn about \(\theta_{t}\). This suggests viewing \(\chi_{t}=\theta_{t}\) as a time-varying - or _nonstationary_ - learning target.
Motivated by this view, we interpret _nonstationarity_ as indicating agent behavior that is naturally explained by a time-varying learning target. Example 11 illustrates how this sort of behavior is incentivized by restricting information capacity. The optimal unconstrained agent learns about the latent variable \(p\), which constitutes a fixed learning target. On the other hand, the capacity-constrained agent learns at each time about \(O_{t}\). This bit of information is ingested into the agent state and can be viewed as a nonstationary learning target \(\chi_{t}=O_{t}\).
Our narrative is aligned with that of Sutton et al. (2007), who suggest that tracking a nonstationary learning target is warranted even in a stationary environment if the environment is sufficiently complex. The cosmic learning target \(\chi^{*}\) is helpful in making this point. Consider a supervised learning environment for which an enormous amount of information is required to attain reasonable performance. In other words, for any reasonable \(\epsilon\), \(\mathbb{H}_{\epsilon}(\chi^{*})\) is huge. An agent with modest capacity \(C\ll\mathbb{H}_{\epsilon}(\chi^{*})\) ought not aim to accumulate all information required to attain error \(\epsilon\) because of its insufficient information capacity. However, it may be possible to attain error \(\epsilon\) by only retaining at each time step information that will be helpful in the near term. This amounts to tracking a nonstationary learning target.
Our notion of a nonstationary learning target relates to the work of Abel et al. (2023), which proposes a definition of continual reinforcement learning. This definition offers a formal expression of what it means for an agent to never stop learning. The characterization is subjective in that it defines non-convergence with respect to a _basis_, which is a fixed set of policies. An agent is interpreted as searching among these and then of converging if it settles on one
that it follows thereafter. The work associates continual learning with non-convergence. A learning target offers an alternative subjective construct for interpretation of agent behavior. Loosely speaking, if an agent converges on an element of the basis, that can be thought of as deriving from a fixed learning target. Perpetually transitioning among elements of the basis is akin to pursuing a nonstationary learning target.
### On Learning About an Unknown Environment
In traditional framings of machine learning, it is common to characterize the environment as a random (unknown) latent variable. The agent is then interpreted as learning about the environment - that is, this random variable - from its experience. Indeed, the random environment can serve as a learning target in the vein of vanishing-regret learning. In supervised learning, if the input distribution is known, this latent variable may be the unknown function that maps inputs to distributions over labels. In some traditional framings of reinforcement learning, it may be the unknown transition probability matrix of a Markov Decision Process (MDP).
In contrast, our general formulation of continual learning, as described in Section 2.1, takes the environment, and in particular, the observation probability function \(\rho\), to be known. With this formulation, the environment is no longer a random variable and cannot serve as a useful learning target. In this subsection, we will motivate this decision. We first discuss why a deterministic characterization is general and minimal. We then explain that, because the sort of environment of interest to continual learning does not give rise to an obvious fixed learning target, random characterizations of the environment do not afford the kind of interpretation and analysis enjoyed by vanishing-regret learning.
As discussed in Section 2.1, the dynamics represented by a random observation probability function \(\tilde{\rho}\) can be expressed in terms of a deterministic one \(\rho\) for which \(\rho(\cdot|H_{t},A_{t})=\mathbb{E}[\tilde{\rho}(\cdot|H_{t},A_{t})|H_{t},A_{ t}]\). The converse also holds. In particular, for any deterministic \(\rho\) and random variable \(\chi\), we can define an equivalent random \(\tilde{\rho}\) for which \(\tilde{\rho}(\cdot|H_{t},A_{t})=\mathbb{P}(O_{t+1}\in\cdot|\chi,H_{t},A_{t})\). Note that \(\tilde{\rho}\) offers an equivalent characterization because \(\rho(\cdot|H_{t},A_{t})=\mathbb{P}(O_{t+1}\in\cdot|H_{t},A_{t})=\mathbb{E}[ \mathbb{P}(O_{t+1}\in\cdot|\chi,H_{t},A_{t})|H_{t},A_{t}]=\mathbb{E}[\tilde{ \rho}(\cdot|H_{t},A_{t})|H_{t},A_{t}]\). Since this is true for any random variable \(\chi\), there are many random environments that represent the same deterministic environment. As such, the deterministic representation is minimal, while the random representation requires choice of a random variable \(\chi\), which is inconsequential to the dynamics of the environment.
In the traditional framing of supervised learning, data pairs are assumed to be _exchangeable_. Consequently, de Finetti's Theorem establishes existence of a learning target \(\chi\) conditioned on which the data pairs are iid [de Finetti, 1929]. It is natural to think of \(\chi\) as identifying an unknown environment and bringing to bear interpretation and analysis afforded by vanishing-regret learning. Specifically, assuming the input distribution is known, \(\chi\) is often taken to be an unknown function that maps inputs to distributions over labels. A similar story goes for the traditional framing of reinforcement learning. Under a weaker exchangeability condition presented in Appendix A, it is natural to characterize dynamics in terms of an unknown MDP \(\chi\). In both cases, an agent can learn over time to do about as well as if \(\chi\) were known.
In contrast, environments of the sort considered in continual learning tend not to give rise to an obvious choice of fixed learning target. In particular, regret generally does not vanish with respect to any particular learning target that enables efficient computation of performant actions. For example, Liu et al. [2023a] discusses how regret does not vanish with respect to common choices of learning targets in non-stationary bandit learning. Absent the analytic role of a learning target, we opted for minimality in using a deterministic environment characterization for our formulation of continual learning. That being said, for the purpose of exposition, it can often be more intuitive to characterize a specific environment using a latent variable model. For instance, many of the examples in this monograph are described as latent AR(1) processes, such as the scalar tracking problem in Section 2.3.1. Such a characterization can be helpful to facilitate interpretation.
## Summary
* A **learning target** (denoted by \(\chi\)) is a random variable that represents what an agent aims to learn about.
* A **target policy** (denoted by \(\tilde{\pi}_{\chi}\)) is a random variable that takes values in the set of policies, representing the policy the agent would use if \(\chi\) were known.
* For each duration \(T\in\mathbb{Z}_{++}\), \[\overline{r}_{\pi,T}=\mathbb{E}_{\pi}\left[\frac{1}{T}\sum_{t=0}^{T-1}R_{t+1} \right]\qquad\text{and}\qquad\overline{r}_{\chi,\tilde{\pi},T}=\mathbb{E}_{ \tilde{\pi}_{\chi}}\left[\frac{1}{T}\sum_{t=0}^{T-1}R_{t+1}\Big{|}\chi\right].\]
* The **average regret** over duration \(T\) incurred by \(\pi\) w.r.t \((\chi,\tilde{\pi})\) is \[\overline{\text{Regret}}_{\chi,\tilde{\pi}}(T|\pi)=\mathbb{E}\left[\overline{ r}_{\chi,\tilde{\pi},T}-\overline{r}_{\pi,T}\right].\]
* \(\pi\) exhibits **vanishing regret** w.r.t. \((\chi,\tilde{\pi})\) if \(\limsup_{T\to\infty}\overline{\text{Regret}}_{\chi,\tilde{\pi}}(T|\pi)\leq 0\).
* In the special case of **continual supervised learning**, \(P_{t}^{*}(y)=\mathbb{P}(Y_{t+1}=y|H_{t})\) maximizes \(\overline{r}_{\pi,T}\) and \(P_{t}^{\chi}(y)=\mathbb{P}(Y_{t+1}=y|\chi,H_{t})\) maximizes \(\overline{r}_{\chi,\tilde{\pi},T}\) for every duration \(T\). For the above optimal choices of \(\pi,\tilde{\pi}_{\chi}\), \[\overline{\text{Regret}}_{\chi,\tilde{\pi}}(T|\pi)=\mathbb{E}\left[\frac{1}{T}\sum_{t=0}^{T-1}\mathbf{d}_{\text{KL}}(P_{t}^{\chi}\|P_{t}^{*})\right].\] For all durations \(T\), \[\overline{\text{Regret}}_{\chi,\tilde{\pi}}(T|\pi)\leq\frac{\mathbb{H}(\chi)}{T}.\] For continuous-valued \(\chi\), \(\mathbb{H}(\chi)\) is often \(\infty\), leading to a vacuous upper bound. The bound can be improved via _rate-distortion theory_. For any \(\epsilon\geq 0\), let \[\Theta_{\epsilon}=\left\{\tilde{\chi}:\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{\chi}\|P_{t}^{\tilde{\chi}})]\leq\epsilon\text{ for all }t\right\}\] be the set of all random variables \(\tilde{\chi}\) which enable predictions with _distortion_ at most \(\epsilon\). The _rate_ of \(\chi\) with _distortion_ tolerance \(\epsilon\) is defined by the **rate-distortion function**: \[\mathbb{H}_{\epsilon}(\chi)=\inf_{\tilde{\chi}\in\Theta_{\epsilon}}\mathbb{I}(\chi;\tilde{\chi}).\] For all durations \(T\), \[\sup_{\epsilon\geq 0}\min\left(\frac{\mathbb{H}_{\epsilon}(\chi)}{T},\epsilon\right)\leq\overline{\text{Regret}}_{\chi,\tilde{\pi}}(T|\pi)\leq\inf_{\epsilon\geq 0}\left(\frac{\mathbb{H}_{\epsilon}(\chi)}{T}+\epsilon\right).\]
* The notion of **learning about an uncertain environment** does not play a central role in continual learning as it does in vanishing-regret learning:
* In vanishing-regret learning, it is common to characterize the environment as a **random (unknown) latent variable**.
* In contrast, our general formulation of continual learning takes the environment, and in particular, the observation probability function \(\rho\), to be **deterministic (known)**. This characterization is general and minimal.
* In continual learning, random characterizations of the environment do not generally afford the kind of interpretation and analysis enjoyed by vanishing-regret learning. This is because performant agents do not generally focus on identifying a fixed learning target.
## Stability Versus Plasticity
While the subjects of _catastrophic forgetting_ and _loss of plasticity_ have attracted a great deal of attention in the continual learning literature, the terms have lacked definitional clarity of the sort that would afford coherent reasoning about tradeoffs. Both notions relate to how an agent processes information. Catastrophic forgetting refers to elimination of useful information. Loss of plasticity refers to an inability to ingest useful new information. In this section, we build on information-theoretic tools introduced in the previous section to formalize these concepts and clarify the interaction between information capacity, stability, and plasticity. To keep the analysis simple, we restrict attention to the special case of continual supervised learning, as presented in Section 2.3.4. Recall that each action is a predictive distribution \(A_{t}=P_{t}\) and each reward \(R_{t+1}=\ln P_{t}(Y_{t+1})\) is the logarithm of the probability assigned to the realized label.
### Stability-Plasticity Decomposition
If the agent had infinite information capacity and could maintain history as its agent state \(U_{t}=H_{t}\), then the informational error would be \(\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|\tilde{P}_{t})]=\mathbb{I}(Y_{t+1};H_{t}|H_{t})=0\). However, with an agent state that retains partial information, the error will typically be larger. In this way, and as expressed by Theorem 3, a constraint on capacity can induce error. This happens by requiring that the agent either forget some old information, forgo ingestion of some new information, or both.
We can formalize the relation between these quantities in information-theoretic terms. At each time step \(t\), the informational error \(\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|\tilde{P}_{t})]\) arises as a consequence of useful information forgotten or forgone due to implasticity over previous time steps. As shorthand, let \(H_{t-k:t}=(O_{t-k},\ldots,O_{t})\). The error due to information forgotten \(k\) time steps earlier can be expressed as \(\mathbb{I}(Y_{t+1};U_{t-k-1}|U_{t-k},H_{t-k:t})\): the information about \(Y_{t+1}\) that is available in \(U_{t-k-1}\) but neither in the next agent state \(U_{t-k}\) nor in the subsequent experience \(H_{t-k:t}\). In other words, this is the information lost in transitioning from \(U_{t-k-1}\) to \(U_{t-k}\) and not recoverable by time step \(t\). The error due to implasticity \(k\) time steps earlier can be expressed as \(\mathbb{I}(Y_{t+1};O_{t-k}|U_{t-k},H_{t-k+1:t})\): the information about \(Y_{t+1}\) that is available in \(O_{t-k}\) but absent from the preceding agent state \(U_{t-k}\) and the subsequent experience \(H_{t-k+1:t}\). In other words, this is information presented by \(O_{t-k}\) but not \(U_{t-k}\) and that is not otherwise available through time step \(t\).
The following theorem provides a formal decomposition, attributing error to forgetting and implasticity. Note that actions are omitted because in the case of supervised learning they do not impact observations.
**Theorem 4**.: _For all \(t\in\mathbb{Z}_{+}\),_
\[\underbrace{\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|\tilde{P}_{t})]}_{\mathrm{error}}=\sum_{k=0}^{t}\left(\underbrace{\mathbb{I}(Y_{t+1};U_{t-k-1}|U_{t-k},H_{t-k:t})}_{\mathrm{forgetting\ at\ lag\ }k}+\underbrace{\mathbb{I}(Y_{t+1};O_{t-k}|U_{t-k},H_{t-k+1:t})}_{\mathrm{implasticity\ at\ lag\ }k}\right).\]
Proof.: We have
\[\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|\tilde{P}_{t})] \stackrel{(a)}{=}\mathbb{I}(Y_{t+1};H_{t}|U_{t})\] \[\stackrel{(b)}{=}\mathbb{I}(Y_{t+1};H_{t},U_{0:t-1}|U_{t})\] \[\stackrel{(c)}{=}\sum_{k=0}^{t}\mathbb{I}(Y_{t+1};O_{t-k},U_{t-k-1}|U_{t-k:t},H_{t-k+1:t})\] \[\stackrel{(d)}{=}\sum_{k=0}^{t}\mathbb{I}(Y_{t+1};O_{t-k},U_{t-k-1}|U_{t-k},H_{t-k+1:t})\] \[\stackrel{(e)}{=}\sum_{k=0}^{t}\left(\mathbb{I}(Y_{t+1};U_{t-k-1}|U_{t-k},H_{t-k:t})+\mathbb{I}(Y_{t+1};O_{t-k}|U_{t-k},H_{t-k+1:t})\right),\]
where \((a)\) follows from Theorem 2; \((b)\) follows from \(U_{0:t-1}\perp Y_{t+1}|U_{t},H_{t}\); \((c)\) follows from the chain rule of mutual information, and we take \(Y_{0}\) and \(U_{-1}\) to be the singleton \(\emptyset\); \((d)\) follows from the fact that for all \(j\), \(U_{j+1}\) is determined by \((U_{j},O_{j+1})\) and independent algorithmic randomness; and \((e)\) follows from the chain rule of mutual information.
Implasticity and forgetting terms are indexed by a lag \(k\) relative to the current time step \(t\). In the process of updating the agent state from \(U_{t-k-1}\) to \(U_{t-k}\) in response to an observation, information may be ingested and/or forgotten. The implasticity and forgetting terms measure how this ultimately impacts the agent's ability to predict \(Y_{t+1}\). Summing over lags \(k\) produces the immediate error \(\mathbb{I}(Y_{t+1};H_{t}|U_{t})\).
The implasticity at lag \(k\) measures the amount of information presented by \(O_{t-k}\) that is useful for predicting \(Y_{t+1}\) that the agent fails to ingest and is absent from the intermediate data \(H_{t-k+1:t}\). Note that, due to the conditioning on the data \(H_{t-k+1:t}\), this term only penalizes the inability to extract information about \(Y_{t+1}\) that will not be again available from observations between \(t-k\) and \(t\).
The forgetting at lag \(k\) derives from the agent ejecting information that is relevant to predicting \(Y_{t+1}\) when updating its agent state from \(U_{t-k-1}\) to \(U_{t-k}\). Again, due to conditioning on \(H_{t-k:t}\), this term only penalizes for forgotten information that cannot be recovered from other observations to be made before time step \(t\).
Our decomposition indicates that errors due to implasticity and forgetting are _forward looking_ in the sense that they only impact predictions at subsequent times; thus the lag \(k\) relative to prediction error. This is in contrast with much of the continual SL literature, which aims to develop agents that remember information that would have been useful in their past, even if that is unlikely to be useful to their future. Indeed, the term _catastrophic forgetting_ typically refers to loss of useful information, whether useful in the future or past.
Our expressions for implasticity and forgetting are complicated by sums over lags. The expressions simplify greatly when the input sequence \(X_{t}\) is independent of history, i.e., \(X_{t+1}\perp H_{t}\), and when the sequence of agent states and observations forms a stationary stochastic process. This corresponds to a prototypical nonstationary supervised learning formulation: iid inputs and a mapping to label probabilities that is modulated by a stationary stochastic process.
**Theorem 5**.: _Suppose \(((U_{t},O_{t}):t\in\mathbb{Z}_{+})\) is a stationary process and, for all \(t\in\mathbb{Z}_{+}\), \(X_{t+1}\perp H_{t}\). Then,_
\[\lim_{t\to\infty}\underbrace{\mathbb{E}[\mathbf{d}_{\mathrm{KL}}(P _{t}^{*}\|\tilde{P}_{t})]}_{\mathrm{error}}=\lim_{t\to\infty}\left(\underbrace{ \mathbb{I}(H_{t+1:\infty};U_{t-1}|U_{t},O_{t})}_{\mathrm{ forgetting}}+\underbrace{ \mathbb{I}(H_{t+1:\infty};O_{t}|U_{t})}_{\mathrm{implasticity}}\right).\]
Proof.: We establish the result by taking limits of the expressions from Theorem 4 for cumulative forgetting and implasticity. The forgetting term yields
\[\lim_{t\to\infty}\sum_{k=0}^{t}\mathbb{I}(Y_{t+1};U_{t-k-1}|U_{t -k},H_{t-k:t}) \stackrel{{(a)}}{{=}}\lim_{t\to\infty}\sum_{k=0}^{t} \mathbb{I}(Y_{t+k+1};U_{t-1}|U_{t},H_{t:t+k})\] \[\stackrel{{(b)}}{{=}}\lim_{t\to\infty}\sum_{k=0}^{t} \mathbb{I}(X_{t+k+1},Y_{t+k+1};U_{t-1}|U_{t},H_{t:t+k})\] \[\stackrel{{(c)}}{{=}}\lim_{t\to\infty}\mathbb{I}(H_{ t+1:2t+1};U_{t-1}|U_{t},O_{t})\] \[=\mathbb{I}(H_{t+1:\infty};U_{t-1}|U_{t},O_{t}),\]
where \((a)\) follows from the fact that since \((U_{t},O_{t})\) is stationary, \(\mathbb{I}(Y_{t+1};U_{t-k-1}|U_{t-k},H_{t-k:t})\) is a constant independent of \(t\), \((b)\) follows from the fact that \(X_{t+k+1}\perp H_{t+k}\), and \((c)\) follows from the chain rule of mutual information. The implasticity term yields
\[\lim_{t\to\infty}\sum_{k=0}^{t}\mathbb{I}(Y_{t+1};O_{t-k}|U_{t-k },H_{t-k+1:t}) \stackrel{{(a)}}{{=}}\lim_{t\to\infty}\sum_{k=0}^{t} \mathbb{I}(Y_{t+k+1};O_{t}|U_{t},H_{t+1:t+k})\] \[\stackrel{{(b)}}{{=}}\lim_{t\to\infty}\sum_{k=0}^{t} \mathbb{I}(X_{t+k+1},Y_{t+k+1};O_{t}|U_{t},H_{t+1:t+k})\] \[\stackrel{{(c)}}{{=}}\lim_{t\to\infty}\mathbb{I}(H_{ t+1:2t+1};O_{t}|U_{t})\] \[=\mathbb{I}(H_{t+1:\infty};O_{t}|U_{t}),\]
where \((a)\) follows from the fact that since \((U_{t},O_{t})\) is stationary, \(\mathbb{I}(Y_{t+1};O_{t-k}|U_{t-k},H_{t-k+1:t})\) is a constant independent of \(t\), \((b)\) follows from the fact that \(X_{t+k+1}\perp H_{t+k}\), and \((c)\) follows from the chain rule of mutual information.
The first term on the right-hand side equates error due to forgetting with the information available in the previous agent state \(U_{t-1}\), but neither in the subsequent agent state \(U_{t}\) nor the subsequent observation \(O_{t}\). The second term characterizes error due to implasticity as the information about the future \(H_{t+1:\infty}\) available in the current observation \(O_{t}\) but not ingested into the current agent state \(U_{t}\).
### A Didactic Example
We will illustrate concretely through a simple example how agent and environment dynamics influence implasticity and forgetting errors. We focus here on insight that can be drawn from analysis of the example, deferring details of the analysis to Appendix B.
#### 5.2.1 LMS with an AR(1) Process
We consider a very simple instance of continual SL in which the input set is a singleton \(\mathcal{X}=\{\emptyset\}\) and the label set \(\mathcal{Y}=\Re\) is the set of real numbers. Since inputs \(X_{0},X_{1},\ldots\) are uninformative, we take the history to only include labels \(H_{t}=(Y_{1},\ldots,Y_{t})\). Each label is generated according to
\[Y_{t+1}=\theta_{t}+W_{t+1},\]
where \(\theta_{t}\) is a latent variable that represents the state of the process and \(W_{t+1}\) is a sample of an iid \(\mathcal{N}(0,\sigma^{2})\) sequence. The sequence \(\theta_{t}\) is initialized with \(\theta_{0}\sim\mathcal{N}(0,1)\) and evolves according to
\[\theta_{t+1}=\eta\theta_{t}+V_{t+1},\]
where \(\eta\) is a fixed parameter and \(V_{t+1}\) is a sample of an iid \(\mathcal{N}(0,1-\eta^{2})\) sequence. Note that, for all \(t\), the marginal distribution of \(\theta_{t}\) is standard normal.
Consider an agent that maintains a real-valued agent state, initialized with \(U_{0}\sim\mathcal{N}(0,1)\) and updated according to the least mean squares (LMS) algorithm (Widrow and Hoff, 1960)
\[U_{t+1}=U_{t}+\alpha(Y_{t+1}-U_{t}),\]
with a fixed stepsize \(\alpha\in(0,1)\). This agent can then generate predictions \(P_{t}(\cdot)=\mathbb{P}(Y_{t+1}\in\cdot|U_{t})\), which are Gaussian distributions. This agent does not necessarily make optimal use of history, so \(\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|P_{t})\) may be positive. However, for a suitably chosen stepsize, the agent attains zero error in steady state: \(\lim_{t\to\infty}\mathbf{d}_{\mathrm{KL}}(P_{t}^{*}\|P_{t})=0\). With this optimal stepsize, the LMS algorithm becomes a steady-state Kalman filter (Kalman, 1960).
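To make this concrete, the following sketch simulates the AR(1) label process and the LMS agent described above. The parameter values are illustrative choices rather than ones prescribed by the analysis, and the reported quantity is simply the mean squared prediction error of the agent state.

```python
import numpy as np

def simulate_lms(eta=0.95, sigma=0.5, alpha=0.1, T=10_000, seed=0):
    """Simulate the AR(1) label process and an LMS agent with a fixed stepsize."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 1.0)   # latent state, theta_0 ~ N(0, 1)
    U = rng.normal(0.0, 1.0)       # agent state, U_0 ~ N(0, 1)
    squared_errors = []
    for _ in range(T):
        Y = theta + rng.normal(0.0, sigma)    # label: Y_{t+1} = theta_t + W_{t+1}
        squared_errors.append((Y - U) ** 2)   # error of the agent's mean prediction
        U = U + alpha * (Y - U)               # LMS update
        theta = eta * theta + rng.normal(0.0, np.sqrt(1.0 - eta ** 2))  # AR(1) transition
    return float(np.mean(squared_errors))

print(simulate_lms())
```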
#### 5.2.2 Constraining Information Content
Note that, for our agent, the forgetting error \(\mathbb{I}(Y_{t+1};U_{t-k-1}|U_{t-k},H_{t-k:t})\) is always zero because \(U_{t-k-1}\) is determined by the next agent state \(U_{t-k}\) and the label \(Y_{t-k}\), which makes up part of \(H_{t-k:t}\), according to \(U_{t-k-1}=(U_{t-k}-\alpha Y_{t-k})/(1-\alpha)\). This degeneracy stems from the agent's infinite capacity: since the agent state is a real number, it can encode an infinite amount of information. As discussed earlier, practical designs of agents subject to computational constraints limit information capacity. As a microcosm that may yield insight relevant to such contexts, we will consider a variant of LMS with bounded information capacity.
More relevant qualitative behavior emerges when the agent restricts the information content \(\mathbb{H}(U_{t})\) of the agent state to operate with limited information capacity. Such a constraint arises, for example, if instead of retaining a continuous-valued variable the agent must quantize, encoding an approximation to the agent state with a finite alphabet. The state of such an agent might evolve according to
\[U_{t+1}=U_{t}+\alpha(Y_{t+1}-U_{t})+Q_{t+1}, \tag{13}\]
where \(Q_{t+1}\) represents quantization error. We can think of this error as quantization noise, which perturbs results of agent state updates. In particular, it is the difference between the real value \(U_{t}+\alpha(Y_{t+1}-U_{t})\) that the agent would store as \(U_{t+1}\), if it could, and the quantized value that it actually stores.
Because the effects of quantization noise can be difficult to quantify, for the purpose of analysis, it is common in information theory to approximate quantization noise as an independent Gaussian random variable. In some cases, this does not impact qualitative insights [e.g., Cover and Thomas 2012, Theorem 10.3.2]. For analytical tractability, we will approximate the quantization noise \((Q_{t}:t\in\mathbb{Z}_{++})\) as an iid \(\mathcal{N}(0,\delta^{2})\) sequence, for some \(\delta>0\), which we will refer to as the _quantization noise intensity_.
Like actual quantization, this Gaussian noise moderates information content of the agent state. However, while actual quantization keeps the information content \(\mathbb{H}(U_{t})\) bounded, with Gaussian noise, \(\mathbb{H}(U_{t})\) becomes infinite. This is because \(U_{t}\) encodes infinite irrelevant information expressed by the Gaussian noise itself. When Gaussian noise is used to approximate quantization, rather than \(\mathbb{H}(U_{t})\), it is more appropriate to take the information to be \(\mathbb{I}(U_{t};H_{t})\). This represents the number of nats of information from the history \(H_{t}\) retained by the agent state \(U_{t}\). In particular, \(\mathbb{I}(U_{t};H_{t})\) excludes irrelevant information expressed by the Gaussian noise \(Q_{t}\).
As one would expect, for our setting of LMS with an AR(1) process, \(\mathbb{I}(U_{t};H_{t})\) is infinite when \(\delta=0\), then monotonically decreases, vanishing as the quantization noise intensity \(\delta\) increases. We impose a constraint \(\mathbb{I}(U_{t};H_{t})\leq C\) to express a fixed information capacity \(C\). Given a choice of stepsize \(\alpha\), we take the quantization noise intensity to be the value \(\delta_{*}\) so that this constraint is binding: \(\mathbb{I}(U_{t};H_{t})=C\). This models a quantization scheme that maximizes information content subject to the information capacity. As established by Theorem 12 in Appendix B.1,
\[\delta_{*}(\alpha)=\alpha^{2}\frac{\sigma^{2}(1-\eta+\eta\alpha)+1+\eta-\eta \alpha}{1-\eta+\eta\alpha}\frac{\exp(-2C)}{1-\exp(-2C)}. \tag{14}\]
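As a concrete illustration, the sketch below transcribes Equation (14) and uses it in the capacity-constrained update of Equation (13). It assumes that \(\delta_{*}(\alpha)\) denotes the standard deviation of the quantization noise and that \(C>0\); both are modeling choices made here for illustration rather than details fixed by the text.

```python
import numpy as np

def delta_star(alpha, eta, sigma, C):
    """Quantization noise intensity delta_*(alpha), transcribing Equation (14); requires C > 0."""
    return (
        alpha ** 2
        * (sigma ** 2 * (1 - eta + eta * alpha) + 1 + eta - eta * alpha)
        / (1 - eta + eta * alpha)
        * np.exp(-2 * C) / (1 - np.exp(-2 * C))
    )

def capacity_constrained_lms_step(U, Y, alpha, eta, sigma, C, rng):
    """One update of Equation (13): an LMS step perturbed by Gaussian quantization noise."""
    Q = rng.normal(0.0, delta_star(alpha, eta, sigma, C))  # quantization noise Q_{t+1}
    return U + alpha * (Y - U) + Q
```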
#### 5.2.3 Analysis
Figure 5 plots errors due to implasticity \(\mathbb{I}(Y_{t+1:\infty};Y_{t}|U_{t})\) and forgetting \(\mathbb{I}(Y_{t+1:\infty};U_{t-1}|U_{t},Y_{t})\) versus stepsize, for asymptotically large \(t\) and observation noise standard deviation \(\sigma=0.5\), an information capacity \(C=2\), and autoregressive model coefficients \(\eta=0.9,0.95,0.99\). As \(\alpha\) increases, the forgetting increases and the implasticity decreases. Then, as \(\alpha\) approaches one, the former decreases and the latter increases, albeit to much lesser extents. It is natural for a small stepsize to reduce forgetting and implasticity since increasing the stepsize increases the weight placed on recent versus previously observed data. For the same reason, as the figure indicates, the optimal stepsize \(\alpha^{*}\) decreases as \(\eta\) increases. This is because a larger coefficient \(\eta\) increases the mixing time of the process, and this warrants placing more weight on less recent observations to increase the duration over which they influence predictions.
Figure 6 plots the optimal stepsize as a function of \(\eta\) (Figure 6(a)), the information capacity (Figure 6(b)), and the quantization noise intensity \(\delta\) (Figure 6(c)). Note that we use \(\alpha^{*}\) to represent the optimal stepsize for fixed \(C\) and \(\tilde{\alpha}\) for fixed \(\delta\). As suggested by Figure 5, as \(\eta\) increases, the optimal stepsize decreases. This makes sense since the mixing time of the AR(1) process scales with \(1/(1-\eta)\), while the duration over which LMS averages observations scales with \(1/\alpha\). As the mixing time grows, the optimal stepsize \(\alpha^{*}\) decreases to induce averaging over longer durations. Interestingly, as demonstrated by the second plot, the optimal stepsize does not vary with
Figure 5: Average forgetting and implasticity errors versus the stepsize \(\alpha\) for environments with observation noise standard deviation \(\sigma=0.5\), agent capacity \(\mathbb{I}(U_{t};H_{t})=2\), and autoregressive model coefficients \(\eta=0.9,0.95,0.99\). As \(\alpha\) increases, the forgetting error increases and the implasticity error decreases, then the former decreases and the latter increases, albeit to much lesser extents. As \(\eta\) increases, the optimal stepsize \(\alpha^{*}\) decreases.
the information capacity \(C\). This observation is generalized and formalized by Theorem 15 in the appendix. It is interesting to note, however, that this does not imply invariance of the optimal stepsize to the quantization noise intensity \(\delta\). In particular, when information content is unconstrained, the optimal stepsize varies with \(\delta\), as demonstrated in Figure 6(c).
#### 5.2.4 Stepsize Adaptation
Given a capacity constraint, calculating \(\alpha^{*}\) requires knowledge of \(\eta\). A stepsize adaptation scheme can alleviate this need, instead incrementally adjusting the stepsize to produce a sequence, aiming to converge on \(\alpha^{*}\). As an example, we consider a variation of IDBD (Sutton, 1992). In particular, suppose we update the agent state according to
\[U_{t+1}=U_{t}+\alpha_{t+1}(Y_{t+1}-U_{t})+Q_{t+1},\]
where \(\alpha_{t+1}=e^{\beta_{t+1}}\) and \((\beta_{t}:t\in\mathbb{Z}_{++})\) is a scalar sequence generated according to
\[\beta_{t+1}=\beta_{t}+\zeta(Y_{t+1}-U_{t})h_{t}-\frac{1}{2}\zeta\alpha_{t} \frac{d}{d\alpha}\delta_{*}^{2}(\alpha_{t}),\]
\[h_{t+1}=\alpha_{t+1}(Y_{t+1}-U_{t})+(1-\alpha_{t+1})_{+}h_{t}.\]
The standard version of IDBD (Sutton, 1992) does not include the term that depends on \(\delta_{*}\). This term serves to adjust the quantization noise variance in response to changes in the stepsize \(\alpha_{t}\). For any particular stepsize \(\alpha\), \(\delta_{*}(\alpha)\) is the intensity at which the information capacity constraint \(\mathbb{I}(U_{t};H_{t})\leq C\) becomes binding. Hence, ours is a capacity-constrained version of IDBD. The functional form of our extra term is derived in Appendix B.4.
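A minimal sketch of a single capacity-constrained IDBD update appears below. It reuses the `delta_star` function from the preceding sketch, approximates \(\frac{d}{d\alpha}\delta_{*}^{2}(\alpha)\) with a finite difference, and names the meta-stepsize `meta_stepsize` in place of \(\zeta\); these are implementation conveniences rather than details specified above.

```python
import numpy as np  # delta_star is as defined in the preceding sketch

def capacity_constrained_idbd_step(U, beta, h, Y, eta, sigma, C, meta_stepsize, rng, eps=1e-6):
    """One capacity-constrained IDBD update of (U_t, beta_t, h_t) upon observing Y_{t+1}."""
    alpha = np.exp(beta)  # alpha_t = e^{beta_t}
    # Finite-difference approximation of d/d alpha of delta_*(alpha)^2, evaluated at alpha_t.
    d_delta_sq = (delta_star(alpha + eps, eta, sigma, C) ** 2
                  - delta_star(alpha, eta, sigma, C) ** 2) / eps
    beta_next = beta + meta_stepsize * (Y - U) * h - 0.5 * meta_stepsize * alpha * d_delta_sq
    alpha_next = np.exp(beta_next)
    Q = rng.normal(0.0, delta_star(alpha_next, eta, sigma, C))      # quantization noise
    U_next = U + alpha_next * (Y - U) + Q
    h_next = alpha_next * (Y - U) + max(1.0 - alpha_next, 0.0) * h  # (1 - alpha)_+ h_t
    return U_next, beta_next, h_next
```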
Figure 7(a) demonstrates that capacity-constrained IDBD converges on the optimal stepsize for any given information capacity constraint. This limit of convergence identifies not only a stepsize \(\alpha^{*}\) but also a quantization noise intensity \(\delta_{*}(\alpha^{*})\). The standard version of IDBD, even if executed with quantization noise intensity fixed at \(\delta=\delta_{*}(\alpha^{*})\), converges on a different stepsize, which is optimal for that intensity but not optimal subject to the capacity constraint. Consequently, as Figure 7(b) indicates, the error attained by capacity-constrained IDBD approaches the optimal error under the agent capacity, whereas the error attained by the standard version of IDBD does not.
Updating \(\beta_{t}\) relies on knowledge of the function \(\delta_{*}\), which depends on \(\eta\). This runs counter to the purpose of stepsize adaptation schemes, which ought to arrive at \(\alpha^{*}\) without knowledge of \(\eta\). However, the Gaussian noise \(Q_{t+1}\) represents a simplified abstraction of quantization effects that would manifest if an agent were to encode information in finite memory. With a real encoding algorithm, the capacity constraint is physical and inevitable. As such, the agent need not itself derive the quantization noise intensity and thus may not require knowledge of \(\eta\). How to devise effective stepsize adaptation schemes for practical capacity-constrained agents presents an interesting problem for future research.
A flaw in the narrative of this section is that it does not account for information capacity required to maintain \(\alpha_{t}\), \(\beta_{t}\), and \(h_{t}\), which should be incorporated into the agent state \(U_{t}\) if the agent implements capacity-constrained
Figure 6: As \(\eta\) increases, so does the mixing time \(1/(1-\eta)\). The optimal stepsize \(\alpha^{*}\) decreases in response to induce averaging over a longer duration. The information capacity \(C\) does not impact the optimal stepsize \(\alpha^{*}\). However, the optimal stepsize \(\tilde{\alpha}\) increases with the quantization noise standard deviation \(\delta\). These plots are generated with observation noise \(\sigma=0.5\). The center and right plots are generated with \(\eta=0.9\). For the center plot, \(\delta\) is chosen to meet the capacity constraint.
IDBD. In that event, the information capacity \(C\) must support a joint quantized estimate of \(\mu_{t}\) and these additional parameters. How information capacity should be managed in such situations remains an interesting subject for research.
## Summary
* In continual supervised learning, the informational error \(\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}\|\tilde{P}_{t})]\) can be decomposed into **forgetting** and **implasticity**: \[\underbrace{\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}\|\tilde{P}_{t})]}_{\text{error}}=\sum_{k=0}^{t}\left(\underbrace{\mathbb{I}(Y_{t+1};U_{t-k-1}|U_{t-k},H_{t-k:t})}_{\text{forgetting at lag $k$}}+\underbrace{\mathbb{I}(Y_{t+1};O_{t-k}|U_{t-k},H_{t-k+1:t})}_{\text{implasticity at lag $k$}}\right).\]
* The _implasticity_ at lag \(k\) measures the amount of information presented by \(O_{t-k}\) that is useful for predicting \(Y_{t+1}\) that the agent fails to ingest and is absent from the intermediate data \(H_{t-k+1:t}\).
* The _forgetting_ at lag \(k\) measures the amount of information useful for predicting \(Y_{t+1}\) that the agent ejects when updating its agent state from \(U_{t-k-1}\) to \(U_{t-k}\) and is absent from the observations to be made before time step \(t\), \(H_{t-k:t}\).
* If each input \(X_{t}\) is independent of history, i.e., \(X_{t+1}\perp H_{t}\), and the sequence of agent states and observations forms a stationary stochastic process, the stability-plasticity decomposition implies that \[\lim_{t\to\infty}\underbrace{\mathbb{E}[\mathbf{d}_{\text{KL}}(P_{t}^{*}\| \tilde{P}_{t})]}_{\text{error}}=\lim_{t\to\infty}\left(\underbrace{\mathbb{I}( H_{t+1:\infty};U_{t-1}|U_{t},O_{t})}_{\text{forgetting}}+\underbrace{\mathbb{I}(H_{t+1: \infty};O_{t}|U_{t})}_{\text{implasticity}}\right).\]
* The first term equates error due to _forgetting_ with the information available in the previous agent state \(U_{t-1}\), but unavailable in either the subsequent agent state \(U_{t}\) or the subsequent observation \(O_{t}\).
* The second term characterizes error due to _implasticity_ as the information about the future \(H_{t+1:\infty}\) available in the current observation \(O_{t}\) but not ingested into the current agent state \(U_{t}\).
Figure 7: Capacity-constrained IDBD converges on the optimal stepsize for the given capacity constraint. Even if the quantization noise intensity \(\delta\) is fixed at the limiting value associated with this convergence, the standard version of IDBD converges on a different stepsize, which is optimal for that intensity. As such, while the error attained by capacity-constrained IDBD approaches the optimal error under the agent capacity, it does not with the standard version of IDBD. The plots are generated with \(\zeta=0.01\), \(\eta=0.95\), \(\sigma=0.5\), \(C=0.5\), and corresponding \(\delta\) given by Theorem 12.
## Case Studies
Continual learning has broad scope, with potential applications ranging from recommendation and dialogue systems to robotics and autonomous vehicles. In Section 2.4, we framed continual learning as computationally constrained RL with an objective of maximizing average reward subject to constraints (Equation 3). This objective is designed to encompass real-world requirements of continual learning systems across applications. In this section, we study the implications of our objective on the design of performant continual learning agents.
To study these implications, we perform three case studies. Each case study highlights a different facet of continual learning: continual supervised learning, continual exploration, and continual learning with delayed consequences. In our first case study, we consider the special case of continual learning where the agent's actions do not influence future observations. In particular, we study _continual supervised learning_, where an agent observes a nonstationary sequence of data pairs from its environment. The goal is to accurately predict labels \(Y_{t+1}\) from inputs \(X_{t}\). In other environments, such as bandits, the agent's actions influence the agent's immediate observations and therefore what information the agent is exposed to. Therefore, when selecting its next action, an agent should not only take into account the immediate reward (like in supervised learning), but also what information it can gain in order to perform well in the long term. Selecting actions for the purpose of acquiring new information is known as _exploration_. In our second case study, which is on _continual exploration_, we study the implications of nonstationarity for exploration. Our third case study focuses on the broader class of environments in which actions induce not only immediate consequences but also delayed consequences. We call this _continual learning with delayed consequences_. In each case study, we perform simple illustrative experiments to develop intuitions for what behaviors a performant continual learning agent might need.
### Continual Supervised Learning
In this section, we consider the continual supervised learning setting, which has been the focus of most prior work in continual learning. In continual supervised learning, an agent receives a nonstationary sequence of data pairs from its environment, and the agent's actions do not influence the data sequence. The agent's goal is to make accurate predictions on the observations it receives. As an example, consider a fraud detection system for credit card transactions. The goal of such a system is to classify transactions as fraudulent or not fraudulent. The techniques people use to commit fraud and evade detection may evolve over time, and the system must adapt to these changing fraud patterns.
Research in continual supervised learning over the years has proposed synthetic problems for comparing and evaluating different agent designs. For instance, an agent may be learning to classify digits and over time the digits may rotate (Buzzega et al., 2020). On such problems, a common evaluation protocol is to periodically measure the agent's performance on all types of data seen so far. In the digit classification example, the agent's prediction accuracy is evaluated on digits with previously encountered rotation angles. Though not stated explicitly, this means that the agent's objective is to learn about and remember everything in the past.
However, as we have mentioned before, this objective -- to remember everything -- is neither feasible nor necessary. In the real world, the goal is to do well on future tasks, not on those in the past. In the fraud detection example, some fraud techniques may become outdated or impossible to use due to new security measures, and the system may not need to remember how to handle those types of cases. Generally, in a dynamic environment, some knowledge becomes obsolete and not necessary to remember. Further, intelligent systems will be computationally constrained in practice and cannot remember everything. Our average-reward objective under computational constraints tries to capture these real-world requirements. Importantly, this objective has multiple implications for continual supervised learning, especially with regard to forgetting and plasticity.
In this section, we perform a set of simple experiments on a synthetic problem to highlight some of the implications of our average-reward objective in the continual supervised learning setting. We specifically study the impact of forgetting when information recurs and when the agent is computationally constrained. Through our experiments, we make the following points: (1) a performant agent can forget non-recurring information, (2) a performant agent can forget recurring information if it relearns that information quickly, and (3) under tight computational constraints, forgetting may be helpful.
#### 6.1.1 Environment
We perform our experimental evaluation on a modified version of Permuted MNIST, a common benchmark from the continual learning literature (Goodfellow et al., 2013). Permuted MNIST is characterized by a sequence of training datasets. Each dataset is constructed in two steps: (1) we randomly sample a permutation over all pixels, and (2) we apply this permutation to every image from the standard MNIST training dataset. Each such permuted training set is referred to as a _task_.
In our experiments, we aim to study the role of both non-recurring and recurring information on agents' performance, since it is only to the extent that information recurs that it is worthwhile to remember. Equivalently, there may be forward transfer, meaning information from previous tasks can be leveraged to do well on future tasks. On the Permuted MNIST benchmark, it is conceivable that there is some forward transfer (e.g. if the network learns permutation invariant features to classify digits). To design controlled experiments in which it is clear which information recurs and which information does not, we create a modified version with minimal forward transfer. Specifically, we additionally permute the labels randomly in each task. For instance, all the images with label 3 may get assigned a different label, such as 5. On this version of Permuted MNIST, most information by default is non-recurring.
In this setup, a continual learning data sequence consists of a sequence of tasks. In turn, each task consists of \(k\) batches of images with one batch arriving at every time step. Each incoming batch from the data sequence contains \(b_{\text{env}}\) images, where the subscript env is shorthand for "environment." In our experiments, \(b_{\text{env}}=16\). We call the number of time steps a task occurs before switching to the next task the task _duration_.
While the Permuted MNIST benchmark is typically characterized by a sequence of permutations where each permutation occurs once, we consider a particular type of environment dynamics that exhibits _periodic recurrence_. Specifically, on our version of Permuted MNIST, the first permutation recurs periodically: every alternate task is the first permutation, while all other tasks are determined by new permutations which do not recur. As an example, given 100 permutations P1, P2,..., P100 the sequence of tasks may look like P1, P2, P1, P3, P1, P4, P1, P5,.... Note that since we have minimized forward transfer between permutations by permuting labels in addition to input pixels, we can study the effect of information recurrence specifically induced by the first permutation recurring.
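As shown in the sketch below, a task sequence with this periodic-recurrence structure is straightforward to generate; the permutation identifiers are placeholders for the sampled pixel and label permutations.

```python
def recurring_task_sequence(num_tasks):
    """Alternate the recurring first permutation with fresh permutations that never recur.

    Example: recurring_task_sequence(8) returns
    ['P1', 'P2', 'P1', 'P3', 'P1', 'P4', 'P1', 'P5'].
    """
    sequence, next_new = [], 2
    for i in range(num_tasks):
        if i % 2 == 0:
            sequence.append("P1")            # the recurring permutation
        else:
            sequence.append(f"P{next_new}")  # a new permutation that does not recur
            next_new += 1
    return sequence
```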
#### 6.1.2 Agents
A common agent design that improves an agent's ability to retain past information in continual supervised learning is to equip the agent with a replay buffer (Aljundi et al., 2019; Buzzega et al., 2020; Chaudhry et al., 2019; Chrysakis and Moens, 2020; Yoon et al., 2021). The agent may use this buffer to store past data pairs and continue extracting information from them. In line with this approach, we consider three simple agents which store data in a replay buffer and are trained using SGD. We constrain all agents to perform a single SGD step per time step. The agents perform each SGD step on a batch containing both incoming data (\(b_{\text{env}}\) data pairs) and data sampled uniformly from the replay buffer (\(b_{\text{replay}}\) data pairs). All agents use so-called _reservoir insertion_ to add data to the buffer, which ensures that the full buffer is a random sample from the full history (Vitter, 1985). Specifically, given the buffer size \(B\) and length \(T\) of the entire history \(H_{T}\), reservoir insertion guarantees each data pair has the same probability \(\frac{B}{T}\) of being stored in the buffer, without knowing the length of the entire data stream in advance. This strategy is common practice in the design of continual learning agents with a limited capacity replay buffer (Buzzega et al., 2020; Koh et al., 2023).
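The sketch below illustrates reservoir insertion for a bounded replay buffer. It is a standard rendering of Vitter's method rather than the exact implementation used in our experiments.

```python
import random

class ReservoirBuffer:
    """Bounded replay buffer whose contents are a uniform sample of the history seen so far."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []
        self.num_seen = 0

    def insert(self, item):
        self.num_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(item)             # buffer not yet full: always store
        else:
            j = random.randrange(self.num_seen)  # uniform index over all items seen so far
            if j < self.capacity:
                self.buffer[j] = item            # overwrite, so each item is kept w.p. capacity / num_seen

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```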
The three agents are Large Memory, which has a buffer size of 1 million data pairs, Small Memory, which can store only \(1,000\) data pairs, and Reset, which has the same replay buffer size as Large Memory but periodically resets all agent components (the neural network parameters are re-initialized and the buffer is emptied). In experiments where we study the effect of resetting, we refer to the Large Memory agent as No Reset. For all agents, we use a 2 hidden layer neural network with hidden layer width 1000, and we set \(b_{\text{replay}}=16\).
#### 6.1.3 Evaluation Protocol
Most works in continual supervised learning evaluate agents on their performance on previous tasks. In contrast, we evaluate agents on their ability to maximize average reward, which is in line with previous work that considers the online continual learning setting (Cai et al., 2021; Ghunaim et al., 2023; Prabhu et al., 2023). Because the primary goal in supervised learning is often to achieve good accuracy, we let the reward function be online accuracy.
Our objective is thus to maximize average online accuracy under computational constraints. Using the notation presented in Section 2, this objective can be written as follows:
\[\begin{array}{ll}\max_{\pi}&\liminf_{T\to\infty}\mathbb{E}_{\pi}\left[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{1}(Y_{t+1}=\hat{Y}_{t+1})\right]\\ \text{s.t.}&\text{computational constraint}\end{array}\]
where at each time step \(t\), \(Y_{t+1}\) is the true label of input \(X_{t}\), \(\hat{Y}_{t+1}\in\arg\max_{y}P_{t}(y)\) is the agent's predicted label, and \(\mathbb{1}\) is the indicator function.
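Operationally, this objective corresponds to the predict-then-update loop sketched below, in which the agent is evaluated on each incoming batch before training on it. The `agent.predict` and `agent.update` interfaces are hypothetical stand-ins, not part of any particular library.

```python
def average_online_accuracy(agent, data_stream):
    """Predict-then-update loop; returns accuracy averaged over the entire stream."""
    correct, total = 0, 0
    for X_batch, Y_batch in data_stream:         # one batch of (input, label) pairs per time step
        Y_hat = agent.predict(X_batch)           # predictions are made before the labels are revealed
        correct += sum(int(y_hat == y) for y_hat, y in zip(Y_hat, Y_batch))
        total += len(Y_batch)
        agent.update(X_batch, Y_batch)           # e.g., one SGD step on incoming plus replayed data
    return correct / total
```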
Our average reward objective is the limit of an expectation that integrates over a growing sequence, which in most cases is infeasible to compute. Instead, we will evaluate an approximation by (1) limiting the duration of interaction and (2) approximating the expectation by averaging over a finite number of sequences. We propose a variation of the train/test protocol used in supervised learning. First, we split the data into two parts: a single development sequence and a set of evaluation sequences. We use the development sequence to tune an agent by varying hyperparameters. After selecting the best hyperparameters based on the development sequence, we reset the agent (re-initialize the neural network and replay buffer) and train the agent on the evaluation sequences. Finally, the agent's performance is averaged across all evaluation sequences.
In accordance with this evaluation protocol, on our modified Permuted MNIST environment, we split all data into two subsets, one used to generate the development sequence and the second used to generate evaluation sequences. Each subset contains \(100\) permutations, and the two subsets do not share any permutations. Consequently, there is no overlap in permutations between the development sequence and the evaluation sequences. Each permutation has \(400\) unique data pairs, for a total of \(40,000\) unique data pairs per subset of data. On each subset of data, we train agents over \(3\) random seeds, where the seed determines both the initialization of the agent and the generated sequence of data pairs the agent receives from that subset. We perform hyperparameter tuning on the development sequence and report evaluation performance averaged over all \(3\) seeds (and therefore \(3\) evaluation sequences). For additional details on the environments and evaluation protocol, see Appendix C.1.
#### 6.1.4 Results
**A performant agent can forget non-recurring information.** We consider a case in which each task has a duration of \(2,000\) time steps. This corresponds to the agent seeing each data pair from a task approximately \(100\) times, which allows the agent to achieve \(100\%\) accuracy on each task. We consider this setting so that we can study the effects of forgetting after an agent completely learns each task.
On this variant of Permuted MNIST, we evaluate the Large Memory and Small Memory agents (Figure 8). While Small Memory forgets previous non-recurring information quickly relative to Large Memory, both agents perform
Figure 8: (a) Performance on Permutation 2, which is a non-recurring task. For this plot, the agent’s accuracy is computed and averaged across all data points of Permutation 2. Large Memory remembers previous information, whereas Small Memory forgets. (b) Average accuracy. Despite this difference in forgetting, the two agents perform similarly under our objective. The key reason for this result is that in this environment there is little benefit to remembering non-recurring tasks in order to perform well.
similarly under our objective. This result highlights that a performant agent can forget information that is _non-recurring_. In particular, if a task does not occur more than once and the information required to successfully complete the task is not useful for any future tasks, there is no benefit to remembering this information. Since our version of Permuted MNIST permutes both input pixels and labels, there is little transfer of information between classifying digits with one permutation versus another. Therefore, it is reasonable to forget how to accurately predict labels for non-recurring permutations.
While this experiment is simple, the result helps clarify the role of forgetting in continual supervised learning. In the continual learning literature, catastrophic forgetting is highlighted as a critical issue, and performance on previous tasks is the primary metric used in prior work to evaluate methods (Wang et al., 2023). We argue that when discussing forgetting, it is important to recognize that the usefulness of a (perhaps large) subset of information in real-world applications is transient. Forgetting this information is not catastrophic.
**A performant agent can forget recurring information if it relearns that information quickly.** We evaluate the No Reset and Reset agents with two different permutation durations: \(2,000\) time steps and \(20,000\) time steps (Figure 9). The Reset agent forgets all information, both recurring and non-recurring, after each permutation. However, when permutation durations are long, its performance under our objective only suffers slightly compared to the performance of No Reset, which remembers recurring information. This is because the Reset agent is able to relearn the recurring task quickly relative to the duration of the task. This result is in line with Ashley et al. (2021) which highlights that we need to consider different measures when considering forgetting, including retention and the ability to relearn quickly. Further, the average reward objective resolves the extent to which each of these agent characteristics matters. For instance, the duration of information's utility is an important factor.
**Under tight computational constraints, forgetting may be helpful.** In machine learning, an agent is often parameterized by a neural network, and all parameters of the neural network are updated when taking gradient steps on data pairs. Given this protocol, a computational constraint per time step, which limits the number of FLOPS when updating the neural network, effectively limits the physical capacity of the neural network the agent can use. Tight computational constraints therefore induce capacity constraints.
To study this setting, we consider reductions in the size of the neural network so that the SGD step of each iteration can be executed within tighter computation budgets. Concretely, in addition to a hidden size of \(1000\), we use smaller hidden sizes of \(100\), \(25\), and \(10\). With these smaller network architectures, we evaluate the No Reset and Reset agents where each task has a duration of \(20,000\) time steps (Figure 10). We find that as the capacity decreases, No Reset begins to outperform Reset. We see that when the hidden size is \(10\), the performance of No Reset decreases over time as there is insufficient capacity to both remember everything and continue updating on new data. In particular, No Reset suffers from loss of plasticity, a characteristic of neural networks that has been studied in recent work (Dohare et al., 2021; Lyle et al., 2023; Nikishin et al., 2023). In contrast, because Reset re-initializes the neural network and replay buffer periodically, it retains high plasticity and therefore outperforms No Reset.
Figure 9: (a) Average accuracy when each task duration is short. When each task duration is short there is relatively little time to exploit information after it has been learned, and the performance of Reset suffers relative to No Reset, which doesn’t forget. (b) Average accuracy when each task duration is long. When each task duration is long, the time to learn is short relative to the task duration. Therefore, Reset and No Reset perform similarly.
### Continual Exploration
In supervised learning environments, such as those studied in Section 6.1, an agent's actions do not influence future observations. As a result, the information available to an agent does not depend on its agent policy. However, in more general classes of environments, actions _do_ influence observations. In such environments, active exploration may be required to attain strong performance, as different actions can expose the agent to different information.
As an example, when a recommendation system recommends items to its users, the user behavior the system observes will vary depending on the recommendations it makes. To improve the quality of recommendations in the long term, a recommendation system may benefit from suggesting a diverse range of items to new users, enabling it to learn more about user preferences.
While seeking out new information can be helpful in the long run, exploration often induces a cost in the short term. Specifically, an agent may sacrifice immediate reward. For instance, a recommendation system may have to recommend multiple different items to a user until it identifies the full range of item types that the user likes. While this is great for the user (and the recommendation system) in the long term, in the short term, the user ends up being recommended a host of items they may not like. This trade-off between seeking out information and optimizing immediate reward is commonly known as the exploration-exploitation trade-off. Agents must strike a balance between exploring to seek out information and exploiting existing information to optimize immediate performance.
This problem of balancing exploration and exploitation has primarily been studied in the context of vanishing-regret learning. In this section, we will instead study exploration in the context of continual learning. Specifically, we investigate the implications that nonstationarity has for intelligent exploration. We will argue through didactic examples and simulations that in order to perform well in a nonstationary environment, an agent should (1) continuously engage in exploration, (2) prioritize seeking out information that remains useful over an extended period, and (3) learn about the environment dynamics to guide exploration.
#### 6.2.1 Exploration in Stationary Environments
Before considering the implications that nonstationarity has for exploration, let us first consider exploration in a stationary setting. In a typical stationary environment, the degree to which a performant agent explores decreases over time. In other words, an agent acquires progressively less information as time goes on. For instance, in some stationary environments, the total amount an agent needs to learn to attain optimal or near-optimal average reward is bounded. In such an environment, the amount of information acquired per time step vanishes as time progresses.
To illustrate the trade-off between exploration and exploitation, as well as the typical decrease in exploration over time in a stationary environment, let us examine a special case of the coin tossing example previously discussed in
Figure 10: (a) Average accuracy at the end of evaluation when the neural network has different hidden layer widths. As the agent becomes increasingly capacity constrained (smaller hidden layer width), forgetting becomes more beneficial. (b) Average accuracy over time when the hidden layer width is 10. When the agent is severely capacity constrained, resetting prevents loss of plasticity.
Example 1. Suppose that the bias \(p_{1}=0.8\) of coin 1 is known, and the prior distribution over the bias of coin 2 is uniform _dyadic_ over the set \(\{0,1\}\). Consequently, at each time \(t\) before coin 2 is tossed, the bias \(p_{2}\) is distributed according to a uniform distribution over \(\{0,1\}\); once coin 2 is tossed, the belief distribution of \(p_{2}\) is updated to concentrate on the outcome of the toss. These environment dynamics are characterized by a function \(\rho\), defined as follows. For \(a=1\), \(\rho(1|h,a)=0.8\). For \(a=2\), \(\rho(1|h,a)=p_{2}\) if coin 2 was previously tossed according to \(h\), and otherwise, \(\rho(1|h,a)=0.5\).
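A minimal sketch of this deterministic observation probability function, representing the history simply as a list of (action, outcome) pairs with outcome 1 for heads, is given below.

```python
def rho_heads(history, action):
    """Probability that the next toss of the chosen coin comes up heads, given the history."""
    if action == 1:
        return 0.8                         # coin 1 has known bias 0.8
    coin2_outcomes = [o for a, o in history if a == 2]
    if coin2_outcomes:
        return coin2_outcomes[0]           # a single toss reveals p_2, since p_2 is either 0 or 1
    return 0.5                             # before any toss of coin 2, p_2 is 0 or 1 with equal probability
```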
In this example, an agent can benefit from learning about the bias \(p_{2}\) associated with coin 2. Once the coin is tossed and its bias is revealed, the agent can consistently select the better coin. However, the act of exploring and learning about \(p_{2}\) comes at a cost of sacrificing immediate reward; the expected reward of tossing coin 2 is only 0.5, significantly lower than the expected reward of 0.8 associated with coin 1. It is worth mentioning that this example also illustrates the typical decrease in exploration over time in a stationary environment. Indeed, this example presents an extreme case where an agent is "done" with exploring and learning about \(p_{2}\) after the first toss of coin 2.
#### 6.2.2 Exploration in Nonstationary Environments
In nonstationary environments, the nature of exploration differs from that of in stationary environments. Here, we outline three key implications of nonstationarity on exploration:
1. **Never stop exploring.** In a typical nonstationary environment, new information continually arrives. Crucially, there is usually a non-diminishing supply of new and valuable information. As a result, it is common for an optimal agent to engage in continuous exploration to learn about such information. This is in direct contrast to the stationary setting, where agents tend to reduce their exploration over time. This idea has been explicitly or implicitly discussed in prior nonstationary bandit learning or nonstationary reinforcement learning literature. For example, many nonstationary learning algorithms are designed to learn a different latent variable at each time, e.g., a different mean reward or a different MDP. For this purpose, many nonstationary bandit learning algorithms estimate a mean reward and then adopt a stationary bandit learning algorithm as a subroutine (Besbes et al., 2019; Besson and Kaufmann, 2019; Cheung et al., 2019; Garivier and Moulines, 2008; Ghatak, 2021; Gupta et al., 2011; Hartland et al., 2006; Kocsis and Szepesvari, 2006; Mellor and Shapiro, 2013; Raj and Kalyani, 2017; Trovo et al., 2020; Viappiani, 2013).
2. **Seek out durable information.** While new information continually arrives in a typical nonstationary environment, it is important to recognize that some information may be transient and lose its relevance over time. In order to succeed in a nonstationary environment, an agent must prioritize seeking out information that remains valuable and relevant for a longer duration. We refer to this characteristic as _durability_, which represents the degree to which an agent's acquired information remains useful over time. More precisely, an agent should deprioritize acquiring information that is less durable. The concept of information durability was introduced by Liu et al. (2023b). This work also emphasizes the importance of an agent intelligently considering the durability of information when selecting actions, through coin tossing games, theoretical results, and simulation experiments.
3. **Learn about environment dynamics to guide exploration.** In order to seek out information that is more durable, an agent needs to determine the durability of information. To achieve this, an agent can benefit from dedicating a portion of its computational budget to learning about aspects of the environment dynamics that determine the durability of information.
#### 6.2.3 Coin Swapping Games
We examine three coin swapping examples to illustrate these three implications that nonstationarity has on exploration. These examples are variants of the coin tossing example described in Section 6.2.1. Recall that the bias \(p_{1}=0.8\) of coin 1 is known, and the prior distribution over the bias of coin 2 is uniform dyadic over the set \(\{0,1\}\). The key difference is that now in the coin swapping examples, the second coin is replaced at each time step with probability \(q_{2}\). The coin replacement probability \(q_{2}\) varies across the three examples. Note that these games also serve as specific instances of Example 2, where the prior distributions over the bias of each coin are provided in Section 6.2.1.
**Small replacement probability.** Let us first consider a game where the coin replacement probability \(q_{2}\) for coin 2 is known to the agent and is small, for instance \(q_{2}=0.001\). In this game, before coin 2 is tossed for the first
time, the expected reward from selecting coin 2 is 0.5. If the agent has selected coin 2, and the latest outcome is tails, then the expected reward from selecting coin 2 is 0.0005. In both of these cases, the expected reward from selecting coin 2 is much smaller than the expected reward of 0.8 associated with coin 1. In addition, the bias of coin 1 is known, so selecting coin 2 in this context exemplifies exploration.
Even though coin 2 is associated with a lower expected reward, an optimal agent should eventually select coin 2, because it is very likely that the coin has eventually been replaced, possibly with a new coin of bias 1. If the new coin does indeed have a bias of 1, selecting coin 2 allows the agent to learn about this bias and then continue selecting the same coin, resulting in a reward of 1 for a long time--an average of 1000 consecutive time steps. Since selecting coin 2 exemplifies exploration, this game serves as an illustration that an optimal agent may need to continuously explore, unlike in stationary environments.
**Large replacement probability.** Next, suppose that the coin replacement probability \(q_{2}\) for coin 2 is instead large, say, \(q_{2}=0.999\). Because coin 2 is likely to be replaced at each time, the information associated with it quickly becomes obsolete. In other words, the information is not very durable. Therefore, unlike in the previous game, an agent does not benefit from learning about the bias of coin 2 anymore in this game. Indeed, an optimal agent will only ever select coin 1 throughout the entire game. This variation highlights the importance of only seeking out information to the extent that the information is durable.
**Unknown replacement probability.** Now suppose that the coin replacement probability \(q_{2}\) for coin 2 is unknown. This presents a typical scenario where an agent does not know the environment dynamics _a priori_. Recalling the previous two variations of the coin-swapping game, we observe that the optimal behaviors differ significantly based on this coin replacement probability. This indicates that understanding and learning about the coin replacement probability is crucial for determining the durability of information and selecting actions accordingly. This variation of the game emphasizes the importance of learning about the dynamics of the environment in order to guide exploration. By acquiring knowledge about the coin replacement probability, an agent can make informed decisions on how to explore and seek out durable information in a nonstationary environment.
#### 6.2.4 Experiments in AR(1) Bandits
While the coin tossing games serve as a model of environments with abrupt changes and bounded rewards, our insights on continual exploration extend beyond such settings. To demonstrate this, we conduct experiments in a class of Gaussian bandits that capture continuous or smooth changes in environments with unbounded rewards. These bandits are known as AR(1) Gaussian bandits, and Example 4 in Section 2.3 serves as one specific instance of an AR(1) Gaussian bandit. AR(1) bandits and similarly constructed nonstationary Gaussian bandits have been studied by Gupta et al. (2011); Kuhn et al. (2015); Kuhn and Nazarathy (2015); Liu et al. (2023); Slivkins and Upfal (2008).
**Environment**
We consider the two-armed Gaussian bandit described in Example 4. Recall that the environment is characterized by latent random variables \(\theta_{t,a}\), initialized independently with \(\theta_{0,a}\sim\mathcal{N}(\mu_{0,a},\Sigma_{0,a})\) and updated according to
\[\theta_{t+1,a}=\eta\theta_{t,a}+Z_{t+1,a},\]
with each \(Z_{t+1,a}\) independently sampled from \(\mathcal{N}(0,\zeta^{2})\). Each reward \(R_{t+1}=O_{t+1}=\theta_{t,A_{t}}+W_{t+1,A_{t}}\), where each \(W_{t+1,a}\) is sampled independently from \(\mathcal{N}(0,\sigma^{2})\). The parameters of the environment include the prior mean \(\mu_{0,a}\in\mathbb{R}\), the prior variance \(\Sigma_{0,a}\in\mathbb{R}_{+}\), the AR(1) parameter \(\eta\in\mathbb{R}\), the \(Z_{t,a}\) variance \(\zeta^{2}\in\mathbb{R}_{+}\), and the observation noise variance \(\sigma^{2}\in\mathbb{R}_{+}\).
In our experiments, we let \(\mu_{0,a}=0\), \(\Sigma_{0,a}=1\), \(\sigma=1\), and \(\zeta\) be such that each sequence \((\theta_{t,a}:t\in\mathbb{N})\) is a stationary stochastic process. The AR(1) parameter \(\eta\) is the remaining parameter that we vary across experiments. It determines the degree to which information about \(\theta_{t,a}\) is durable; if \(\eta=1\), then the information about \(\theta_{t,a}\) remains useful forever, and if \(\eta=0\), then the information immediately loses its relevance at the next time step.
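To make the setup concrete, the following sketch simulates this environment. The stationarity choice \(\zeta^{2}=1-\eta^{2}\) (so that each \(\theta_{t,a}\) has unit marginal variance), the policy interface, and all variable names are our own assumptions, not taken from the text.

```python
import numpy as np

def run_ar1_bandit(policy, T, eta, sigma=1.0, n_arms=2, seed=0):
    """Simulate the AR(1) Gaussian bandit of Example 4 (illustrative sketch).

    policy(t, rewards_so_far) returns an arm index; the reward is theta_{t,a}
    plus N(0, sigma^2) noise, and theta evolves as theta_{t+1} = eta*theta_t + Z_{t+1}.
    """
    rng = np.random.default_rng(seed)
    zeta = np.sqrt(1.0 - eta**2)                   # keeps (theta_t) stationary with unit variance
    theta = rng.normal(0.0, 1.0, size=n_arms)      # prior N(mu_0, Sigma_0) = N(0, 1)
    rewards = []
    for t in range(T):
        a = policy(t, rewards)
        rewards.append(theta[a] + rng.normal(0.0, sigma))
        theta = eta * theta + rng.normal(0.0, zeta, size=n_arms)
    return rewards
```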
#### Agents
We consider two agents: Thompson sampling (Thompson, 1933), which does not take into account the durability of information when selecting actions, and predictive sampling (Liu et al., 2023), which does. Both agents have privileged access to the environment parameters.
**Thompson sampling.** First, we consider the Thompson sampling agent, which was introduced in Example 4. This agent is representative of those which do not account for the durability of information. Recall that the agent maintains a posterior distribution of \(\theta_{t,a}\) for each action \(a\), parameterized by \(\mu_{t,a}\) and \(\Sigma_{t,a}\). Thompson sampling updates these parameters according to the equations presented in Section 2.3.2, Example 4. We restate the equations below:
\[\mu_{t+1,a}=\left\{\begin{array}{ll}\eta\mu_{t,a}+\alpha_{t+1}(O_{t+1}-\eta \mu_{t,a})&\text{if }a=A_{t}\\ \eta\mu_{t,a}&\text{otherwise},\end{array}\right.\qquad\Sigma_{t+1,a}=\left\{ \begin{array}{ll}\left(\frac{1}{\eta^{2}\Sigma_{t,a}+\zeta^{2}}+\frac{1}{\sigma^{2}}\right)^{-1}&\text{if }a=A_{t}\\ \eta^{2}\Sigma_{t,a}+\zeta^{2}&\text{otherwise},\end{array}\right.\]
where \(\alpha_{t+1}=\Sigma_{t+1,A_{t}}/\sigma^{2}\). The agent then estimates mean rewards and selects the action corresponding to the largest estimate by sampling each \(\hat{\theta}_{t,a}\) independently from \(\mathcal{N}(\mu_{t,a},\Sigma_{t,a})\), and selecting an action uniformly at random from the set \(\arg\max_{a\in\mathcal{A}}\hat{\theta}_{t,a}\).
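As an illustration, the Kalman-filter-style posterior update and the sampling step can be written as follows. This is a sketch with our own helper names and array layout; the posterior-variance expression follows the update equations just stated, and ties in the final argmax are broken arbitrarily rather than uniformly at random.

```python
import numpy as np

def ts_update(mu, Sigma, a_t, obs, eta, zeta2, sigma2):
    """One Thompson-sampling posterior update for all arms (sketch)."""
    mu, Sigma = eta * mu.copy(), eta**2 * Sigma.copy() + zeta2  # predict step for every arm
    post_var = 1.0 / (1.0 / Sigma[a_t] + 1.0 / sigma2)          # correct only the played arm
    alpha = post_var / sigma2                                   # gain alpha_{t+1}
    mu[a_t] = mu[a_t] + alpha * (obs - mu[a_t])                 # eta*mu + alpha*(O - eta*mu)
    Sigma[a_t] = post_var
    return mu, Sigma

def ts_act(mu, Sigma, rng):
    """Sample theta-hat for each arm and act greedily on the samples."""
    theta_hat = rng.normal(mu, np.sqrt(Sigma))
    return int(np.argmax(theta_hat))
```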
**Predictive sampling.** The other agent that we consider is a predictive sampling agent. Predictive sampling can be viewed as a modified version of Thompson sampling that de-prioritizes transient information. In an AR(1) bandit specifically, this agent maintains the same set of hyperparameters as the Thompson sampling agent and updates the parameters \(\mu_{t+1,a}\) and \(\Sigma_{t+1,a}\) according to the equations above. These parameters determine the posterior belief over \(\theta_{t,a}\). Unlike Thompson sampling, the agent then samples each \(\hat{\theta}_{t,a}\) independently from a different distribution, namely \(\mathcal{N}(\tilde{\mu}_{t,a},\tilde{\Sigma}_{t,a})\), and selects an action uniformly at random from the set \(\arg\max_{a\in\mathcal{A}}\hat{\theta}_{t,a}\).
Specifically, predictive sampling updates the parameters \(\tilde{\mu}_{t,a}\) and \(\tilde{\Sigma}_{t,a}\) of its sampling distribution as follows:
\[\tilde{\mu}_{t,a}=\mu_{t,a}\text{ and }\tilde{\Sigma}_{t,a}=\frac{\eta_{a}^{ 2}\Sigma_{t,a}^{2}}{\eta_{a}^{2}\Sigma_{t,a}+x_{a}^{*}},\]
where \(x_{a}^{*}=\frac{1}{2}\left(\zeta_{a}^{2}+\sigma^{2}-\eta_{a}^{2}\sigma^{2}+ \sqrt{(\zeta_{a}^{2}+\sigma^{2}-\eta_{a}^{2}\sigma^{2})^{2}+4\eta_{a}^{2} \zeta_{a}^{2}\sigma^{2}}\right)\).
The update expression for \(\tilde{\Sigma}_{t,a}\) is quite involved, so we provide some intuition for how predictive sampling behaves relative to Thompson sampling. We first consider the case where \(\eta=1\). This corresponds to a stationary environment. In this setting, predictive sampling executes the same policy as Thompson sampling because \(\tilde{\Sigma}_{t,a}=\Sigma_{t,a}\).
On the other extreme, if \(\eta=0\), \(\theta_{t,a}\) is completely determined by the process noise. This corresponds to an environment where information about \(\theta_{t,a}\) is completely not durable. In this setting, predictive sampling's sampling variance \(\tilde{\Sigma}_{t,a}\) for each arm \(a\) is \(0\), and this agent therefore executes a greedy policy.
More generally, the predictive sampling agent's sampling variance \(\tilde{\Sigma}_{t,a}\) lies between \(0\) and \(\Sigma_{t,a}\), i.e., \(0\leq\tilde{\Sigma}_{t,a}\leq\Sigma_{t,a}\). In addition, the ratio \(\tilde{\Sigma}_{t,a}/\Sigma_{t,a}\) is monotonically increasing in \(\eta\). Therefore, predictive sampling samples actions from distributions with smaller variances as compared to that of Thompson sampling. This corresponds to more exploitation versus exploration. In summary, in an AR(1) bandit, predictive sampling can be viewed as a variant of Thompson sampling that adjusts how it balances exploration and exploitation according to \(\eta\), which determines the durability of information. The behavior of PS is motivated by the fact that when information about \(\theta_{t,a}\) is less durable, meaning that \(\theta_{t,a}\) is less informative in predicting (\(\theta_{k,a}:k\geq t+1\)), an agent should deprioritize acquiring information about \(\theta_{t,a}\).
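For concreteness, the shrunken sampling variance can be computed as below. This is a sketch of the stated formula, dropping the per-arm subscripts on \(\eta\) and \(\zeta\) since the environment here uses common values; the limiting behaviors described in the preceding paragraphs can be checked numerically.

```python
import numpy as np

def ps_sampling_variance(Sigma, eta, zeta2, sigma2):
    """Predictive-sampling variance tilde-Sigma for an AR(1) bandit (sketch).

    Returns a value in [0, Sigma]: with eta = 0 it is exactly 0 (greedy policy),
    and with eta = 1 (which forces zeta2 = 0 under stationarity) it equals Sigma.
    """
    c = zeta2 + sigma2 - eta**2 * sigma2
    x_star = 0.5 * (c + np.sqrt(c**2 + 4.0 * eta**2 * zeta2 * sigma2))
    return eta**2 * Sigma**2 / (eta**2 * Sigma + x_star)
```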
#### Results
We conduct two experiments to investigate the impact of information durability on agent performance. In the first experiment, we compare the performance of predictive sampling and Thompson sampling over time. The results demonstrate that predictive sampling consistently outperforms Thompson sampling. Furthermore, we observe that predictive sampling tends to take greedy actions more frequently; here, greedy actions refer to actions with higher mean reward estimates. This finding supports our hypothesis that de-prioritizing transient information leads to improved performance.
In the second experiment, we examine the performance gap between the two agents in environments with varying levels of information durability. The results reveal that the performance gap between the two agents is more significant in environments where information is less durable. This suggests that the choice of exploration strategy becomes even more critical when information is less durable.
Recall that the environments we examine are parameterized by \(\eta\), which determines the degree to which information about \(\theta_{t,a}\) is durable. In such environments, the information durability associated with different actions are comparable, and deprioritizing transient information translates to deprioritizing exploration relative to exploitation as information durability decreases.
**De-prioritizing transient information is beneficial.** Figure 11(a) plots the action selection frequencies against time; the shaded bands correspond to 95% confidence bands. This figure shows that predictive sampling consistently selects greedy actions more often than Thompson sampling. Figure 11(b) plots the average reward collected by each agent against time (\(t=1,\ldots,200\)). This figure shows that predictive sampling consistently outperforms Thompson sampling across time. The two plots suggest that de-prioritizing transient information, which in this environment corresponds to exploiting more by taking more greedy actions, leads to better performance in nonstationary environments.
**The benefit is larger in environments with less durable information.** Figure 12(a) plots the action selection frequency of each agent over 200 time steps against the AR(1) parameter. The figure shows that the gaps between the action selection frequencies increase as the AR(1) parameter decreases, i.e., as information becomes less durable. Figure 12(b) plots the average reward collected by each agent over 200 time steps against the AR(1) parameter. The figure shows that the performance gap between the agents increases as the AR(1) parameter decreases, i.e., as information becomes less durable. The two plots suggest that the performance gain from deprioritizing less durable information is larger in environments where information is less durable.
These experiments provide valuable insights into the relationship between information durability and agent performance, further reinforcing the importance of intelligent exploration in nonstationary environments.
### Continual Learning from Delayed Consequences
So far, we have considered supervised learning and bandit learning, both of which involve no delayed consequences. Indeed, the reward is immediate, and the agent's actions have no influence on future situational states. In contrast, in this section we will consider the case in which actions have delayed consequences. Common examples of this include robotics, game playing (Go, chess, Hanabi, etc.), and, of course, regular life. In these cases, the agent's actions affect which situations it ends up in, and the reward is often delayed. In our framing, we say that an agent's actions affect future situational states, and the agent experiences delayed rewards. While our framing of "reinforcement learning" is very broad, we should note that it is precisely this problem formulation with delayed consequences that typically goes under the reinforcement learning rubric.
Most commonly, this problem is framed as learning in a Markov Decision Process (MDP). Typically, the MDP is stationary and does not change over time. To study continual learning, we will instead consider an MDP that changes over time. In summary, we consider learning from delayed consequences in a nonstationary Markov Decision Process as a special case of our computationally constrained RL framework for continual learning. This type of environment has been extensively studied in prior work, surveyed by Padakandla (2021).
Figure 11: Experiment where \(\eta=0.9\), and \(t\in\{1,2,...,200\}\): (a) The frequencies at which Thompson sampling (TS) and predictive sampling (PS) agents select greedy actions. PS consistently selects greedy actions more than TS throughout time. (b) Average reward collected by TS and PS agents. PS consistently attains average reward higher than TS throughout time.
#### Environment
Consider an environment \(\mathcal{E}=(\mathcal{A},\mathcal{O},\rho)\) in which observations are generated by a Markov decision process (MDP) with time-varying transition probabilities. Each observation \(O_{t}\) is of the current state \(S_{t}\) of the MDP. Hence, the state space of the MDP is \(\mathcal{S}=\mathcal{O}\). We consider a small MDP where \(|\mathcal{S}|=10\) and \(|\mathcal{A}|=3\).
For each state-action pair \((s,a)\), the vector \((P_{t,s,a,s^{\prime}}:s^{\prime}\in\mathcal{S})\) represents the transition probabilities at time \(t\). The initial transition probability vectors are independently sampled from a Dirichlet\((1/|\mathcal{S}|,\ldots,1/|\mathcal{S}|)\) distribution. At every time step \(t\), the MDP is updated with some small probability. Specifically, for each state-action pair \((s,a)\), with some probability \(\eta\in(0,1)\), the vector \((P_{t,s,a,s^{\prime}}:s^{\prime}\in\mathcal{S})\) is replaced by a new, independent Dirichlet\((1/|\mathcal{S}|,\ldots,1/|\mathcal{S}|)\) sample.
After the action \(A_{t}\) is executed, the environment produces as an observation the next state \(S_{t+1}=O_{t+1}\). This state is generated as though the environment samples from the state distribution \(P_{t,S_{t},A_{t},\cdot}\). Note that this probability vector is random, and the deterministic observation probability function \(\rho\) that characterizes environment dynamics is defined by
\[\rho(o|H_{t},A_{t})=\mathbb{E}[P_{t,S_{t},A_{t},o}|H_{t},A_{t}]\]
for all \(o\in\mathcal{O}\).
There is a known distinguished state \(s_{*}\in\mathcal{S}\), which we will refer to as _the goal state_. A nonzero reward is received only upon arrival at the goal state:
\[R_{t+1}=r(H_{t},A_{t},O_{t+1})=\left\{\begin{array}{ll}r_{t+1}>0&\quad\text{ if }S_{t+1}=s_{*}\\ 0&\quad\text{otherwise.}\end{array}\right.\]
Because the MDP is changing over time, we can think of the agent as encountering different stationary MDPs (determined by \(P_{t}\)) throughout time. We scale the reward \(r_{t+1}\) such that the optimal long-term average reward in each MDP is the same (.5). This ensures low variance across time and seeds. See details in Appendix C.2.
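A minimal sketch of this environment construction is given below. The reward-scaling step that equalizes the optimal average reward across MDPs (detailed in Appendix C.2) is omitted, the ordering of the transition and the perturbation within a step is our own choice, and the function names are ours.

```python
import numpy as np

def init_transitions(n_states, n_actions, rng):
    """Sample each row P[s, a, :] from Dirichlet(1/|S|, ..., 1/|S|)."""
    alpha = np.full(n_states, 1.0 / n_states)
    return rng.dirichlet(alpha, size=(n_states, n_actions))

def step(P, s, a, eta, rng):
    """Transition to the next state, then resample each (s, a) row with probability eta."""
    s_next = rng.choice(P.shape[0], p=P[s, a])
    n_states, n_actions, _ = P.shape
    resample = rng.random((n_states, n_actions)) < eta
    P[resample] = rng.dirichlet(np.full(n_states, 1.0 / n_states), size=int(resample.sum()))
    return s_next, P
```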
Figure 12: Experiment where \(\eta\in\{0.1,0.3,0.5,0.7,0.9\}\), and \(t=200\): (a) The frequencies at which Thompson sampling (TS) and predictive sampling (PS) agents select greedy actions. PS consistently selects greedy actions more than TS for varying \(\eta\), and the gap between the greedy action selection frequencies increases as \(\eta\) decreases. (b) Average reward collected by TS and PS agents. PS consistently attains average reward higher than TS for varying \(\eta\), and the gap between the average rewards increases as \(\eta\) decreases.
#### Agent
We consider the version of Q-learning introduced in Example 5 known as optimistic Q-learning. The agent is initialized with \(Q_{0}(s,a)=0\) for all \((s,a)\) and updates action values according to
\[Q_{t+1}(s,a)=\left\{\begin{array}{ll}Q_{t}(S_{t},A_{t})+\alpha\left(R_{t+1}+ \gamma\max_{a^{\prime}\in\mathcal{A}}Q_{t}(S_{t+1},a^{\prime})-Q_{t}(S_{t},A_{t})\right)+\zeta& \text{if }(s,a)=(S_{t},A_{t})\\ Q_{t}(s,a)+\zeta&\text{otherwise.}\end{array}\right.\]
There are three hyperparameters: the stepsize \(\alpha\), the discount factor \(\gamma\), and the optimistic boost \(\zeta\). The formula is identical to the regular Q-learning formula, with one addition: all action values are incremented with an optimistic boost \(\zeta\) at each time step. This ensures that all actions will eventually be taken upon a revisit to a state. A higher \(\zeta\) can be interpreted as leading to more exploration since it leads to all actions being revisited more often.
At each time step, the agent acts greedily with respect to \(Q_{t}\) and selects the action with the highest action value. If multiple actions have the same action value the agent samples uniformly from those. Thus, \(Q_{t}\) induces a mapping from situational state to action.
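The update and action selection above amount to the following step function. This is a sketch only, with our own function signature; ties in the greedy step are broken uniformly at random, as described.

```python
import numpy as np

def optimistic_q_step(Q, s, a, r, s_next, alpha, gamma, zeta, rng):
    """One optimistic Q-learning update followed by greedy action selection (sketch)."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])    # regular Q-learning update at (S_t, A_t)
    Q += zeta                                   # optimistic boost applied to every (s, a)
    best = np.flatnonzero(Q[s_next] == Q[s_next].max())
    a_next = int(rng.choice(best))              # break ties uniformly at random
    return Q, a_next
```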
In the stationary setting with a constant MDP, there is an optimal action-value function \(Q_{t}^{*}\) that does not change over time. Therefore, by annealing the stepsize and optimistic boost appropriately, the agent can converge on the optimal action-value function: the difference \(|Q_{t}-Q_{t}^{*}|\) vanishes, and by acting greedily with respect to the optimal action-value function, the agent performs optimally. In contrast, in our environment the MDP changes at a constant rate, and so does the associated optimal action-value function \(Q_{t}^{*}\). In this case, annealing the stepsize and optimistic boost results in premature convergence and hurts performance: the agent eventually updates \(Q_{t}\) at a much slower rate than \(Q_{t}^{*}\) is changing, to the point where it effectively stops learning useful information, since by the time the agent has made even a single update to \(Q_{t}\), the optimal action-value function has changed many times over. Because the objective is the infinite-horizon average reward, performance up to any finite time has no influence on the overall long-term average reward. Thus, since an annealing agent effectively stops learning after a certain point in time, it underperforms under the long-term average reward objective. This is generally true in the continual learning setting, where the optimal mapping from situational state to action keeps changing indefinitely.
Consequently, in our experiments, we only consider constant \(\alpha\) and \(\zeta\) values. In reinforcement learning, the discount factor \(\gamma\) is typically presented as a component of the MDP. In our framing of continual learning, \(\gamma\) is instead a hyperparameter of the agent that controls the effective planning horizon. For all experiments, we set \(\gamma=0.9\).
#### Results
On each environment, we perform a sweep over the stepsize \(\alpha\) and the optimistic boost \(\zeta\) to select the optimal values, and we plot how the average reward depends on each hyperparameter. For each value of \(\alpha\), Figure 13 plots the largest average reward achievable when sweeping over \(\zeta\); similarly, Figure 14 plots the largest average reward achievable for each \(\zeta\). We find that the optimal optimistic boost increases with the degree of nonstationarity in the environment. These results are intuitive: when the environment is changing, old knowledge becomes obsolete, and it is imperative to seek out new knowledge, and higher values of the optimistic boost lead to the more exploratory behavior needed to accomplish this. In contrast, we note that it is a little surprising that the optimal stepsize is the same in both environments.
Figure 13: Average reward versus the stepsize \(\alpha\). Interestingly, the optimal stepsize is the same (0.2) in both environments.
In more complex environments, it is cumbersome or infeasible to perform a hyperparameter search to find optimal values. We conjecture that a more sophisticated agent, without being initialized with the optimal hyperparameters, could automatically learn them over time, fully online. This is an example of meta-learning, which has been explored extensively in previous literature (see, e.g., Duan et al. (2016), Flennerhag et al. (2021), Thrun and Pratt (1998)). Because our objective considers an infinite horizon, such a sophisticated agent would therefore be able to reach the same asymptotic performance as an agent that is initialized with the optimal hyperparameters.
### Continual Auxiliary Learning
While an agent's goal is to maximize average reward, a complex environment can offer enormous amounts of feedback beyond reward. This feedback can be used to accelerate the agent's learning and thus increase its average reward. For example, the sensors of a self-driving car ingest visual and auditory feedback far beyond what is required to determine reward, which might simply indicate safe arrival at a destination. By predicting future trajectories of its own vehicle, learning from realizations to improve these predictions, and using these predictions to plan, an agent can learn to maximize reward much more quickly than if it learned only from reward feedback.
Auxiliary tasks are tasks that are distinct from, though possibly helpful to, the primary task of maximizing average reward. Prediction of future outcomes other than reward, as we considered in our self-driving car example, serves as an example of an auxiliary task. Learning to perform auxiliary tasks can accelerate learning for the primary task.
In a complex environment, it is often unclear what auxiliary tasks are helpful and how they relate to the primary task. However, a long-lived agent can learn this over time. We refer to this as _continual auxiliary learning_. In the remainder of this section, we illustrate the benefits of _continual auxiliary learning_ through the following didactic example:
**Example 12**.: **(continual auxiliary learning)** _Consider a modified version of continual SL where the input set is the singleton \(\mathcal{X}=\{\emptyset\}\), and the label set consists of \(k\) dimensional binary vectors \(\mathcal{Y}=\{0,1\}^{k}\). The labels are generated via_
\[Y_{t+1}\sim\sigma(\phi_{t}),\]
_where \(\sigma\) denotes the sigmoid function \(\exp(x)/(1+\exp(x))\) applied element-wise. We assume that \(\phi_{t}=A\theta_{t}\), where \(A\) is a sparse vector with only \(K\) non-zero components (including the first), and \(\theta_{t}\in\Re\) evolves according to the AR(1) process_
\[\theta_{t}=\eta\theta_{t-1}+W_{t},\]
Figure 14: Average reward versus the optimistic boost \(\zeta\). As the nonstationarity parameter \(\eta\) increases, the optimal optimistic boost \(\zeta^{*}\) increases.
_where \(W_{t}\stackrel{{ iid}}{{\sim}}\mathcal{N}(0,1-\eta^{2})\) and \(W_{t}\perp\theta_{t-1}\). The actions of the agent are predictions about the first component \(Y_{t+1,1}\) of the label. Hence, \(\mathcal{A}=\Delta_{\{0,1\}}\). We take reward to be the negative log-loss:_
\[r(H_{t},A_{t},O_{t+1})=\ln P_{t}(Y_{t+1,1}).\]
In this example, prediction of \(Y_{t+1,1}\) constitutes the primary task. Predictions of the remaining label components \(Y_{t+1,2:k}\) are possible auxiliary tasks. These components may offer information that allows an agent to more quickly learn about \(\theta_{t}\), which in turn improves its ability to perform its primary task. If the number of components \(k\) is large and most components are not relevant to the agent's primary task, a long-lived agent can improve its performance by learning which components _are_ relevant. In particular, an agent can learn over time about the vector \(A\). Learning about \(A\) enhances the agent's ability to quickly learn about \(\theta_{t}\). Learning about \(A\) can be thought of as _meta-learning_, as it aims to learn about something that can guide the agent's learning about its primary task. In the following sections, we will investigate the performance gains that this affords.
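The following sketch generates labels according to Example 12. The initial draw \(\theta_{0}\sim\mathcal{N}(0,1)\) (the stationary distribution of the AR(1) process) and the construction of the sparse vector \(A\) are our own assumptions for illustration.

```python
import numpy as np

def generate_labels(T, k=100, K=10, eta=0.99, seed=0):
    """Yield binary label vectors Y_{t+1} ~ Bernoulli(sigmoid(A * theta_t)) (sketch)."""
    rng = np.random.default_rng(seed)
    A = np.zeros(k)
    A[:K] = rng.normal(size=K)                 # sparse A: only K non-zero components (incl. the first)
    theta = rng.normal()                       # assumed stationary start, N(0, 1)
    for _ in range(T):
        p = 1.0 / (1.0 + np.exp(-A * theta))   # element-wise sigmoid of phi_t = A * theta_t
        yield (rng.random(k) < p).astype(float)
        theta = eta * theta + rng.normal(0.0, np.sqrt(1.0 - eta**2))
```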
#### 6.4.1 Methods
**No auxiliary learning.** As a baseline, we consider an agent that ignores all auxiliary components \(Y_{t+1,2:k}\). The agent maintains a scalar estimate \(\hat{\phi}_{t}\in\Re\). At each time, it first scales \(\hat{\phi}_{t}\) by \(\mu\) and then updates it via gradient descent to maximize reward:
\[\hat{\phi}^{\prime}_{t} =\mu\hat{\phi}_{t}\] \[L_{t} =-Y_{t+1,1}\log(\sigma(\hat{\phi}^{\prime}_{t}))-(1-Y_{t+1,1}) \log(1-\sigma(\hat{\phi}^{\prime}_{t}))\] \[g_{t} =\frac{\partial}{\partial\hat{\phi}^{\prime}_{t}}L_{t}\] \[=\frac{-Y_{t+1,1}+(1-Y_{t+1,1})\exp(\hat{\phi}^{\prime}_{t})}{1+ \exp(\hat{\phi}^{\prime}_{t})}\] \[\hat{\phi}_{t+1} =\hat{\phi}^{\prime}_{t}-\alpha g_{t},\]
where the best learning rate \(\alpha\) is found by grid search.
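In code, this baseline update is just a decayed logistic-regression step on the first label component. The sketch below uses our own names and the identity that the displayed gradient equals \(\sigma(\hat{\phi}^{\prime}_{t})-Y_{t+1,1}\).

```python
import numpy as np

def no_aux_step(phi_hat, y1, mu, alpha):
    """Decay phi_hat, then take one gradient step on the log-loss of Y_{t+1,1} (sketch)."""
    phi_prime = mu * phi_hat
    g = 1.0 / (1.0 + np.exp(-phi_prime)) - y1   # sigmoid(phi') - y, same as the displayed g_t
    return phi_prime - alpha * g
```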
**Auxiliary learning with \(A\) known.** We also consider the agent that perfectly learns from the auxiliary information. In particular, consider the agent that is given the value of \(A\), and maintains \(\hat{\theta}_{t}\in\Re\). At each time step, it first decays \(\hat{\theta}_{t}\) by \(\mu\), and then updates it by gradient descent to minimize the loss with respect to all components of \(Y\):
\[\hat{\theta}^{\prime}_{t} =\mu\hat{\theta}_{t}\] \[L_{t} =-Y^{T}_{t+1}\log(\sigma(A\hat{\theta}^{\prime}_{t}))-(1-Y_{t+1})^{T}\log(1-\sigma(A\hat{\theta}^{\prime}_{t}))\] \[g_{t} =\frac{\partial}{\partial\hat{\theta}^{\prime}_{t}}L_{t}\] \[=A^{T}\left[\left(-Y_{t+1}+(1-Y_{t+1})\odot\exp(A\hat{\theta}^{\prime}_{t})\right)\oslash\left(1+\exp(A\hat{\theta}^{\prime}_{t})\right)\right]\] \[\hat{\theta}_{t+1} =\hat{\theta}^{\prime}_{t}-\alpha g_{t},\]
where we use \(\odot\) and \(\oslash\) to denote element-wise multiplication and division, respectively, and the best learning rate \(\alpha\) is found by grid search.
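The analogous step for the agent that knows \(A\) is sketched below; it relies on the identity that the displayed gradient equals \(A^{\top}(\sigma(A\hat{\theta}^{\prime}_{t})-Y_{t+1})\). Names are ours.

```python
import numpy as np

def aux_known_A_step(theta_hat, A, y, mu, alpha):
    """Decay theta_hat, then take one gradient step on the full-label log-loss (sketch)."""
    theta_prime = mu * theta_hat
    p = 1.0 / (1.0 + np.exp(-A * theta_prime))  # sigmoid applied element-wise to A * theta'
    g = A @ (p - y)                             # equals the displayed g_t
    return theta_prime - alpha * g
```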
**Auxiliary learning with \(A\) learned.** This agent internally maintains both \(\hat{A}_{t}\) and \(\hat{\theta}_{t}\). It uses vanilla gradient descent on the loss with respect to all components to update \(\hat{\theta}_{t}\), and learns \(\hat{A}_{t}\) via meta-gradient descent. The update rules for \(\hat{\theta}_{t}\) are
\[\hat{\theta}^{\prime}_{t} =\mu\hat{\theta}_{t}\] \[L_{t} =-Y_{t+1}^{T}\log(\sigma(\hat{A}_{t}\hat{\theta}^{\prime}_{t}))-(1- Y_{t+1})^{T}\log(1-\sigma(\hat{A}_{t}\hat{\theta}^{\prime}_{t}))\] \[g_{t} =\frac{\partial}{\partial\hat{\theta}^{\prime}_{t}}L_{t}\] \[=\hat{A}_{t}^{T}\left[\left(-Y_{t+1}+(1-Y_{t+1})\odot\exp(\hat{A }_{t}\hat{\theta}^{\prime}_{t})\right)\oslash\left(1+\exp(\hat{A}_{t}\hat{ \theta}^{\prime}_{t})\right)\right]\] \[\hat{\theta}_{t+1} =\hat{\theta}^{\prime}_{t}-\alpha g_{t},\]
and the update rules for \(\hat{A}_{t}\) are
\[h_{t} =\mu h_{t-1}-\alpha\mu\left(-Y_{t}+(1-Y_{t})\odot\exp(\hat{A}_{t -1}\hat{\theta}^{\prime}_{t-1})\right)\oslash\left(1+\exp(\hat{A}_{t-1}\hat{ \theta}^{\prime}_{t-1})\right)\] \[\quad-\alpha\mu\hat{A}_{t-1}^{T}\left[\exp(\hat{A}_{t-1}\hat{ \theta}^{\prime}_{t-1})\odot\hat{A}_{t-1}\oslash\left(1+\exp(\hat{A}_{t-1}\hat {\theta}^{\prime}_{t-1})\right)^{\circ 2}\right]h_{t-1}\] \[\hat{A}_{t+1} =\hat{A}_{t}-\beta g_{t}h_{t},\]
where \((\cdot)^{\circ 2}\) denotes the element-wise square, and \(\beta\) is the meta-learning rate. Readers are referred to Appendix C.3 for the derivation of these update rules.
#### 6.4.2 Results
Figure 15 plots reward versus time. The results are generated with autoregressive model coefficient \(\mu=0.99\), label dimension \(k=100\), and \(K=10\) useful components. For the agent with no auxiliary learning and the agent performing auxiliary learning with \(A\) known, the best learning rate \(\alpha\) found by grid search is used (0.202 and 0.092, respectively). The agent performing auxiliary learning with \(A\) learned uses a learning rate \(\alpha=0.01\), which is far from optimal, and a meta-learning rate \(\beta=0.05\).
From Figure 15 we can see that learning \(A\) eventually yields performance at the level achieved with \(A\) known. Either of these outperforms an agent that does not engage in auxiliary learning. This demonstrates how continual auxiliary learning can help an agent perform well in the long run, even if the agent does not initially know which auxiliary tasks are useful.
However, the performance improvement comes at the cost of extra computation. In this particular example, the agent that does not engage in auxiliary learning requires \(O(1)\) FLOPs per time step, while the others require \(O(k)\) FLOPs per time step. Hence, the performance improvement relies on roughly \(k\) times more compute. In the general case, implementing continual auxiliary learning in a scalable manner, with only a modest increase in compute, remains an interesting topic for future research.
Figure 15: Learning \(A\) eventually yields performance at the level attained if \(A\) were known. This exceeds the performance of an agent that does not engage in auxiliary learning.
### Summary
**Continual Supervised Learning**
* We consider continual supervised learning with an objective of **maximizing average online accuracy**: \[\max_{\pi} \liminf_{T\rightarrow\infty}\mathbb{E}_{\pi}\left[\frac{1}{T}\sum_{ t=0}^{T-1}\mathbb{1}\left(Y_{t+1}=\hat{Y}_{t+1}\right)\right]\] s.t. computational constraint
* **Evaluation Protocol**: we first tune hyperparameters on a development sequence and then train the agent with the best hyperparameters on multiple evaluation sequences. The agent's performance is averaged across all evaluation sequences.
* Experiments with a variant of permuted MNIST indicate that:
* A performant agent can **forget non-recurring information**.
* A performant agent can **forget recurring information if it can relearn that information quickly** relative to the duration of the information's utility.
* When computation is constraining, **forgetting can increase average reward**.
**Continual Exploration**
* Via experiments with a coin swapping game, we identify three properties of effective exploration in the face of nonstationarity:
* An optimal agent **never stops exploring**.
* An agent should **prioritize acquisition of more durable information** -- that is, information that will remain valuable and relevant over a longer duration.
* To assess durability of information, an agent can benefit from **learning about environment dynamics**.
* Via experiments with a two-armed Gaussian bandit, we demonstrate that **prioritizing acquisition of more durable information is beneficial** for maximizing average reward and that the benefit of this is greater in environments with less durable information.
**Continual Learning with Delayed Consequences**
* Optimistic Q-learning can learn from delayed consequences in a nonstationary Markov decision process.
* As the degree of nonstationarity increases, more intense exploration increases average reward.
* A more sophisticated agent may be able to adapt stepsize, discount, and optimism hyperparameters online to increase average reward.
**Continual Auxiliary Learning**
* **Auxiliary tasks** are tasks that are distinct from, though possibly helpful to, the primary task of maximizing average reward.
* **Auxiliary learning** refers to identifying useful auxiliary tasks and their relationship to the primary task.
* A long-lived agent has time to learn not only how to perform auxiliary tasks but also what auxiliary tasks are useful to learn about and how they relate to the primary task.
## Conclusion
**Summary.** In this paper, we framed continual learning as computationally constrained reinforcement learning. Under this perspective, we formally introduced an objective for continual learning: to maximize the infinite-horizon average reward under computational constraints. We formalized the concepts of agent state, information capacity, and learning targets, which play key roles in thinking about continual learning and distinguishing it from traditional vanishing-regret learning. Leveraging information theory, we decomposed prediction error into forgetting and implasticity components. We concluded with case studies that examined the implications of our objective for the behaviors of performant agents.
**Future Research.** In our case studies, we discussed how different agent capabilities contribute to performance under our objective. These capabilities include balancing forgetting with fast relearning, seeking out durable information when exploring, modeling environment dynamics, and meta-learning hyperparameters. We hope that this work inspires researchers to design agents that exhibit such capabilities and to study the trade-offs between them under computational constraints. In particular, we hope our holistic objective helps researchers reason about these trade-offs and reveal additional desired capabilities.
## Acknowledgements
Financial support from the Stanford Knight Hennessy Fellowship, the Stanford MS&E fellowship, and the Army Research Office (ARO) Grant W911NF2010055 is gratefully acknowledged.
We thank participants of the 2023 Barbados Lifelong Reinforcement Learning Workshop, Stanford University students of the 2022 offering of Reinforcement Learning: Frontiers, Dave Abel, Andre Baretto, Dimitri Bertsekas, Shibhansh Dohare, Clare Lyle, Sanjoy Mitter, Razvan Pascanu, Doina Precup, Marc'Aurelio Ranzato, Mark Ring, Satinder Singh, Rich Sutton, John Tsitsiklis, Hado van Hasselt, and Zheng Wen for stimulating discussions and feedback that greatly benefited this monograph.
|
2308.05806 | Gain-compensated metal cavity modes and a million-fold improvement of
Purcell factors | Using a rigorous mode theory for gain-compensated plasmonic dimers, we
demonstrate how quality factors and Purcell factors can be dramatically
increased, improving the quality factors from 10 to over 26,000 and the peak
Purcell factors from around 3000 to over 10 billion. Full three-dimensional
calculations are presented for gold dimers in a finite-size gain medium, which
allows one to easily surpass fundamental Purcell factor limits of lossy media.
Within a regime of linear system response, we show how the Purcell factors are
modified from the contributions from the projected local density of states as
well as a non-local gain. Further, we show that the effective mode volume and
radiative beta factors remain relatively constant, despite the significant
enhancement of the Purcell factors. | Becca VanDrunen, Juanjuan Ren, Sebastian Franke, Stephen Hughes | 2023-08-10T18:02:19Z | http://arxiv.org/abs/2308.05806v2 | # Gain-compensated metal cavity modes and a million-fold improvement of Purcell factors
###### Abstract
Using a rigorous mode theory for gain-compensated plasmonic dimers, we demonstrate how quality factors and Purcell factors can be dramatically increased, improving the quality factors from 10 to over 26,000 and the peak Purcell factors from around 3000 to over 10 billion. Full three-dimensional calculations are presented for gold dimers in a finite-size gain medium, which allows one to easily surpass fundamental Purcell factor limits of lossy media. Within a regime of linear system response, we show how the Purcell factors are modified from the contributions from the projected local density of states as well as a non-local gain. Further, we show that the effective mode volume and radiative beta factors remain relatively constant, despite the significant enhancement of the Purcell factors.
_Introduction._--Plasmonic resonators, formed by metal nanoparticles (MNPs), have become a prominent topic in nanophotonics, due in part to their unique abilities for enhancing light-matter interactions on extremely small spatial scales [1; 2]. Plasmonic resonators exploit surface plasmons, which occur at the interface of a metal and a dielectric material, and are the result of mixed electronic-optical excitations on the surface of the metal [3]. Plasmonic resonators yield electromagnetic modes that are well below the diffraction limit [1; 4], leading to improved sensing and fast generation of single photons [5; 6].
Metal based cavity modes have been explored theoretically [7; 8] and experimentally [9; 10], for a wide range of photonics applications. However, plasmonic resonators have significant decay rates, since metal is inherently a lossy material. Thus, an important goal in optical plasmonics is to find methods for alleviating this loss. Historically, optical mode theories of plasmonics were thought to be problematic [11], but this view largely stems from the use of ill-defined mode models, which treat such systems like regular "normal modes" without much consideration for losses.
Recently, accurate cavity mode theories used to describe the optical response of MNPs have been formulated, which are based on "quasinormal modes" (QNMs)--the formal solution for open cavity modes [12; 13; 14; 15]. Similar to _normal modes_, QNMs are solutions to the source-free Helmholtz equation, but with _open_ boundary conditions, where the solutions have complex eigenfrequencies and spatially diverging fields (due to temporal losses) [12; 13]. By exploiting QNM theory, it has become clear that cavity physics applies to plasmonic resonator structures [15; 16]. Theories based on QNMs offer a significant advantage over common all-numerical approaches, which can be tedious, limited in scope, and often do not even explain the basic mechanisms of field enhancement; direct numerical approaches also have to be repeated many times, e.g., for studying the emission decay of oscillating dipoles as a function of dipole position. In contrast, a mode theory provides many physical insights, is efficient, has wide applications, and lends itself to mode quantization [17]. Moreover, the modes form a basis for computing the photonic Green function (GF), which can be used to describe a wide range of optical phenomena in both classical and quantum optics [18; 19; 20; 21; 14].
Accounting for both material loss and radiative loss, the mode quality factor is defined from \(Q=\omega_{c}/(2\gamma_{c})\) (\(2\gamma_{c}\) is the energy loss rate of the mode \(c\)), which for plasmonic resonators is much smaller than for typical dielectric cavities [22]. Thus, MNP resonances typically generate quality factors of around \(Q\approx 10-20\)[23; 24; 25; 26], which manifest in a very short cavity mode lifetime, \(\tau_{c}=1/\gamma_{c}\); e.g., a resonance of \(\hbar\tilde{\omega}_{c}=1.780-i0.068\) eV [27] corresponds to \(\tau_{c}\approx 0.01\) ps. Such losses prohibit many applications in coherent optics, including surface plasmon lasing and spasing [28; 29].
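As a quick numerical check of these relations, the quoted complex eigenfrequency \(\hbar\tilde{\omega}_{c}=1.780-i0.068\) eV indeed gives \(Q\approx 13\) and \(\tau_{c}\approx 0.01\) ps; the short script below is our own illustration.

```python
hbar_eVs = 6.582e-16                  # hbar in eV*s
omega_c, gamma_c = 1.780, 0.068       # real and imaginary parts of hbar*omega_c, in eV

Q = omega_c / (2.0 * gamma_c)         # quality factor, ~13
tau_ps = (hbar_eVs / gamma_c) * 1e12  # cavity lifetime 1/gamma_c, ~0.0097 ps
print(Q, tau_ps)
```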
One potential method for mitigating this significant loss is through gain compensation, which uses material gain to suppress some of the dissipation, using a linear amplification regime [30; 31; 32]. It is important to note that the total cavity structure must be overall lossy for the properties of cavity physics and QNM theory to apply, and also to maintain a _linear medium_ response. More specifically, the entire GF is only allowed to have complex poles in the lower complex half plane. To model such a structure, both the metal and the gain are defined by a _complex_ dielectric constant, where the imaginary part describes loss or gain.
Gain media have been utilized in several applications to suppress metallic losses [33; 34; 35]. For example, loss suppression in MNPs has been studied by doping them in gain [36; 37] or by adding gain media to plasmonic resonators [38; 39] to observe the impact on the response of an emitter. While there exist some studies on how the quality factor changes as a result of gain-compensation [37; 38; 40; 41], little has been done to quantify how enhanced spontaneous emission (SE) and Purcell factors change from coupled dipole emitters. Furthermore, the concept of spasing has been an enticing area of study for decades [42; 43; 44], yet many approaches lack the robustness of a rigorous mode theory, which is a requirement in quantum optics.
A particularly important metric in nanophotonics is the Purcell factor [45], which describes the enhanced SE rate of a dipole emitter, \(\Gamma(\mathbf{r}_{0},\omega)\), normalized to the rate from a background homogeneous medium, \(\Gamma_{\text{B}}(\mathbf{r}_{0},\omega)\):
\[F_{\text{P}}\equiv\frac{3}{4\pi^{2}}\left(\frac{\lambda_{0}}{n_{\text{B}}} \right)^{3}\frac{Q}{V_{\text{eff}}}, \tag{1}\]
where \(Q\) and \(V_{\text{eff}}\) are the mode quality factor and effective
mode volume, respectively, \(\lambda_{0}\) is the free space wavelength, and \(n_{\rm B}\) is the background refractive index. This well known formula assumes the emitter is on resonance with the cavity mode and at a field maximum position, with the same polarization as the cavity mode. Traditionally, the effective mode volume describes the volume associated with mode localization, but that is no longer correct using QNMs (as the modes diverge in space), and the generalized mode volume can be both complex and position dependent [46]. Moreover, when gain is added to the cavity, this formula no longer applies (even when using QNMs), and one must include a non-local correction from gain [47; 48].
In this work, we show how material gain can significantly improve the MNP cavity mode properties, using a rigorous QNM theory with linear amplification. The key findings are: (i) the effective mode volume of the mode profiles stays nearly constant as gain is introduced to the system; (ii) the Purcell factor yields a million-fold improvement when material gain is added; and (iii) the radiative beta factor also remains relatively constant. Our gain theory has a wide range of applications, including quantum sensing, as well as lasing and fundamental topics in quantum optics.
_Theory._--The QNMs, \(\mathbf{\tilde{f}}_{\mu}\), are the mode solutions to the Helmholtz equation, with open boundary conditions [14]:
\[\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\mathbf{\times}\mathbf{\tilde{f}}_{\mu}(\mathbf{ r})-\left(\frac{\tilde{\omega}_{\mu}}{c}\right)^{2}\epsilon(\mathbf{r},\tilde{ \omega}_{\mu})\mathbf{\tilde{f}}_{\mu}(\mathbf{r})=0, \tag{2}\]
where \(c\) is the speed of light in a vacuum, \(\tilde{\omega}_{\mu}\) is the QNM complex eigenfrequency \(\tilde{\omega}_{\mu}=\omega_{\mu}-i\gamma_{\mu}\), and \(\epsilon(\mathbf{r},\tilde{\omega}_{\mu})\) is the complex dielectric function. The inhomogeneous Helmholtz equation for an arbitrary polarization source can be used to define the GF, from \(c^{2}\nabla\times\nabla\times\mathbf{G}(\mathbf{r},\mathbf{r}^{\prime}, \omega)-\omega^{2}\epsilon(\mathbf{r},\omega)\mathbf{G}(\mathbf{r},\mathbf{r}^ {\prime},\omega)=\omega^{2}\mathbf{1}\delta(\mathbf{r}-\mathbf{r}^{\prime})\), where the electric field solution is at \(\mathbf{r}\), when a source field dipole is at \(\mathbf{r}^{\prime}\). Within or near the cavity region, the GF can be expressed as a sum of normalized QNMs [19; 27; 49]:
\[\mathbf{G}(\mathbf{r},\mathbf{r}_{0},\omega)=\sum_{\mu}A_{\mu}(\omega) \mathbf{\tilde{f}}_{\mu}(\mathbf{r})\mathbf{\tilde{f}}_{\mu}(\mathbf{r}_{0}), \tag{3}\]
where \(A_{\mu}(\omega)=\omega/[2(\tilde{\omega}_{\mu}-\omega)]\)[7], and we note the vector product is unconjugated (i.e., not \(\mathbf{\tilde{f}}_{\mu}\mathbf{\tilde{f}}_{\mu}^{*}\)), which is a consequence of using a non-Hermitian theory. When a single QNM dominates, \(\mu=c\), then the GF is simply
\[\mathbf{G}_{c}(\mathbf{r},\mathbf{r}_{0},\omega)\approx A_{c}(\omega) \mathbf{\tilde{f}}_{c}(\mathbf{r})\mathbf{\tilde{f}}_{c}(\mathbf{r}_{0}). \tag{4}\]
In a lossy material system with no gain, the SE rate for a dipole emitter at a location \(\mathbf{r}_{0}\) is determined from the (projected) local density of states (LDOS) [12; 27]:
\[\Gamma_{\rm LDOS}^{\rm SE}(\mathbf{r}_{0},\omega)=\frac{2}{\hbar\epsilon_{0}} \mathbf{d}\cdot{\rm Im}[\mathbf{G}(\mathbf{r}_{0},\mathbf{r}_{0},\omega)] \cdot\mathbf{d}, \tag{5}\]
and using a single QNM expansion approximation, \(\mathbf{G}\) becomes \(\mathbf{G}_{c}\), as shown in Eq. (4). The SE rate for a dipole in a homogeneous medium, \(\Gamma_{\rm B}(\mathbf{r}_{0},\omega)\), is similarly obtained by replacing \(\mathbf{G}\) by \(\mathbf{G}_{\rm B}\) (the GF of a homogeneous medium), which is known analytically. The LDOS Purcell factor is then given by [12; 27; 50]
\[F_{\rm P}^{\rm LDOS}(\mathbf{r}_{0},\omega)=1+\frac{6\pi c^{3}}{\omega^{3}n_{ \rm B}}\mathbf{n}_{\rm d}\cdot{\rm Im}[\mathbf{G}(\mathbf{r}_{0},\mathbf{r}_{0 },\omega)]\cdot\mathbf{n}_{\rm d}, \tag{6}\]
and \(F_{\rm P}^{\rm QNM,LDOS}\) is obtained by using \(\mathbf{G}\to\mathbf{G}_{c}\).
However, this classical LDOS formalism for the Purcell factor is not correct in the presence of a linear gain medium. Instead, the total SE rate can be written as [47; 48]
\[\Gamma_{\rm tot}^{\rm SE}(\mathbf{r}_{0},\omega)=\Gamma_{\rm LDOS}^{\rm SE}( \mathbf{r}_{0},\omega)+\Gamma_{\rm gain}^{\rm SE}(\mathbf{r}_{0},\omega), \tag{7}\]
which notably contains an extra net-positive term related to gain region added to the traditional LDOS term; this correction can be derived quantum mechanically [14; 47] or classically [48]. The total Purcell factor is then
\[F_{\rm P}(\mathbf{r}_{0},\omega)=1+\frac{\Gamma_{\rm tot}^{\rm SE}(\mathbf{r}_ {0},\omega)}{\Gamma_{\rm B}(\mathbf{r}_{0},\omega)}. \tag{8}\]
In origin, the well known LDOS-SE formula is linked to the GF identity, \(\int_{\mathbb{R}^{3}}\mathrm{d}\mathbf{s}\,\epsilon_{I}(\mathbf{s})\mathbf{G}(\mathbf{s},\mathbf{r})\cdot\mathbf{G}^{*}(\mathbf{s},\mathbf{r}^{\prime})={\rm Im}[\mathbf{G}(\mathbf{r},\mathbf{r}^{\prime})]\), which involves an integration over all space. However, from this identity, one must subtract the contribution from the gain, and this results in _adding_ the separate gain contribution term, which is given by [47]
\[\Gamma_{\rm gain}^{\rm SE}(\mathbf{r}_{0},\omega)=\frac{2}{\hbar\epsilon_{0}} \mathbf{d}\cdot\mathbf{K}(\mathbf{r}_{0},\mathbf{r}_{0},\omega)\cdot\mathbf{d}, \tag{9}\]
where
\[\mathbf{K}(\mathbf{r}_{0},\mathbf{r}_{0},\omega)=\!\int_{V_{\rm gain}}\!d \mathbf{s}\left|{\rm Im}[\epsilon^{\rm gain}(\mathbf{s})]\right|\mathbf{G}( \mathbf{r}_{0},\mathbf{s},\omega)\!\cdot\!\mathbf{G}^{*}(\mathbf{s},\mathbf{r}_ {0},\omega). \tag{10}\]
In the case of a single dominant QNM, \(\mu=c\), then
\[\begin{split}\Gamma_{\rm gain}^{\rm SE}(\mathbf{r}_{0},\omega)& =\frac{2|\mathbf{d}|^{2}}{\hbar\epsilon_{0}}\Big{|}A_{c}(\omega) \Big{|}^{2}\Big{|}\mathbf{n}_{\rm d}\cdot\mathbf{\tilde{f}}_{c}(\mathbf{r}_{0}) \Big{|}^{2}\!\times\\ &\int_{V_{\rm gain}}\!d\mathbf{s}\Big{|}{\rm Im}[\epsilon^{\rm gain }(\mathbf{s})]\Big{|}\Big{|}\mathbf{\tilde{f}}_{c}(\mathbf{s})\Big{|}^{2}. \end{split} \tag{11}\]
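To illustrate how Eqs. (4), (6), (8), and (11) combine in the single-mode limit, the sketch below evaluates the LDOS and total Purcell factors from a complex eigenfrequency, the projected normalized QNM field at the dipole position, and the gain-region overlap integral. All inputs are assumed to come from a separate QNM (e.g., COMSOL) calculation, the normalization by the homogeneous-medium decay rate is standard, and the variable names are ours.

```python
import numpy as np

def purcell_single_qnm(omega, omega_c, f_proj, gain_overlap, n_B, c=299792458.0):
    """LDOS and total Purcell factors for one QNM (sketch, SI-normalized fields).

    omega:        real emitter frequency (rad/s)
    omega_c:      complex QNM eigenfrequency (rad/s)
    f_proj:       n_d . f_c(r_0), projected normalized QNM field at the dipole
    gain_overlap: integral of |Im eps_gain| * |f_c|^2 over the gain region
    """
    A_c = omega / (2.0 * (omega_c - omega))             # expansion coefficient of Eq. (3)
    prefac = 6.0 * np.pi * c**3 / (omega**3 * n_B)      # converts rates to Purcell units
    F_ldos = 1.0 + prefac * np.imag(A_c * f_proj**2)    # Eq. (6) with G -> G_c
    F_gain = prefac * np.abs(A_c)**2 * np.abs(f_proj)**2 * gain_overlap  # from Eq. (11)
    return F_ldos, F_ldos + F_gain                       # (LDOS, total) Purcell factors
```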
From the perspective of classical power flow arguments [48], the total Purcell factor can be written as
\[F_{\rm P}(\mathbf{r}_{0},\omega)=\frac{P_{\rm LDOS}(\mathbf{r}_{0},\omega)+P_{ \rm gain}(\mathbf{r}_{0},\omega)}{P_{0}(\omega)}, \tag{12}\]
where \(P_{\rm LDOS}\) and \(P_{0}\) are the power flow from the point dipole with and without the cavity structure (background medium), respectively, and \(P_{\rm gain}\) is the power flowing out from the gain region (net-positive). By removing the \(P_{\rm gain}\) term, this gives the LDOS Purcell factor. Equation (12) can be solved using QNMs or numerically, which is useful to justify the QNM solutions. The total Purcell factors can also be written as
\[F_{\rm P}(\mathbf{r}_{0},\omega)=\frac{P_{\rm far}(\mathbf{r}_{0},\omega)+P_{ \rm loss}(\mathbf{r}_{0},\omega)}{P_{0}(\omega)}, \tag{13}\]
where \(P_{\rm far}\) and \(P_{\rm loss}\) are the power radiated to the far-field region and the power dissipated within the lossy region, respectively.
From this general SE decay theory, one can also determine the radiative beta factor (\(\beta\)-factor), which represents the probability that an emitted photon will decay radiatively to the far field [27]. Generally, there are both radiative and non-radiative \(\beta\)-factors, and the sum of these must equal one [8]. The radiative beta factor is [48]
\[\beta^{\rm rad}({\bf r}_{0},\omega)=\frac{P_{\rm far}({\bf r}_{0},\omega)}{P_ {\rm far}({\bf r}_{0},\omega)+P_{\rm loss}({\bf r}_{0},\omega)}, \tag{14}\]
which can also be written equivalently in terms of \(P_{\rm LDOS}\) and \(P_{\rm gain}\) based on the power conservation law [48].
_Results._--The main MNP cavity structure we model consists of a gold dimer enclosed in a finite-sized cylindrical region of gain, as shown in Fig. 1. The dielectric constant for the gold is described by the Drude model, \(\epsilon^{\rm Drude}(\omega)=1-\frac{\omega_{\rm p}^{2}}{\omega^{2}+i\omega \gamma_{\rm p}}\), with \(\hbar\omega_{\rm p}=8.2934\) eV and \(\hbar\gamma_{\rm p}=0.0928\) eV. The gap region between the nanorods (where the dipole sits) is considered to have a real permittivity. The gain region has a permittivity of \(\epsilon^{\rm gain}=2.25-i\alpha_{g}\), so the gap region simply takes the real component of this. We assume the gain is dispersionless, but gain dispersion results are discussed in the Supplementary Material (SM) [51], and do not affect our general findings. For all calculations below, we obtain the QNMs numerically using a complex frequency approach, implemented in COMSOL [7; 27].
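For reference, the Drude permittivity with the quoted parameters can be evaluated directly; near the dimer resonance at \(\hbar\omega\approx 1.2\) eV it gives \(\epsilon^{\rm Drude}\approx-46.5+3.7i\) (our own numerical check, not a value quoted in the text).

```python
def eps_drude(hw, hwp=8.2934, hgp=0.0928):
    """Drude permittivity of gold at photon energy hw (all energies in eV)."""
    return 1.0 - hwp**2 / (hw**2 + 1j * hw * hgp)

print(eps_drude(1.2))   # approximately (-46.5 + 3.67j)
```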
When gain is introduced to the cavity system, it is important to first analyze the gain-modified mode profiles of these systems to determine how the dominant QNM behaves in the presence of material gain. The vector-field QNM is dominated by \(z\)-polarization, which peaks in frequency near \(\hbar\omega\approx 1.2\) eV. As can be seen in Fig. 2, the spatial profile of the QNM is similar with and without the gain medium. Furthermore, this can be verified quantitatively by calculating the effective mode volume, \(V_{\rm eff}^{-1}({\bf r}_{0})=\epsilon({\bf r}_{0}){\rm Re}[\hat{\bf f}_{\rm z}^{2}({\bf r}_{0})]\), from QNM theory. See the SM [51] for the computed effective mode volume at the gap center.
For the two cases shown in Fig. 2, the complex QNM eigenfrequencies are \(\hbar\tilde{\omega}_{c}=1.198-i5.934\cdot 10^{-2}\) eV and \(\hbar\tilde{\omega}_{c}=1.195-i2.238\cdot 10^{-5}\) eV, respectively. This leads to a QNM quality factor \(Q\) of 10 for the case with no gain and 26,698 for the case with the largest amount of gain, showing that a significant improvement in \(Q\) is possible.
In the presence of gain, the LDOS Purcell factor of a dipole placed at the center of the dimer gap can be calculated again using Eq. (6) and checked against a full numerical dipole calculation [27; 48]. By comparing the LDOS Purcell factor over a range of frequencies for both approaches, we can determine if the analytical single QNM expansion method is accurate. Additionally, the LDOS Purcell factor gives insights into how the gain medium impacts the SE rates. We can then add in the non-local gain contribution to obtain the (correct) total Purcell factors.
A summary of the computed Purcell factors with and without gain is shown in Fig. 3. In Fig. 3a, we show the LDOS Purcell factors for \(\alpha_{g}\) ranging from 0 to \(2.2\cdot 10^{-1}\); the agreement between the QNM method (curves) and the full dipole method (symbols) is excellent, and we stress there are no fitting parameters used in this model.
Moreover, both Figs. 3a and 3b (which increases the gain further to \(\alpha_{g}=2.5\cdot 10^{-1}\sim 2.54\cdot 10^{-1}\), as in Fig. 3c) demonstrate that a substantial increase in the LDOS Purcell factor is possible as gain is added to the system. Specifically, the LDOS Purcell factor increases by a factor of nearly 3000 when comparing the case with no gain to the case with the largest amount of gain. Since the effective mode volume is similar with and without gain, this increase is primarily caused by _gain compensating_ the loss. As anticipated, the LDOS Purcell factor is negative at certain frequencies (as the LDOS becomes negative), and it is necessary to look at the total Purcell factors. We note again that there is nothing unphysical about a negative LDOS; it means that there is more local power flow back to the dipole location than out of the dipole, but the entire cavity system is still net lossy, as rigorously quantified by the QNM complex poles.
Figure 1: (a) A 2D view of the resonator system. The dielectric functions for each material are labelled, where the background medium has \(\epsilon_{\rm B}=2.25\) (\(\epsilon_{\rm B}=n_{\rm B}^{2}\)), the gain region has \(\epsilon^{\rm gain}=2.25-i\alpha_{g}\) (where \(\alpha_{g}\) is the gain parameter), the gold nanorods have \(\epsilon^{\rm Drude}\) which is governed by the Drude model (see text), and the gap region has \(\epsilon^{\rm gap}=2.25\). The gold nanorods have a length of 80 nm, a radius of 10 nm, and the gap distance is 20 nm; \(h\) represents the height of the gain region, and is 400 nm, and \(r\) represents the radius of the gain region, which is 200 nm. (b) 3D version of the system (as used in our model).
Figure 2: Surface plots of the computed QNM profile (using the dominant field component) for the plasmonic resonator system (a) without gain, and (b) with gain, using \(\alpha_{g}=2.54\cdot 10^{-1}\).
Next, we show the total Purcell factor in Fig. 3c, for three different values of gain. There are no longer any negative values for the total Purcell factor, and the (correct) enhanced SE is increased further, by many orders of magnitude, peaking at a value of \(2.08\cdot 10^{10}\) when the gain value is \(2.54\cdot 10^{-1}\), an increase by a factor of \(7.1\cdot 10^{6}\) over the case with no gain. As anticipated by our analytical QNM theory, the lineshapes also deviate significantly from a Lorentzian and become similar to a Lorentzian squared, as can be seen from the change in lineshape between the LDOS and total Purcell factor curves; these resonances share some features and applications of resonances near exceptional points [52, 53, 14], such as enhanced sensing.
As mentioned before, for the regime of _linear amplification_ it is crucial to ensure that the QNM eigenfrequency \(\tilde{\omega}_{\rm c}=\omega_{\rm c}-i\gamma_{\rm c}\) has a positive value of \(\gamma_{\rm c}\). This threshold has been found heuristically in other works [37]; however, the QNM method provides a more rigorous definition of the linear amplification regime through the resonant eigenfrequency, and quantifies the physics in terms of the resonant modes. The highest peak in Figs. 3b and 3c, corresponding to \(\alpha_{g}=2.54\cdot 10^{-1}\), has a peak frequency of \(\hbar\tilde{\omega}_{\rm c}=1.195-2.238\cdot 10^{-5}i\) eV, where indeed \(\gamma_{\rm c}\) is positive.
Finally, we examine the modified beta factors. Using Eq. (14), the \(\beta^{\rm rad}\) factor can be calculated for the model with and without gain. In the linear regime, the \(\beta\)-factor must have an upper limit of 1, which is the \(\beta\)-factor for an ideal lossless dielectric system, and values surpassing this limit are indicative of the lasing regime [54]. When there is no gain, the beta factor from the metal dimer cavity (on resonance) is 0.33, and when \(\alpha_{g}=0.254\), the beta factor is 0.376, which is only slightly higher than the case with no gain. Further insights about the beta factor over a wider frequency range can be found in the SM [51]. Having a large radiative beta factor is useful for many applications, such as lasing/spasing and sensing [55, 56, 57, 58].
In summary, using a rigorous and powerful QNM theory, the enhanced SE rates of a dipole emitter in a plasmonic resonator system were studied, with and without gain. Using material gain to compensate for the lossy nature of the gold dimer, the LDOS and total Purcell factors were shown to be substantially increased; the Purcell factor with no gain peaks around 2900, but with a maximum gain value of \(\alpha_{g}=2.54\cdot 10^{-1}\) (for linear amplification) the LDOS and total Purcell factors peak at \(7.7\cdot 10^{6}\) and \(2.08\cdot 10^{10}\), respectively. We also discussed how we ensure a regime of linear amplification, and demonstrated that a single QNM theory worked quantitatively well by comparing with numerically exact simulations (subject to numerical limitations). Studying these systems and determining how gain-compensation of loss impacts properties, such as the enhanced SE, are important for developing accurate models of plasmonic lasers, quantum sensors, and lossy cavity systems that can possibly achieve the regime of ultrastrong light-matter coupling (\(g/\omega_{c}>0.1\)[59, 60, 61], where \(g\) is the cavity-dipole coupling rate).
We acknowledge funding from Queen's University, Canada, the Canadian Foundation for Innovation (CFI), the Natural Sciences and Engineering Research Council of Canada (NSERC), and CMC Microsystems for the provision of COMSOL Multiphysics. We also acknowledge support from the Alexander von Humboldt Foundation through a Humboldt Research Award.
Figure 3: (a) LDOS Purcell factors for various gain coefficients, \(\alpha_{g}\), corresponding to the model in Fig. 1. The curves show the QNM method, calculated with Eq. (6), and the symbols represent the full dipole numerical solution with Eq. (12). The orange dashed line/symbols are for the case with no gain (\(\alpha_{g}=0\)), the green line/points represent \(\alpha_{g}=1\cdot 10^{-1}\), the magenta line/points represent \(\alpha_{g}=2\cdot 10^{-1}\), and the black line/points represent \(\alpha_{g}=2.2\cdot 10^{-1}\). The \(\gamma_{\rm c}\) values for each curve in increasing order of \(\alpha_{g}\) are: \(\gamma_{0},0.61\gamma_{0}\), \(0.21\gamma_{0}\), \(0.13\gamma_{0}\), where \(\gamma_{0}=9.016\cdot 10^{13}\) rads/s. (b) LDOS Purcell factors for larger values of \(\alpha_{g}\). The blue line/points represent the case where \(\alpha_{g}=2.5\cdot 10^{-1}\), the red line/point represents the case where \(\alpha_{g}=2.53\cdot 10^{-1}\), and finally the gold line/point represents the case where \(\alpha_{g}=2.54\cdot 10^{-1}\). The \(\gamma_{\rm c}\) values for each curve in increasing order of \(\alpha_{g}\) are: \(0.06\gamma_{0}\), \(0.004\gamma_{0}\), and \(0.0004\gamma_{0}\). (c) _Total_ Purcell factor over a range of energies for three different values of \(\alpha_{g}\) for the model in Fig. 1, calculated with Eq. (8) for the QNM method, and a full numerical dipole method to confirm the results, using Eq. (12). The blue line/points are for \(\alpha_{g}=2.5\cdot 10^{-1}\), the red line/points are for \(\alpha_{g}=2.53\cdot 10^{-1}\), and the gold line and points are for \(\alpha_{g}=2.54\cdot 10^{-1}\), all plotted on a logarithmic \(y\)-axis. The gold curve peaks at \(2.08\cdot 10^{10}\). The dashed blue line represents the LDOS Purcell factor when \(\alpha_{g}=2.5\cdot 10^{-1}\), which shows a significant difference (and becomes negative at larger frequencies). |
2306.15175 | Error analyses of Sinc-collocation methods for exponential decay initial
value problems | Nurmuhammad et al. developed the Sinc-Nystr\"{o}m methods for initial value
problems in which the solutions exhibit exponential decay end behavior. In
these methods, the Single-Exponential (SE) transformation or the
Double-Exponential (DE) transformation is combined with the Sinc approximation.
Hara and Okayama improved on these transformations to attain a better
convergence rate, which was later supported by theoretical error analyses.
However, these methods have a computational drawback owing to the inclusion of
a special function in the basis functions. To address this issue, Okayama and
Hara proposed Sinc-collocation methods, which do not include any special
function in the basis functions. This study conducts error analyses of these
methods. | Tomoaki Okayama, Ryota Hara, Shun'ichi Goto | 2023-06-27T03:22:19Z | http://arxiv.org/abs/2306.15175v3 | # Error analyses of Sinc-collocation methods for exponential decay initial value problems1
###### Abstract
Nurmuhammad et al. developed the Sinc-Nystrom methods for initial value problems in which the solutions exhibit exponential decay end behavior. In these methods, the Single-Exponential (SE) transformation or the Double-Exponential (DE) transformation is combined with the Sinc approximation. Hara and Okayama improved on these transformations to attain a better convergence rate, which was later supported by theoretical error analyses. However, these methods have a computational drawback owing to the inclusion of a special function in the basis functions. To address this issue, Okayama and Hara proposed Sinc-collocation methods, which do not include any special function in the basis functions. This study conducts error analyses of these methods.
keywords: Ordinary differential equations, Initial value problems, Volterra integral equations, Sinc numerical methods, SE transformation, DE transformation
MSC [2010]: 65L04, 65L05, 65R20, 65D30
Footnote †: journal: Elsevier
## 1 Introduction and summary
This study focuses on numerical solution for systems of initial value problems of the following form:
\[\begin{cases}\mathbf{y}^{\prime}(t)=K(t)\mathbf{y}(t)+\mathbf{g}(t),\quad t\geq 0,\\ \mathbf{y}(0)=\mathbf{r},\end{cases} \tag{1.1}\]
where \(K(t)\) is an \(m\times m\) matrix whose \((i,j)\) elements are \(k_{ij}(t)\), and \(\mathbf{y}(t)\), \(\mathbf{g}(t)\), and \(\mathbf{r}\) are \(m\)-dimensional vectors. In this study, the solution \(\mathbf{y}(t)\) is assumed to decay exponentially as \(t\to\infty\). For such a case, Nurmuhammad et al. [3] proposed the Sinc-Nystrom methods by means of the Sinc indefinite integration and two types of variable transformations: the Single-Exponential (SE) transformation and the Double-Exponential (DE) transformation. In their numerical experiments, these methods exhibited exponential convergence with respect to the number of sampling points \(l\), which is much faster than polynomial convergence such as \(\mathrm{O}(l^{-4})\). It should be noted that such fast convergence was also observed for a stiff problem.
Theoretical error analyses for the Sinc-Nystrom methods were provided by Hara and Okayama [2]. They improved the SE and DE transformations in the Sinc-Nystrom methods, and theoretically showed that the Sinc-Nystrom method
combined with the SE transformation (called SE-Sinc-Nystrom method) can attain \(\mathrm{O}(\exp(-c\,\sqrt{l}))\), and the Sinc-Nystrom method combined with the DE transformation (called DE-Sinc-Nystrom method) can attain \(\mathrm{O}(\exp(-cl/\,\log l))\). These convergence rates were derived with the assumptions that \(\|A_{lm}^{-1}\|_{\infty}\) and \(\|B_{lm}^{-1}\|_{\infty}\) do not diverge exponentially with respect to \(l\), where \(A_{lm}\) and \(B_{lm}\) denote the coefficient matrices of the system of linear equations for the SE- and DE-Sinc-Nystrom methods, respectively. These assumptions seem reasonable in view of their numerical observations.
However, the Sinc-Nystrom methods have a disadvantage in terms of computational cost. This is because the basis functions of these methods include the sine integral, which is a special function defined by
\[\mathrm{Si}(x)=\int_{0}^{x}\frac{\sin t}{t}\,\mathrm{d}t. \tag{1.2}\]
To eliminate this disadvantage, Okayama and Hara [7] proposed Sinc-collocation methods so that basis functions do not include any special function. Their methods were derived by means of the Sinc approximation with a boundary treatment, combined with the SE/DE transformations. We note that such methods were already derived for initial value problems over a finite interval [5], but not derived for the present case, i.e., initial value problems with exponential decay end behavior over the semi-infinite interval \((0,\infty)\). Based on the results of numerical experiments, they reported that the Sinc-collocation methods achieved the same precision as the Sinc-Nystrom methods, but with significantly lower computational costs.
The objective of this study is to provide a theoretical explanation for their report. The key to this task is the error analysis of the Sinc approximation with a boundary treatment. In the case where the SE transformation is used with the Sinc approximation, such an error analysis was provided [6], whereas it was not provided in the case where the DE transformation is used. Therefore, we present the error analysis of the latter case. Then, with the aid of the error analyses of the Sinc-Nystrom methods, we analyze the error of the Sinc-collocation methods. As a result, it is shown that the convergence rate of the Sinc-collocation methods is only slightly worse than that of the Sinc-Nystrom methods. This is not a negative but a positive result for the Sinc-collocation methods; we can conclude that, without using any special function, the Sinc-collocation methods can achieve almost the same convergence rate as the Sinc-Nystrom methods.
The remainder of this paper is organized as follows. As a preliminary, we describe the Sinc approximation and the Sinc indefinite integration and their convergence theorems in Sect. 2. The Sinc-Nystrom methods and their error analyses are described in Sect. 3. The Sinc-collocation methods and their error analyses, the main results of this study, are described in Sect. 4. The proofs of the new theorems in Sect. 2 are provided in Sect. 5. The proofs of the new theorems in Sect. 4 are provided in Sect. 6.
## 2 Sinc approximation and Sinc indefinite integration
### Sinc approximation
The Sinc approximation is a function approximation formula over the real axis \(\mathbb{R}\), which is expressed as
\[F(x)\approx\sum_{j=-M}^{N}F(jh)S\,(j,h)(x),\quad x\in\mathbb{R}, \tag{2.1}\]
where \(S\,(j,h)(x)\) is the so-called "Sinc function" defined by
\[S\,(j,h)(x)=\frac{\sin[\pi(x-jh)/h]}{\pi(x-jh)/h},\]
and \(h\), \(M\), and \(N\) are suitably selected depending on a given positive integer \(n\).
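As a point of reference, the Sinc basis and the truncated approximation (2.1) take only a few lines of NumPy. The sketch below is ours (function names and conventions are not from the cited works), and \(h\), \(M\), \(N\) are taken as given, since they are fixed by the theorems that follow.

```python
import numpy as np

def sinc_basis(j, h, x):
    # S(j,h)(x) = sin(pi (x - j h)/h) / (pi (x - j h)/h); note np.sinc(t) = sin(pi t)/(pi t)
    return np.sinc((np.asarray(x, dtype=float)[..., None] - j * h) / h)

def sinc_approx(F, h, M, N, x):
    # Truncated Sinc approximation (2.1): sum_{j=-M}^{N} F(j h) S(j,h)(x)
    j = np.arange(-M, N + 1)
    return sinc_basis(j, h, x) @ F(j * h)
```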
#### 2.1.1 SE-Sinc approximation and its convergence theorem
To apply the Sinc approximation (2.1), \(F\) should be defined on the entire real axis \(\mathbb{R}\). If the function to be approximated, denoted by \(f(t)\), is defined for \(t\geq 0\), we should employ a variable transformation that maps \(\mathbb{R}\) onto
\((0,\infty)\). Especially in the case where \(f(t)\) decays exponentially as \(t\to\infty\), such as \(f(t)=\sqrt{t}\,\mathrm{e}^{-t}\), the following variable transformation
\[t=\psi(x)=\log(1+\mathrm{e}^{x})\]
was proposed [11]. We refer to this transformation as the Single-Exponential (SE) transformation. With the SE transformation, putting \(F(x)=f(\psi(x))\), we can apply (2.1) as
\[f(\psi(x))\approx\sum_{j=-M}^{N}f(\psi(jh))S(j,h)(x),\quad x\in\mathbb{R},\]
which is equivalent to
\[f(t)\approx\sum_{j=-M}^{N}f(\psi(jh))S(j,h)(\psi^{-1}(t)),\quad t\in(0,\infty). \tag{2.2}\]
We refer to this approximation as the SE-Sinc approximation.
For efficient approximation through (2.1), \(F\) should be analytic on a strip domain
\[\mathcal{D}_{d}=\{\zeta\in\mathbb{C}:|\operatorname{Im}\zeta|<d\}\]
for a positive constant \(d\). Therefore, for efficient approximation through (2.2), \(f\) should be analytic on a translated domain
\[\psi(\mathcal{D}_{d})=\{z=\psi(\zeta):\zeta\in\mathcal{D}_{d}\}.\]
Actually, the convergence of the SE-Sinc approximation was analyzed as follows.
**Theorem 2.1** (Okayama et al. [11, Theorem 2.2]).: _Assume that \(f\) is analytic in \(\psi(\mathcal{D}_{d})\) with \(0<d<\pi\). Furthermore, assume that there exist positive constants \(C_{\ddagger}\), \(\alpha\), and \(\beta\) such that_
\[|f(z)|\leq C_{\ddagger}\left\lvert\frac{z}{1+z}\right\rvert^{\alpha}|\, \mathrm{e}^{-z}\,|^{\beta} \tag{2.3}\]
_holds for all \(z\in\psi(\mathcal{D}_{d})\). Let \(\mu=\min\{\alpha,\beta\}\), let \(M\) and \(N\) be defined as_
\[\left\{\begin{aligned} M=n,&\quad N=\left\lceil \frac{\alpha}{\beta}n\right\rceil\quad(\text{if }\,\mu=\alpha),\\ N=n,&\quad M=\left\lceil\frac{\beta}{\alpha}n \right\rceil\quad(\text{if }\,\mu=\beta),\end{aligned}\right. \tag{2.4}\]
_and let \(h\) be defined as_
\[h=\sqrt{\frac{\pi d}{\mu n}}. \tag{2.5}\]
_Then, there exists a positive constant \(C\) independent of \(n\) such that_
\[\sup_{t\in(0,\infty)}\left|f(t)-\sum_{j=-M}^{N}f(\psi(jh))S(j,h)(\psi^{-1}(t))\right|\leq C\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}}\,.\]
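For illustration, the following sketch (ours) implements the SE-Sinc approximation (2.2) with the parameter choices (2.4)-(2.5) of Theorem 2.1, using the test function \(f(t)=\sqrt{t}\,\mathrm{e}^{-t}\) mentioned above. The values \(\alpha=1/2\), \(\beta=1\), and \(d=2\) are illustrative assumptions and would have to be checked against (2.3) in a rigorous computation.

```python
import numpy as np

def psi(x):                       # SE transformation t = psi(x) = log(1 + e^x)
    return np.log1p(np.exp(x))

def se_sinc_approx(f, n, alpha, beta, d):
    mu = min(alpha, beta)
    if mu == alpha:
        M, N = n, int(np.ceil(alpha / beta * n))       # (2.4), case mu = alpha
    else:
        N, M = n, int(np.ceil(beta / alpha * n))       # (2.4), case mu = beta
    h = np.sqrt(np.pi * d / (mu * n))                  # (2.5)
    j = np.arange(-M, N + 1)
    samples = f(psi(j * h))
    def approx(t):
        x = np.log(np.expm1(np.asarray(t, dtype=float)))   # psi^{-1}(t) = log(e^t - 1), t > 0
        return np.sinc((x[..., None] - j * h) / h) @ samples
    return approx

f = lambda t: np.sqrt(t) * np.exp(-t)
u = se_sinc_approx(f, n=32, alpha=0.5, beta=1.0, d=2.0)
t = np.linspace(0.05, 10.0, 400)
print(np.max(np.abs(u(t) - f(t))))   # should shrink roughly like sqrt(n) exp(-sqrt(pi d mu n))
```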
#### 2.1.2 SE-Sinc approximation with a boundary treatment and its convergence theorem
According to Theorem 2.1, \(f\) should satisfy (2.3), which requires \(f\) to be zero at the boundary of \((0,\infty)\). Here, let \(\tilde{f}\) be a function with general boundary values
\[\left\{\begin{aligned} \lim_{t\to\infty}\tilde{f}(t)& =p,\\ \lim_{t\to 0}\tilde{f}(t)&=q.\end{aligned}\right. \tag{2.6}\]
Okayama and Hamada [6] considered the following function
\[f(t)=\tilde{f}(t)-\frac{q+p(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}, \tag{2.7}\]
which is zero at the boundary of \((0,\infty)\). Then, they considered application of (2.2), which is equivalent to
\[\tilde{f}(t)\approx\frac{q+p(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{j=-M}^{ N}\left(\tilde{f}(\psi(jh))-\frac{q+p\,\mathrm{e}^{jh}}{1+\mathrm{e}^{jh}}\right)S(j,h)(\psi^{-1}(t)),\quad t\in(0,\infty). \tag{2.8}\]
We refer to this approximation as the SE-Sinc approximation with a boundary treatment. Its convergence was analyzed as follows.
**Theorem 2.2** (Okayama and Hamada [6, Theorem 3]): _For a given function \(\tilde{f}\), let \(p\) and \(q\) be defined by (2.6), and let \(f\) be defined by (2.7). Assume that \(f\) is analytic in \(\psi(\mathcal{D}_{d})\) with \(0<d<\pi\). Furthermore, assume that there exist positive constants \(C_{\ddagger}\), \(\alpha\), and \(\beta\) such that (2.3) holds for all \(z\in\psi(\mathcal{D}_{d})\). Let \(\mu=\min\{\alpha,\beta\}\), let \(M\) and \(N\) be defined as (2.4), and let \(h\) be defined as (2.5). Then, there exists a positive constant \(C\) independent of \(n\) such that_
\[\sup_{t\in(0,\infty)}\left|\tilde{f}(t)-\left[\frac{q+p(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{j=-M}^{N}\left\{\tilde{f}(\psi(jh))-\frac{q+p\,\mathrm{e}^{jh}}{1+\mathrm{e}^{jh}}\right\}S(j,h)(\psi^{-1}(t))\right]\right|\leq C\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}}\,.\]
#### 2.1.3 DE-Sinc approximation and its convergence theorem (new result)
The SE transformation is not the only variable transformation that maps \(\mathbb{R}\) onto \((0,\infty)\). In fact, another variable transformation
\[t=\phi(x)=\log(1+\mathrm{e}^{\pi\sinh x})\]
was proposed [4]. We refer to this transformation as the Double-Exponential (DE) transformation. With the DE transformation, putting \(F(x)=f(\phi(x))\), we can apply (2.1) as
\[f(\phi(x))\approx\sum_{j=-n}^{n}f(\phi(jh))S(j,h)(x),\quad x\in\mathbb{R},\]
where we choose \(M=N=n\) in this case. This is equivalent to
\[f(t)\approx\sum_{j=-n}^{n}f(\phi(jh))S(j,h)(\phi^{-1}(t)),\quad t\in(0,\infty). \tag{2.9}\]
We refer to this approximation as the DE-Sinc approximation.
For efficient approximation through (2.9), \(f\) should be analytic on a translated domain
\[\phi(\mathcal{D}_{d})=\{z=\phi(\zeta):\zeta\in\mathcal{D}_{d}\}.\]
We provide its convergence theorem as follows. The proof is given in Sect. 5.
**Theorem 2.3**: _Assume that \(f\) is analytic in \(\phi(\mathcal{D}_{d})\) with \(0<d<\pi/2\). Furthermore, assume that there exist positive constants \(C_{\ddagger}\) and \(\mu\) with \(\mu\leq 1\) such that_
\[|f(z)|\leq C_{\ddagger}\,|z|^{\mu}\,|\,\mathrm{e}^{-z}\,|^{\mu} \tag{2.10}\]
_holds for all \(z\in\phi(\mathcal{D}_{d})\). Let \(h\) be defined as_
\[h=\frac{\operatorname{arsinh}(dn/\mu)}{n}. \tag{2.11}\]
_Then, there exists a positive constant \(C\) independent of \(n\) such that_
\[\sup_{t\in(0,\infty)}\left|f(t)-\sum_{j=-n}^{n}f(\phi(jh))S(j,h)(\phi^{-1}(t))\right|\leq C\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}\,.\]
**Remark 2.1**.: _The error of (2.9) was also analyzed by Okayama [4, Theorem 2.9]. However, in the existing theorem, the formula of \(h\) is set as_
\[h=\frac{\log(2dn/\mu)}{n},\]
_which is different from (2.11). In the present paper, we set \(h\) as (2.11), and therefore we establish another theorem here._
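A corresponding sketch (ours) for the DE-Sinc approximation (2.9) with the mesh size (2.11) is given below; \(\phi\) is evaluated in an overflow-safe form, and the values \(\mu=1/2\), \(d=1\) for the same test function are illustrative assumptions.

```python
import numpy as np

def phi(x):                       # DE transformation t = phi(x) = log(1 + e^{pi sinh x}), overflow-safe
    y = np.pi * np.sinh(x)
    return np.maximum(y, 0.0) + np.log1p(np.exp(-np.abs(y)))

def de_sinc_approx(f, n, mu, d):
    h = np.arcsinh(d * n / mu) / n                     # (2.11)
    j = np.arange(-n, n + 1)
    samples = f(phi(j * h))
    def approx(t):
        # phi^{-1}(t) = arsinh(log(e^t - 1)/pi), t > 0
        x = np.arcsinh(np.log(np.expm1(np.asarray(t, dtype=float))) / np.pi)
        return np.sinc((x[..., None] - j * h) / h) @ samples
    return approx

f = lambda t: np.sqrt(t) * np.exp(-t)                  # test function; mu = 1/2, d = 1 assumed
u = de_sinc_approx(f, n=16, mu=0.5, d=1.0)
t = np.linspace(0.05, 10.0, 200)
print(np.max(np.abs(u(t) - f(t))))                     # expected to decay like exp(-pi d n / arsinh(dn/mu))
```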
#### 2.1.4 DE-Sinc approximation with a boundary treatment and its convergence theorem (new result)
By the condition (2.10), Theorem 2.3 also requires \(f\) to be zero at the boundary of \((0,\infty)\). In the case of \(\tilde{f}\) with general boundary values, we consider the same function \(f\) as (2.7). Then, we apply (2.9), which is equivalent to
\[\tilde{f}(t)\approx\frac{q+p(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{j=-n}^{ n}\left(\tilde{f}(\phi(jh))-\frac{q+p\,\mathrm{e}^{\pi\sinh(jh)}}{1+\mathrm{e}^{ \pi\sinh(jh)}}\right)S(j,h)(\phi^{-1}(t)),\quad t\in(0,\infty). \tag{2.12}\]
We refer to this approximation as the DE-Sinc approximation with a boundary treatment. We provide its convergence theorem as follows. The proof is given in Sect. 5.
**Theorem 2.4**.: _For a given function \(\tilde{f}\), let \(p\) and \(q\) be defined by (2.6), and let \(f\) be defined by (2.7). Assume that \(f\) is analytic in \(\phi(\mathcal{D}_{d})\) with \(0<d<\pi/2\). Furthermore, assume that there exist positive constants \(C_{\ddagger}\) and \(\mu\) with \(\mu\leq 1\) such that (2.10) holds for all \(z\in\phi(\mathcal{D}_{d})\). Let \(h\) be defined as (2.11). Then, there exists a positive constant \(C\) independent of \(n\) such that_
\[\sup_{t\in(0,\infty)}\left|\tilde{f}(t)-\left[\frac{q+p(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{j=-n}^{n}\left\{\tilde{f}(\phi(jh))-\frac{q+p\,\mathrm{e}^{\pi\sinh(jh)}}{1+\mathrm{e}^{\pi\sinh(jh)}}\right\}S(j,h)(\phi^{-1}(t))\right]\right|\leq C\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}\,.\]
### Sinc indefinite integration
Integrating both sides of (2.1), we have
\[\int_{-\infty}^{\xi}F(x)\,\mathrm{d}x\approx\sum_{j=-M}^{N}F(jh)\int_{- \infty}^{\xi}S(j,h)(x)\,\mathrm{d}x=\sum_{j=-M}^{N}F(jh)J(j,h)(\xi),\quad\xi \in\mathbb{R}, \tag{2.13}\]
where
\[J(j,h)(x)=h\left\{\frac{1}{2}+\frac{1}{\pi}\,\mathrm{Si}\left(\frac{\pi(x-jh) }{h}\right)\right\}.\]
Here, \(\mathrm{Si}(x)\) is the sine integral defined by (1.2). The approximation (2.13) is called the Sinc indefinite integration. Similar to the Sinc approximation, the Sinc indefinite integration is frequently combined with a variable transformation.
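The basis \(J(j,h)(x)\) involves the sine integral; in the short sketch below (ours) it is evaluated with `scipy.special.sici` (SciPy is assumed to be available), and the truncated sum (2.13) is formed exactly as for the Sinc approximation.

```python
import numpy as np
from scipy.special import sici

def J_basis(j, h, x):
    # J(j,h)(x) = h * (1/2 + Si(pi (x - j h)/h) / pi), with Si as in (1.2)
    si = sici(np.pi * (np.asarray(x, dtype=float)[..., None] - j * h) / h)[0]
    return h * (0.5 + si / np.pi)

def sinc_indef_integral(F, h, M, N, xi):
    # Right-hand side of (2.13): sum_{j=-M}^{N} F(jh) J(j,h)(xi)
    j = np.arange(-M, N + 1)
    return J_basis(j, h, xi) @ F(j * h)
```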
#### 2.2.1 SE-Sinc indefinite integration and its convergence theorem
In the case of the following integral
\[\int_{0}^{t}f(s)\,\mathrm{d}s,\quad t\in(0,\infty),\]
where \(f(s)\) decays exponentially as \(s\to\infty\), the SE transformation \(s=\psi(x)\) allows us to apply the Sinc indefinite integration (2.13) as
\[\int_{0}^{t}f(s)\,\mathrm{d}s=\int_{-\infty}^{\psi^{-1}(t)}f(\psi(x))\psi^{ \prime}(x)\,\mathrm{d}x\approx\sum_{j=-M}^{N}f(\psi(jh))\psi^{\prime}(jh)J(j, h)(\psi^{-1}(t)),\quad t\in(0,\infty).\]
We refer to this approximation as the SE-Sinc indefinite integration. Its convergence was analyzed as follows.
**Theorem 2.5** (Hara and Okayama [1, Theorem 2]): _Assume that \(f\) is analytic in \(\psi(\mathcal{D}_{d})\) with \(0<d<\pi\). Furthermore, assume that there exist positive constants \(C_{\ddagger}\), \(\beta\), and \(\alpha\) with \(0<\alpha\leq 1\) such that_
\[|f(z)|\leq C_{\ddagger}\left|\frac{z}{1+z}\right|^{\alpha-1}|\,\mathrm{e}^{-z}\,|^{\beta} \tag{2.14}\]
_holds for all \(z\in\psi(\mathcal{D}_{d})\). Let \(\mu=\min\{\alpha,\beta\}\), let \(M\) and \(N\) be defined as (2.4), and let \(h\) be defined as (2.5). Then, there exists a positive constant \(C\) independent of \(n\) such that_
\[\sup_{t\in(0,\infty)}\left|\int_{0}^{t}f(s)\,\mathrm{d}s-\sum_{j=-M}^{N}f(\psi(jh))\psi^{\prime}(jh)J(j,h)(\psi^{-1}(t))\right|\leq C\,\mathrm{e}^{-\sqrt{\pi d\mu n}}\,.\]
#### 2.2.2 DE-Sinc indefinite integration and its convergence theorem
In the case of the Sinc indefinite integration as well, we may use the DE transformation instead of the SE transformation. If the DE transformation is employed, we have
\[\int_{0}^{t}f(s)\,\mathrm{d}s=\int_{-\infty}^{\phi^{-1}(t)}f(\phi( x))\phi^{\prime}(x)\,\mathrm{d}x\approx\sum_{j=-M}^{N}f(\phi(jh))\phi^{\prime}( jh)J(j,h)(\phi^{-1}(t)),\quad t\in(0,\infty).\]
We refer to this approximation as the DE-Sinc indefinite integration. Its convergence was analyzed as follows.
**Theorem 2.6** (Hara and Okayama [2, Theorem 2]): _Assume that \(f\) is analytic in \(\phi(\mathcal{D}_{d})\) with \(0<d<\pi/2\). Furthermore, assume that there exist positive constants \(C_{\ddagger}\), \(\beta\), and \(\alpha\) with \(0<\alpha\leq 1\) such that (2.14) holds for all \(z\in\phi(\mathcal{D}_{d})\). Let \(\mu=\min\{\alpha,\beta\}\), let \(M\) and \(N\) be defined as_
\[\begin{cases}M=n,&N=\left\lceil\frac{1}{h}\operatorname{arsinh}\left(\frac{\alpha}{\beta}\sinh(nh)\right)\right\rceil\quad(\text{if }\,\mu=\alpha),\\ N=n,&M=\left\lceil\frac{1}{h}\operatorname{arsinh}\left(\frac{\beta}{\alpha}\sinh(nh)\right)\right\rceil\quad(\text{if }\,\mu=\beta),\end{cases} \tag{2.15}\]
_and let \(h\) be defined as (2.11). Then, there exists a positive constant \(C\) independent of \(n\) such that_
\[\sup_{t\in(0,\infty)}\left|\int_{0}^{t}f(s)\,\mathrm{d}s-\sum_{j=-M}^{N}f(\phi(jh))\phi^{\prime}(jh)J(j,h)(\phi^{-1}(t))\right|\leq C\frac{\operatorname{arsinh}(dn/\mu)}{n}\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}\,.\]
## 3 Sinc-Nystrom methods
In this section, we describe two Sinc-Nystrom methods and their error analyses presented in Hara and Okayama [2]. The derivation of the methods is through the following two steps: (i) by integration, rewrite the given problem (1.1) as
\[\mathbf{y}(t)=\mathbf{r}+\int_{0}^{t}\{K(s)\mathbf{y}(s)+\mathbf{g}(s)\}\,\mathrm{d}s, \tag{3.1}\]
and (ii) apply the SE-Sinc indefinite integration or the DE-Sinc indefinite integration to the integral. We explain these methods individually.
### SE-Sinc-Nystrom method and its error analysis
Let \(l=M+N+1\) and let \(\mathbf{y}^{(l)}(t)\) be an approximate solution of \(\mathbf{y}(t)\). Approximating the integral in (3.1) based on Theorem 2.5, we can derive
\[\mathbf{y}^{(l)}(t)=\mathbf{r}+\sum_{j=-M}^{N}\Bigl{\{}K(\psi(jh))\mathbf{y}^ {(l)}(\psi(jh))+\mathbf{g}(\psi(jh))\Bigr{\}}\psi^{\prime}(jh)J(j,h)(\psi^{-1}(t)). \tag{3.2}\]
To determine the unknown coefficients \(\mathbf{y}^{(l)}(\psi(jh))\), we set sampling points at \(t=\psi(ih)\) (\(i=-M,\,-M+1,\,\dots,\,N\)). We then obtain a system of linear equations given by
\[(I_{m}\otimes I_{l}-\{I_{m}\otimes(hI_{l}^{(-1)}D_{l}^{(\psi)})\}[K_{ij}^{(\psi)}])\mathbf{Y}^{(\psi)}=\mathbf{R}+\{I_{m}\otimes(hI_{l}^{(-1)}D_{l}^{(\psi)})\}\mathbf{G}^{(\psi)}, \tag{3.3}\]
where \(I_{l}\) and \(I_{m}\) are identity matrices, \(\otimes\) denotes the Kronecker product, and \(I_{l}^{(-1)}\) is an \(l\times l\) matrix whose \((i,j)\) entries are defined as
\[(I_{l}^{(-1)})_{ij}=\frac{1}{2}+\frac{1}{\pi}\operatorname{Si}\left(\pi(i-j) \right)\quad(i,j=-M,\,-M+1,\,\dots,\,N).\]
Moreover, \(D_{l}^{(\psi)}\) and \(K_{ij}^{(\psi)}\) are \(l\times l\) diagonal matrices defined as
\[D_{l}^{(\psi)} =\operatorname{diag}[\psi^{\prime}(-Mh),\,\dots,\,\psi^{\prime}(Nh)],\] \[K_{ij}^{(\psi)} =\operatorname{diag}[k_{ij}(\psi(-Mh)),\,\dots,\,k_{ij}(\psi(Nh))],\]
and \([K_{ij}^{(\psi)}]\) is a block matrix whose \((i,j)\) entry is \(K_{ij}^{(\psi)}\) (\(i,j=1,\,\dots,\,m\)). Furthermore, \(\mathbf{R}\), \(\mathbf{Y}^{(\psi)}\), and \(\mathbf{G}^{(\psi)}\) are \(lm\)-dimensional vectors defined as follows:
\[\mathbf{R} =[r_{1},\,\dots,\,r_{1},\,r_{2},\,\dots,\,r_{2},\,\dots,\,r_{m},\,\dots,\,r_{m}]^{\mathrm{T}},\] \[\mathbf{Y}^{(\psi)} =[y_{1}^{(l)}(\psi(-Mh)),\,\dots,\,y_{1}^{(l)}(\psi(Nh)),y_{2}^{(l)}(\psi(-Mh)),\,\dots,\,y_{2}^{(l)}(\psi(Nh)),\,\dots,\,y_{m}^{(l)}(\psi(-Mh)),\,\dots,\,y_{m}^{(l)}(\psi(Nh))]^{\mathrm{T}},\] \[\mathbf{G}^{(\psi)} =[g_{1}(\psi(-Mh)),\,\dots,\,g_{1}(\psi(Nh)),g_{2}(\psi(-Mh)),\,\dots,\,g_{2}(\psi(Nh)),\,\dots,\,g_{m}(\psi(-Mh)),\,\dots,\,g_{m}(\psi(Nh))]^{\mathrm{T}}.\]
By solving (3.3), we can obtain the value of \(\mathbf{y}^{(l)}(\psi(jh))\), from which \(\mathbf{y}^{(l)}(t)\) is determined through (3.2). This procedure is the SE-Sinc-Nystrom method. Its error was analyzed as follows.
**Theorem 3.1** (Hara and Okayama [2, Theorem 3]): _Let \(\beta\) be a positive constant, and let \(\alpha\) and \(d\) be constants with \(0<\alpha\leq 1\) and \(0<d<\pi\). Assume that the function \(k_{ij}\) (\(i,j=1,\,\dots,\,m\)) is analytic and bounded on \(\psi(\mathcal{D}_{d})\), and \(y_{i}\) and \(g_{i}\) (\(i=1,\,\dots,\,m\)) satisfy the assumptions of Theorem 2.5. Let \(h\) be set as (2.5), and let \(M\) and \(N\) be set as (2.4). Let \(A_{lm}\) be the coefficient matrix of the system of linear equations (3.3), i.e.,_
\[A_{lm}=(I_{m}\otimes I_{l}-\{I_{m}\otimes(hI_{l}^{(-1)}D_{l}^{(\psi)})\}[K_{ij}^{(\psi)}]),\]
_and assume that the inverse matrix of \(A_{lm}\) exists. Then, the error of the approximate solution \(\mathbf{y}^{(l)}(t)\) in (3.2) is estimated as_
\[\max_{1\leq i\leq m}\left\{\sup_{t\in(0,\infty)}\Big{|}y_{i}(t)-y_{i}^{(l)}(t)\Big{|}\right\}\leq\left(C+\hat{C}\|A_{lm}^{-1}\|_{\infty}\right)\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}},\]
_where \(C\) and \(\hat{C}\) are positive constants independent of \(n\)._
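To make the construction of (3.3) concrete, the following sketch assembles and solves the SE-Sinc-Nystrom system for a scalar test problem of our own choosing (\(m=1\), \(y'=-y+\mathrm{e}^{-2t}\), \(y(0)=1\), exact solution \(y(t)=2\mathrm{e}^{-t}-\mathrm{e}^{-2t}\)); the parameter values \(\alpha=\beta=1\) and \(d=1.5\) are illustrative assumptions, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.special import sici, expit

def se_sinc_nystrom_scalar(k, g, r, n, alpha, beta, d):
    """SE-Sinc-Nystrom method (3.2)-(3.3) for the scalar problem y' = k(t) y + g(t), y(0) = r."""
    mu = min(alpha, beta)
    if mu == alpha:
        M, N = n, int(np.ceil(alpha / beta * n))
    else:
        N, M = n, int(np.ceil(beta / alpha * n))
    h = np.sqrt(np.pi * d / (mu * n))
    j = np.arange(-M, N + 1)
    t_j = np.log1p(np.exp(j * h))                                  # psi(jh)
    dpsi = expit(j * h)                                            # psi'(x) = e^x / (1 + e^x)
    I_int = 0.5 + sici(np.pi * (j[:, None] - j[None, :]))[0] / np.pi   # matrix I_l^{(-1)}
    W = h * I_int * dpsi[None, :]                                  # h I_l^{(-1)} D_l^{(psi)}
    A = np.eye(j.size) - W * k(t_j)[None, :]                       # coefficient matrix A_{lm}, m = 1
    Y = np.linalg.solve(A, r + W @ g(t_j))                         # values y^{(l)}(psi(jh))
    return j, h, t_j, Y

k = lambda t: -np.ones_like(t)
g = lambda t: np.exp(-2.0 * t)
j, h, t_j, Y = se_sinc_nystrom_scalar(k, g, 1.0, n=32, alpha=1.0, beta=1.0, d=1.5)
print(np.max(np.abs(Y - (2.0 * np.exp(-t_j) - np.exp(-2.0 * t_j)))))   # error at the sampling points
```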
### DE-Sinc-Nystrom method and its error analysis
From a comparison of Theorem 2.5 with Theorem 2.6, we can expect that replacement of \(\psi\) by \(\phi\) may accelerate the convergence rate. Approximating the integral in (3.1) based on Theorem 2.6, we can derive
\[\mathbf{y}^{(l)}(t)=\mathbf{r}+\sum_{j=-M}^{N}\{K(\phi(jh))\mathbf{y}^{(l)}(\phi(jh))+\mathbf{g}(\phi(jh))\}\phi^{\prime}(jh)J(j,h)(\phi^{-1}(t)). \tag{3.4}\]
Setting sampling points at \(t=\phi(ih)\) (\(i=-M,\,-M+1,\,\dots,\,N\)), we obtain the following:
\[(I_{m}\otimes I_{l}-\{I_{m}\otimes(hI_{l}^{(-1)}D_{l}^{(\phi)})\}[K_{ij}^{(\phi)}])\mathbf{Y}^{(\phi)}=\mathbf{R}+\{I_{m}\otimes(hI_{l}^{(-1)}D_{l}^{(\phi)})\}\mathbf{G}^{(\phi)}, \tag{3.5}\]
for which \(\phi\) is used instead of \(\psi\). By solving (3.5), we can obtain the value of \(\mathbf{y}^{(l)}(\phi(jh))\), from which \(\mathbf{y}^{(l)}(t)\) is determined through (3.4). This procedure is the DE-Sinc-Nystrom method. Its error was analyzed as follows.
**Theorem 3.2** (Hara and Okayama [2, Theorem 4]).: _Let \(\beta\) be a positive constant, and let \(\alpha\) and \(d\) be constants with \(0<\alpha\leq 1\) and \(0<d<\pi/2\). Assume that the function \(k_{ij}\) (\(i,j=1,\,\ldots,\,m\)) is analytic and bounded on \(\phi(\mathcal{D}_{d})\), and \(y_{i}\) and \(g_{i}\) (\(i=1,\,\ldots,\,m\)) satisfy the assumptions of Theorem 2.6. Let \(h\) be set as (2.11), and let \(M\) and \(N\) be set as (2.15). Let \(B_{lm}\) be the coefficient matrix of the system of linear equations (3.5), i.e.,_
\[B_{lm}=(I_{m}\otimes I_{l}-\{I_{m}\otimes(hI_{l}^{(-1)}D_{l}^{(\phi)})\}[K_{ij}^{(\phi)}]),\]
_and assume that the inverse matrix of \(B_{lm}\) exists. Then, the error of the approximate solution \(\mathbf{y}^{(l)}(t)\) in (3.4) is estimated as_
\[\max_{1\leq i\leq m}\left\{\sup_{t\in(0,\infty)}\left|y_{i}(t)-y_{i}^{(l)}(t)\right|\right\}\leq\left(C+\hat{C}\|B_{lm}^{-1}\|_{\infty}\right)\operatorname{arsinh}(dn/\mu)\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)},\]
_where \(C\) and \(\hat{C}\) are positive constants independent of \(n\)._
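Relative to the SE sketch above, only the transformation-dependent quantities change: \(\psi\) is replaced by \(\phi\), \(h\) comes from (2.11), and \(M,N\) from (2.15). A minimal sketch of these ingredients (our notation; the ceiling rounding mirrors (2.4) and is an assumption) is:

```python
import numpy as np
from scipy.special import expit

def de_nystrom_grid(n, alpha, beta, d):
    """Mesh size (2.11), truncation indices (2.15), nodes phi(jh), and weights phi'(jh)."""
    mu = min(alpha, beta)
    h = np.arcsinh(d * n / mu) / n
    if mu == alpha:
        M, N = n, int(np.ceil(np.arcsinh(alpha / beta * np.sinh(n * h)) / h))
    else:
        N, M = n, int(np.ceil(np.arcsinh(beta / alpha * np.sinh(n * h)) / h))
    j = np.arange(-M, N + 1)
    y = np.pi * np.sinh(j * h)
    t_j = np.maximum(y, 0.0) + np.log1p(np.exp(-np.abs(y)))   # phi(jh), overflow-safe
    dphi = np.pi * np.cosh(j * h) * expit(y)                  # phi'(x) = pi cosh(x) / (1 + e^{-pi sinh x})
    return j, h, t_j, dphi
```

With these nodes and weights, the system (3.5) is assembled exactly as in the SE case.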
## 4 Sinc-collocation methods
In this section, we describe two Sinc-collocation methods presented in Okayama and Hara [7], and their error analyses presented in this paper. The derivation of the methods is through applying the SE- or DE-Sinc approximation with a boundary treatment to the approximate solution \(\mathbf{y}^{(l)}(t)\).
### SE-Sinc-collocation method and its error analysis (new result)
Here, we approximate \(\mathbf{y}^{(l)}\) in (3.2) based on Theorem 2.2. Then, we obtain a new approximate solution \(\hat{\mathbf{y}}^{(l)}\) defined as
\[\hat{\mathbf{y}}^{(l)}(t)=\frac{\mathbf{r}+\mathbf{p}^{(\psi)}(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{k=-M}^{N}\left\{\mathbf{y}^{(l)}(\psi(kh))-\frac{\mathbf{r}+\mathbf{p}^{(\psi)}\,\mathrm{e}^{kh}}{1+\mathrm{e}^{kh}}\right\}S(k,h)(\psi^{-1}(t)), \tag{4.1}\]
where \(\mathbf{r}\) is the initial value in (1.1), and \(\mathbf{p}^{(\psi)}\) is given by
\[\mathbf{p}^{(\psi)}=\mathbf{r}+h\sum_{j=-M}^{N}\left\{K(\psi(jh))\mathbf{y}^{(l)}(\psi(jh))+\mathbf{g}(\psi(jh))\right\}\psi^{\prime}(jh).\]
Here, \(J(j,h)(\psi^{-1}(0))=0\) and \(\lim_{t\to\infty}J(j,h)(\psi^{-1}(t))=h\) are used to calculate \(\mathbf{y}^{(l)}(0)\) and \(\lim_{t\to\infty}\mathbf{y}^{(l)}(t)\), respectively, in accordance with the definition in (3.2). In summary, by solving (3.3), we can obtain the value of \(\mathbf{y}^{(l)}(\psi(jh))\), from which \(\hat{\mathbf{y}}^{(l)}(t)\) is determined through (4.1). This procedure is the SE-Sinc-collocation method. We provide its error analysis as follows. The proof is given in Sect. 6.
**Theorem 4.1**.: _Assume that the assumptions of Theorem 3.1 are fulfilled. Furthermore, assume that there exists a positive constant \(H\) such that_
\[\max_{i=1,\,\ldots,\,m}\left|y_{i}(z)-r_{i}\right| \leq H\left|\frac{z}{1+z}\right|^{\alpha},\] \[\max_{i=1,\,\ldots,\,m}\left|y_{i}(z)\right| \leq H\left|\mathrm{e}^{-z}\right|^{\beta}\]
_hold for all \(z\in\psi(\mathcal{D}_{d})\). Then, the error of the approximate solution \(\hat{\mathbf{y}}^{(l)}(t)\) in (4.1) is estimated as_
\[\max_{1\leq i\leq m}\left\{\sup_{t\in(0,\infty)}\left|y_{i}(t)-\hat{y}_{i}^{(l)}(t)\right|\right\}\leq\left(C+\hat{C}\|A_{lm}^{-1}\|_{\infty}\right)\sqrt{n}\log(n+1)\,\mathrm{e}^{-\sqrt{\pi d\mu n}},\]
_where \(C\) and \(\hat{C}\) are positive constants independent of \(n\)._
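Continuing the scalar Nystrom sketch given after Theorem 3.1, the collocation solution (4.1) can be evaluated from the same values \(Y=y^{(l)}(\psi(jh))\) without any sine-integral evaluations, which is precisely the computational advantage noted earlier. The helper below is ours and reuses `j, h, t_j, Y` from that sketch.

```python
import numpy as np
from scipy.special import expit

def se_sinc_collocation_eval(k, g, r, j, h, t_j, Y, t):
    """Evaluate the SE-Sinc-collocation approximation (4.1) at points t > 0."""
    p = r + h * np.sum((k(t_j) * Y + g(t_j)) * expit(j * h))   # p^{(psi)} = lim_{t->inf} y^{(l)}(t)
    t = np.asarray(t, dtype=float)
    x = np.log(np.expm1(t))                                    # psi^{-1}(t)
    boundary = r * np.exp(-t) + p * (1.0 - np.exp(-t))         # (r + p (e^t - 1)) / e^t
    corr = Y - (r * expit(-j * h) + p * expit(j * h))          # Y - (r + p e^{kh}) / (1 + e^{kh})
    return boundary + np.sinc((x[..., None] - j * h) / h) @ corr
```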
### DE-Sinc-collocation method and its error analysis (new result)
Here, we approximate \(\mathbf{y}^{(l)}\) in (3.4) based on Theorem 2.4. Then, we obtain a new approximate solution \(\hat{\mathbf{y}}^{(l)}\) defined as
\[\hat{\mathbf{y}}^{(l)}(t)=\frac{\mathbf{r}+\mathbf{p}^{(\phi)}(\mathrm{e}^{t} -1)}{\mathrm{e}^{t}}+\sum_{k=-M}^{N}\left\{\mathbf{y}^{(l)}(\phi(kh))-\frac{\mathbf{r} +\mathbf{p}^{(\phi)}\,\mathrm{e}^{\pi\sinh(kh)}}{1+\mathrm{e}^{\pi\sinh(kh)}} \right\}S(k,h)(\phi^{-1}(t)), \tag{4.2}\]
where \(\mathbf{r}\) is the initial value in (1.1), and \(\mathbf{p}^{(\phi)}\) is given by
\[\mathbf{p}^{(\phi)}=\mathbf{r}+h\sum_{j=-M}^{N}\left\{K(\phi(jh))\mathbf{y}^{(l)}(\phi(jh) )+\mathbf{g}(\phi(jh))\right\}\phi^{\prime}(jh).\]
In summary, by solving (3.5), we can obtain the value of \(\mathbf{y}^{(l)}(\phi(jh))\), from which \(\hat{\mathbf{y}}^{(l)}(t)\) is determined through (4.2). This procedure is the DE-Sinc-collocation method. We provide its error analysis as follows. The proof is given in Sect. 6.
**Theorem 4.2**.: _Assume that the assumptions of Theorem 3.2 are fulfilled. Furthermore, assume that there exists a positive constant \(H\) such that_
\[\max_{i=1,\,\ldots,m}|y_{i}(z)-r_{i}| \leq H\left|z\right|^{\alpha},\] \[\max_{i=1,\,\ldots,m}|y_{i}(z)| \leq H\left|\mathrm{e}^{-z}\right|^{\beta}\]
_hold for all \(z\in\phi(\mathcal{D}_{d})\). Then, the error of the approximate solution \(\hat{\mathbf{y}}^{(l)}(t)\) in (4.2) is estimated as_
\[\max_{1\leq i\leq m}\left\{\sup_{t\in(0,\infty)}\left|y_{i}(t)-\hat{y}_{i}^{(l)}(t)\right|\right\}\leq\left(C+\hat{C}\|B_{lm}^{-1}\|_{\infty}\right)\operatorname{arsinh}(dn/\mu)\log(n+1)\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)},\]
_where \(C\) and \(\hat{C}\) are positive constants independent of \(n\)._
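For the DE variant, the only new ingredient on top of the DE-Nystrom values is the limit \(\mathbf{p}^{(\phi)}\); a sketch of its computation (ours, reusing the DE grid helper given after Theorem 3.2) is:

```python
import numpy as np

def p_de(k, g, r, h, t_j, dphi, Y):
    # p^{(phi)} = r + h * sum_j {k(phi(jh)) y^{(l)}(phi(jh)) + g(phi(jh))} phi'(jh)
    return r + h * np.sum((k(t_j) * Y + g(t_j)) * dphi)
```

Evaluation of (4.2) then proceeds as in the SE case, with \(\psi^{-1}(t)\) and \(\mathrm{e}^{kh}\) replaced by \(\phi^{-1}(t)\) and \(\mathrm{e}^{\pi\sinh(kh)}\).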
## 5 Proof of Theorems 2.3 and 2.4
In this section, we prove Theorems 2.3 and 2.4.
### Sketch of the proof of Theorem 2.3
We prove Theorem 2.3 with an explicit form of the constant \(C\), i.e., we prove the following theorem.
**Theorem 5.1**.: _Assume that \(f\) is analytic in \(\phi(\mathcal{D}_{d})\) with \(0<d<\pi/2\). Furthermore, assume that there exist positive constants \(C_{\ddagger}\) and \(\mu\) with \(\mu\leq 1\) such that (2.10) holds for all \(z\in\phi(\mathcal{D}_{d})\). Let \(h\) be defined as (2.11). Then, it holds that_
\[\sup_{t\in(0,\infty)}\left|f(t)-\sum_{j=-n}^{n}f(\phi(jh))S(j,h)(\phi^{-1}(t))\right|\leq C_{\dagger}\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)},\]
_where \(C_{\dagger}\) is a positive constant defined as_
\[C_{\dagger}=\frac{2C_{\ddagger}}{\mu\pi^{1-\mu}}\left\{\frac{2} {\pi d(1-\mathrm{e}^{-2\pi d/\,\mathrm{arsinh}(d/\mu)})\cos^{2\mu}((\pi/2)\, \sin d)\cos^{1+\mu}d}+\frac{\mathrm{e}^{-\pi d(\mathrm{arsinh}(d/\mu)-1)/\, \mathrm{arsinh}(d/\mu)}}{\mathrm{arsinh}(d/\mu)(1+(d/\mu))^{(1-\mu)/2}}\right\}. \tag{5.1}\]
The proof is organized as follows. We divide the error into two terms as
\[\left|f(t)-\sum_{j=-n}^{n}f(\phi(jh))S(j,h)(\phi^{-1}(t))\right| \leq\left|f(t)-\sum_{j=-\infty}^{\infty}f(\phi(jh))S(j,h)(\phi^{-1} (t))\right|\] \[\quad+\left|\sum_{j=-\infty}^{-n-1}f(\phi(jh))S(j,h)(\phi^{-1}(t)) +\sum_{j=n+1}^{\infty}f(\phi(jh))S(j,h)(\phi^{-1}(t))\right|.\]
The first term is called the discretization error, and the second term is called the truncation error. The discretization error was estimated as follows.
**Lemma 5.2** (Okayama [4, Lemma 4.16]): _Assume that \(f\) is analytic in \(\phi(\mathcal{D}_{d})\) with \(0<d<\pi/2\). Furthermore, assume that there exist positive constants \(C_{\ddagger}\) and \(\mu\) with \(\mu\leq 1\) such that (2.10) holds for all \(z\in\phi(\mathcal{D}_{d})\). Then, we have_
\[\sup_{t\in(0,\infty)}\left|f(t)-\sum_{j=-\infty}^{\infty}f(\phi(jh))S(j,h)( \phi^{-1}(t))\right|\leq\frac{4C_{\ddagger}}{\pi^{2-\mu}d\mu(1-\mathrm{e}^{-2 \pi d/h})\cos^{2\mu}((\pi/2)\sin d)\cos^{1+\mu}d}\,\mathrm{e}^{-\pi d/h}\,.\]
We estimate the truncation error as follows. The proof is given in Sect. 5.2.
**Lemma 5.3**: _Assume that there exist positive constants \(C_{\ddagger}\) and \(\mu\) with \(\mu\leq 1\) such that (2.10) holds. Then, we have_
\[\sup_{t\in(0,\infty)}\left|\sum_{j=-\infty}^{-n-1}f(\phi(jh))S(j,h)(\phi^{-1}(t))+\sum_{j=n+1}^{\infty}f(\phi(jh))S(j,h)(\phi^{-1}(t))\right|\leq\frac{2C_{\ddagger}\pi^{\mu-1}}{\mu h\cosh^{1-\mu}(nh)}\,\mathrm{e}^{-\pi\mu\sinh(nh)}\,.\]
Combining these two lemmas, we prove Theorem 5.1 in Sect. 5.3. This completes the proof.
### Proof of Lemma 5.3
To prove Lemma 5.3, the following inequality is useful.
**Lemma 5.4** (Okayama [4, Lemma 4.15]): _For all real numbers \(x\) and \(y\) with \(|y|<\pi/2\), we have_
\[|\log(1+\mathrm{e}^{\pi\sinh(x+\mathrm{i}\,y)})|\leq\frac{1}{\cos((\pi/2)\sin y)\cos y}\cdot\frac{\pi\cosh x}{1+\mathrm{e}^{-\pi\sinh(x)\cos y}}.\]
Using this lemma, we prove Lemma 5.3 as follows.
Put \(F(x)=f(\phi(x))\) for short. Because \(|S(j,h)(x)|\leq 1\) for all \(x\in\mathbb{R}\), it holds that
\[\left|\sum_{j=-\infty}^{-n-1}F(jh)S(j,h)(\phi^{-1}(t))+\sum_{j=n +1}^{\infty}F(jh)S(j,h)(\phi^{-1}(t))\right| \leq\sum_{j=-\infty}^{-n-1}|F(jh)||S(j,h)(\phi^{-1}(t))|+\sum_{j=n +1}^{\infty}|F(jh)||S(j,h)(\phi^{-1}(t))|\] \[\leq\sum_{j=-\infty}^{-n-1}|F(jh)|+\sum_{j=n+1}^{\infty}|F(jh)|. \tag{5.2}\]
Furthermore, from (2.10), using Lemma 5.4 with \(y=0\), we have
\[|F(x)|=|f(\phi(x))|\leq C_{\ddagger}|\log(1+\mathrm{e}^{\pi\sinh x})|^{\mu} \left|\frac{1}{1+\mathrm{e}^{\pi\sinh x}}\right|^{\mu}\leq C_{\ddagger}\frac{( \pi\cosh x)^{\mu}}{(1+\mathrm{e}^{-\pi\sinh x})^{\mu}(1+\mathrm{e}^{\pi\sinh x })^{\mu}}.\]
Therefore, the right-hand side of (5.2) is further bounded as
\[\sum_{j=-\infty}^{-n-1}|F(jh)|+\sum_{j=n+1}^{\infty}|F(jh)| \leq\sum_{j=-\infty}^{-n-1}C_{\ddagger}\frac{(\pi\cosh(jh))^{\mu}}{(1+\mathrm{e}^{-\pi\sinh(jh)})^{\mu}(1+\mathrm{e}^{\pi\sinh(jh)})^{\mu}}+\sum_{j=n+1}^{\infty}C_{\ddagger}\frac{(\pi\cosh(jh))^{\mu}}{(1+\mathrm{e}^{-\pi\sinh(jh)})^{\mu}(1+\mathrm{e}^{\pi\sinh(jh)})^{\mu}}\] \[=\frac{2C_{\ddagger}}{h}\cdot h\sum_{j=n+1}^{\infty}\frac{\pi^{\mu}\cosh^{\mu}(jh)\,\mathrm{e}^{-\pi\mu\sinh(jh)}}{(1+\mathrm{e}^{-\pi\sinh(jh)})^{2\mu}}\] \[\leq\frac{2C_{\ddagger}}{h}\cdot h\sum_{j=n+1}^{\infty}\frac{\pi^{\mu}\cosh^{\mu}(jh)\,\mathrm{e}^{-\pi\mu\sinh(jh)}}{(1+0)^{2\mu}}\] \[\leq\frac{2C_{\ddagger}}{h}\int_{nh}^{\infty}\pi^{\mu}\cosh^{\mu}x\,\mathrm{e}^{-\pi\mu\sinh x}\,\mathrm{d}x.\]
Here, because \(0<\mu\leq 1\), it holds for \(x\geq nh\) that
\[\cosh^{\mu}x=\frac{\cosh x}{\cosh^{1-\mu}x}\leq\frac{\cosh x}{\cosh^{1-\mu}(nh)},\]
from which we have
\[\frac{2C_{\ddagger}}{h}\int_{nh}^{\infty}\pi^{\mu}\cosh^{\mu}x\,\mathrm{e}^{-\pi\mu\sinh x}\,\mathrm{d}x\leq\frac{2C_{\ddagger}}{h}\int_{nh}^{\infty}\pi^{\mu}\frac{\cosh x}{\cosh^{1-\mu}(nh)}\,\mathrm{e}^{-\pi\mu\sinh x}\,\mathrm{d}x=\frac{2C_{\ddagger}\pi^{\mu-1}}{\mu h\cosh^{1-\mu}(nh)}\,\mathrm{e}^{-\pi\mu\sinh(nh)}\,.\]
This is the desired estimate.
### Proof of Theorem 5.1
To prove Theorem 5.1, we prepare the following three propositions.
**Proposition 5.5** (Okayama and Kawai [8, Proposition 7]): _Let \(\tilde{q}\) be a function defined by_
\[\tilde{q}(x)=\frac{x}{\operatorname{arsinh}x}.\]
_Then, \(\tilde{q}(x)\) monotonically increases for \(x\geq 0\)._
**Proposition 5.6**: _Let \(\tilde{p}\) be a function defined by_
\[\tilde{p}(x)=\frac{\operatorname{arsinh}x}{x}\sqrt{1+x^{2}}.\]
_Then, \(\tilde{p}(x)\) monotonically increases for \(x\geq 0\)._
Putting \(t=\operatorname{arsinh}x\), we have \(\tilde{p}(x)=\tilde{r}(t)\), where
\[\tilde{r}(t)=\frac{t}{\tanh t}.\]
Differentiating \(\tilde{r}(t)\), we have
\[\tilde{r}^{\prime}(t)=\frac{\cosh t\sinh t-t}{\sinh^{2}t}\geq\frac{1\cdot \sinh t-t}{\sinh^{2}t}\geq 0\]
for \(t\geq 0\). Thus, it holds for \(x\geq 0\) that
\[\tilde{p}^{\prime}(x)=\tilde{r}^{\prime}(\operatorname{arsinh}x)\left( \operatorname{arsinh}x\right)^{\prime}=\tilde{r}^{\prime}(\operatorname{arsinh }x)\frac{1}{\sqrt{1+x^{2}}}\geq 0,\]
which is to be demonstrated.
**Proposition 5.7**.: _Let \(w\) be a function defined by_
\[w(x)=(1+x^{2})\exp\left\{-2\pi x\left(1-\frac{1}{\operatorname{arsinh}x}\right) \right\}.\]
_Then, \(w(x)\) monotonically decreases for \(x\geq 0\)._
Proof. Differentiating \(w(x)\), we have
\[w^{\prime}(x) =-2\left\{x+\frac{\pi(1+x^{2})}{\operatorname{arsinh}^{2}x}\left( \operatorname{arsinh}^{2}x-\operatorname{arsinh}x+\frac{x}{\sqrt{1+x^{2}}} \right)\right\}\exp\left\{-2\pi x\left(1-\frac{1}{\operatorname{arsinh}x} \right)\right\}\] \[=-2\left\{x+\frac{\pi(1+x^{2})}{\operatorname{arsinh}^{2}x}v( \operatorname{arsinh}x)\right\}\exp\left\{-2\pi x\left(1-\frac{1}{ \operatorname{arsinh}x}\right)\right\},\]
where \(v(t)=t^{2}-t+\tanh t\). Differentiating \(v(t)\), we have
\[v^{\prime}(t) =2t-1+\frac{1}{\cosh^{2}t},\] \[v^{\prime\prime}(t) =2\left(\frac{\cosh^{3}t-\sinh t}{\cosh^{3}t}\right)\geq 2 \left(\frac{\cosh t-\sinh t}{\cosh^{3}t}\right)=2\left(\frac{\operatorname{e} ^{-t}}{\cosh^{3}t}\right)>0.\]
Therefore, \(v^{\prime}(t)\) monotonically increases, from which we have \(v^{\prime}(t)\geq v^{\prime}(0)=0\) for \(t\geq 0\). Therefore, \(v(t)\) also monotonically increases for \(t\geq 0\), from which we have \(v(t)\geq v(0)=0\) for \(t\geq 0\). Thus, it holds for \(x\geq 0\) that
\[w^{\prime}(x)=-2\left\{x+\frac{\pi(1+x^{2})}{\operatorname{arsinh}^{2}x}v( \operatorname{arsinh}x)\right\}\exp\left\{-2\pi x\left(1-\frac{1}{ \operatorname{arsinh}x}\right)\right\}\leq 0,\]
which is to be demonstrated.
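As a quick numerical sanity check of the three monotonicity claims (ours, not a substitute for the proofs), one may tabulate \(\tilde{q}\), \(\tilde{p}\), and \(w\) on a grid:

```python
import numpy as np

x = np.linspace(1e-3, 50.0, 2000)
q = x / np.arcsinh(x)                                                   # Proposition 5.5
p = np.arcsinh(x) / x * np.sqrt(1.0 + x**2)                             # Proposition 5.6
w = (1.0 + x**2) * np.exp(-2.0 * np.pi * x * (1.0 - 1.0 / np.arcsinh(x)))   # Proposition 5.7
print(np.all(np.diff(q) > 0), np.all(np.diff(p) > 0), np.all(np.diff(w) < 0))
```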
Using these propositions, we prove Theorem 5.1 as follows.
Proof. From Lemmas 5.2 and 5.3, we have
\[\left|f(t)-\sum_{j=-n}^{n}f(\phi(jh))S(j,h)(\phi^{-1}(t))\right|\] \[\leq\frac{4C_{\ddagger}}{\pi^{2-\mu}d\mu(1-\mathrm{e}^{-2\pi d/h})\cos^{2\mu}((\pi/2)\sin d)\cos^{1+\mu}d}\,\mathrm{e}^{-\pi d/h}+\frac{2C_{\ddagger}\pi^{\mu-1}}{\mu h\cosh^{1-\mu}(nh)}\,\mathrm{e}^{-\pi\mu\sinh(nh)}\,.\]
As for the first term, substituting (2.11) and using \(\tilde{q}(x)\) defined in Proposition 5.5, we have
\[\frac{4C_{\ddagger}}{\pi^{2-\mu}d\mu(1-\mathrm{e}^{-2\pi d/h})\cos^{2\mu}((\pi/2)\sin d)\cos^{1+\mu}d}\,\mathrm{e}^{-\pi d/h}\] \[=\frac{4C_{\ddagger}}{\pi^{2-\mu}d\mu(1-\mathrm{e}^{-2\pi\mu\tilde{q}(dn/\mu)})\cos^{2\mu}((\pi/2)\sin d)\cos^{1+\mu}d}\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}\] \[\leq\frac{4C_{\ddagger}}{\pi^{2-\mu}d\mu(1-\mathrm{e}^{-2\pi\mu\tilde{q}(d\cdot 1/\mu)})\cos^{2\mu}((\pi/2)\sin d)\cos^{1+\mu}d}\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}\,.\]
As for the second term, substituting (2.11) and using \(\tilde{p}(x)\) and \(w(x)\) defined in Propositions 5.6 and 5.7, respectively, we have
\[\frac{2C_{\ddagger}\pi^{\mu-1}}{\mu h\cosh^{1-\mu}(nh)}\,\mathrm{e}^{-\pi\mu\sinh(nh)}=\frac{2C_{\ddagger}\pi^{\mu-1}}{d\tilde{p}(dn/\mu)}\left\{w(dn/\mu)\right\}^{\mu/2}\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}\leq\frac{2C_{\ddagger}\pi^{\mu-1}}{d\tilde{p}(d\cdot 1/\mu)}\left\{w(d\cdot 1/\mu)\right\}^{\mu/2}\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}.\]
Thus, the claim follows.
### Proof of Theorem 2.4 and its improvement
Note that the approximation (2.12) is equivalent to (2.9) if we define \(f\) as (2.7). Therefore, from Theorem 5.1, we can readily deduce the following theorem, which proves Theorem 2.4.
**Theorem 5.8**.: _For a given function \(\tilde{f}\), let \(p\) and \(q\) be defined by (2.6), and let \(f\) be defined by (2.7). Assume that \(f\) is analytic in \(\phi(\mathcal{D}_{d})\) with \(0<d<\pi/2\). Furthermore, assume that there exist positive constants \(C_{\dagger}\) and \(\mu\) with \(\mu\leq 1\) such that (2.10) holds for all \(z\in\phi(\mathcal{D}_{d})\). Let \(h\) be defined as (2.11). Then, it holds that_
\[\sup_{t\in(0,\infty)}\left|\tilde{f}(t)-\left[\frac{q+p(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{j=-n}^{n}\left\{\tilde{f}(\phi(jh))-\frac{q+p\,\mathrm{e}^{\pi\sinh(jh)}}{1+\mathrm{e}^{\pi\sinh(jh)}}\right\}S(j,h)(\phi^{-1}(t))\right]\right|\leq C_{\dagger}\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)},\]
_where \(C_{\dagger}\) is a positive constant defined as (5.1)._
However, in this theorem, the conditions to be satisfied are stated not for the given function \(\tilde{f}\), but for the function \(f\) defined by (2.7). Such conditions are inconvenient to verify directly. Therefore, we provide a sufficient condition as follows.
**Lemma 5.9**.: _For a given function \(\tilde{f}\), let \(p\) and \(q\) be defined by (2.6). Assume that \(\tilde{f}\) is analytic in \(\phi(\mathcal{D}_{d})\) with \(0<d<\pi/2\). Furthermore, assume that there exist positive constants \(H\) and \(\mu\) with \(\mu\leq 1\) such that_
\[|\tilde{f}(z)-q| \leq H|z|^{\mu},\] \[|\tilde{f}(z)-p| \leq H|\,\mathrm{e}^{-z}\,|^{\mu}\]
_hold for all \(z\in\phi(\mathcal{D}_{d})\). Then, \(f\) defined by (2.7) satisfies the assumptions of Theorem 5.1._
To prove this lemma, we prepare the following four lemmas.
**Lemma 5.10** (Okayama et al. [10, Lemma 4.22]).: _Let \(x\) and \(y\) be real numbers with \(|y|<\pi/2\), and let \(\zeta=x+\mathrm{i}y\). Then,_
\[\left|\frac{1}{1+\mathrm{e}^{\pi\sinh\zeta}}\right| \leq\frac{1}{(1+\mathrm{e}^{\pi\sinh(x)\cos y})\cos((\pi/2)\sin y )},\] \[\left|\frac{1}{1+\mathrm{e}^{-\pi\sinh\zeta}}\right| \leq\frac{1}{(1+\mathrm{e}^{-\pi\sinh(x)\cos y})\cos((\pi/2)\sin y )}.\]
**Lemma 5.11**.: _Let \(d\) be a positive constant with \(d<\pi/2\). Then,_
\[\sup_{z\in\phi(\mathcal{D}_{d})}\left|\mathrm{e}^{-z}\right| \leq\frac{1}{\cos((\pi/2)\sin d)},\] \[\sup_{z\in\phi(\mathcal{D}_{d})}\left|1-\mathrm{e}^{-z}\right| \leq\frac{1}{\cos((\pi/2)\sin d)}.\]
Proof.: Applying \(z=\phi(\zeta)\), we have
\[\sup_{z\in\phi(\mathcal{D}_{d})}\left|\mathrm{e}^{-z}\right| =\sup_{\zeta\in\mathcal{D}_{d}}\left|\mathrm{e}^{-\phi(\zeta)} \right|=\sup_{\zeta\in\mathcal{D}_{d}}\left|\frac{1}{1+\mathrm{e}^{\pi\sinh \zeta}}\right|,\] \[\sup_{z\in\phi(\mathcal{D}_{d})}\left|1-\mathrm{e}^{-z}\right| =\sup_{\zeta\in\mathcal{D}_{d}}\left|1-\mathrm{e}^{-\phi(\zeta)} \right|=\sup_{\zeta\in\mathcal{D}_{d}}\left|\frac{1}{1+\mathrm{e}^{-\pi\sinh \zeta}}\right|.\]
Thus, the claim follows from Lemma 5.10.
**Lemma 5.12**.: _Let \(d\) be a positive constant with \(d<\pi/2\). Then,_
\[\sup_{\zeta\in\mathcal{D}_{d}}\left|\frac{1}{\log(1+\mathrm{e}^{\pi\sinh\zeta})}\cdot\frac{1}{1+\mathrm{e}^{-\pi\sinh\zeta}}\right|\leq\frac{c_{d}}{\log(1+c_{d})},\]
_where \(c_{d}\) is a constant defined by_
\[c_{d}=1+\frac{1}{\cos((\pi/2)\sin d)}. \tag{5.3}\]
Proof.: Put a function \(P\) as
\[P(z)=\frac{1}{z}\cdot(1-\mathrm{e}^{-z}).\]
By the maximum modulus principle, \(|P(\log(1+\mathrm{e}^{\pi\sinh\zeta}))|\) has its maximum on the boundary of \(\mathcal{D}_{d}\), i.e., when \(\zeta=x+\mathrm{i}\,d\) or \(\zeta=x-\mathrm{i}\,d\). In what follows, we consider the case \(\zeta=x+\mathrm{i}\,d\), because we can handle the case \(\zeta=x-\mathrm{i}\,d\) in the same way.
Put \(\xi=\log(1+\mathrm{e}^{\pi\sinh(x+\mathrm{i}\,d)})\) and \(\gamma=-\log(\cos((\pi/2)\sin d))\). We consider two cases: a) \(|\xi|\leq\log(2+\mathrm{e}^{\gamma})\) and b) \(|\xi|>\log(2+\mathrm{e}^{\gamma})\), and for each case we show that
\[|P(\xi)|\leq\frac{1+\mathrm{e}^{\gamma}}{\log(2+\mathrm{e}^{\gamma})}=\frac{ c_{d}}{\log(1+c_{d})}.\]
In the case of a), we have
\[|P(\xi)|=\left|\sum_{k=1}^{\infty}\frac{(-\xi)^{k-1}}{k!}\right|\leq\sum_{k=1} ^{\infty}\frac{|\xi|^{k-1}}{k!}=\frac{1}{|\xi|}\cdot(\mathrm{e}^{|\xi|}-1)\leq \frac{1}{\log(2+\mathrm{e}^{\gamma})}(\mathrm{e}^{\log(2+\mathrm{e}^{\gamma} )}-1)=\frac{c_{d}}{\log(1+c_{d})},\]
because the function \((\mathrm{e}^{x}-1)/x\) is monotonically increasing. In the case of b), from Lemma 5.10, it holds that
\[\mathrm{Re}\,\xi=\log|1+\mathrm{e}^{\pi\sinh(x+\mathrm{i}\,d)}|\geq\log[(1+ \mathrm{e}^{\pi\sinh(x)\cos d})\cos((\pi/2)\sin d)]\geq-\gamma.\]
Using this inequality, we have
\[|P(\xi)|\leq\frac{1}{|\xi|}(1+|\mathrm{e}^{-\xi}\,|)=\frac{1}{|\xi|}(1+ \mathrm{e}^{-\mathrm{Re}\,\xi})\leq\frac{1}{|\xi|}(1+\mathrm{e}^{\gamma}).\]
Furthermore, because the function \(1/x\) is monotonically decreasing, we have
\[\frac{1}{|\xi|}(1+\mathrm{e}^{\gamma})\leq\frac{1}{\log(2+\mathrm{e}^{\gamma} )}(1+\mathrm{e}^{\gamma})=\frac{c_{d}}{\log(1+c_{d})}.\]
This completes the proof.
**Lemma 5.13**.: _Let \(d\) be a positive constant with \(d<\pi/2\). Then, it holds for \(z\in\phi(\mathcal{D}_{d})\) that_
\[|1-\mathrm{e}^{-z}|\leq\frac{c_{d}}{\log(1+c_{d})}|z|,\]
_where \(c_{d}\) is a constant defined by (5.3)._
Proof.: The claim immediately follows by putting \(z=\log(1+\mathrm{e}^{\pi\sinh\zeta})\) in Lemma 5.12.
Using these lemmas, we prove Lemma 5.9 as follows.
Proof.: From (2.7), using \(|\tilde{f}(z)-q|\leq H|z|^{\mu}\) and \(|\tilde{f}(z)-p|\leq H|\,\mathrm{e}^{-z}\,|^{\mu}\), we have
\[|f(z)|=|(\tilde{f}(z)-p)(1-\mathrm{e}^{-z})+(\tilde{f}(z)-q)\,\mathrm{e}^{-z} \,|\leq H|\,\mathrm{e}^{-z}\,|^{\mu}|1-\mathrm{e}^{-z}\,|+H|z|^{\mu}|\,\mathrm{ e}^{-z}\,|.\]
Using Lemmas 5.11 and 5.13, we have
\[H|\,\mathrm{e}^{-z}\,|^{\mu}|1-\mathrm{e}^{-z}\,|+H|z|^{\mu}|\, \mathrm{e}^{-z}\,| =H|\,\mathrm{e}^{-z}\,|^{\mu}|1-\mathrm{e}^{-z}\,|^{\mu}|1-\mathrm{ e}^{-z}\,|^{1-\mu}+H|z|^{\mu}|\,\mathrm{e}^{-z}\,|^{\mu}|\,\mathrm{e}^{-z}\,|^{1-\mu}\] \[\leq H|\,\mathrm{e}^{-z}\,|^{\mu}|1-\mathrm{e}^{-z}\,|^{\mu}\frac {1}{\cos^{1-\mu}((\pi/2)\sin d)}+H|z|^{\mu}|\,\mathrm{e}^{-z}\,|^{\mu}\frac{1}{ \cos^{1-\mu}((\pi/2)\sin d)}\] \[\leq H|\,\mathrm{e}^{-z}\,|^{\mu}\left(\frac{c_{d}}{\log(1+c_{d} )}\right)^{\mu}|z|^{\mu}\frac{1}{\cos^{1-\mu}((\pi/2)\sin d)}+H|z|^{\mu}|\, \mathrm{e}^{-z}\,|^{\mu}\frac{1}{\cos^{1-\mu}((\pi/2)\sin d)}\] \[=\frac{H}{\cos^{1-\mu}((\pi/2)\sin d)}\left\{\left(\frac{c_{d}}{ \log(1+c_{d})}\right)^{\mu}+1\right\}|z|^{\mu}|\,\mathrm{e}^{-z}\,|^{\mu},\]
from which the claim follows.
In summary, instead of Theorem 5.8, using Lemma 5.9, we establish the following theorem.
**Theorem 5.14**.: _For a given function \(\tilde{f}\), let \(p\) and \(q\) be defined by (2.6). Assume that \(\tilde{f}\) is analytic in \(\phi(\mathcal{D}_{d})\) with \(0<d<\pi/2\). Furthermore, assume that there exist positive constants \(H\) and \(\mu\) with \(\mu\leq 1\) such that_
\[|\tilde{f}(z)-q| \leq H|z|^{\mu},\] \[|\tilde{f}(z)-p| \leq H|\,\mathrm{e}^{-z}\,|^{\mu}\]
_hold for all \(z\in\phi(\mathcal{D}_{d})\). Let \(h\) be defined as (2.11). Then, it holds that_
\[\sup_{t\in(0,\infty)}\left|\tilde{f}(t)-\left[\frac{q+p(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{j=-n}^{n}\left\{\tilde{f}(\phi(jh))-\frac{q+p\,\mathrm{e}^{\pi\sinh(jh)}}{1+\mathrm{e}^{\pi\sinh(jh)}}\right\}S(j,h)(\phi^{-1}(t))\right]\right|\] \[\leq\frac{H}{\cos^{1-\mu}((\pi/2)\sin d)}\left\{\left(\frac{c_{d}}{\log(1+c_{d})}\right)^{\mu}+1\right\}C_{\dagger}\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)},\]
_where \(C_{\dagger}\) and \(c_{d}\) are positive constants defined as (5.1) and (5.3), respectively._
## 6 Proof of Theorems 4.1 and 4.2
In this section, we prove Theorems 4.1 and 4.2. For both proofs, the following lemma is useful.
**Lemma 6.1** (Stenger [12, p. 142]).: _It holds for \(x\in\mathbb{R}\) that_
\[\sup_{x\in\mathbb{R}}\sum_{j=-n}^{n}|S(j,h)(x)|\leq\frac{2}{\pi}\left\{\frac{3 }{2}+\gamma+\log(n+1)\right\},\]
_where \(\gamma\) is Euler's constant defined by_
\[\gamma=\lim_{n\to\infty}\left\{1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n-1} -\log n\right\}=0.5772\cdots.\]
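The logarithmic growth in Lemma 6.1 is easy to observe numerically; the following crude check (ours) compares the left- and right-hand sides on a fine grid:

```python
import numpy as np

for n in (8, 32, 128):
    h = 0.5
    j = np.arange(-n, n + 1)
    x = np.linspace(-n * h, n * h, 4001)
    lhs = np.max(np.sum(np.abs(np.sinc((x[:, None] - j * h) / h)), axis=1))
    rhs = (2.0 / np.pi) * (1.5 + np.euler_gamma + np.log(n + 1))
    print(n, round(lhs, 3), "<=", round(rhs, 3), lhs <= rhs)
```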
### Proof of Theorem 4.1
We need the following lemma, which is the counterpart of Lemma 5.9 for the SE transformation.
**Lemma 6.2**.: _For a given function \(\tilde{f}\), let \(p\) and \(q\) be defined by (2.6). Assume that \(\tilde{f}\) is analytic in \(\psi(\mathcal{D}_{d})\) with \(0<d<\pi\). Furthermore, assume that there exist positive constants \(H\), \(\alpha\), and \(\beta\) with \(\alpha\leq 1\) and \(\beta\leq 1\) such that_
\[|\tilde{f}(z)-q| \leq H\left|\frac{z}{1+z}\right|^{\alpha},\] \[|\tilde{f}(z)-p| \leq H|\,\mathrm{e}^{-z}\,|^{\beta}\]
_hold for all \(z\in\psi(\mathcal{D}_{d})\). Then, \(f\) defined by (2.7) satisfies the assumptions of Theorem 2.1._
To prove this lemma, we prepare the following four lemmas.
**Lemma 6.3** (Okayama et al. [10, Lemma 4.21]).: _Let \(x\) and \(y\) be real numbers with \(|y|<\pi\), and let \(\zeta=x+\mathrm{i}\,y\). Then,_
\[\left|\frac{1}{1+\mathrm{e}^{\zeta}}\right| \leq\frac{1}{(1+\mathrm{e}^{x})\cos(y/2)},\] \[\left|\frac{1}{1+\mathrm{e}^{-\zeta}}\right| \leq\frac{1}{(1+\mathrm{e}^{-x})\cos(y/2)}.\]
**Lemma 6.4**.: _Let \(d\) be a positive constant with \(d<\pi\). Then,_
\[\sup_{z\in\psi(\mathcal{D}_{d})}\left|\mathrm{e}^{-z}\right| \leq\frac{1}{\cos(d/2)},\] \[\sup_{z\in\psi(\mathcal{D}_{d})}\left|1-\mathrm{e}^{-z}\right| \leq\frac{1}{\cos(d/2)}.\]
Proof.: Applying \(z=\psi(\zeta)\), we have
\[\sup_{z\in\psi(\mathcal{D}_{d})}\left|\mathrm{e}^{-z}\right| =\sup_{\zeta\in\mathcal{D}_{d}}\left|\mathrm{e}^{-\psi(\zeta)}\right|=\sup_{\zeta\in\mathcal{D}_{d}}\left|\frac{1}{1+\mathrm{e}^{\zeta}}\right|,\] \[\sup_{z\in\psi(\mathcal{D}_{d})}\left|1-\mathrm{e}^{-z}\right| =\sup_{\zeta\in\mathcal{D}_{d}}\left|1-\mathrm{e}^{-\psi(\zeta)}\right|=\sup_{\zeta\in\mathcal{D}_{d}}\left|\frac{1}{1+\mathrm{e}^{-\zeta}}\right|.\]
Thus, the claim follows from Lemma 6.3.
**Lemma 6.5** (Okayama and Machida [9, Lemma 7]).: _Let \(d\) be a positive constant with \(d<\pi\). Then,_
\[\sup_{\zeta\in\mathcal{D}_{d}}\left|\frac{1+\log(1+\mathrm{e}^{\zeta})}{\log (1+\mathrm{e}^{\zeta})}\cdot\frac{1}{1+\mathrm{e}^{-\zeta}}\right|\leq\frac{1 +\log(1+\tilde{c}_{d})}{\log(1+\tilde{c}_{d})}\tilde{c}_{d},\]
_where \(\tilde{c}_{d}\) is a constant defined by_
\[\tilde{c}_{d}=1+\frac{1}{\cos(d/2)}. \tag{6.1}\]
**Lemma 6.6**.: _Let \(d\) be a positive constant with \(d<\pi\). Then, it holds for \(z\in\psi(\mathcal{D}_{d})\) that_
\[\left|1-\mathrm{e}^{-z}\right|\leq\frac{1+\log(1+\tilde{c}_{d})}{\log(1+ \tilde{c}_{d})}\tilde{c}_{d}\left|\frac{z}{1+z}\right|,\]
_where \(\tilde{c}_{d}\) is a constant defined by (6.1)._
Proof.: The claim immediately follows by putting \(z=\log(1+\mathrm{e}^{\zeta})\) in Lemma 6.5.
Using these lemmas, we prove Lemma 6.2 as follows.
Proof.: From (2.7), using \(|\tilde{f}(z)-q|\leq H|z/(1+z)|^{\alpha}\) and \(|\tilde{f}(z)-p|\leq H|\,\mathrm{e}^{-z}|^{\beta}\), we have
\[|f(z)|=|(\tilde{f}(z)-p)(1-\mathrm{e}^{-z})+(\tilde{f}(z)-q)\,\mathrm{e}^{-z} |\leq H|\,\mathrm{e}^{-z}|^{\beta}|1-\mathrm{e}^{-z}\,|+H\left|\frac{z}{1+z} \right|^{\alpha}|\,\mathrm{e}^{-z}\,|.\]
Using Lemmas 6.4 and 6.6, we have
\[H|\,\mathrm{e}^{-z}\,|^{\beta}|1-\mathrm{e}^{-z}\,|+H\left|\frac{z}{1+z}\right|^{\alpha}|\,\mathrm{e}^{-z}| =H|\,\mathrm{e}^{-z}\,|^{\beta}|1-\mathrm{e}^{-z}\,|^{\alpha}|1-\mathrm{e}^{-z}\,|^{1-\alpha}+H\left|\frac{z}{1+z}\right|^{\alpha}|\,\mathrm{e}^{-z}\,|^{\beta}|\,\mathrm{e}^{-z}\,|^{1-\beta}\] \[\leq H|\,\mathrm{e}^{-z}\,|^{\beta}|1-\mathrm{e}^{-z}\,|^{\alpha}\frac{1}{\cos^{1-\alpha}(d/2)}+H\left|\frac{z}{1+z}\right|^{\alpha}|\,\mathrm{e}^{-z}\,|^{\beta}\frac{1}{\cos^{1-\beta}(d/2)}\] \[\leq H|\,\mathrm{e}^{-z}\,|^{\beta}\left(\frac{1+\log(1+\tilde{c}_{d})}{\log(1+\tilde{c}_{d})}\tilde{c}_{d}\left|\frac{z}{1+z}\right|\right)^{\alpha}\frac{1}{\cos^{1-\alpha}(d/2)}+H\left|\frac{z}{1+z}\right|^{\alpha}|\,\mathrm{e}^{-z}\,|^{\beta}\frac{1}{\cos^{1-\beta}(d/2)}\] \[=H\left\{\left(\frac{1+\log(1+\tilde{c}_{d})}{\log(1+\tilde{c}_{d})}\tilde{c}_{d}\right)^{\alpha}\frac{1}{\cos^{1-\alpha}(d/2)}+\frac{1}{\cos^{1-\beta}(d/2)}\right\}\left|\frac{z}{1+z}\right|^{\alpha}|\,\mathrm{e}^{-z}\,|^{\beta},\]
from which the claim follows.
Thanks to Lemma 6.2, we can apply Theorem 2.2 to the solution \(y_{i}\) under the assumptions of Theorem 4.1. Based on this idea, we prove Theorem 4.1 as follows.
Proof.: Let \(\mathbf{p}=\lim_{t\to\infty}\mathbf{y}(t)\), and let \(\hat{\mathbf{y}}\) be a function defined by
\[\hat{\mathbf{y}}(t)=\frac{\mathbf{r}+\mathbf{p}(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{j=-M}^{N}\left\{\mathbf{y}(\psi(jh))-\frac{\mathbf{r}+\mathbf{p}\,\mathrm{e}^{jh}}{1+\mathrm{e}^{jh}}\right\}S(j,h)(\psi^{-1}(t)),\]
which is derived by application of (2.8) to the solution \(\mathbf{y}(t)\). Note that the approximate solution \(\hat{\mathbf{y}}^{(l)}(t)\) is defined by (4.1), and \(\mathbf{p}^{(\psi)}\) satisfies \(\mathbf{p}^{(\psi)}=\lim_{t\to\infty}\mathbf{y}^{(l)}(t)\). To estimate the error of \(\hat{y}_{i}^{(l)}(t)\), we insert \(\hat{y}_{i}(t)\) as
\[|y_{i}(t)-\hat{y}_{i}^{(l)}(t)|\leq|y_{i}(t)-\hat{y}_{i}(t)|+|\hat{y}_{i}(t)- \hat{y}_{i}^{(l)}(t)|. \tag{6.2}\]
As for the first term, according to Theorem 2.2, there exists a positive constant \(\tilde{C}_{i}\) independent of \(n\) such that
\[|y_{i}(t)-\hat{y}_{i}(t)|\leq\tilde{C}_{i}\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}}\leq\frac{\tilde{C}_{i}}{\log 2}\log(n+1)\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}}\leq\frac{\tilde{C}}{\log 2}\log(n+1)\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}}, \tag{6.3}\]
where \(\tilde{C}=\max_{i=1,\ldots,m}\tilde{C}_{i}\). Next, we estimate the second term. First, it is rewritten as
\[|\hat{y}_{i}(t)-\hat{y}_{i}^{(l)}(t)|=\left|\frac{(p_{i}-p_{i}^{(\psi)})( \mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{j=-M}^{N}\left\{y_{i}(\psi(jh))-y_{i} ^{(l)}(\psi(jh))\right\}S(j,h)(\psi^{-1}(t))-\sum_{j=-M}^{N}\frac{(p_{i}-p_{i} ^{(\psi)})\,\mathrm{e}^{jh}}{1+\mathrm{e}^{jh}}S(j,h)(\psi^{-1}(t))\right|.\]
Noting
\[p_{i}-p_{i}^{(\psi)}=\lim_{t\to\infty}\left(y_{i}(t)-y_{i}^{(l)}(t)\right),\]
according to Theorem 3.1, there exist positive constants \(C\) and \(\hat{C}\) such that
\[|p_{i}-p_{i}^{(\psi)}|=\lim_{t\to\infty}\left|y_{i}(t)-y_{i}^{(l)}(t)\right|\leq\sup_{t\in(0,\infty)}\left|y_{i}(t)-y_{i}^{(l)}(t)\right|\leq\left(C+\hat{C}\|A_{lm}^{-1}\|_{\infty}\right)\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}},\] \[\left|y_{i}(\psi(jh))-y_{i}^{(l)}(\psi(jh))\right|\leq\sup_{t\in(0,\infty)}\left|y_{i}(t)-y_{i}^{(l)}(t)\right|\leq\left(C+\hat{C}\|A_{lm}^{-1}\|_{\infty}\right)\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}}\,.\]
Therefore, we have
\[\left|\frac{(p_{i}-p_{i}^{(\psi)})(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{j=-M}^{N}\left\{y_{i}(\psi(jh))-y_{i}^{(l)}(\psi(jh))\right\}S(j,h)(\psi^{-1}(t))-\sum_{j=-M}^{N}\frac{(p_{i}-p_{i}^{(\psi)})\,\mathrm{e}^{jh}}{1+\mathrm{e}^{jh}}S(j,h)(\psi^{-1}(t))\right|\] \[\leq \left(\left|\frac{\mathrm{e}^{t}-1}{\mathrm{e}^{t}}\right|+\sum_{j=-M}^{N}|S(j,h)(\psi^{-1}(t))|+\sum_{j=-M}^{N}\frac{\mathrm{e}^{jh}}{1+\mathrm{e}^{jh}}|S(j,h)(\psi^{-1}(t))|\right)\left(C+\hat{C}\|A_{lm}^{-1}\|_{\infty}\right)\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}}\] \[\leq \left(1+\sum_{j=-M}^{N}|S(j,h)(\psi^{-1}(t))|+\sum_{j=-M}^{N}|S(j,h)(\psi^{-1}(t))|\right)\left(C+\hat{C}\|A_{lm}^{-1}\|_{\infty}\right)\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}}\] \[\leq \left(1+2\sum_{j=-n}^{n}|S(j,h)(\psi^{-1}(t))|\right)\left(C+\hat{C}\|A_{lm}^{-1}\|_{\infty}\right)\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}},\]
where \(n=\max\{M,N\}\) is used in the last inequality, because of (2.4). Furthermore, using Lemma 6.1, we have the final estimate for the second term as
\[|\hat{y}_{i}(t)-\hat{y}_{i}^{(l)}(t)| \leq\left(1+\frac{6+4\gamma+4\log(n+1)}{\pi}\right)\left(C+\hat{C}\|A_{lm}^{-1}\|_{\infty}\right)\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}}\] \[\leq\left(\frac{\pi+6+4\gamma}{\pi\log 2}+\frac{4}{\pi}\right)\left(C+\hat{C}\|A_{lm}^{-1}\|_{\infty}\right)\log(n+1)\sqrt{n}\,\mathrm{e}^{-\sqrt{\pi d\mu n}}. \tag{6.4}\]
Combining the estimates (6.3) and (6.4), we obtain the claim.
### Proof of Theorem 4.2
Thanks to Lemma 5.9, we can apply Theorem 2.4 to the solution \(y_{i}\) under the assumptions of Theorem 4.2. Based on this idea, we prove Theorem 4.2 as follows.
Proof.: Let \(\mathbf{p}=\lim_{t\to\infty}\mathbf{y}(t)\), and let \(\hat{\mathbf{y}}\) be a function defined by
\[\hat{\mathbf{y}}(t)=\frac{\mathbf{r}+\mathbf{p}(\mathrm{e}^{t}-1)}{\mathrm{e}^{t}}+\sum_{j=-n}^{n}\left\{\mathbf{y}(\phi(jh))-\frac{\mathbf{r}+\mathbf{p}\,\mathrm{e}^{\pi\sinh(jh)}}{1+\mathrm{e}^{\pi\sinh(jh)}}\right\}S(j,h)(\phi^{-1}(t)),\]
which is derived by application of (2.12) to the solution \(\mathbf{y}(t)\). Note that the approximate solution \(\hat{\mathbf{y}}^{(l)}(t)\) is defined by (4.2), and \(\mathbf{p}^{(\phi)}\) satisfies \(\mathbf{p}^{(\phi)}=\lim_{t\to\infty}\mathbf{y}^{(l)}(t)\). To estimate the error of \(\hat{y}_{i}^{(l)}(t)\), we insert \(\hat{y}_{i}(t)\) as (6.2). As for the first term of (6.2), according to Theorem 2.4, there exists a positive constant \(\tilde{C}_{i}\) independent of \(n\) such that
\[|y_{i}(t)-\hat{y}_{i}(t)|\leq\tilde{C}_{i}\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)} \leq\frac{\tilde{C}_{i}}{\operatorname{arsinh}(d/\mu)\log 2}\operatorname{arsinh}(dn/\mu)\log(n+1)\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}\] \[\leq\frac{\tilde{C}}{\operatorname{arsinh}(d/\mu)\log 2}\operatorname{arsinh}(dn/\mu)\log(n+1)\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}, \tag{6.5}\]
where \(\tilde{C}=\max_{i=1,\,\ldots,m}\tilde{C}_{i}\). Next, we estimate the second term of (6.2). First, it is rewritten as
\[|\hat{y}_{i}(t)-\hat{y}_{i}^{(l)}(t)|\] \[=\left|\frac{(p_{i}-p_{i}^{(\phi)})(\mathrm{e}^{t}-1)}{\mathrm{e}^ {t}}+\sum_{j=-n}^{n}\left\{y_{i}(\phi(jh))-y_{i}^{(l)}(\phi(jh))\right\}S(j,h)( \phi^{-1}(t))-\sum_{j=-n}^{n}\frac{(p_{i}-p_{i}^{(\phi)})\,\mathrm{e}^{\pi \sinh(jh)}}{1+\mathrm{e}^{\pi\sinh(jh)}}S(j,h)(\phi^{-1}(t))\right|.\]
Noting
\[p_{i}-p_{i}^{(\phi)}=\lim_{t\to\infty}\left(y_{i}(t)-y_{i}^{(l)}(t)\right),\]
according to Theorem 3.2, there exist positive constants \(C\) and \(\hat{C}\) such that
\[|p_{i}-p_{i}^{(\phi)}|=\lim_{t\to\infty}\left|y_{i}(t)-y_{i}^{(l)}(t) \right|\leq\sup_{t\in(0,\infty)}\left|y_{i}(t)-y_{i}^{(l)}(t)\right|\leq\left(C+ \hat{C}\|B_{lm}^{-1}\|_{\infty}\right)\operatorname{arsinh}(dn/\mu)\,\mathrm{e }^{-\pi dn/\operatorname{arsinh}(dn/\mu)}\,,\] \[\left|y_{i}(\phi(jh))-y_{i}^{(l)}(\phi(jh))\right|\leq\sup_{t\in(0,\infty)}\left|y_{i}(t)-y_{i}^{(l)}(t)\right|\leq\left(C+\hat{C}\|B_{lm}^{-1} \|_{\infty}\right)\operatorname{arsinh}(dn/\mu)\,\mathrm{e}^{-\pi dn/ \operatorname{arsinh}(dn/\mu)}\,.\]
Therefore, we have
\[\left|\frac{(p_{i}-p_{i}^{(\phi)})(\mathrm{e}^{t}-1)}{\mathrm{e} ^{t}}+\sum_{j=-n}^{n}\left\{y_{i}(\phi(jh))-y_{i}^{(l)}(\phi(jh))\right\}S(j, h)(\phi^{-1}(t))-\sum_{j=-n}^{n}\frac{(p_{i}-p_{i}^{(\phi)})\,\mathrm{e}^{\pi \operatorname{sinh}(jh)}}{1+\mathrm{e}^{\pi\operatorname{sinh}(jh)}}S(j,h)( \phi^{-1}(t))\right|\] \[\leq\left(1+\sum_{j=-n}^{n}|S(j,h)(\phi^{-1}(t))|+\sum_{j=-n}^{n }|S(j,h)(\phi^{-1}(t))|\right)\left(C+\hat{C}\|B_{lm}^{-1}\|_{\infty}\right) \operatorname{arsinh}(dn/\mu)\,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/ \mu)}\] \[=\left(1+2\sum_{j=-n}^{n}|S(j,h)(\phi^{-1}(t))|\right)\left(C+ \hat{C}\|B_{lm}^{-1}\|_{\infty}\right)\operatorname{arsinh}(dn/\mu)\,\mathrm{ e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}\,.\]
Furthermore, using Lemma 6.1, we have the final estimate for the second term as
\[|\hat{y}_{i}(t)-\hat{y}_{i}^{(l)}(t)|\leq \left(1+\frac{6+4\gamma+4\log(n+1)}{\pi}\right)\left(C+\hat{C} \|B_{lm}^{-1}\|_{\infty}\right)\operatorname{arsinh}(dn/\mu)\,\mathrm{e}^{- \pi dn/\operatorname{arsinh}(dn/\mu)}\] \[\leq \left(\frac{\pi+6+4\gamma}{\pi\log 2}+\frac{4}{\pi}\right)\left(C+ \hat{C}\|B_{lm}^{-1}\|_{\infty}\right)\log(n+1)\operatorname{arsinh}(dn/\mu) \,\mathrm{e}^{-\pi dn/\operatorname{arsinh}(dn/\mu)}\,. \tag{6.6}\]
Combining the estimates (6.5) and (6.6), we obtain the claim.
|
2306.02409 | Discrete time-dependent wave equation for the Schrödinger operator
with unbounded potential | In this article, we investigate the semiclassical version of the wave
equation for the discrete Schr\"{o}dinger operator,
$\mathcal{H}_{\hbar,V}:=-\hbar^{-2}\mathcal{L}_{\hbar}+V$ on the lattice
$\hbar\mathbb{Z}^{n},$ where $\mathcal{L}_{\hbar}$ is the discrete Laplacian,
and $V$ is a non-negative multiplication operator. We prove that
$\mathcal{H}_{\hbar,V}$ has a purely discrete spectrum when the potential $V$
satisfies the condition $|V(k)|\to \infty$ as $|k|\to\infty$. We also show that
the Cauchy problem with regular coefficients is well-posed in the associated
Sobolev type spaces and very weakly well-posed for distributional coefficients.
Finally, we recover the classical solution as well as the very weak solution in
certain Sobolev type spaces as the limit of the semiclassical parameter
$\hbar\to 0$. | Aparajita Dasgupta, Shyam Swarup Mondal, Michael Ruzhansky, Abhilash Tushir | 2023-06-04T17:04:18Z | http://arxiv.org/abs/2306.02409v1 | # Discrete time-dependent wave equation for the Schrodinger operator with unbounded potential
###### Abstract.
In this article, we investigate the semiclassical version of the wave equation for the discrete Schrodinger operator, \(\mathcal{H}_{h,V}:=-\hbar^{-2}\mathcal{L}_{\hbar}+V\) on the lattice \(\hbar\mathbb{Z}^{n}\), where \(\mathcal{L}_{\hbar}\) is the discrete Laplacian, and \(V\) is a non-negative multiplication operator. We prove that \(\mathcal{H}_{h,V}\) has a purely discrete spectrum when the potential \(V\) satisfies the condition \(|V(k)|\to\infty\) as \(|k|\to\infty\). We also show that the Cauchy problem with regular coefficients is well-posed in the associated Sobolev type spaces and very weakly well-posed for distributional coefficients. Finally, we recover the classical solution as well as the very weak solution in certain Sobolev type spaces as the limit of the semiclassical parameter \(\hbar\to 0\).
Key words and phrases: Schrödinger; lattice; well-posedness.
2020 Mathematics Subject Classification: Primary 46F05; Secondary 58J40, 22E30.
The first and second authors were supported by Core Research Grant, RP03890G, Science and Engineering Research Board (SERB), DST, India. The third author was supported by the EPSRC Grants EP/R003025 and EP/V005529/1, by the FWO Odysseus 1 grant G.0H94.18N: Analysis and Partial Differential Equations, and by the Methusalem programme of the Ghent University Special Research Fund (BOF) (Grant number 01M01021). The last author is supported by an institute assistantship from the Indian Institute of Technology Delhi, India.
## 1. Introduction
In this paper, we consider the semiclassical version of the wave equation for the discrete Schrödinger operator

\[\mathcal{H}_{\hbar,V}:=-\hbar^{-2}\mathcal{L}_{\hbar}+V\quad\text{on the lattice }\hbar\mathbb{Z}^{n}, \tag{1.1}\]

where \(\mathcal{L}_{\hbar}\) is the discrete Laplacian (see (3.2)) and \(V\) is a non-negative multiplication operator, together with its Euclidean counterpart, the usual Schrödinger operator on \(\mathbb{R}^{n}\) with potential \(V\),

\[\mathcal{H}_{V}u(x):=\left(-\mathcal{L}+V\right)u(x),\quad x\in\mathbb{R}^{n}, \tag{1.2}\]

where \(\mathcal{L}\) is the usual Laplacian on \(\mathbb{R}^{n}\). More precisely, for \(T>0\), a time-dependent propagation speed \(a\), a time-dependent electric potential \(q\), and a source term \(f\), we study the Cauchy problem

\[\left\{\begin{array}{l}\partial_{t}^{2}u(t,k)+a(t)\mathcal{H}_{\hbar,V}u(t,k)+q(t)u(t,k)=f(t,k),\quad(t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ u(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}u(0,k)=u_{1}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{1.3}\]

together with its Euclidean analogue

\[\left\{\begin{array}{l}\partial_{t}^{2}u(t,x)+a(t)\mathcal{H}_{V}u(t,x)+q(t)u(t,x)=f(t,x),\quad(t,x)\in(0,T]\times\mathbb{R}^{n},\\ u(0,x)=u_{0}(x),\quad x\in\mathbb{R}^{n},\\ \partial_{t}u(0,x)=u_{1}(x),\quad x\in\mathbb{R}^{n}.\end{array}\right. \tag{1.4}\]
This family of linear operators for different potentials \(V\) characterizes different quantum systems. Let us look at some important quantum systems and their corresponding spectra. The Schrödinger operator for free motion (i.e., when no force is exerted on the electrons) is the Laplacian, whose spectrum is continuous and contained in the positive real axis. The Schrödinger operator of a hydrogen atom with an infinitely heavy nucleus placed at the origin is given by the Coulomb potential, i.e.,
\[\mathcal{H}=-\mathcal{L}-\frac{1}{|x|}. \tag{1.5}\]
It has essential spectrum contained in the positive half of the real axis, while the spectrum on the negative real axis consists of isolated eigenvalues of finite multiplicity. The Schrödinger operators with potentials \(V(x)=|x|^{2}\) and \(V(x,y)=x^{2}y^{2}\), i.e., the quantum harmonic oscillator and the anharmonic oscillator, respectively, have purely discrete spectra. For more details about Schrödinger operators, one can refer to [11, 16, 15]. Moreover, we have the following characterization of potentials giving rise to a purely discrete spectrum:
**Theorem 1.1**.: _[_1_]__. Let \(V:\mathbb{R}^{n}\to\mathbb{R}\) be continuous and satisfies \(V(x)\geq 0\), and \(|V(x)|\to\infty\) as \(|x|\to\infty\). Then \(\sigma_{\rm ess}(\mathcal{H}_{V})=\emptyset\)._
There is limited literature available concerning the spectrum of the discrete Schrödinger operator on \(\hbar\mathbb{Z}^{n}\). In [14, 15, 16, 17, 18, 19], Rabinovich et al. studied the Schrödinger operator on the discrete lattice with slowly oscillating potential, and a few of these works are dedicated to the lattice \(\hbar\mathbb{Z}^{n}\). In [10], Swain and Krishna addressed the purely discrete spectrum of the Schrödinger operator on \(\ell^{2}(\mathbb{Z}^{n})\) with potential \(|k|^{\alpha}\), where \(\alpha\in(0,1)\).
In the following theorem we will give the characterization of the discrete spectrum for the discrete Schrodinger operator.
**Theorem 1.2**.: _Let \(\hbar>0\). Assume that \(V\geq 0\) and \(|V(k)|\to\infty\) as \(|k|\to\infty\). Then \(\mathcal{H}_{h,V}:=-\hbar^{-2}\mathcal{L}_{h}+V\) has a purely discrete spectrum._
We refer to Section 3 for the proof of above theorem and for more details about the spectrum of the discrete Schrodinger operator. Moreover, we will also review the crucial components of the global Fourier analysis that was developed in [14, 15] and later on applied for the wave equation associated with the Landau Hamiltonian in [14, 15].
Coming back to the main interest of this article, Cauchy problems of the form (1.4) have been extensively studied by many researchers. For the case of regular coefficients and source term, one can refer to the works [1, 16, 17, 18]. Distributional irregularities also arise naturally: for example, one may take \(q\) to be the \(\delta\)-distribution when the electric potential produces shocks, or take \(a\) to be a Heaviside function when the propagation speed is discontinuous. Due to the impossibility of multiplying distributions (see the Schwartz impossibility result [10]), the formulation of the Cauchy problem (1.3) in this case might be impossible in the distributional sense, but we are able to develop well-posedness in the very weak sense, which was first introduced by Garetto and the third author in [14] and later implemented in [1, 15, 16] for different physical models. The third author and Tokmagambetov in [14, 15, 16] studied wave equations associated with operators having purely discrete spectrum, also allowing the coefficients to have distributional irregularities. More precisely, in the case of regular coefficients, they proved that the above Cauchy problem is well-posed in the Sobolev space \(\mathrm{H}^{s}_{\mathcal{H}_{V}}\) associated with the Schrödinger operator \(\mathcal{H}_{V}\), that is,
\[\mathrm{H}^{s}_{\mathcal{H}_{V}}:=\left\{f\in\mathcal{D}^{\prime}_{\mathcal{H} _{V}}\left(\mathbb{R}^{n}\right):\left(I+\mathcal{H}_{V}\right)^{s/2}f\in L^ {2}\left(\mathbb{R}^{n}\right)\right\},\quad s\in\mathbb{R}, \tag{1.7}\]
with the norm \(\|f\|_{\mathrm{H}^{s}_{\mathcal{H}_{V}}}:=\|\left(I+\mathcal{H}_{V}\right)^{s /2}f\|_{L^{2}(\mathbb{R}^{n})}\), where \(\mathcal{D}^{\prime}_{\mathcal{H}_{V}}\left(\mathbb{R}^{n}\right)\) is the global space of distributions associated to \(\mathcal{H}_{V}\). In particular for non-negative polynomial potentials, the relation between the Sobolev space associated with Schrodinger operator and usual Sobolev spaces can be understood using the inequalities obtained by Dziubanski and Glowacki in the following theorem:
**Theorem 1.3**.: _[_1_]__. Let \(P(x)\) be a nonnegative homogeneous elliptic polynomial on \(\mathbb{R}^{n}\) and \(V\) is a nonnegative polynomial potential. For \(1<p<\infty\) and \(\alpha>0\) there exist constants \(C_{1},C_{2}>0\) such that_
\[\left\|P(D)^{\alpha}f\right\|_{L^{p}}+\left\|V^{\alpha}f\right\|_{L^{p}}\leq C _{1}\left\|(P(D)+V)^{\alpha}f\right\|_{L^{p}}, \tag{1.8}\]
_and_
\[\left\|(P(D)+V)^{\alpha}f\right\|_{L^{p}}\leq C_{2}\left\|(P(D)^{\alpha}+V^{ \alpha})f\right\|_{L^{p}},\]
_for \(f\) in the Schwartz class \(\mathcal{S}(\mathbb{R}^{n})\)._
Now applying the inequality (1.8) for Schrodinger operator \(\mathcal{H}_{V}\) with non-negative polynomial potential and using the fact that the Schwartz space \(\mathcal{S}(\mathbb{R}^{n})\) is dense in \(L^{2}(\mathbb{R}^{n})\), we deduce the following:
\[\mathrm{H}^{s}_{\mathcal{H}_{V}}(\mathbb{R}^{n})\subseteq\mathrm{H}^{s}( \mathbb{R}^{n}),\quad s>0. \tag{1.9}\]
To simplify the notation, throughout the paper we will be writing \(A\lesssim B\) if there exists a constant \(C\) independent of the appearing parameters such that \(A\leq CB\) and we write that \(a\in L^{\infty}_{m}([0,T])\), if \(a\in L^{\infty}([0,T])\) is \(m\)-times differentiable with \(\partial_{t}^{j}a\in L^{\infty}([0,T])\), for all \(j=1,\dots,m\).
The well-posedness result for regular coefficients is given by the following theorem:
**Theorem 1.4**.: _Let \(s\in\mathbb{R}\) and \(f\in L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{V}})\). Assume that \(a\in L^{\infty}_{1}([0,T])\) and \(q\in L^{\infty}([0,T])\) are such that \(a(t)\geq a_{0}>0\) for some positive constant \(a_{0}\). If the initial Cauchy data \((u_{0},u_{1})\in\mathrm{H}^{1+s}_{\mathcal{H}_{V}}\times\mathrm{H}^{s}_{ \mathcal{H}_{V}}\), then the Cauchy problem (1.4) has a unique solution \(u\in C([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{V}})\cap C^{1}([0,T];\mathrm{H}^{s }_{\mathcal{H}_{V}})\) which satisfies the estimate_
\[\|u(t,\cdot)\|^{2}_{\mathrm{H}^{1+s}_{\mathcal{H}_{V}}}+\|\partial_{t}u(t, \cdot)\|^{2}_{\mathrm{H}^{s}_{\mathcal{H}_{V}}}\lesssim\|u_{0}\|^{2}_{\mathrm{ H}^{1+s}_{\mathcal{H}_{V}}}+\|u_{1}\|^{2}_{\mathrm{H}^{s}_{\mathcal{H}_{V}}}+\|f \|^{2}_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{V}})}, \tag{1.10}\]
_with the constant independent of \(t\in[0,T]\)._
Furthermore, the Cauchy problem (1.4) is very weakly well-posed in the case of distributional coefficients.
**Theorem 1.5**.: _Let \(a\) and \(q\) be distributions with supports included in \([0,T]\) such that \(a\geq a_{0}>0\) for some positive constant \(a_{0}\), and also let the source term \(f(\cdot,x)\) be a distribution with support included in \([0,T]\), for all \(x\in\mathbb{R}^{n}\). Let \(s\in\mathbb{R}\) and the initial Cauchy data \((u_{0},u_{1})\in\mathrm{H}^{1+s}_{\mathcal{H}_{V}}\times\mathrm{H}^{s}_{ \mathcal{H}_{V}}\). Then the Cauchy problem (1.4) has a very weak solution of order \(s\)._
## 2. Main results
First, we investigate the Cauchy problem (1.3) with regular coefficients \(a\in L^{\infty}_{1}([0,T])\) and \(q\in L^{\infty}([0,T])\). We obtain the well-posedness in the discrete Sobolev space \(\mathrm{H}^{s}_{\mathcal{H}_{h,V}}\) associated with the discrete Schrodinger operator \(\mathcal{H}_{h,V}\). Given \(s\in\mathbb{R}\), we define the Sobolev space
\[\mathrm{H}^{s}_{\mathcal{H}_{h,V}}:=\left\{u\in\mathrm{H}^{-\infty}_{\mathcal{ H}_{h,V}}:\left(I+\mathcal{H}_{h,V}\right)^{s/2}u\in\ell^{2}\left(\hbar \mathbb{Z}^{n}\right)\right\}, \tag{2.1}\]
with the norm \(\|f\|_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}:=\|\left(I+\mathcal{H}_{h,V}\right) ^{s/2}f\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}\), where \(\mathrm{H}^{-\infty}_{\mathcal{H}_{h,V}}\) is the space of \(\mathcal{H}_{h,V}\)-distributions given in (3.11). For a detailed study on the Fourier analysis of discrete Schrodinger operator and the associated Sobolev space, we refer to Section 3.
The situation for the wave equation with the discrete Schrödinger operator is in striking contrast with the wave equations involving the discrete Laplacian and the discrete fractional Laplacian considered in [14, 14], respectively. The difference is that here we obtain well-posedness in certain discrete Sobolev type spaces, while in the latter cases the problems are well-posed in \(\ell^{2}(\hbar\mathbb{Z}^{n})\); this difference arises from the different boundedness behaviour of the discrete Schrödinger operator and of the discrete (fractional) Laplacian. In the following
theorem, we obtain the classical solution for the Cauchy problem (1.3) with regular coefficients:
**Theorem 2.1** (Classical solution).: _Let \(T>0\). Let \(s\in\mathbb{R}\) and \(f\in L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\). Assume that \(a\in L^{\infty}_{1}\left([0,T]\right)\) satisfies \(\inf\limits_{t\in[0,T]}a(t)=a_{0}>0\) and \(q\in L^{\infty}\left([0,T]\right)\). If the initial Cauchy data \((u_{0},u_{1})\in\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}\times\mathrm{H}^{s}_{ \mathcal{H}_{h,V}}\), then the Cauchy problem (1.3) has a unique solution \(u\in C([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})\cap C^{1}([0,T];\mathrm{H}^ {s}_{\mathcal{H}_{h,V}})\) satisfying the estimate_
\[\|u(t,\cdot)\|^{2}_{\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}}+\|u_{t}(t,\cdot)\|^ {2}_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}\leq C_{T}\left(\|u_{0}\|^{2}_{\mathrm{ H}^{1+s}_{\mathcal{H}_{h,V}}}+\|u_{1}\|^{2}_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}+\|f \|^{2}_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})}\right), \tag{2.2}\]
_for all \(t\in[0,T]\), where the constant \(C_{T}\) is given by_
\[C_{T}=c_{0}^{-1}(1+\|a\|_{L^{\infty}})e^{c_{0}^{-1}\left(1+\|a^{\prime}\|_{L^{ \infty}}+\|q\|_{L^{\infty}}+2\|a\|_{L^{\infty}}\right)T}, \tag{2.3}\]
_with \(c_{0}=\min\{a_{0},1\}\)._
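To make the constant in (2.2) concrete, one can evaluate (2.3) directly; the following minimal sketch (Python, with illustrative values for the norms of \(a\), \(a^{\prime}\), and \(q\) that are not taken from the paper) does so.

```python
import math

def energy_constant(a_inf, a_prime_inf, q_inf, a0, T):
    """Evaluate the constant C_T of (2.3) from the sup-norms of a, a', q,
    the lower bound a0 = inf a(t), and the time horizon T."""
    c0 = min(a0, 1.0)
    kappa1 = (1.0 + a_prime_inf + q_inf + 2.0 * a_inf) / c0
    return (1.0 + a_inf) / c0 * math.exp(kappa1 * T)

# illustrative (hypothetical) values: a(t) = 2 + sin(t), q(t) = cos(t), T = 1
print(energy_constant(a_inf=3.0, a_prime_inf=1.0, q_inf=1.0, a0=1.0, T=1.0))
```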
Furthermore, we allow the coefficients to be irregular; as discussed in Section 1, the notion of very weak solutions makes it possible to handle equations that might not have a meaningful solution in the ordinary distributional sense. For the convenience of the reader, we quickly recap the important details and state the corresponding results for \(a,q\in\mathcal{D}^{\prime}\left([0,T]\right)\). Using a Friedrichs mollifier, i.e., \(\psi\in C_{0}^{\infty}\left(\mathbb{R}\right)\) with \(\psi\geq 0\) and \(\int_{\mathbb{R}}\psi=1\), we first regularise the distributional coefficient \(a\) to obtain a family of smooth functions \(\left(a_{\varepsilon}\right)_{\varepsilon}\), namely
\[a_{\varepsilon}(t):=\left(a\ast\psi_{\omega(\varepsilon)}\right)(t),\quad \psi_{\omega(\varepsilon)}(t)=\frac{1}{\omega(\varepsilon)}\psi\left(\frac{t }{\omega(\varepsilon)}\right),\quad\varepsilon\in(0,1],\]
where \(\omega(\varepsilon)>0\) and \(\omega(\varepsilon)\to 0\) as \(\varepsilon\to 0\).
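The following minimal sketch (Python; the bump function, the grid, the choice of a Heaviside-type speed \(a\), and \(\omega(\varepsilon)=\varepsilon\) are illustrative assumptions, not taken from the paper) shows this regularisation in action and checks that the mollified net inherits the lower bound \(a_{\varepsilon}\geq a_{0}\), as used later in (4.37).

```python
import numpy as np

def psi(x):
    """Unnormalised Friedrichs bump supported in [-1, 1]."""
    y = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1.0
    y[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return y

t = np.linspace(-1.0, 2.0, 6001)                 # grid containing [0, T] = [0, 1]
dt = t[1] - t[0]
a = 1.0 + (t >= 0.5).astype(float)               # Heaviside-type speed, a >= a0 = 1

def regularise(a_vals, eps, omega=lambda e: e):
    """a_eps = a * psi_{omega(eps)}, approximated by a discrete convolution."""
    w = omega(eps)
    s = np.arange(-w, w + dt, dt)
    kernel = psi(s / w)
    kernel /= kernel.sum() * dt                  # enforce integral(psi_w) = 1
    return np.convolve(a_vals, kernel, mode="same") * dt

interior = (t >= 0.0) & (t <= 1.0)
for eps in (0.2, 0.05, 0.01):
    a_eps = regularise(a, eps)
    print(eps, float(a_eps[interior].min()))     # stays >= a0 = 1, cf. (4.37)
```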
**Definition 2.2**.: _(i) A net \(\left(a_{\varepsilon}\right)_{\varepsilon}\in L^{\infty}_{m}(\mathbb{R})^{(0,1]}\) is said to be \(L^{\infty}_{m}\)-moderate if for all \(K\Subset\mathbb{R}\), there exist \(N\in\mathbb{N}_{0}\) and \(c>0\) such that_
\[\left\|\partial^{k}a_{\varepsilon}\right\|_{L^{\infty}(K)}\leq c\varepsilon^ {-N-k},\quad\text{ for all }k=0,1,\ldots,m,\]
_for all \(\varepsilon\in(0,1]\). (ii) A net \(\left(a_{\varepsilon}\right)_{\varepsilon}\in L^{\infty}_{m}(\mathbb{R})^{(0,1]}\) is said to be \(L^{\infty}_{m}\)-negligible if for all \(K\Subset\mathbb{R}\) and \(q\in\mathbb{N}_{0}\), there exists \(c>0\) such that_
\[\left\|\partial^{k}a_{\varepsilon}\right\|_{L^{\infty}(K)}\leq c\varepsilon^{ q},\quad\text{ for all }k=0,1,\ldots,m,\]
_for all \(\varepsilon\in(0,1]\). (iii) A net \(\left(u_{\varepsilon}\right)_{\varepsilon}\in L^{2}([0,T];\mathrm{H}^{s}_{ \mathcal{H}_{h,V}})^{(0,1]}\) is said to be \(L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\)-moderate if there exist \(N\in\mathbb{N}_{0}\) and \(c>0\) such that_
\[\|u_{\varepsilon}\|_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})}\leq c \varepsilon^{-N},\]
_for all \(\varepsilon\in(0,1]\). (iv) A net \(\left(u_{\varepsilon}\right)_{\varepsilon}\in L^{2}([0,T];\mathrm{H}^{s}_{ \mathcal{H}_{h,V}})^{(0,1]}\) is said to be \(L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\)-negligible if for all \(q\in\mathbb{N}_{0}\) there exists \(c>0\) such that_
\[\|u_{\varepsilon}\|_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})}\leq c \varepsilon^{q},\]
_for all \(\varepsilon\in(0,1]\)._
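As an illustration of Definition 2.2(i), one can check numerically that the mollified net of the \(\delta\)-distribution is \(L^{\infty}_{1}\)-moderate: with \(\omega(\varepsilon)=\varepsilon\), its sup-norm grows like \(\varepsilon^{-1}\) and that of its derivative like \(\varepsilon^{-2}\). The sketch below (Python; all numerical choices are illustrative assumptions) verifies these scalings by finite differences.

```python
import numpy as np

def psi(x):
    """Unnormalised Friedrichs bump supported in [-1, 1]."""
    y = np.zeros_like(x, dtype=float)
    m = np.abs(x) < 1.0
    y[m] = np.exp(-1.0 / (1.0 - x[m] ** 2))
    return y

x = np.linspace(-1.5, 1.5, 300001)
dx = x[1] - x[0]
norm = psi(x).sum() * dx                        # makes psi integrate to one

for eps in (0.1, 0.05, 0.025):
    q_eps = psi(x / eps) / (eps * norm)         # delta * psi_eps = psi_eps itself
    d1 = np.gradient(q_eps, dx)                 # finite-difference derivative
    print(eps,
          q_eps.max() * eps,                    # ~ constant: sup|q_eps|  ~ eps^{-1}
          np.abs(d1).max() * eps ** 2)          # ~ constant: sup|q_eps'| ~ eps^{-2}
```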
We observe that the requirement of moderateness is natural in the sense that regularisations of distributions obtained by mollification are moderate. Moreover, by the structure theorems for distributions, we have
\[\text{compactly supported distributions }\mathcal{E}^{\prime}(\mathbb{R})\subset\left\{L^{2} \text{-moderate families}\right\}. \tag{2.4}\]
Therefore, the Cauchy problem (1.3) may not have a solution in compactly supported distributions \(\mathcal{E}^{\prime}(\mathbb{R})\). However, it may exist in the space of \(L^{2}\)-moderate families in some suitable sense.
Now, the notion of a very weak solution for the Cauchy problem (1.3) can be introduced as follows:
**Definition 2.3**.: _Let \(s\in\mathbb{R},f\in L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\), and \((u_{0},u_{1})\in\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}\times\mathrm{H}^{s}_{ \mathcal{H}_{h,V}}.\) The net \(\left(u_{\varepsilon}\right)_{\varepsilon}\in L^{2}([0,T];\mathrm{H}^{1+s}_{ \mathcal{H}_{h,V}})^{(0,1]}\) is a very weak solution of order \(s\) of the Cauchy problem (1.3) if there exist_
1. \(L^{\infty}_{1}\)_-moderate regularisation_ \(a_{\varepsilon}\) _of the coefficient_ \(a\)_;_
2. \(L^{\infty}\)_-moderate regularisation_ \(q_{\varepsilon}\) _of the coefficient_ \(q\)_; and_
3. \(L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\)_-moderate regularisation_ \(f_{\varepsilon}\) _of the source term_ \(f\)_,_ _such that the net_ \(\left(u_{\varepsilon}\right)_{\varepsilon}\) _solves the regularised Cauchy problem_
\[\left\{\begin{array}{l}\partial_{t}^{2}u_{\varepsilon}(t,k)+a_{\varepsilon}( t)\mathcal{H}_{h,V}u_{\varepsilon}(t,k)+q_{\varepsilon}(t)u_{\varepsilon}(t,k)=f_{ \varepsilon}(t,k),\quad(t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ u_{\varepsilon}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}u_{\varepsilon}(0,k)=u_{1}(k),\quad k\in\hbar\mathbb{Z}^{n}, \end{array}\right. \tag{2.5}\]
_for all \(\varepsilon\in(0,1]\), and is \(L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})\)-moderate._
It should be noted that Theorem 2.1 provides a unique solution to the regularised Cauchy problem (2.5) that satisfies estimate (2.2).
A distribution \(a\) is said to be a positive distribution if \(\langle a,\psi\rangle\geq 0\) whenever \(\psi\in C_{0}^{\infty}(\mathbb{R})\) satisfies \(\psi\geq 0\), and a distribution \(a\) is said to be strictly positive if there exists a positive constant \(\alpha\) such that \(a-\alpha\) is a positive distribution. In other words, \(a\geq\alpha>0\), where \(a\geq\alpha\) means that
\[\langle a-\alpha,\psi\rangle\geq 0,\quad\text{for all }\psi\in C_{0}^{\infty}( \mathbb{R}),\psi\geq 0.\]
Now we can state the existence theorem for the Cauchy problem (1.3) with distributional coefficients as follows:
**Theorem 2.4** (Existence).: _Let \(a\) and \(q\) be distributions with supports contained in \([0,T]\) such that \(a\geq a_{0}>0\) for some positive constant \(a_{0}\), and also let the source term \(f(\cdot,k)\) be a distribution with support contained in \([0,T]\), for all \(k\in\hbar\mathbb{Z}^{n}\). Let \(s\in\mathbb{R}\) and the initial Cauchy data \((u_{0},u_{1})\in\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}\times\mathrm{H}^{s}_{ \mathcal{H}_{h,V}}\). Then the Cauchy problem (1.3) has a very weak solution of order \(s\)._
Furthermore, the uniqueness of the very weak solution can be interpreted as follows: a negligible modification of the approximations of the coefficients \(a,q\) and of the source term \(f\) has a negligible impact on the family of very weak solutions. Formally, the notion of uniqueness can be formulated as the authors did in [11, 12, 13, CRT22c].
**Definition 2.5**.: _We say that the Cauchy problem (1.3) has a \(L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})\)-unique very weak solution, if_
_(1) for all \(L^{\infty}_{1}\)-moderate nets \(a_{\varepsilon},\tilde{a}_{\varepsilon}\) such that \((a_{\varepsilon}-\tilde{a}_{\varepsilon})_{\varepsilon}\) is \(L^{\infty}_{1}\)-negligible,_
_(2) for all \(L^{\infty}\)-moderate nets \(q_{\varepsilon},\tilde{q}_{\varepsilon}\) such that \((q_{\varepsilon}-\tilde{q}_{\varepsilon})_{\varepsilon}\) is \(L^{\infty}\)-negligible; and_
_(3) for all \(L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\)-moderate nets \(f_{\varepsilon},\tilde{f}_{\varepsilon}\) such that \((f_{\varepsilon}-\tilde{f}_{\varepsilon})_{\varepsilon}\) is_
\(L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\)_-negligible,_
_the net \((u_{\varepsilon}-\tilde{u}_{\varepsilon})_{\varepsilon}\) is \(L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})\)-negligible, where \((u_{\varepsilon})_{\varepsilon}\) and \((\tilde{u}_{\varepsilon})_{\varepsilon}\) are the families of solutions corresponding to the \(\varepsilon\)-parametrised problems_
\[\left\{\begin{array}{l}\partial_{t}^{2}u_{\varepsilon}(t,k)+a_{\varepsilon }(t)\mathcal{H}_{h,V}u_{\varepsilon}(t,k)+q_{\varepsilon}(t)u_{\varepsilon}(t,k)=f_{\varepsilon}(t,k),\quad(t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ u_{\varepsilon}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}u_{\varepsilon}(0,k)=u_{1}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{2.6}\]
_and_
\[\left\{\begin{array}{l}\partial_{t}^{2}\tilde{u}_{\varepsilon}(t,k)+\tilde{ a}_{\varepsilon}(t)\mathcal{H}_{h,V}\tilde{u}_{\varepsilon}(t,k)+\tilde{q}_{ \varepsilon}(t)\tilde{u}_{\varepsilon}(t,k)=\tilde{f}_{\varepsilon}(t,k),\quad (t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ \tilde{u}_{\varepsilon}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}\tilde{u}_{\varepsilon}(0,k)=u_{1}(k),\quad k\in\hbar\mathbb{Z}^{ n},\end{array}\right. \tag{2.7}\]
_respectively._
**Remark 2.6**.: _The above notion of uniqueness can also be formulated in the sense of the Colombeau algebra \(\mathcal{G}(\mathbb{R})\) which is defined in the following quotient form:_
\[\mathcal{G}(\mathbb{R})=\frac{C^{\infty}\text{-moderate nets}}{C^{\infty}\text{- negligible nets}}. \tag{2.8}\]
_For more details about the Colombeau algebra, we refer to [10]. The uniqueness of very weak solutions in the sense of Colombeau algebra can be traced in various settings, see [1, 10, 11]._
The following theorem gives the uniqueness of the very weak solution to the Cauchy problem (1.3) in the sense of Definition 2.5.
**Theorem 2.7** (Uniqueness).: _Let \(a\) and \(q\) be distributions with supports contained in \([0,T]\) such that \(a\geq a_{0}>0\) for some positive constant \(a_{0}\), and also let the source term \(f(\cdot,k)\) be a distribution with support contained in \([0,T]\), for all \(k\in\hbar\mathbb{Z}^{n}\). Let \(s\in\mathbb{R}\) and the initial Cauchy data \((u_{0},u_{1})\in\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}\times\mathrm{H}^{s}_{ \mathcal{H}_{h,V}}\). Then the very weak solution of the Cauchy problem (1.3) is \(L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})\)-unique._
Now we give the consistency result, which means that very weak solutions in Theorem 2.4 recapture the classical solutions given by Theorem 2.1, provided the latter exist.
**Theorem 2.8** (Consistency).: _Let \(s\in\mathbb{R}\) and \(f\in L^{2}([0,T],\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\). Assume that \(a\in L^{\infty}_{1}\left([0,T]\right)\) satisfies \(\inf\limits_{t\in[0,T]}a(t)=a_{0}>0\) and \(q\in L^{\infty}\left([0,T]\right)\). If the initial
Cauchy data \((u_{0},u_{1})\in\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}\times\mathrm{H}^{s}_{ \mathcal{H}_{h,V}}\), then for any regularising families \(a_{\varepsilon},q_{\varepsilon},f_{\varepsilon}\) in Definition 2.3, the very weak solution \(\left(u_{\varepsilon}\right)_{\varepsilon}\) converges to the classical solution of the Cauchy problem (1.3) in \(L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})\) as \(\varepsilon\to 0\)._
Furthermore, we are interested in approximating the classical solution in Euclidean settings by the solutions in discrete settings. The following theorem shows that under the assumption that the solutions in \(\mathbb{R}^{n}\) exist, they can be recovered in the limit as \(\hbar\to 0\). We require a little additional Sobolev regularity to ensure that the convergence results are global on the whole of \(\mathbb{R}^{n}\).
**Theorem 2.9**.: _Let \(V\) be a non-negative polynomial potential in (1.5). Let \(u\) and \(v\) be the solutions of the Cauchy problems (1.3) on \(\hbar\mathbb{Z}^{n}\) and (1.4) on \(\mathbb{R}^{n}\), respectively, with the same Cauchy data \(u_{0}\) and \(u_{1}\). Assume the initial Cauchy data \((u_{0},u_{1})\in\mathrm{H}^{1+s}_{\mathcal{H}_{V}}\times\mathrm{H}^{s}_{ \mathcal{H}_{V}}\) with \(s>4+\frac{n}{2}\) and also satisfying \((u_{0}^{(4v_{j})},u_{1}^{(4v_{j})})\in\mathrm{H}^{1+s}_{\mathcal{H}_{V}}\times \mathrm{H}^{s}_{\mathcal{H}_{V}}\) for all \(j=1,\ldots,n\). Then for every \(t\in[0,T]\), we have_
\[\left\|v(t)-u(t)\right\|_{\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}}+\left\|\partial _{t}v(t)-\partial_{t}u(t)\right\|_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}\to 0,\ \text{as}\ \hbar\to 0, \tag{2.9}\]
_and the convergence is uniform on \([0,T]\)._
Here the initial data and the source term of the Cauchy problem (1.3) are the evaluations of the initial data and the source term from (1.4) on the lattice \(\hbar\mathbb{Z}^{n}\).
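The convergence in Theorems 2.9 and 2.10 ultimately rests on \(\hbar^{-2}\mathcal{L}_{\hbar}\) approximating the Euclidean Laplacian on smooth functions as \(\hbar\to 0\). A minimal one-dimensional sketch (Python; the test function and the lattice window are illustrative, and this is not the proof given in the paper) checks the expected \(\mathcal{O}(\hbar^{2})\) consistency.

```python
import numpy as np

def discrete_laplacian_1d(u_vals):
    """(L_h u)(k) = u(k + h) + u(k - h) - 2 u(k), evaluated at interior points."""
    return u_vals[2:] + u_vals[:-2] - 2.0 * u_vals[1:-1]

u = lambda x: np.exp(-x ** 2)                        # smooth test function
u_xx = lambda x: (4.0 * x ** 2 - 2.0) * np.exp(-x ** 2)

for h in (0.1, 0.05, 0.025):
    k = np.arange(-5.0, 5.0 + h / 2, h)              # finite piece of the lattice h*Z
    approx = discrete_laplacian_1d(u(k)) / h ** 2    # h^{-2} L_h u  ~  u''
    err = np.abs(approx - u_xx(k[1:-1])).max()
    print(h, err)                                    # error decays like O(h^2)
```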
Similarly, in the semiclassical limit \(\hbar\to 0\), the very weak solution of the Cauchy problem in the Euclidean setting can be approximated by the very weak solution in the discrete setting.
**Theorem 2.10**.: _Let \(V\) be a non-negative polynomial potential in (1.5). Let \((u_{\varepsilon})_{\varepsilon}\) and \((v_{\varepsilon})_{\varepsilon}\) be the very weak solutions of the Cauchy problems (1.3) on \(\hbar\mathbb{Z}^{n}\) and (1.4) on \(\mathbb{R}^{n}\), respectively, with the same Cauchy data \(u_{0}\) and \(u_{1}\). Assume the initial Cauchy data \((u_{0},u_{1})\in\mathrm{H}^{1+s}_{\mathcal{H}_{V}}\times\mathrm{H}^{s}_{ \mathcal{H}_{V}}\) with \(s>4+\frac{n}{2}\) and also satisfying \((u_{0}^{(4v_{j})},u_{1}^{(4v_{j})})\in\mathrm{H}^{1+s}_{\mathcal{H}_{V}}\times \mathrm{H}^{s}_{\mathcal{H}_{V}}\) for all \(j=1,\ldots,n\). Then for every \(\varepsilon\in(0,1]\) and \(t\in[0,T]\), we have_
\[\left\|v_{\varepsilon}(t)-u_{\varepsilon}(t)\right\|_{\mathrm{H}^{1+s}_{ \mathcal{H}_{h,V}}}+\left\|\partial_{t}v_{\varepsilon}(t)-\partial_{t}u_{ \varepsilon}(t)\right\|_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}\to 0\ \text{as}\ \hbar\to 0, \tag{2.10}\]
_where the convergence is uniform on \([0,T]\) but pointwise for \(\varepsilon\in(0,1]\)._
## 3. Spectrum and Fourier analysis of \(\mathcal{H}_{h,V}\)
The discrete Schrodinger operator on \(\hbar\mathbb{Z}^{n}\) denoted by \(\mathcal{H}_{h,V}\) is defined by
\[\mathcal{H}_{h,V}u(k):=\left(-\hbar^{-2}\mathcal{L}_{h}+V\right)u(k),\quad k \in\hbar\mathbb{Z}^{n}, \tag{3.1}\]
where \(\mathcal{L}_{h}\) is the discrete Laplacian given by
\[\mathcal{L}_{h}u(k):=\sum_{j=1}^{n}\left(u\left(k+\hbar v_{j}\right)+u\left(k- \hbar v_{j}\right)\right)-2nu(k),\quad k\in\hbar\mathbb{Z}^{n}, \tag{3.2}\]
and the potential \(V\) is a non-negative multiplication operator by \(V(k)\). We also note that \(-\mathcal{L}_{h}\) and \(V\) are non-negative operators and so is \(\mathcal{H}_{h,V}\).
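For intuition, a truncated one-dimensional model of \(\mathcal{H}_{\hbar,V}\) can be assembled as a matrix. The sketch below (Python, with the illustrative choices \(V(k)=k^{2}\), \(\hbar=0.5\), a finite window of the lattice, and zero values outside the window) exhibits the non-negativity of the operator and the growth of its eigenvalues, in line with Theorem 1.2; the truncation is only a numerical device, not part of the analysis.

```python
import numpy as np

hbar, N = 0.5, 200                                # illustrative spacing and window size
k = hbar * np.arange(-N, N + 1)                   # finite window of hbar*Z

# matrix for -hbar^{-2} L_h plus the multiplication by V(k) = k^2
H = (np.diag(np.full(k.size, 2.0 / hbar ** 2) + k ** 2)
     + np.diag(np.full(k.size - 1, -1.0 / hbar ** 2), 1)
     + np.diag(np.full(k.size - 1, -1.0 / hbar ** 2), -1))

eigs = np.linalg.eigvalsh(H)
print(eigs[:5])       # smallest eigenvalues: non-negative
print(eigs[-3:])      # largest eigenvalues: grow like the potential at the window edge
```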
Further, the operator \(\mathcal{H}_{h,V}:\mathrm{Dom}\left(\mathcal{H}_{h,V}\right)\to\ell^{2}(\hbar \mathbb{Z}^{n})\) is a densely defined and self-adjoint linear operator in \(\ell^{2}(\hbar\mathbb{Z}^{n})\). The domain \(\mathrm{Dom}\left(\mathcal{H}_{h,V}\right)\) is given by
\[\mathrm{Dom}\left(\mathcal{H}_{h,V}\right):=\left\{u\in\ell^{2}(\hbar\mathbb{Z }^{n}):(I+\mathcal{H}_{h,V})u\in\ell^{2}(\hbar\mathbb{Z}^{n})\right\}. \tag{3.3}\]
We will now prove that \(\mathcal{H}_{h,V}\) has a purely discrete spectrum; that is, the essential spectrum \(\sigma_{\rm ess}(\mathcal{H}_{h,V})\) is empty. It is well known that this is equivalent to proving that \(\left(\mathcal{H}_{h,V}+I\right)^{-1}\) is compact, i.e., that \(\mathcal{H}_{h,V}\) has a compact resolvent (see [18, 19]). Furthermore, since \(\left(\mathcal{H}_{h,V}+I\right)^{-1}\) is then a compact self-adjoint operator, the set of eigenvectors will form a complete orthonormal basis of \(\ell^{2}(\hbar\mathbb{Z}^{n})\). In order to prove the compactness, we need the following preparatory lemma:
**Lemma 3.1**.: _Let \(V\geq 0\) be a multiplication operator satisfying \(|V(k)|\to\infty\) as \(|k|\to\infty\). Then \(\left(V+I\right)^{-1}\) is compact._
Proof.: Define an operator \(Q\) by
\[Qu(k):=\frac{1}{V(k)+1}u(k),\quad u\in\ell^{2}(\hbar\mathbb{Z}^{n}). \tag{3.4}\]
It is straightforward to verify that \(\left(V+I\right)Qu=u\) and \(Q(V+I)u=u\), which implies \(\left(V+I\right)^{-1}=Q\). Moreover, \(Q\) is a bounded linear operator on \(\ell^{2}(\hbar\mathbb{Z}^{n})\), since
\[\left\|Qu\right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}^{2}=\sum_{k\in\hbar\mathbb{ Z}^{n}}\frac{1}{|V(k)+1|^{2}}|u(k)|^{2}\leq\sum_{k\in\hbar\mathbb{Z}^{n}}|u(k)|^{2}= \left\|u\right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}^{2}. \tag{3.5}\]
The compactness of \(Q\) will be shown by using the fact that it can be approximated by a sequence of finite rank operators. Let \(Q_{m}:\ell^{2}(\hbar\mathbb{Z}^{n})\to\ell^{2}(\hbar\mathbb{Z}^{n})\) be defined by
\[Q_{m}u(k):=\left\{\begin{array}{cc}Qu(k),&|k|\leq m,\\ 0,&\text{otherwise}.\end{array}\right.\]
Then \(Q_{m}\) is bounded and a finite rank operator, implying that \(Q_{m}\) is compact. Additionally,
\[\left\|\left(Q-Q_{m}\right)u\right\|_{\ell^{2}(\hbar\mathbb{Z}^{ n})}^{2} = \sum_{|k|>m}\frac{1}{|V(k)+1|^{2}}|u(k)|^{2} \tag{3.6}\] \[\leq \Phi_{m}^{2}\left\|u\right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}^{2},\]
where \(\Phi_{m}:=\sup\left\{\frac{1}{|V(k)+1|}:|k|>m\right\}\) whence we have \(\left\|\left(Q-Q_{m}\right)\right\|\leq\Phi_{m}.\) Therefore \(\left\|\left(Q-Q_{m}\right)\right\|\to 0\), since \(\Phi_{m}\to 0\) as \(m\to\infty\). Hence \(Q_{m}\to Q\) in norm. Thus \(Q=\left(V+I\right)^{-1}\) is compact.
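In the proof above, the operator-norm error \(\|Q-Q_{m}\|\) is controlled by \(\Phi_{m}=\sup\left\{\frac{1}{|V(k)+1|}:|k|>m\right\}\). A tiny numerical check (Python; the choice \(V(k)=|k|^{2}\) and the finite window of the lattice are illustrative assumptions) confirms that \(\Phi_{m}\) decays, so the finite-rank operators \(Q_{m}\) converge to \(Q\) in norm.

```python
import numpy as np

hbar = 0.5
k = hbar * np.arange(-10000, 10001)        # large finite window of hbar*Z (illustrative)
V = k ** 2

def phi(m):
    """Phi_m = sup_{|k| > m} 1/(V(k) + 1), evaluated on the finite window."""
    tail = np.abs(k) > m
    return (1.0 / (V[tail] + 1.0)).max()

for m in (1, 5, 25, 125):
    print(m, phi(m))                       # decays like ~ m^{-2}, so ||Q - Q_m|| -> 0
```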
Now we are in position to prove that the discrete Schrodinger operator \(\mathcal{H}_{h,V}\) has a purely discrete spectrum.
Proof of Theorem 1.2.: Using the second resolvent identity for the discrete Schrodinger operator and the potential \(V\) at \(\lambda=-1\in\rho\left(\mathcal{H}_{h,V}\right)\cap\rho\left(V\right)\), we have
\[\left(\mathcal{H}_{h,V}+I\right)^{-1}=\left(\mathcal{H}_{h,V}+I\right)^{-1} \left(-\hbar^{-2}\mathcal{L}_{h}\right)\left(V+I\right)^{-1}+\left(V+I\right) ^{-1}. \tag{3.7}\]
The operators \(\left(\mathcal{H}_{h,V}+I\right)^{-1}\) and \(\left(V+I\right)^{-1}\) are bounded, since \(\lambda=-1\) belongs to their resolvent sets. The boundedness of the discrete Laplacian on \(\ell^{2}(\hbar\mathbb{Z}^{n})\) implies that \(\left(\mathcal{H}_{h,V}+I\right)^{-1}\left(-\hbar^{-2}\mathcal{L}_{h}\right)\) is bounded. Hence the first term on the right-hand side of (3.7) is the product of a bounded operator with the compact operator \(\left(V+I\right)^{-1}\) from Lemma 3.1, and the second term is itself compact; therefore the right-hand side of (3.7) is compact. Hence \(\left(\mathcal{H}_{h,V}+I\right)^{-1}\) is compact.
Let us denote by \(\sigma\left(\mathcal{H}_{\hbar,V}\right)=\{\lambda_{\xi}\geq 0:\xi\in\mathcal{I}_{ \hbar}\}\) the discrete spectrum of the Schrodinger operator \(\mathcal{H}_{\hbar,V}\), where \(\mathcal{I}_{\hbar}\) is a countable set and we arrange the eigenvalues in ascending order in accordance with the multiplicities that occur
\[|\lambda_{\xi}|\leq|\lambda_{\eta}|,\text{ whenever }|\xi|\leq|\eta|,\]
for all \(\xi,\eta\in\mathcal{I}_{\hbar}\). Furthermore, using the assumption \(|V(k)|\to\infty\) as \(|k|\to\infty\), it is easy to conclude that the eigenvalues \(\lambda_{\xi}\) of the discrete Schrödinger operator \(\mathcal{H}_{\hbar,V}\) tend to infinity as \(|\xi|\to\infty\).
Let \(u_{\xi}\) be the eigenfunction associated with the eigenvalue \(\lambda_{\xi}\) for each \(\xi\in\mathcal{I}_{\hbar}\), i.e.,
\[\mathcal{H}_{\hbar,V}u_{\xi}=\lambda_{\xi}u_{\xi},\quad\text{for all }\xi\in \mathcal{I}_{\hbar}. \tag{3.8}\]
Consequently, the set of eigenvectors \(\{u_{\xi}\}_{\xi\in\mathcal{I}_{\hbar}}\) forms an orthonormal basis for \(\ell^{2}(\hbar\mathbb{Z}^{n})\), i.e.,
\[(u_{\xi},u_{\eta}):=\left\{\begin{array}{ll}1,&\text{if }\xi=\eta,\\ 0,&\text{if }\xi\neq\eta,\end{array}\right. \tag{3.9}\]
where
\[(f,g):=\sum_{k\in\hbar\mathbb{Z}^{n}}f(k)\overline{g(k)}, \tag{3.10}\]
is the usual inner product of the Hilbert space \(\ell^{2}(\hbar\mathbb{Z}^{n})\).
Now, we will describe the spaces of distributions generated by \(\mathcal{H}_{\hbar,V}\) and its adjoint, and the related global Fourier analysis. In our settings, there is a considerable reduction in complexity since the discrete Schrodinger operator is self-adjoint.
The space \(\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}}:=\mathrm{Dom}\left(\mathcal{H}^{ \infty}_{\hbar,V}\right)\) is called the space of test functions for \(\mathcal{H}_{\hbar,V}\), defined by
\[\mathrm{Dom}\left(\mathcal{H}^{\infty}_{\hbar,V}\right):=\bigcap_{k=1}^{ \infty}\mathrm{Dom}\left(\mathcal{H}^{k}_{\hbar,V}\right),\]
where \(\mathrm{Dom}\left(\mathcal{H}^{k}_{\hbar,V}\right)\) is the domain of the operator \(\mathcal{H}^{k}_{\hbar,V}\) defined as
\[\mathrm{Dom}\left(\mathcal{H}^{k}_{\hbar,V}\right):=\left\{u\in\ell^{2}( \hbar\mathbb{Z}^{n}):(I+\mathcal{H}_{\hbar,V})^{j}u\in\mathrm{Dom}(\mathcal{ H}_{\hbar,V}),j=0,1,\ldots,k-1\right\}.\]
The Frechet topology of \(\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}}\) is given by the family of norms
\[\|u\|_{\mathrm{H}^{k}_{\mathcal{H}_{\hbar,V}}}:=\max_{j\leq k}\left\|(I+ \mathcal{H}_{\hbar,V})^{j}u\right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})},\quad k \in\mathbb{N}_{0},u\in\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}}.\]
The space
\[\mathrm{H}^{-\infty}_{\mathcal{H}_{\hbar,V}}:=\mathcal{L}\left(\mathrm{H}^{ \infty}_{\mathcal{H}_{\hbar,V}},\mathbb{C}\right), \tag{3.11}\]
of all continuous linear functionals on \(\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}}\) is called the space of \(\mathcal{H}_{\hbar,V}\)-distributions. For \(w\in\mathrm{H}^{-\infty}_{\mathcal{H}_{\hbar,V}}\) and \(u\in\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}}\), we shall write
\[w(u)=(w,u)=\sum_{k\in\hbar\mathbb{Z}^{n}}w(k)\overline{u(k)}.\]
For any \(u\in\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}}\), the functional
\[\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}}\ni v\mapsto(v,u),\]
is a \(\mathcal{H}_{\hbar,V}\)-distribution, which gives an embedding \(u\in\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}}\hookrightarrow\mathrm{H}^{- \infty}_{\mathcal{H}_{\hbar,V}}\).
**Definition 3.2** (**Schwartz Space \(\mathcal{S}(\mathcal{I}_{\hbar})\)**).: _Let \(\mathcal{S}(\mathcal{I}_{\hbar})\) denote the space of rapidly decaying functions \(\varphi:\mathcal{I}_{\hbar}\to\mathbb{C}\), i.e., \(\varphi\in\mathcal{S}(\mathcal{I}_{\hbar})\) if for any \(M<\infty\), there exists a constant \(C_{\varphi,M}\) such that_
\[|\varphi(\xi)|\leq C_{\varphi,M}\langle\xi\rangle^{-M},\quad\text{for all }\xi\in \mathcal{I}_{\hbar},\]
_where we denote_
\[\langle\xi\rangle:=\left(1+\lambda_{\xi}\right)^{\frac{1}{2}}.\]
The topology on \(\mathcal{S}(\mathcal{I}_{\hbar})\) is given by the family of seminorms \(p_{k}\), where \(k\in\mathbb{N}_{0}\) and
\[p_{k}(\varphi):=\sup_{\xi\in\mathcal{I}_{\hbar}}\langle\xi\rangle^{k}|\varphi (\xi)|.\]
We now define the \(\mathcal{H}_{\hbar,V}\)-Fourier transform on \(\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}}\).
**Definition 3.3**.: _We define the \(\mathcal{H}_{\hbar,V}\)-Fourier transform \(\mathcal{F}_{\mathcal{H}_{\hbar,V}}:\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar, V}}\to\mathcal{S}(\mathcal{I}_{\hbar})\) by the formula_
\[\left(\mathcal{F}_{\mathcal{H}_{\hbar,V}}f\right)(\xi)=\widehat{f}(\xi):= \sum_{k\in\hbar\mathbb{Z}^{n}}f(k)\overline{u_{\xi}(k)},\quad\xi\in\mathcal{I }_{\hbar}, \tag{3.12}\]
_where \(u_{\xi}\) satisfies (3.8)._
The above expression is well-defined by the Holder inequality
\[\left|\widehat{f}(\xi)\right|\leq\left\|f\right\|_{\ell^{2}(\hbar\mathbb{Z}^{ n})}\left\|u_{\xi}\right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}=\left\|f\right\|_{ \ell^{2}(\hbar\mathbb{Z}^{n})}<\infty,\]
and then can be extended to \(\mathrm{H}^{-\infty}_{\mathcal{H}_{\hbar,V}}\) in the usual way. The \(\mathcal{H}_{\hbar,V}\)-Fourier transform \(\mathcal{F}_{\mathcal{H}_{\hbar,V}}:\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar, V}}\to\mathcal{S}(\mathcal{I}_{\hbar})\) is a homeomorphism and its inverse
\[\mathcal{F}^{-1}_{\mathcal{H}_{\hbar,V}}:\mathcal{S}\left(\mathcal{I}_{\hbar} \right)\to\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}},\]
is given by
\[\left(\mathcal{F}^{-1}_{\mathcal{H}_{\hbar,V}}g\right)(k):=\sum_{\xi\in \mathcal{I}_{\hbar}}g(\xi)u_{\xi}(k),\quad g\in\mathcal{S}(\mathcal{I}_{\hbar }), \tag{3.13}\]
so that the \(\mathcal{H}_{\hbar,V}\)-Fourier inversion formula becomes
\[f(k)=\sum_{\xi\in\mathcal{I}_{\hbar}}\widehat{f}(\xi)u_{\xi}(k),\quad\text{ for all }f\in\mathrm{H}^{\infty}_{\mathcal{H}_{\hbar,V}}. \tag{3.14}\]
Consequently, the \(\mathcal{H}_{\hbar,V}\)-Plancherel formula takes the form
\[\sum_{k\in\hbar\mathbb{Z}^{n}}|f(k)|^{2}=\left(\sum_{\xi\in\mathcal{I}_{\hbar }}\widehat{f}(\xi)u_{\xi},\sum_{\eta\in\mathcal{I}_{\hbar}}\widehat{f}(\eta)u _{\eta}\right)=\sum_{\xi,\eta\in\mathcal{I}_{\hbar}}\widehat{f}(\xi)\overline {\widehat{f}(\eta)}\,(u_{\xi},u_{\eta})=\sum_{\xi\in\mathcal{I}_{\hbar}}| \widehat{f}(\xi)|^{2}. \tag{3.15}\]
The \(\mathcal{H}_{\hbar,V}\)-Fourier transform of the discrete Schrodinger operator \(\mathcal{H}_{\hbar,V}\) is
\[\left(\mathcal{F}_{\mathcal{H}_{\hbar,V}}\mathcal{H}_{\hbar,V}f\right)(\xi)= \left(\mathcal{H}_{\hbar,V}f,u_{\xi}\right)=\left(f,\mathcal{H}_{\hbar,V}u_{ \xi}\right)=\left(f,\lambda_{\xi}u_{\xi}\right)=\lambda_{\xi}\widehat{f}(\xi), \quad\xi\in\mathcal{I}_{\hbar}, \tag{3.16}\]
since \(\mathcal{H}_{h,V}\) is a self-adjoint operator. Recalling (2.1), (3.16) and the Plancherel's identity (3.15), we have
\[\|f\|_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}:=\left\|(I+\mathcal{H}_{h,V})^{s/2}f \right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}=\left(\sum_{\xi\in\mathcal{I}_{h}}\langle \xi\rangle^{2s}|\widehat{f}(\xi)|^{2}\right)^{\frac{1}{2}}=\left(\sum_{\xi\in \mathcal{I}_{h}}\left(1+\lambda_{\xi}\right)^{s}|\widehat{f}(\xi)|^{2}\right)^ {\frac{1}{2}}. \tag{3.17}\]
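On a truncated one-dimensional lattice, the eigendecomposition of the matrix model of \(\mathcal{H}_{\hbar,V}\) can stand in for the basis \(\{u_{\xi}\}\), which gives a quick numerical sanity check of the Plancherel identity (3.15) and of the two expressions for the norm in (3.17). The sketch below (Python) uses the same illustrative truncation and potential \(V(k)=k^{2}\) as before; none of these choices come from the paper.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

hbar, N, s = 0.5, 100, 1.5                          # illustrative parameters
k = hbar * np.arange(-N, N + 1)

# truncated matrix model of H_{hbar,V} = -hbar^{-2} L_h + V with V(k) = k^2
H = (np.diag(np.full(k.size, 2.0 / hbar ** 2) + k ** 2)
     + np.diag(np.full(k.size - 1, -1.0 / hbar ** 2), 1)
     + np.diag(np.full(k.size - 1, -1.0 / hbar ** 2), -1))

lam, U = np.linalg.eigh(H)                          # eigenvalues lam_xi, eigenvectors u_xi (columns)

f = np.exp(-k ** 2) * np.cos(k)                     # a rapidly decaying function on the window
f_hat = U.T @ f                                     # (3.12): f_hat(xi) = sum_k f(k) conj(u_xi(k))

# Plancherel identity (3.15)
print(np.allclose(np.sum(f ** 2), np.sum(f_hat ** 2)))

# the two expressions for the Sobolev norm in (3.17)
lhs = np.linalg.norm(fractional_matrix_power(np.eye(k.size) + H, s / 2.0) @ f)
rhs = np.sqrt(np.sum((1.0 + lam) ** s * f_hat ** 2))
print(np.allclose(lhs, rhs))
```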
## 4. Proofs of the main results
In this section, we will prove the existence of classical and very weak solutions of the Cauchy problem (1.3) with regular and irregular coefficients, respectively. Further, we will prove the uniqueness and consistency for the very weak solutions.
Proof of Theorem 2.1.: Taking the \(\mathcal{H}_{h,V}\)-Fourier transform of the Cauchy problem (1.3) with respect to \(k\in\hbar\mathbb{Z}^{n}\) and using (3.16), we obtain
\[\left\{\begin{array}{l}\partial_{t}^{2}\widehat{u}(t,\xi)+a(t)\lambda_{\xi} \widehat{u}(t,\xi)+q(t)\widehat{u}(t,\xi)=\widehat{f}(t,\xi),\quad(t,\xi)\in (0,T]\times\mathcal{I}_{h},\\ \widehat{u}(0,\xi)=\widehat{u}_{0}(\xi),\quad\xi\in\mathcal{I}_{h},\\ \partial_{t}\widehat{u}(0,\xi)=\widehat{u}_{1}(\xi),\quad\xi\in\mathcal{I}_{h }.\end{array}\right. \tag{4.1}\]
The basic idea of our further analysis is that we can investigate each equation in (4.1) separately and then collect the estimates using the \(\mathcal{H}_{h,V}\)-Plancherel formula (3.15). Thus, let us fix \(\xi\in\mathcal{I}_{h}\) and use the transformation
\[U(t,\xi):=\left(\begin{array}{c}i\langle\xi\rangle\widehat{u}(t,\xi)\\ \partial_{t}\widehat{u}(t,\xi)\end{array}\right),\quad U_{0}(\xi):=\left( \begin{array}{c}i\langle\xi\rangle\widehat{u}_{0}(\xi)\\ \widehat{u}_{1}(\xi)\end{array}\right), \tag{4.2}\]
where \(\langle\xi\rangle=(1+\lambda_{\xi})^{\frac{1}{2}}\), and the matrices
\[A(t):=\left(\begin{array}{cc}0&1\\ a(t)&0\end{array}\right),\quad Q(t):=\left(\begin{array}{cc}0&0\\ q(t)-a(t)&0\end{array}\right)\text{ and }F(t,\xi):=\left(\begin{array}{c}0\\ \widehat{f}(t,\xi)\end{array}\right). \tag{4.3}\]
This allows us to reformulate the given second order system (4.1) as the first order system
\[\left\{\begin{array}{l}\partial_{t}U(t,\xi)=i\langle\xi\rangle A(t)U(t,\xi) +i\langle\xi\rangle^{-1}Q(t)U(t,\xi)+F(t,\xi),\quad(t,\xi)\in(0,T]\times \mathcal{I}_{h},\\ U(0,\xi)=U_{0}(\xi),\quad\xi\in\mathcal{I}_{h}.\end{array}\right. \tag{4.4}\]
We observe that the eigenvalues of the matrix \(A(t)\) are given by \(\pm\sqrt{a(t)}\). The symmetriser \(S\) of matrix \(A\) is given by
\[S(t)=\left(\begin{array}{cc}a(t)&0\\ 0&1\end{array}\right), \tag{4.5}\]
i.e., we have
\[SA-A^{*}S=0. \tag{4.6}\]
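The identity (4.6) is elementary but easy to misremember; a one-line symbolic check (Python with sympy, treating \(a(t)\) as a positive symbol) confirms it.

```python
import sympy as sp

a = sp.symbols('a', positive=True)        # stands for a(t) >= a_0 > 0
A = sp.Matrix([[0, 1], [a, 0]])
S = sp.Matrix([[a, 0], [0, 1]])

print(sp.simplify(S * A - A.T * S))       # zero matrix, i.e. S A - A^* S = 0 as in (4.6)
```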
Consider
\[\left(S(t)U(t,\xi),U(t,\xi)\right) = a(t)\langle\xi\rangle^{2}\left|\widehat{u}(t,\xi)\right|^{2}+| \partial_{t}\widehat{u}(t,\xi)|^{2} \tag{4.7}\] \[\leq \sup_{t\in[0,T]}\{a(t),1\}\left(\langle\xi\rangle^{2}\left| \widehat{u}(t,\xi)\right|^{2}+|\partial_{t}\widehat{u}(t,\xi)|^{2}\right)\] \[= \sup_{t\in[0,T]}\{a(t),1\}\left|U(t,\xi)\right|^{2},\]
where \((\cdot,\cdot)\) and \(|\cdot|\) denote the inner product and the norm in \(\mathbb{C}^{2}\), respectively. Similarly
\[\left(S(t)U(t,\xi),U(t,\xi)\right)\geq\inf_{t\in[0,T]}\{a(t),1\}\left|U(t,\xi) \right|^{2}. \tag{4.8}\]
If we now define the energy
\[E(t,\xi):=(S(t)U(t,\xi),U(t,\xi)), \tag{4.9}\]
then from (4.7), and (4.8), it follows that
\[\inf_{t\in[0,T]}\{a(t),1\}|U(t,\xi)|^{2}\leq E(t,\xi)\leq\sup_{t\in[0,T]}\{a(t ),1\}|U(t,\xi)|^{2}. \tag{4.10}\]
Since \(a\in L_{1}^{\infty}([0,T])\), there exist two positive constants \(a_{0}\) and \(a_{1}\) such that
\[\inf_{t\in[0,T]}a(t)=a_{0}\quad\mbox{and}\ \ \sup_{t\in[0,T]}a(t)=a_{1}. \tag{4.11}\]
Further, if we set \(c_{0}=\min\left\{a_{0},1\right\}\) and \(c_{1}=\max\left\{a_{1},1\right\}\), then the inequality (4.10) becomes
\[c_{0}|U(t,\xi)|^{2}\leq E(t,\xi)\leq c_{1}|U(t,\xi)|^{2}, \tag{4.12}\]
for all \(t\in[0,T]\) and \(\xi\in\mathcal{I}_{h}\). Then we can calculate
\[E_{t}(t,\xi) = (S_{t}(t)U(t,\xi),U(t,\xi))+(S(t)U_{t}(t,\xi),U(t,\xi))+(S(t)U(t,\xi),U_{t}(t,\xi)) \tag{4.13}\] \[= (S_{t}(t)U(t,\xi),U(t,\xi))+i\langle\xi\rangle(S(t)A(t)U(t,\xi),U(t,\xi))+\] \[i\langle\xi\rangle^{-1}(S(t)Q(t)U(t,\xi),U(t,\xi))+(S(t)F(t,\xi),U(t,\xi))-\] \[i\langle\xi\rangle(S(t)U(t,\xi),A(t)U(t,\xi))-i\langle\xi\rangle^{-1}(S(t)U(t,\xi),Q(t)U(t,\xi))+\] \[(S(t)U(t,\xi),F(t,\xi))\] \[= (S_{t}(t)U(t,\xi),U(t,\xi))+i\langle\xi\rangle\left((SA-A^{*}S)\left(t\right)U(t,\xi),U(t,\xi)\right)+\] \[i\langle\xi\rangle^{-1}\left((SQ-Q^{*}S)\left(t\right)U(t,\xi),U(t,\xi)\right)+2\mathrm{Re}\left(S(t)F(t,\xi),U(t,\xi)\right)\] \[= (S_{t}(t)U(t,\xi),U(t,\xi))+i\langle\xi\rangle^{-1}\left((SQ-Q^{*}S)\left(t\right)U(t,\xi),U(t,\xi)\right)+\] \[2\mathrm{Re}\left(S(t)F(t,\xi),U(t,\xi)\right).\]
From the definition of \(S\) and \(Q\), we have
\[S_{t}(t):=\left(\begin{array}{cc}a^{\prime}(t)&0\\ 0&0\end{array}\right)\ \mbox{and}\ \left(SQ-Q^{*}S\right)(t):=\left(\begin{array}{cc}0&a(t)-q(t)\\ q(t)-a(t)&0\end{array}\right), \tag{4.14}\]
whence we get
\[\|S_{t}(t)\|\leq|a^{\prime}(t)|\quad\mbox{and}\quad\|\left(SQ-Q^{*}S\right)( t)\|\leq|q(t)|+|a(t)|,\quad\mbox{for all}\ t\in[0,T]. \tag{4.15}\]
Moreover, it is also obvious to observe that
\[\|S(t)\|\leq(1+|a(t)|),\quad\text{for all }t\in[0,T], \tag{4.16}\]
where \(\|\cdot\|\) is the max norm. Combining the estimates (4.12)-(4.16) with the hypothesis \(a\in L^{\infty}_{1}([0,T])\) and \(q\in L^{\infty}([0,T])\), we get
\[E_{t}(t,\xi) \leq \|S_{t}(t)\|\,|U(t,\xi)|^{2}+\|\left(SQ-Q^{*}S\right)(t)\|\,|U(t,\xi)|^{2}+2\|S(t)\|\,|F(t,\xi)|\,|U(t,\xi)|\] \[\leq \left(\|S_{t}(t)\|+\|\left(SQ-Q^{*}S\right)(t)\|+\|S(t)\|\right)|U(t,\xi)|^{2}+\|S(t)\|\,|F(t,\xi)|^{2}\] \[\leq \left(1+|a^{\prime}(t)|+|q(t)|+2|a(t)|\right)|U(t,\xi)|^{2}+(1+|a(t)|)|F(t,\xi)|^{2}\] \[\leq \left(1+\|a^{\prime}\|_{L^{\infty}}+\|q\|_{L^{\infty}}+2\left\|a\right\|_{L^{\infty}}\right)|U(t,\xi)|^{2}+(1+\|a\|_{L^{\infty}})\,|F(t,\xi)|^{2}\] \[\leq c_{0}^{-1}\left(1+\|a^{\prime}\|_{L^{\infty}}+\|q\|_{L^{\infty}}+2\left\|a\right\|_{L^{\infty}}\right)E(t,\xi)+(1+\|a\|_{L^{\infty}})\,|F(t,\xi)|^{2}. \tag{4.17}\]
If we set \(\kappa_{1}=c_{0}^{-1}\left(1+\|a^{\prime}\|_{L^{\infty}}+\|q\|_{L^{\infty}}+2 \left\|a\right\|_{L^{\infty}}\right)\) and \(\kappa_{2}=1+\|a\|_{L^{\infty}}\), then we get
\[E_{t}(t,\xi)\leq\kappa_{1}E(t,\xi)+\kappa_{2}|F(t,\xi)|^{2}. \tag{4.18}\]
Applying Gronwall's lemma to the inequality (4.18), we deduce that
\[E(t,\xi)\leq e^{\int_{0}^{t}\kappa_{1}\mathrm{d}\tau}\left(E(0,\xi)+\int_{0}^ {t}\kappa_{2}|F(\tau,\xi)|^{2}\mathrm{d}\tau\right), \tag{4.19}\]
for all \(t\in[0,T]\) and \(\xi\in\mathcal{I}_{\hbar}\). Therefore by putting together (4.12) and (4.19), we obtain
\[c_{0}|U(t,\xi)|^{2}\leq E(t,\xi)\leq e^{\int_{0}^{t}\kappa_{1} \mathrm{d}\tau}\left(E(0,\xi)+\int_{0}^{t}\kappa_{2}|F(\tau,\xi)|^{2}\mathrm{ d}\tau\right)\\ \leq e^{\kappa_{1}T}\left(c_{1}|U(0,\xi)|^{2}+\kappa_{2}\int_{0}^ {T}|F(\tau,\xi)|^{2}\mathrm{d}\tau\right). \tag{4.20}\]
This gives
\[|U(t,\xi)|^{2}\leq C_{T}\left(|U(0,\xi)|^{2}+\int_{0}^{T}|F(\tau,\xi)|^{2} \mathrm{d}\tau\right),\quad(t,\xi)\in[0,T]\times\mathcal{I}_{\hbar}, \tag{4.21}\]
where \(C_{T}=c_{0}^{-1}e^{\kappa_{1}T}\max\{c_{1},\kappa_{2}\}\). Then using the definition of \(U\) and \(F\), we obtain the inequality
\[\langle\xi\rangle^{2}\left|\widehat{u}(t,\xi)\right|^{2}+|\partial_{t}\widehat {u}(t,\xi)|^{2}\leq C_{T}\left(\langle\xi\rangle^{2}\left|\widehat{u}_{0}(\xi )\right|^{2}+|\widehat{u}_{1}(\xi)|^{2}+\int_{0}^{T}|F(\tau,\xi)|^{2}\mathrm{ d}\tau\right), \tag{4.22}\]
with the constant independent of \(t\in[0,T]\) and \(\xi\in\mathcal{I}_{\hbar}\). More generally, multiplying (4.22) by powers of \(\langle\xi\rangle\), for any \(s\in\mathbb{R}\), we get
\[\langle\xi\rangle^{2+2s}\left|\widehat{u}(t,\xi)\right|^{2}+ \langle\xi\rangle^{2s}\left|\partial_{t}\widehat{u}(t,\xi)\right|^{2}\\ \leq C_{T}\left(\langle\xi\rangle^{2+2s}\left|\widehat{u}_{0}( \xi)\right|^{2}+\langle\xi\rangle^{2s}\left|\widehat{u}_{1}(\xi)\right|^{2}+ \langle\xi\rangle^{2s}\int_{0}^{T}|\widehat{f}(\tau,\xi)|^{2}\mathrm{d}\tau \right), \tag{4.23}\]
i.e.,
\[\left(1+\lambda_{\xi}\right)^{1+s}\left|\widehat{u}(t,\xi)\right|^{2 }+\left(1+\lambda_{\xi}\right)^{s}\left|\partial_{t}\widehat{u}(t,\xi)\right|^{2} \\ \leq C_{T}\left(\left(1+\lambda_{\xi}\right)^{1+s}\left|\widehat{u} _{0}(\xi)\right|^{2}+\left(1+\lambda_{\xi}\right)^{s}\left|\widehat{u}_{1}( \xi)\right|^{2}+\left(1+\lambda_{\xi}\right)^{s}\int_{0}^{T}|\widehat{f}(\tau, \xi)|^{2}\mathrm{d}\tau\right). \tag{4.24}\]
Now, by using the \(\mathcal{H}_{h,V}\)-Plancherel's formula (3.15) and (3.17), we have
\[\left\|(I+\mathcal{H}_{h,V})^{\frac{1+s}{2}}u(t,\cdot)\right\|_{ \ell^{2}(\hbar\mathbb{Z}^{n})}^{2}+\left\|(I+\mathcal{H}_{h,V})^{\frac{s}{2}} u_{t}(t,\cdot)\right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}^{2}\\ \leq C_{T}\left(\left\|(I+\mathcal{H}_{h,V})^{\frac{1+s}{2}}u_{0 }\right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}^{2}+\left\|(I+\mathcal{H}_{h,V})^{ \frac{s}{2}}u_{1}\right\|_{\ell^{2}(\hbar\mathbb{Z}^{n})}^{2}+\right.\\ \left\|(I+\mathcal{H}_{h,V})^{\frac{s}{2}}f\right\|_{L^{2}([0,T] ;\ell^{2}(\hbar\mathbb{Z}^{n}))}^{2}\right), \tag{4.25}\]
whence we get
\[\|u(t,\cdot)\|_{\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}}^{2}+\|u_{t}(t,\cdot)\|_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}^{2}\leq C_{T}\left(\|u_{0}\|_{\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}}^{2}+\|u_{1}\|_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}^{2}+\|f\|_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})}^{2}\right), \tag{4.26}\]
for all \(t\in[0,T]\), where the constant \(C_{T}\) is given by
\[C_{T}=c_{0}^{-1}(1+\|a\|_{L^{\infty}})e^{c_{0}^{-1}\left(1+\|a^{\prime}\|_{L^ {\infty}}+\|q\|_{L^{\infty}}+2\|a\|_{L^{\infty}}\right)T}. \tag{4.27}\]
This completes the proof.
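The estimate just proved can be probed numerically mode by mode: fixing one \(\xi\) with eigenvalue \(\lambda_{\xi}\), solving the ODE in (4.1), and comparing \(\langle\xi\rangle^{2}|\widehat{u}(t,\xi)|^{2}+|\partial_{t}\widehat{u}(t,\xi)|^{2}\) with the right-hand side of (4.22). The sketch below (Python with scipy; the coefficients \(a(t)=2+\sin t\), \(q(t)=\cos t\), the source, the data, and \(\lambda_{\xi}=7\) are all illustrative assumptions) performs this check.

```python
import numpy as np
from scipy.integrate import solve_ivp

# one Fourier mode xi with eigenvalue lam (illustrative values throughout)
lam, T = 7.0, 1.0
a = lambda t: 2.0 + np.sin(t)            # a in L^inf_1 with a >= a0 = 1
q = lambda t: np.cos(t)                  # q in L^inf
f = lambda t: np.exp(-t)                 # Fourier coefficient of the source
u0, u1 = 0.3, -0.2                       # Fourier coefficients of the Cauchy data

def rhs(t, y):                           # y = (u_hat, d/dt u_hat), cf. (4.1)
    u, v = y
    return [v, f(t) - a(t) * lam * u - q(t) * u]

sol = solve_ivp(rhs, (0.0, T), [u0, u1], dense_output=True, rtol=1e-9, atol=1e-12)

bra2 = 1.0 + lam                                       # <xi>^2
c0, a_inf, ap_inf, q_inf = 1.0, 3.0, 1.0, 1.0          # min{a0, 1} and the sup-norms
C_T = (1.0 + a_inf) / c0 * np.exp((1.0 + ap_inf + q_inf + 2.0 * a_inf) / c0 * T)

ts = np.linspace(0.0, T, 201)
u, v = sol.sol(ts)
lhs = bra2 * u ** 2 + v ** 2                           # <xi>^2 |u_hat|^2 + |d/dt u_hat|^2
source_int = (1.0 - np.exp(-2.0 * T)) / 2.0            # = int_0^T |f(t)|^2 dt for f = e^{-t}
bound = C_T * (bra2 * u0 ** 2 + u1 ** 2 + source_int)
print(bool(np.all(lhs <= bound)))                      # the single-mode bound (4.22) holds
```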
Thus, we have obtained the well-posedness for the Cauchy problem (1.3) in the Sobolev spaces associated with the discrete Schrödinger operator. We will now prove the existence of a very weak solution in the case of distributional coefficients.
Proof of Theorem 2.4.: Consider the regularised Cauchy problem
\[\left\{\begin{array}{l}\partial_{t}^{2}u_{\varepsilon}(t,k)+a_{\varepsilon}( t)\mathcal{H}_{h,V}u_{\varepsilon}(t,k)+q_{\varepsilon}(t)u_{\varepsilon}(t,k)=f_{ \varepsilon}(t,k),\quad(t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ u_{\varepsilon}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}u_{\varepsilon}(0,k)=u_{1}(k),\quad k\in\hbar\mathbb{Z}^{n}. \end{array}\right. \tag{4.28}\]
Taking the Fourier transform with respect to \(k\in\hbar\mathbb{Z}^{n}\) and then using the transformation similar to Theorem 2.1, the Cauchy problem (4.28) reduces to
\[\left\{\begin{array}{l}\partial_{t}U_{\varepsilon}(t,\xi)=i\langle\xi \rangle A_{\varepsilon}(t)U_{\varepsilon}(t,\xi)+i\langle\xi\rangle^{-1}Q_{ \varepsilon}(t)U_{\varepsilon}(t,\xi)+F_{\varepsilon}(t,\xi),\quad\xi\in \mathcal{I}_{\hbar},\\ U_{\varepsilon}(0,\xi)=U_{0}(\xi),\quad\xi\in\mathcal{I}_{\hbar},\end{array}\right. \tag{4.29}\]
where
\[U_{\varepsilon}(t,\xi):=\left(\begin{array}{c}i\langle\xi\rangle\widehat{u }_{\varepsilon}(t,\xi)\\ \partial_{t}\widehat{u}_{\varepsilon}(t,\xi)\end{array}\right),\quad U_{0}( \xi):=\left(\begin{array}{c}i\langle\xi\rangle\widehat{u}_{0}(\xi)\\ \widehat{u}_{1}(\xi)\end{array}\right), \tag{4.30}\]
and the matrices
\[A_{\varepsilon}(t):=\left(\begin{array}{cc}0&1\\ a_{\varepsilon}(t)&0\end{array}\right),\ Q_{\varepsilon}(t):=\left(\begin{array}[] {cc}0&0\\ q_{\varepsilon}(t)-a_{\varepsilon}(t)&0\end{array}\right),\ F_{\varepsilon}(t, \xi):=\left(\begin{array}{c}0\\ \widehat{f}_{\varepsilon}(t,\xi)\end{array}\right). \tag{4.31}\]
We note that the eigenvalues of \(A_{\varepsilon}(t)\) are given by \(\pm\sqrt{a_{\varepsilon}(t)}\). The symmetriser \(S_{\varepsilon}\) of \(A_{\varepsilon}\) is given by
\[S_{\varepsilon}(t)=\left(\begin{array}{cc}a_{\varepsilon}(t)&0\\ 0&1\end{array}\right), \tag{4.32}\]
i.e., we have
\[S_{\varepsilon}A_{\varepsilon}-A_{\varepsilon}^{*}S_{\varepsilon}=0. \tag{4.33}\]
If we now define the energy
\[E_{\varepsilon}(t,\xi):=(S_{\varepsilon}(t)U_{\varepsilon}(t,\xi),U_{ \varepsilon}(t,\xi)), \tag{4.34}\]
then similar to (4.10), we have
\[\inf_{t\in[0,T]}\{a_{\varepsilon}(t),1\}\left|U_{\varepsilon}(t,\xi)\right|^ {2}\leq E_{\varepsilon}(t,\xi)\leq\sup_{t\in[0,T]}\{a_{\varepsilon}(t),1\} \left|U_{\varepsilon}(t,\xi)\right|^{2}. \tag{4.35}\]
Recall that \(a\) and \(q\) are distributions with supports contained in \([0,T]\) and \(\psi\in C_{0}^{\infty}(\mathbb{R}),\psi\geq 0,\)\(\mbox{supp}(\psi)\subseteq K.\) Given that the distributions \(a\) and \(q\) may be considered as supported in the interval \([0,T]\), it is sufficient to assume \(K=[0,T]\) throughout the article. By the structure theorem for compactly supported distributions, there exist \(L_{1},L_{2}\in\mathbb{N}\) and \(c_{1},c_{2}>0\) such that
\[\left|\partial_{t}^{k}a_{\varepsilon}(t)\right|\leq c_{1}\omega(\varepsilon)^ {-L_{1}-k}\quad\mbox{and}\quad\left|\partial_{t}^{k}q_{\varepsilon}(t)\right| \leq c_{2}\omega(\varepsilon)^{-L_{2}-k}, \tag{4.36}\]
for all \(k\in\mathbb{N}_{0}\) and \(t\in[0,T]\). Since \(a\geq a_{0}>0\) therefore we can write
\[a_{\varepsilon}(t)=\left(a*\psi_{\omega(\varepsilon)}\right)(t)=\left\langle a,\tau_{t}\tilde{\psi}_{\omega(\varepsilon)}\right\rangle\geq\tilde{a}_{0}>0, \tag{4.37}\]
where \(\tilde{\psi}(x)=\psi(-x),x\in\mathbb{R}\) and \(\tau_{t}\psi(\xi)=\psi(\xi-t),\xi\in\mathbb{R}\). Combining the inequalities (4.36) and (4.37), there exist two positive constants \(c_{0}\) and \(c_{1}\) such that
\[c_{0}\left|U_{\varepsilon}(t,\xi)\right|^{2}\leq E_{\varepsilon}(t,\xi)\leq \left(1+c_{1}\omega(\varepsilon)^{-L_{1}}\right)\left|U_{\varepsilon}(t,\xi) \right|^{2}. \tag{4.38}\]
Then we can calculate
\[\partial_{t}E_{\varepsilon}(t,\xi) = \left(\partial_{t}S_{\varepsilon}(t)U_{\varepsilon}(t,\xi),U_{ \varepsilon}(t,\xi)\right)+i\langle\xi\rangle\left((S_{\varepsilon}A_{ \varepsilon}-A_{\varepsilon}^{*}S_{\varepsilon})\left(t\right)U_{\varepsilon }(t,\xi),U_{\varepsilon}(t,\xi)\right)+\] \[i\langle\xi\rangle^{-1}\left((S_{\varepsilon}Q_{\varepsilon}-Q_ {\varepsilon}^{*}S_{\varepsilon})\left(t\right)U_{\varepsilon}(t,\xi),U_{ \varepsilon}(t,\xi)\right)+2\operatorname{Re}(S_{\varepsilon}(t)F_{ \varepsilon}(t,\xi),U_{\varepsilon}(t,\xi))\] \[= \left(\partial_{t}S_{\varepsilon}(t)U_{\varepsilon}(t,\xi),U_{ \varepsilon}(t,\xi)\right)+i\langle\xi\rangle^{-1}\left((S_{\varepsilon}Q_{ \varepsilon}-Q_{\varepsilon}^{*}S_{\varepsilon})\left(t\right)U_{\varepsilon}(t,\xi),U_{\varepsilon}(t,\xi)\right)+\] \[2\operatorname{Re}(S_{\varepsilon}(t)F_{\varepsilon}(t,\xi),U_ {\varepsilon}(t,\xi))\] \[\leq \left(\left\|\partial_{t}S_{\varepsilon}(t)\right\|+\left\|\left( S_{\varepsilon}Q_{\varepsilon}-Q_{\varepsilon}^{*}S_{\varepsilon}\right)(t)\right\|+ \left\|S_{\varepsilon}(t)\right\|\right)\left|U_{\varepsilon}(t,\xi)\right|^ {2}+\] \[\|S_{\varepsilon}(t)\||F_{\varepsilon}(t,\xi)|^{2}\] \[\leq \left(1+|a_{\varepsilon}^{\prime}(t)|+|q_{\varepsilon}(t)|+2|a_{ \varepsilon}(t)|\right)\left|U_{\varepsilon}(t,\xi)\right|^{2}+\left(1+|a_{ \varepsilon}(t)|\right)\left|F_{\varepsilon}(t,\xi)\right|^{2}.\]
Combining the above estimates with (4.36) and (4.38), we obtain
\[\partial_{t}E_{\varepsilon}(t,\xi) \leq c_{0}^{-1}\left(1+c_{1}\omega(\varepsilon)^{-L_{1}-1}+c_{2}\omega( \varepsilon)^{-L_{2}}+2c_{1}\omega(\varepsilon)^{-L_{1}}\right)E_{\varepsilon} (t,\xi)+ \tag{4.40}\] \[\left(1+c_{1}\omega(\varepsilon)^{-L_{1}}\right)|F_{\varepsilon} (t,\xi)|^{2}\] \[= \kappa_{1}\left(1+\omega(\varepsilon)^{-L_{1}-1}+\omega( \varepsilon)^{-L_{2}}+\omega(\varepsilon)^{-L_{1}}\right)E_{\varepsilon}(t,\xi)+\] \[\kappa_{2}\left(1+\omega(\varepsilon)^{-L_{1}}\right)|F_{ \varepsilon}(t,\xi)|^{2},\]
where \(\kappa_{1}=c_{0}^{-1}\max\{1,2c_{1},c_{2}\}\) and \(\kappa_{2}=\max\{1,c_{1}\}\). Applying Gronwall's lemma to the inequality (4.40), we obtain
\[E_{\varepsilon}(t,\xi)\leq e^{\int_{0}^{t}\kappa_{1}\left(1+ \omega(\varepsilon)^{-L_{1}-1}+\omega(\varepsilon)^{-L_{2}}+\omega( \varepsilon)^{-L_{1}}\right)\mathrm{d}\tau}\times\\ \left(E_{\varepsilon}(0,\xi)+\kappa_{2}\left(1+\omega(\varepsilon )^{-L_{1}}\right)\int_{0}^{t}|F_{\varepsilon}(\tau,\xi)|^{2}\mathrm{d}\tau \right). \tag{4.41}\]
Combining the inequalities (4.38) and (4.41), we obtain
\[c_{0}\left|U_{\varepsilon}(t,\xi)\right|^{2} \leq E_{\varepsilon}(t,\xi)\] \[\leq e^{\kappa_{1}\left(1+\omega(\varepsilon)^{-L_{1}-1}+\omega( \varepsilon)^{-L_{2}}+\omega(\varepsilon)^{-L_{1}}\right)T}\times\] \[\left(\left(1+c_{1}\omega(\varepsilon)^{-L_{1}}\right)|U_{ \varepsilon}(0,\xi)|^{2}+\kappa_{2}\left(1+\omega(\varepsilon)^{-L_{1}} \right)\int_{0}^{T}|F_{\varepsilon}(\tau,\xi)|^{2}\mathrm{d}\tau\right)\] \[\leq C_{T}e^{\kappa_{T}\left(\omega(\varepsilon)^{-L_{1}-1}+\omega( \varepsilon)^{-L_{2}}+\omega(\varepsilon)^{-L_{1}}\right)}\left(|U_{\varepsilon }(0,\xi)|^{2}+\int_{0}^{T}|F_{\varepsilon}(\tau,\xi)|^{2}\mathrm{d}\tau \right),\]
where \(C_{T}=e^{\kappa_{1}T}\max\{1,c_{1},\kappa_{2}\}\) and \(\kappa_{T}=2+2\kappa_{1}T\). This gives
\[|U_{\varepsilon}(t,\xi)|^{2}\leq c_{0}^{-1}C_{T}e^{\kappa_{T}\left(\omega( \varepsilon)^{-L_{1}-1}+\omega(\varepsilon)^{-L_{2}}+\omega(\varepsilon)^{-L _{1}}\right)}\left(|U_{\varepsilon}(0,\xi)|^{2}+\int_{0}^{T}|F_{\varepsilon}( \tau,\xi)|^{2}\mathrm{d}\tau\right). \tag{4.43}\]
Putting \(\omega(\varepsilon)\sim|\log(\varepsilon)|^{-1}\) and recalling the definition of \(U_{\varepsilon}\), we get
\[\langle\xi\rangle^{2}|\widehat{u}_{\varepsilon}(t,\xi)|^{2}+| \partial_{t}\widehat{u}_{\varepsilon}(t,\xi)|^{2}\\ \lesssim\varepsilon^{-2L_{1}-L_{2}-1}\left(\langle\xi\rangle^{2} |\widehat{u}_{0}(\xi)|^{2}+|\widehat{u}_{1}(\xi)|^{2}+\int_{0}^{T}|\widehat{f} _{\varepsilon}(\tau,\xi)|^{2}\mathrm{d}\tau\right). \tag{4.44}\]
Multiplying by powers of \(\langle\xi\rangle\) for any \(s\in\mathbb{R}\) and using the \(\mathcal{H}_{\hbar,V}\)-Plancherel formula, we obtain
\[\|u_{\varepsilon}(t,\cdot)\|_{\mathrm{H}^{1+s}_{\mathcal{H}_{\hbar,V}}}^{2}+\|\partial_{t}u_{\varepsilon}(t,\cdot)\|_{\mathrm{H}^{s}_{\mathcal{H}_{\hbar,V}}}^{2}\\ \lesssim\varepsilon^{-2L_{1}-L_{2}-1}\left(\|u_{0}\|_{\mathrm{H}^{1+s}_{\mathcal{H}_{\hbar,V}}}^{2}+\|u_{1}\|_{\mathrm{H}^{s}_{\mathcal{H}_{\hbar,V}}}^{2}+\|f_{\varepsilon}\|_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{\hbar,V}})}^{2}\right), \tag{4.45}\]
with the constant independent of \(\hbar\) and \(t\in[0,T]\). Since \((f_{\varepsilon})_{\varepsilon}\) is an \(L^{2}([0,T];\mathrm{H}_{\mathcal{H}_{\hbar,V}}^{s})\)-moderate regularisation of \(f\), there exist \(L_{3}\in\mathbb{N}_{0}\) and \(c>0\) such that
\[\|f_{\varepsilon}\|_{L^{2}([0,T];\mathrm{H}_{\mathcal{H}_{\hbar,V}}^{s})} \leq c\varepsilon^{-L_{3}}. \tag{4.46}\]
On integrating the estimate (4.45) with respect to the variable \(t\in[0,T]\) and then combining together with (4.46), we obtain
\[\left\|u_{\varepsilon}\right\|_{L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}} )}\lesssim\varepsilon^{-L_{1}-L_{2}-L_{3}}\text{ and }\left\|\partial_{t}u_{ \varepsilon}\right\|_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})} \lesssim\varepsilon^{-L_{1}-L_{2}-L_{3}-1}. \tag{4.47}\]
Therefore, we deduce that \(u_{\varepsilon}\) is \(L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})\)-moderate. This concludes the proof.
Thus, we have proved the existence of a very weak solution for the Cauchy problem (1.3). We will now prove the uniqueness of the very weak solution in the sense of Definition 2.5.
Proof of Theorem 2.7.: Let \((u_{\varepsilon})_{\varepsilon}\) and \((\tilde{u}_{\varepsilon})_{\varepsilon}\) be the families of solutions corresponding to the Cauchy problems (2.6) and (2.7), respectively. Denoting \(w_{\varepsilon}(t,k):=u_{\varepsilon}(t,k)-\tilde{u}_{\varepsilon}(t,k)\), we get
\[\left\{\begin{array}{l}\partial_{t}^{2}w_{\varepsilon}(t,k)+a_{\varepsilon} (t)\mathcal{H}_{h,V}w_{\varepsilon}(t,k)+q_{\varepsilon}(t)w_{\varepsilon}(t, k)=g_{\varepsilon}(t,k),\quad(t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ w_{\varepsilon}(0,k)=0,\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}w_{\varepsilon}(0,k)=0,\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{4.48}\]
where
\[g_{\varepsilon}(t,k):=(\tilde{a}_{\varepsilon}-a_{\varepsilon})\left(t\right) \mathcal{H}_{h,V}\tilde{u}_{\varepsilon}(t,k)+(\tilde{q}_{\varepsilon}-q_{ \varepsilon})\left(t\right)\tilde{u}_{\varepsilon}(t,k)+(f_{\varepsilon}- \tilde{f}_{\varepsilon})(t,k). \tag{4.49}\]
Since \((a_{\varepsilon}-\tilde{a}_{\varepsilon})_{\varepsilon}\) is \(L^{\infty}_{1}\)-negligible, \((q_{\varepsilon}-\tilde{q}_{\varepsilon})_{\varepsilon}\) is \(L^{\infty}\)-negligible, and \((f_{\varepsilon}-\tilde{f}_{\varepsilon})_{\varepsilon}\) is \(L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\)-negligible, it follows that \(g_{\varepsilon}\) is \(L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\)-negligible. Taking the Fourier transform with respect to \(k\in\hbar\mathbb{Z}^{n}\) and then using a transformation similar to that in Theorem 2.4, the Cauchy problem (4.48) reduces to
\[\left\{\begin{array}{l}W_{\varepsilon}^{\prime}(t,\xi)=i\langle\xi\rangle A _{\varepsilon}(t)W_{\varepsilon}(t,\xi)+i\langle\xi\rangle^{-1}Q_{\varepsilon }(t)W_{\varepsilon}(t,\xi)+G_{\varepsilon}(t,\xi),\quad\xi\in\mathcal{I}_{ \hbar},\\ W_{\varepsilon}(0,\xi)=0,\quad\xi\in\mathcal{I}_{\hbar},\end{array}\right. \tag{4.50}\]
where \(A_{\varepsilon}(t),Q_{\varepsilon}(t)\) are given in (4.31) and \(G_{\varepsilon}(t,\xi)=[0,\widehat{g_{\varepsilon}}(t,\xi)]^{\mathrm{T}}\). If we now define the energy
\[E_{\varepsilon}(t,\xi):=(S_{\varepsilon}(t)W_{\varepsilon}(t,\xi),W_{ \varepsilon}(t,\xi)), \tag{4.51}\]
where \(S_{\varepsilon}(t)\) is given by (4.32), then similar to estimate (4.38), we have
\[c_{0}\left|W_{\varepsilon}(t,\xi)\right|^{2}\leq E_{\varepsilon}(t,\xi)\leq \left(1+c_{1}\omega(\varepsilon)^{-L_{1}}\right)\left|W_{\varepsilon}(t,\xi) \right|^{2}, \tag{4.52}\]
where \(c_{0}\) and \(c_{1}\) are positive constants. Then we can calculate
\[\begin{split}\partial_{t}E_{\varepsilon}(t,\xi)&\leq\left(\|\partial_{t}S_{\varepsilon}(t)\|+\|\left(S_{\varepsilon}Q_{\varepsilon}-Q_{\varepsilon}^{*}S_{\varepsilon}\right)(t)\|\right)\left|W_{\varepsilon}(t,\xi)\right|^{2}+2\|S_{\varepsilon}(t)\|\,|G_{\varepsilon}(t,\xi)|\,|W_{\varepsilon}(t,\xi)|\\ &\leq\left(\|\partial_{t}S_{\varepsilon}(t)\|+\|\left(S_{\varepsilon}Q_{\varepsilon}-Q_{\varepsilon}^{*}S_{\varepsilon}\right)(t)\|+\|S_{\varepsilon}(t)\|\right)\left|W_{\varepsilon}(t,\xi)\right|^{2}+\|S_{\varepsilon}(t)\|\left|G_{\varepsilon}(t,\xi)\right|^{2}\\ &\leq\left(1+|\partial_{t}a_{\varepsilon}(t)|+|q_{\varepsilon}(t)|+2|a_{\varepsilon}(t)|\right)\left|W_{\varepsilon}(t,\xi)\right|^{2}+\left(1+|a_{\varepsilon}(t)|\right)\left|G_{\varepsilon}(t,\xi)\right|^{2}.\end{split} \tag{4.53}\]
Combining (4.36) and (4.52) with (4.53), and then using Gronwall's lemma, we obtain the following estimate
\[|W_{\varepsilon}(t,\xi)|^{2}\leq c_{0}^{-1}C_{T}e^{\kappa_{T}\left(\omega( \varepsilon)^{-L_{1}-1}+\omega(\varepsilon)^{-L_{2}}+\omega(\varepsilon)^{-L _{1}}\right)}\left(|W_{\varepsilon}(0,\xi)|^{2}+\int_{0}^{T}|G_{\varepsilon}( \tau,\xi)|^{2}\mathrm{d}\tau\right), \tag{4.54}\]
where the constants are the same as in estimate (4.43). Putting \(\omega(\varepsilon)\sim|\log(\varepsilon)|^{-1}\) and using the fact that \(W_{\varepsilon}(0,\xi)\equiv 0\) for all \(\varepsilon\in(0,1]\), we get
\[|W_{\varepsilon}(t,\xi)|^{2}\lesssim\varepsilon^{-2L_{1}-L_{2}-1}\int_{0}^{T}| G_{\varepsilon}(\tau,\xi)|^{2}\mathrm{d}\tau,\quad(t,\xi)\in[0,T]\times\mathcal{I}_{h}, \tag{4.55}\]
with the constant independent of \(t\in[0,T]\) and \(\xi\in\mathcal{I}_{h}\). Recalling the definition of \(W_{\varepsilon}\), we get
\[\langle\xi\rangle^{2}|\widehat{w}_{\varepsilon}(t,\xi)|^{2}+|\partial_{t} \widehat{w}_{\varepsilon}(t,\xi)|^{2}\lesssim\varepsilon^{-2L_{1}-L_{2}-1} \int_{0}^{T}|\widehat{g}_{\varepsilon}(\tau,\xi)|^{2}\mathrm{d}\tau. \tag{4.56}\]
Multiplying by powers of \(\langle\xi\rangle\) for any \(s\in\mathbb{R}\) and using the \(\mathcal{H}_{h,V}\)-Plancherel formula, we obtain
\[\|w_{\varepsilon}(t,\cdot)\|^{2}_{\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}}+\| \partial_{t}w_{\varepsilon}(t,\cdot)\|^{2}_{\mathrm{H}^{s}_{\mathcal{H}_{h,V }}}\lesssim\varepsilon^{-2L_{1}-L_{2}-1}\|g_{\varepsilon}\|^{2}_{L^{2}([0,T] ;\mathrm{H}^{s}_{\mathcal{H}_{h,V}})}, \tag{4.57}\]
for all \(t\in[0,T]\). Since \(g_{\varepsilon}\) is \(L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\)-negligible, we obtain
\[\|w_{\varepsilon}(t,\cdot)\|^{2}_{\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}}+\| \partial_{t}w_{\varepsilon}(t,\cdot)\|^{2}_{\mathrm{H}^{s}_{\mathcal{H}_{h,V} }}\lesssim\varepsilon^{-2L_{1}-L_{2}-1}\varepsilon^{2L_{1}+L_{2}+1+q}= \varepsilon^{q},\quad\text{ for all }q\in\mathbb{N}_{0}, \tag{4.58}\]
for all \(t\in[0,T]\). On integrating the above estimate with respect to the variable \(t\in[0,T]\), we obtain
\[\|w_{\varepsilon}\|^{2}_{L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})}+ \|\partial_{t}w_{\varepsilon}\|^{2}_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_ {h,V}})}\lesssim\varepsilon^{q},\quad\text{ for all }q\in\mathbb{N}_{0}, \tag{4.59}\]
with the constant independent of \(\hbar\) and \(t\in[0,T]\). Thus \((u_{\varepsilon}-\tilde{u}_{\varepsilon})_{\varepsilon}\) is \(L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})\)-negligible. This completes the proof.
Thus, we have proved that the very weak solution is unique in the sense of Definition 2.5. We will now prove the consistency of very weak solutions.
Proof of Theorem 2.8.: Let \(\tilde{u}\) be the classical solution given by Theorem 2.1. By definition, we know that
\[\left\{\begin{array}{l}\partial_{t}^{2}\tilde{u}(t,k)+a(t)\mathcal{H}_{h,V} \tilde{u}(t,k)+q(t)\tilde{u}(t,k)=f(t,k),\quad(t,k)\in(0,T]\times\hbar\mathbb{ Z}^{n},\\ \tilde{u}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}\tilde{u}(0,k)=u_{1}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{4.60}\]
and there exists a net \((u_{\varepsilon})_{\varepsilon}\) such that
\[\left\{\begin{array}{l}\partial_{t}^{2}u_{\varepsilon}(t,k)+a_{\varepsilon}(t) \mathcal{H}_{h,V}u_{\varepsilon}(t,k)+q_{\varepsilon}(t)u_{\varepsilon}(t,k)=f_ {\varepsilon}(t,k),\quad(t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ u_{\varepsilon}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}u_{\varepsilon}(0,k)=u_{1}(k),\quad k\in\hbar\mathbb{Z}^{n}. \end{array}\right. \tag{4.61}\]
Observing that the nets \(\left(a_{\varepsilon}-a\right)_{\varepsilon},\left(q_{\varepsilon}-q\right)_{\varepsilon}\) and \(\left(f_{\varepsilon}-f\right)_{\varepsilon}\) converge to \(0\) for \(a\in L_{1}^{\infty}([0,T])\), \(q\in L^{\infty}([0,T])\) and \(f\in L^{2}([0,T];\mathrm{H}_{\mathcal{H}_{h,V}}^{s})\), we can rewrite (4.60) as
\[\left\{\begin{array}{l}\partial_{t}^{2}\tilde{u}(t,k)+a_{\varepsilon}(t) \mathcal{H}_{h,V}\tilde{u}(t,k)+q_{\varepsilon}(t)\tilde{u}(t,k)=f_{ \varepsilon}(t,k)+g_{\varepsilon}(t,k),\ (t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ \tilde{u}(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}\tilde{u}(0,k)=u_{1}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{4.62}\]
where
\[g_{\varepsilon}(t,k):=\left(a_{\varepsilon}-a\right)(t)\mathcal{H}_{h,V} \tilde{u}(t,k)+\left(q_{\varepsilon}-q\right)(t)\tilde{u}(t,k)+\left(f-f_{ \varepsilon}\right)(t,k),\]
\(g_{\varepsilon}\in L^{2}([0,T];\mathrm{H}_{\mathcal{H}_{h,V}}^{s})\) and \(g_{\varepsilon}\to 0\) in \(L^{2}([0,T];\mathrm{H}_{\mathcal{H}_{h,V}}^{s})\) as \(\varepsilon\to 0\). From (4.61) and (4.62), we get that \(w_{\varepsilon}(t,k):=\left(\tilde{u}-u_{\varepsilon}\right)(t,k)\) solves the Cauchy problem
\[\left\{\begin{array}{l}\partial_{t}^{2}w_{\varepsilon}(t,k)+a_{\varepsilon} (t)\mathcal{H}_{h,V}w_{\varepsilon}(t,k)+q_{\varepsilon}(t)w_{\varepsilon}(t, k)=g_{\varepsilon}(t,k),\quad(t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ w_{\varepsilon}(0,k)=0,\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}w_{\varepsilon}(0,k)=0,\quad k\in\hbar\mathbb{Z}^{n}.\end{array}\right. \tag{4.63}\]
Similar to the proof of Theorem 2.7, the following energy estimate can be easily obtained
\[\begin{split}\partial_{t}E_{\varepsilon}(t,\xi)&\leq\left(\|\partial_{t}S_{\varepsilon}(t)\|+\|\left(S_{\varepsilon}Q_{\varepsilon}-Q_{\varepsilon}^{*}S_{\varepsilon}\right)(t)\|+\|S_{\varepsilon}(t)\|\right)|W_{\varepsilon}(t,\xi)|^{2}+\|S_{\varepsilon}(t)\|\,|G_{\varepsilon}(t,\xi)|^{2}\\ &\leq\left(1+|a_{\varepsilon}^{\prime}(t)|+|q_{\varepsilon}(t)|+2|a_{\varepsilon}(t)|\right)|W_{\varepsilon}(t,\xi)|^{2}+\left(1+|a_{\varepsilon}(t)|\right)|G_{\varepsilon}(t,\xi)|^{2}.\end{split} \tag{4.64}\]
The coefficients are sufficiently regular, so we simply obtain
\[\partial_{t}E_{\varepsilon}(t,\xi)\leq c_{1}E_{\varepsilon}(t,\xi)+c_{2}\,|G_ {\varepsilon}(t,\xi)|^{2}\,, \tag{4.65}\]
for some positive constants \(c_{1}\) and \(c_{2}\). Then using Gronwall's lemma and energy bounds similar to estimate (4.12), we obtain
\[|W_{\varepsilon}(t,\xi)|^{2}\lesssim|W_{\varepsilon}(0,\xi)|^{2}+\int_{0}^{T} |G_{\varepsilon}(\tau,\xi)|^{2}\mathrm{d}\tau, \tag{4.66}\]
with the constant independent of \(t\in[0,T]\) and \(\xi\in\mathcal{I}_{h}\). Using the Plancherel formula and the fact that \(W_{\varepsilon}(0,\xi)\equiv 0\) for all \(\varepsilon\in(0,1]\), we obtain
\[\|w_{\varepsilon}(t,\cdot)\|_{\mathrm{H}_{\mathcal{H}_{h,V}}^{1+s}}^{2}+\|\partial_{t}w_{\varepsilon}(t,\cdot)\|_{\mathrm{H}_{\mathcal{H}_{h,V}}^{s}}^{2}\lesssim\|g_{\varepsilon}\|_{L^{2}([0,T];\mathrm{H}_{\mathcal{H}_{h,V}}^{s})}^{2}, \tag{4.67}\]
for all \(t\in[0,T]\). On integrating the above estimate with respect to the variable \(t\in[0,T]\), we obtain
\[\|w_{\varepsilon}\|^{2}_{L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})}+\| \partial_{t}w_{\varepsilon}\|^{2}_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h, V}})}\lesssim\|g_{\varepsilon}\|^{2}_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})}, \tag{4.68}\]
with the constant independent of \(\hbar\) and \(t\in[0,T]\). Since \(g_{\varepsilon}\to 0\) in \(L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})\), we have
\[w_{\varepsilon}\to 0\ \text{in}\ L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}), \quad\varepsilon\to 0, \tag{4.69}\]
i.e.,
\[u_{\varepsilon}\to\tilde{u}\ \text{in}\ L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}),\quad\varepsilon\to 0. \tag{4.70}\]
Furthermore, the limit is the same for every representation of \(u\), since any two representations differ from \(\left(u_{\varepsilon}\right)_{\varepsilon}\) by an \(L^{2}([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}})\)-negligible net. This concludes the proof.
## 5. Semiclassical limit as \(\hbar\to 0\)
In this section we will consider the semiclassical limit of solutions as \(\hbar\to 0\).
Proof of Theorem 2.9.: Consider two Cauchy problems:
\[\left\{\begin{array}{l}\partial_{t}^{2}u(t,k)+a(t)\mathcal{H}_{h,V}u(t,k)+q( t)u(t,k)=f(t,k),\quad(t,k)\in(0,T]\times\hbar\mathbb{Z}^{n},\\ u(0,k)=u_{0}(k),\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}u(0,k)=u_{1}(k),\quad k\in\hbar\mathbb{Z}^{n},\end{array}\right. \tag{5.1}\]
and
\[\left\{\begin{array}{l}\partial_{t}^{2}v(t,x)+a(t)\mathcal{H}_{V}v(t,x)+q(t) v(t,x)=f(t,x),\quad(t,x)\in(0,T]\times\mathbb{R}^{n},\\ v(0,x)=u_{0}(x),\quad x\in\mathbb{R}^{n},\\ \partial_{t}v(0,x)=u_{1}(x),\quad x\in\mathbb{R}^{n},\end{array}\right. \tag{5.2}\]
where \(\mathcal{H}_{V}\) is the usual Schrodinger operator on \(\mathbb{R}^{n}\). The potential in the discrete Schrodinger operator is the restriction of the potential in the usual Schrodinger operator to \(\hbar\mathbb{Z}^{n}\). Here the initial data and the source term of the Cauchy problem (5.1) are the evaluations of the initial data and the source term from (5.2) on the lattice \(\hbar\mathbb{Z}^{n}\). From the equations (5.1) and (5.2), denoting \(w:=u-v\), we get
\[\left\{\begin{array}{l}\partial_{t}^{2}w(t,k)+a(t)\mathcal{H}_{h,V}w(t,k)+q( t)w(t,k)=a(t)\left(\mathcal{H}_{V}-\mathcal{H}_{h,V}\right)v(t,k),\ k\in\hbar\mathbb{Z}^{n},\\ w(0,k)=0,\quad k\in\hbar\mathbb{Z}^{n},\\ \partial_{t}w(0,k)=0,\quad k\in\hbar\mathbb{Z}^{n}.\end{array}\right. \tag{5.3}\]
Since \(w_{0}=w_{1}=0\), applying Theorem 2.1 for the above Cauchy problem and using estimate (2.2), we get
\[\|w(t,\cdot)\|^{2}_{\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}}+\|w_{t }(t,\cdot)\|^{2}_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}} \leq C_{T}\|a\left(\mathcal{H}_{V}-\mathcal{H}_{h,V}\right)v\|^{2}_{L^ {2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})}\] \[\leq C_{T}\|a\|^{2}_{L^{\infty}([0,T])}\|\left(\mathcal{H}_{V}- \mathcal{H}_{h,V}\right)v\|^{2}_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V} })}, \tag{5.4}\]
where the constant \(C_{T}\) is given by
\[C_{T}=c_{0}^{-1}(1+\|a\|_{L^{\infty}})e^{c_{0}^{-1}\left(1+\|a^{\prime}\|_{L^{ \infty}}+\|q\|_{L^{\infty}}+2\|a\|_{L^{\infty}}\right)T}, \tag{5.5}\]
with \(c_{0}=\min\{a_{0},1\}\).
Now we will estimate the term \(\left\|(\mathcal{H}_{V}-\mathcal{H}_{h,V})\,v\right\|_{L^{2}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}})}^{2}\). Let \(\phi\in C^{4}(\mathbb{R}^{n})\). Then, by Taylor's theorem with the Lagrange form of the remainder, we have
\[\phi(\xi+\mathbf{h})=\sum_{|\alpha|\leq 3}\frac{\partial^{\alpha}\phi(\xi)}{ \alpha!}\mathbf{h}^{\alpha}+\sum_{|\alpha|=4}\frac{\partial^{\alpha}\phi(\xi+ \theta_{\xi}\mathbf{h})}{\alpha!}\mathbf{h}^{\alpha}, \tag{5.6}\]
for some \(\theta_{\xi}\in(0,1)\) depending on \(\xi\). Let \(v_{j}\) be the \(j^{th}\) standard basis vector in \(\mathbb{Z}^{n}\), with \(1\) as the \(j^{th}\) component and zeros elsewhere. Taking \(\mathbf{h}=v_{j}\) and \(\mathbf{h}=-v_{j}\) in (5.6), we have
\[\phi(\xi+v_{j})=\phi(\xi)+\phi^{(v_{j})}(\xi)+\frac{1}{2!}\phi^{(2v_{j})}(\xi) +\frac{1}{3!}\phi^{(3v_{j})}(\xi)+\frac{1}{4!}\phi^{(4v_{j})}(\xi+\theta_{j, \xi}v_{j}), \tag{5.7}\]
and
\[\phi(\xi-v_{j})=\phi(\xi)-\phi^{(v_{j})}(\xi)+\frac{1}{2!}\phi^{(2v_{j})}(\xi) -\frac{1}{3!}\phi^{(3v_{j})}(\xi)+\frac{1}{4!}\phi^{(4v_{j})}(\xi-\tilde{ \theta}_{j,\xi}v_{j}), \tag{5.8}\]
for some \(\theta_{j,\xi},\tilde{\theta}_{j,\xi}\in(0,1)\). Using (5.7) and (5.8), we have
\[\phi(\xi+v_{j})+\phi(\xi-v_{j})-2\phi(\xi)=\phi^{(2v_{j})}(\xi)+\frac{1}{4!} \left(\phi^{(4v_{j})}(\xi+\theta_{j,\xi}v_{j})+\phi^{(4v_{j})}(\xi-\tilde{ \theta}_{j,\xi}v_{j})\right).\]
Since \(\delta_{\xi_{j}}^{2}\phi(\xi)=\phi(\xi+v_{j})+\phi(\xi-v_{j})-2\phi(\xi)\), where \(\delta_{\xi_{j}}\phi(\xi):=\phi(\xi+\frac{1}{2}v_{j})-\phi(\xi-\frac{1}{2}v_{j})\), is the usual central difference operator, it follows that
\[\delta_{\xi_{j}}^{2}\phi(\xi) = \phi^{(2v_{j})}(\xi)+\frac{1}{4!}\left(\phi^{(4v_{j})}(\xi+ \theta_{j,\xi}v_{j})+\phi^{(4v_{j})}(\xi-\tilde{\theta}_{j,\xi}v_{j})\right). \tag{5.9}\]
Now by adding all the above \(n\)-equations for \(j=1,\ldots,n\), we get
\[\sum_{j=1}^{n}\delta_{\xi_{j}}^{2}\phi(\xi)=\sum_{j=1}^{n}\phi^{(2v_{j})}(\xi) +\frac{1}{4!}\sum_{j=1}^{n}\left(\phi^{(4v_{j})}(\xi+\theta_{j,\xi}v_{j})+ \phi^{(4v_{j})}(\xi-\tilde{\theta}_{j,\xi}v_{j})\right). \tag{5.10}\]
Let us define a translation operator \(E_{\theta_{j}v_{j}}\phi:\mathbb{R}^{n}\to\mathbb{R}\) by \(E_{\theta_{j}v_{j}}\phi(\xi):=\phi(\xi-\theta_{j,\xi}v_{j})\), then we get
\[\sum_{j=1}^{n}\delta_{\xi_{j}}^{2}\phi(\xi)-\sum_{j=1}^{n}\frac{\partial^{2}}{ \partial\xi_{j}^{2}}\phi(\xi)=\frac{1}{4!}\sum_{j=1}^{n}\left(E_{-\theta_{j}v _{j}}\phi^{(4v_{j})}(\xi)+E_{\tilde{\theta}_{j}v_{j}}\phi^{(4v_{j})}(\xi) \right). \tag{5.11}\]
Now we extend this to \(\hbar\mathbb{Z}^{n}\). Consider a function \(\phi_{h}:\mathbb{R}^{n}\to\mathbb{R}\) defined by \(\phi_{h}(\xi):=\phi(\hbar\xi)\). Clearly \(\phi_{h}\in C^{4}(\mathbb{R}^{n})\), if we take \(\phi\in C^{4}(\mathbb{R}^{n})\). Now we have
\[\mathcal{L}_{1}\phi_{h}(\xi)-\mathcal{L}\phi_{h}(\xi)=\frac{1}{4!}\sum_{j=1}^{ n}\left(E_{-\theta_{j}v_{j}}\phi_{h}^{(4v_{j})}(\xi)+E_{\tilde{\theta}_{j}v_{j}} \phi_{h}^{(4v_{j})}(\xi)\right), \tag{5.12}\]
where \(\mathcal{L}\) is the Laplacian on \(\mathbb{R}^{n}\) and \(\mathcal{L}_{1}\) is the discrete difference Laplacian on \(\mathbb{Z}^{n}\). One can quickly notice that
\[E_{-\theta_{j}v_{j}}\phi_{h}^{(4v_{j})}(\xi)=\phi_{h}^{(4v_{j})}(\xi+\theta_{j, \xi}v_{j})=\hbar^{4}\phi^{(4v_{j})}(\hbar\xi+\hbar\theta_{j,\xi}v_{j})=\hbar^{ 4}E_{-\hbar\theta_{j}v_{j}}\phi^{(4v_{j})}(\hbar\xi). \tag{5.13}\]
Therefore, the equality (5.12) becomes
\[\left(\mathcal{L}_{\hbar}-\hbar^{2}\mathcal{L}\right)\phi(\hbar\xi)=\frac{\hbar^{ 4}}{4!}\sum_{j=1}^{n}\left(E_{-\hbar\theta_{j}v_{j}}\phi^{(4v_{j})}(\hbar\xi)+E _{\hbar\tilde{\theta}_{j}v_{j}}\phi^{(4v_{j})}(\hbar\xi)\right). \tag{5.14}\]
Combining (1.1), (1.5) and (5.14), we get
\[\left(\mathcal{H}_{V}-\mathcal{H}_{h,V}\right)\phi(\hbar\xi)=\left( \hbar^{-2}\mathcal{L}_{\hbar}-\mathcal{L}\right)\phi(\hbar\xi)=\frac{\hbar^{2} }{4!}\sum_{j=1}^{n}\left(E_{-\hbar\theta_{j}v_{j}}\phi^{(4v_{j})}(\hbar\xi)+ \right.\\ \left.E_{\hbar\tilde{\theta}_{j}v_{j}}\phi^{(4v_{j})}(\hbar\xi) \right). \tag{5.15}\]
Hence, it follows that
\[\left\|\left(\mathcal{H}_{V}-\mathcal{H}_{h,V}\right)\phi\right\|_{\mathrm{H} ^{s}_{\mathcal{H}_{h,V}}}^{2}\lesssim\hbar^{4}\max_{1\leq j\leq n}\left(\left\| E_{-\hbar\theta_{j}v_{j}}\phi^{(4v_{j})}\right\|_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}^ {2}+\left\|E_{\hbar\tilde{\theta}_{j}v_{j}}\phi^{(4v_{j})}\right\|_{\mathrm{H} ^{s}_{\mathcal{H}_{h,V}}}^{2}\right). \tag{5.16}\]
Combining the relation (1.9) with the Sobolev embedding theorem (see e.g. [10, Exercise 2.6.17]), we have
\[s>k+\frac{n}{2}\Longrightarrow\mathrm{H}^{s}_{\mathcal{H}_{V}}(\mathbb{R}^{n} )\subseteq C^{k}\left(\mathbb{R}^{n}\right). \tag{5.17}\]
Since \((u_{0},u_{1})\in\mathrm{H}^{1+s}_{\mathcal{H}_{V}}\times\mathrm{H}^{s}_{\mathcal{H}_{V}}\) with \(s>4+\frac{n}{2}\), Theorem 1.4 implies that the classical solution satisfies \(v\in C([0,T];\mathrm{H}^{1+s}_{\mathcal{H}_{V}})\cap C^{1}([0,T];\mathrm{H}^{s}_{\mathcal{H}_{V}})\). Using the embedding (5.17), we get \(v(t,\cdot)\in C^{4}(\mathbb{R}^{n})\), and using the hypothesis \((u_{0}^{(4v_{j})},u_{1}^{(4v_{j})})\in\mathrm{H}^{1+s}_{\mathcal{H}_{V}}\times\mathrm{H}^{s}_{\mathcal{H}_{V}}\) for all \(j=1,\ldots,n\), we deduce that
\[v^{(4v_{j})}(t,\cdot)\in\mathrm{H}^{s}_{\mathcal{H}_{V}}(\mathbb{R}^{n}),\quad \text{for all }t\in[0,T]. \tag{5.18}\]
Now from (5.16), it follows that
\[\left\|\left(\mathcal{H}_{V}-\mathcal{H}_{h,V}\right)v\right\|_{ L^{2([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}}})}^{2}\lesssim\hbar^{4}\max_{1 \leq j\leq n}\left(\left\|E_{-\hbar\theta_{j}v_{j}}v^{(4v_{j})}\right\|_{L^{2([ 0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}}})}^{2}+\right.\\ \left\|E_{\hbar\tilde{\theta}_{j}v_{j}}v^{(4v_{j})}\right\|_{L^{2 ([0,T];\mathrm{H}^{s}_{\mathcal{H}_{h,V}}})}^{2}\right). \tag{5.19}\]
Using (5.4), (5.18) and (5.19), we get \(\left\|w(t,\cdot)\right\|_{\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}}^{2}+\left\|w_{ t}(t,\cdot)\right\|_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}^{2}\to 0\) as \(\hbar\to 0\). Hence \(\left\|w(t,\cdot)\right\|_{\mathrm{H}^{1+s}_{\mathcal{H}_{h,V}}}\to 0\) and \(\left\|w_{t}(t,\cdot)\right\|_{\mathrm{H}^{s}_{\mathcal{H}_{h,V}}}\to 0\) as \(\hbar\to 0\). This concludes the proof of Theorem 2.9.
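The second-order consistency expressed by (5.15) can also be observed numerically. The following minimal one-dimensional Python sketch is our own illustration (it is not part of the argument above, and the test function is an arbitrary smooth choice): it checks that the normalized central difference \(\hbar^{-2}\left(\phi(x+\hbar)+\phi(x-\hbar)-2\phi(x)\right)\) approaches the second derivative with an \(O(\hbar^{2})\) error.

```python
import numpy as np

# A minimal 1-D check of the consistency order behind (5.15): the normalized
# central difference approximates the second derivative with an O(hbar^2) error.
phi = lambda x: np.exp(-x ** 2)                       # smooth test function (assumption)
d2phi = lambda x: (4 * x ** 2 - 2) * np.exp(-x ** 2)  # exact second derivative

x0 = 0.7
for hbar in [0.1, 0.05, 0.025]:
    disc = (phi(x0 + hbar) + phi(x0 - hbar) - 2 * phi(x0)) / hbar ** 2
    err = abs(disc - d2phi(x0))
    # err / hbar^2 stays roughly constant, consistent with the hbar^2 factor in (5.15)
    print(f"hbar = {hbar:5.3f}   error = {err:.3e}   error / hbar^2 = {err / hbar ** 2:.4f}")
```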
We can prove Theorem 2.10 without making any substantial modifications to the proof of Theorem 2.9.
**Acknowledgment**
The authors would like to thank Prof. M. Krishna for his insightful comments. |
2305.03247 | Heavy-ball-based optimal thresholding algorithms for sparse linear
inverse problems | Linear inverse problems arise in diverse engineering fields especially in
signal and image reconstruction. The development of computational methods for
linear inverse problems with sparsity is one of the recent trends in this
field. The so-called optimal $k$-thresholding is a newly introduced method for
sparse optimization and linear inverse problems. Compared to other
sparsity-aware algorithms, the advantage of optimal $k$-thresholding method
lies in that it performs thresholding and error metric reduction simultaneously
and thus works stably and robustly for solving medium-sized linear inverse
problems. However, the runtime of this method is generally high when the size
of the problem is large. The purpose of this paper is to propose an
acceleration strategy for this method. Specifically, we propose a
heavy-ball-based optimal $k$-thresholding (HBOT) algorithm and its relaxed
variants for sparse linear inverse problems. The convergence of these
algorithms is shown under the restricted isometry property. In addition, the
numerical performance of the heavy-ball-based relaxed optimal $k$-thresholding
pursuit (HBROTP) has been evaluated, and simulations indicate that HBROTP
admits robustness for signal and image reconstruction even in noisy
environments. | Zhong-Feng Sun, Jin-Chuan Zhou, Yun-Bin Zhao | 2023-05-05T02:14:55Z | http://arxiv.org/abs/2305.03247v2 | # Heavy-ball-based optimal thresholding algorithms for sparse linear inverse problems
###### Abstract
Linear inverse problems arise in diverse engineering fields especially in signal and image reconstruction. The development of computational methods for linear inverse problems with sparsity is one of the recent trends in this field. The so-called optimal \(k\)-thresholding is a newly introduced method for sparse optimization and linear inverse problems. Compared to other sparsity-aware algorithms, the advantage of optimal \(k\)-thresholding method lies in that it performs thresholding and error metric reduction simultaneously and thus works stably and robustly for solving medium-sized linear inverse problems. However, the runtime of this method is generally high when the size of the problem is large. The purpose of this paper is to propose an acceleration strategy for this method. Specifically, we propose a heavy-ball-based optimal \(k\)-thresholding (HBOT) algorithm and its relaxed variants for sparse linear inverse problems. The convergence of these algorithms is shown under the restricted isometry property. In addition, the numerical performance of the heavy-ball-based relaxed optimal \(k\)-thresholding pursuit (HBROTP) has been evaluated, and simulations indicate that HBROTP admits robustness for signal and image reconstruction even in noisy environments.
Keywords: Sparse linear inverse problems \(\cdot\) optimal \(k\)-thresholding \(\cdot\) heavy-ball method \(\cdot\) restricted isometry property \(\cdot\) phase transition \(\cdot\) image processing

**Mathematics Subject Classification (2020)** 94A12 \(\cdot\) 15A29 \(\cdot\) 90C25 \(\cdot\) 90C20 \(\cdot\) 49M20
## 1 Introduction
In recent years, the linear inverse problem has gained much attention in various fields such as wireless communication [10; 19] and signal/image processing [4; 9; 25; 26; 35; 43; 52]. A typical linear inverse problem concerns the reconstruction of unknown data \(z\in\mathbb{R}^{r}\) from the acquired linear measurements
\[y=\Phi z+\nu, \tag{1.1}\]
where \(\Phi\in\mathbb{R}^{m\times r}\) is a given measurement matrix, \(y\in\mathbb{R}^{m}\) are the acquired measurements, and \(\nu\in\mathbb{R}^{m}\) are the measurement errors. In this paper, we consider the case \(m<r\), for which it is generally impossible to reconstruct the data \(z\) from the linear system (1.1) unless \(z\) possesses a certain structure such as sparsity. Fortunately, in many practical applications, the signal to recover possesses a certain sparse structure or can be sparsely represented under a suitable transformation. For instance, many natural images can be sparsely represented via wavelet transforms. Suppose that \(z\) can be sparsely represented via the basis \(\Psi\in\mathbb{R}^{r\times n}\) (\(r\leq n\)), i.e., \(z=\Psi x\), where the vector \(x\in\mathbb{R}^{n}\) is either \(k\)-sparse or \(k\)-compressible for some integer \(k\ll n\). A vector \(x\) is said to be \(k\)-sparse if \(\|x\|_{0}\leq k\), and \(k\)-compressible if \(x\) can be approximated by a \(k\)-sparse vector, where \(\|\cdot\|_{0}\) denotes the number of nonzero entries of a vector. With a sparse representation of \(z\), the model (1.1) can be written as
\[y=Ax+\nu, \tag{1.2}\]
where \(A=\Phi\Psi\in\mathbb{R}^{m\times n}\) (\(m<n\)) is still referred to as a measurement matrix. In this case, the problem (1.1) is transformed to the so-called sparse linear inverse (SLI) problem which is to reconstruct the sparse data \(x\) via the linear system (1.2). Once the sparse data \(x\) is reconstructed, the original data \(z\) can be immediately obtained by setting \(z:=\Psi x\). The SLI problem can be formulated as an optimization problem (see, e.g., [4; 9; 25; 28; 19; 43; 49]). Typically, it can be formulated as the sparse optimization problem
\[\min_{u}\{\|y-Au\|_{2}^{2}:\|u\|_{0}\leq k\}. \tag{1.3}\]
It can also be formulated as the \(\ell_{1}\)-minimization (basis pursuit) problem
\[\min_{u}\{\|u\|_{1}:Au=y\} \tag{1.4}\]
as well as the LASSO problem
\[\min_{u}\!\!\|y-Au\|_{2}^{2}+\mu\|u\|_{1}, \tag{1.5}\]
where \(\mu>0\) is called a regularization parameter. All these models, (1.3)-(1.5), are widely used in signal and image reconstruction with sparsity.
Depending on the problem formulations, several classes of algorithms for SLI problems have been developed over the past decades, including the thresholding algorithms [25; 26; 28], greedy methods [20; 42; 48], convex optimization [14; 15; 18; 57], nonconvex optimization [17], and Bayesian methods [45; 51]. In this paper, we focus on the model (1.3) for which the thresholding algorithms are particularly convenient to develop. The thresholding method was first proposed by Donoho and Johnstone [23]. It has experienced a significant development since 1994 and has
evolved into a large family of algorithms which includes the hard thresholding [6; 8; 9; 27; 33; 38], soft thresholding [21; 22; 24] and optimal \(k\)-thresholding algorithms [56; 58]. It is worth stressing that an advantage of thresholding methods is that they guarantee that the generated points are feasible for the problem (1.3). The simplest thresholding method might be the iterative hard thresholding (IHT) [8]. The combination of IHT and orthogonal projection yields the hard thresholding pursuit (HTP) [27]. Due to low computational complexity, IHT and HTP have been widely used in signal reconstruction with compressive samplings [6; 7; 9; 33].
However, as pointed out in [56; 58], performing hard thresholding on non-sparse iterates may not necessarily reduce the objective value of (1.3) and thus may cause numerical oscillation during iterations. Thus the optimal \(k\)-thresholding operator was introduced in [56] (see also [58]) to alleviate such a weakness of hard thresholding. This operator performs thresholding on iterates and, at the same time, reduces the objective value of (1.3). The optimal \(k\)-thresholding (OT) and optimal \(k\)-thresholding pursuit (OTP) algorithms were first developed in [56]. Recall that the optimal \(k\)-thresholding of a given vector \(v\in\mathbb{R}^{n}\) is defined as
\[\min_{w}\{\left\|y-A(v\circ w)\right\|_{2}^{2}:\ \mathbf{e}^{T}w=k,\ w\in\{0,1 \}^{n}\}, \tag{1.6}\]
where \(\mathbf{e}=(1,1,\ldots,1)^{T}\in\mathbb{R}^{n}\), \(\{0,1\}^{n}\) is the set of \(n\)-dimensional binary vectors and \(v\circ w:=(v_{1}w_{1},\ldots,v_{n}w_{n})^{T}\) denotes the Hadamard product of two vectors. However, from a computational point of view, it is generally more convenient to solve the following convex optimization
\[\min_{w}\{\left\|y-A(v\circ w)\right\|_{2}^{2}:\ \ \mathbf{e}^{T}w=k,\ \ 0\leq w\leq\mathbf{e}\}, \tag{1.7}\]
which is a tight relaxation of (1.6). This problem is referred to as the data compressing problem in [56; 58]. Based on (1.7), the relaxed optimal \(k\)-thresholding (ROT\(\omega\)) and relaxed optimal \(k\)-thresholding pursuit (ROTP\(\omega\)) algorithms were proposed in [56; 58], where \(\omega\) represents the number of times data compression is performed in the algorithms. When \(\omega=1\), the algorithm is termed ROTP. Some modifications of ROTP using partial gradients and Newton-type search directions were studied recently in [39; 40]. While the convex optimization problem (1.7) can be efficiently solved by existing convex optimization solvers, solving such a problem remains time-consuming when the size of the problem is large. Thus it is important to study how the computational cost of ROTP-type methods might be reduced and how these methods can be accelerated by integrating an acceleration technique such as the heavy-ball (HB) or Nesterov's technique. By using linearization together with a certain binary regularization method, the so-called natural thresholding (NT) algorithm was developed recently in [59], whose computational complexity is significantly lower than that of ROTP\(\omega\) since the NT algorithm avoids solving any optimization problem like (1.7). In this paper, we investigate the ROTP-type algorithms from the acceleration perspective by showing that the HB technique is able to improve the performance of the ROTP-type algorithms.
The HB method introduced by Polyak [44] can be seen as a two-step method which combines a momentum term with the gradient descent direction. In recent years, HB has found wide applications in image processing, data analysis, distributed optimization and undirected networks [3; 31; 32; 34; 36; 41; 50; 53]. The theoretical analysis (global convergence and local convergence rate) for HB methods
has been investigated by several researchers. For example, the linear convergence rate of HB for unconstrained convex optimization problems was established by Aujol et al. [3]; Mohammadi et al. [41] analyzed the relation between the convergence rate of HB and its variance amplification when the objective function of the problem is strongly convex and quadratic; Xin and Khan [53] showed that the distributed HB method with appropriate parameters attains a global \(R\)-linear rate, and it has potential acceleration compared with some first-order methods for ill-conditioned problems. Other acceleration techniques, including Nesterov's, can be found in [32; 34; 36; 41].
In this paper, we merge the optimal \(k\)-thresholding and the HB acceleration technique to form the following algorithms for the SLI problem formulated as (1.3):
* Heavy-ball-based optimal \(k\)-thresholding (HBOT),
* Heavy-ball-based optimal \(k\)-thresholding pursuit (HBOTP),
* Heavy-ball-based relaxed optimal \(k\)-thresholding (HBROT\(\omega\)) and its pursuit version, the heavy-ball-based relaxed optimal \(k\)-thresholding pursuit (HBROTP\(\omega\)),
where the integer parameter \(\omega\) denotes the number of times (1.7) is solved at every iteration. The global convergence of these algorithms is established in this paper under the restricted isometry property (RIP) introduced by Candes and Tao [14], and the main results are summarized in Theorems 1 and 2. The performances of HBROTP (i.e., HBROTP\(\omega\) with \(\omega=1\)) and several existing algorithms such as ROTP2 [56], partial gradient ROTP (PGROTP) [40], \(\ell_{1}\)-minimization [18], orthogonal matching pursuit (OMP) [25; 48] and the projected linearized Bregman method (PLB) [11] are compared through numerical experiments. The phase transition with Gaussian random data is adopted to demonstrate the performances of the proposed algorithms for SLI problems.
The algorithm development for linear inverse problems is usually model-based in the sense that different formulations of the problem require different algorithms. The \(\ell_{1}\)-minimization method is naturally applied to the model (1.4), and a more general convex optimization solver can be directly used to handle the LASSO problem (1.5). However, \(\ell_{1}\)-minimization and LASSO solvers are not convenient for solving the problem (1.3), for which a thresholding method might be more suitable. The optimal \(k\)-thresholding method is proposed to enhance the success rates and stability of existing hard thresholding algorithms. Unlike \(\ell_{1}\)-minimization and LASSO solvers, the hard or optimal \(k\)-thresholding procedures can reconstruct any prescribed \(k\) significant components of interest of the target signals without the need to reconstruct the whole signal. Also, a recent study in [59] indicates that a certain modification of the optimal \(k\)-thresholding method may lead to a fast and efficient algorithm which has far lower computational cost than most existing algorithms including \(\ell_{1}\)-minimization and LASSO solvers. Thus a further study of the optimal \(k\)-thresholding algorithms on their acceleration, simplification and modification remains interesting and important from the viewpoints of both practical applications and algorithmic development.
While our discussion in this paper is focused on hard/optimal thresholding algorithms, it is worth briefly mentioning the class of soft thresholding methods, which is widely used for signal processing as well. The soft thresholding method can be derived in different ways. Taking the model (1.4) as an example, a soft thresholding method can be developed from the Bregman regularization framework [54], which involves solving the convex subproblem (1.5) at each iteration.
Based on (1.5), using linearization and \(\ell_{2}\)-proximity can lead to the linearized Bregman (LB) methods [13; 54; 55], which form a class of soft thresholding methods. Moreover, linearization combined with Krylov subspace projection can also yield a soft thresholding method such as the PLB in [11]. Other soft thresholding methods can be found in [4; 10; 37]. The soft thresholding method needs to select a regularization parameter, but how to select such a parameter so that the algorithm is guaranteed to solve an SLI problem remains an open question. The numerical experiments in Sections 5.1 and 5.3 indicate that the HBROTP algorithm proposed in this paper might be more stable and robust than \(\ell_{1}\)-minimization and PLB for data reconstruction in many cases.
This paper is organized as follows. In Section 2, we introduce some notations, definitions, useful inequalities and algorithms. In Section 3, we discuss the error bounds and convergence of HBOT and HBOTP under the RIP. The error bounds for HBROT\(\omega\) and HBROTP\(\omega\) are given in Section 4. Numerical results from synthetic signals and real images are reported in Section 5.
## 2 Preliminary and algorithms
### Notations
Denote by \(N:=\{1,2,\ldots,n\}\). Given a subset \(\Omega\subseteq N\), \(\overline{\Omega}:=N\setminus\Omega\) denotes the complement set of \(\Omega\) and \(|\Omega|\) denotes its cardinality. For a vector \(z\in\mathbb{R}^{n}\), the support of \(z\) is represented as \(\mathrm{supp}(z):=\{i\in N:z_{i}\neq 0\}\), and the vector \(z_{\Omega}\in\mathbb{R}^{n}\) is obtained by zeroing out the elements of \(z\) supported on \(\overline{\Omega}\) and retaining those supported on \(\Omega\). Given the sparse level \(k\), \(\mathcal{L}_{k}(z)\) denotes the index set of the \(k\) largest absolute entries of \(z\). As usual, \(\mathcal{H}_{k}(z):=z_{\mathcal{L}_{k}(z)}\) is called the hard thresholding of \(z\). The symbols \(\|\cdot\|_{1}\) and \(\|\cdot\|_{2}\) represent \(\ell_{1}\)-norm and \(\ell_{2}\)-norm of a vector, respectively. Throughout the paper, \(\mathbf{e}\) denotes the vector of ones. \(\mathcal{W}^{k}\) and \(\mathcal{P}^{k}\) are two sets in \(\mathbb{R}^{n}\) defined as
\[\mathcal{W}^{k}=\{w\in\mathbb{R}^{n}:\ \mathbf{e}^{T}w=k,\ w\in\{0,1\}^{n}\},\ \mathcal{P}^{k}=\{w\in\mathbb{R}^{n}:\mathbf{e}^{T}w=k,\ 0\leq w\leq\mathbf{e}\}.\]
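For concreteness, the hard thresholding operator \(\mathcal{H}_{k}(\cdot)\) introduced above can be implemented in a few lines. The following Python sketch is only an illustration (it is not code from the cited references): it retains the \(k\) largest-magnitude entries of a vector and zeros out the rest.

```python
import numpy as np

def hard_thresholding(z, k):
    """H_k(z): retain the k largest entries of z in absolute value, zero out the rest."""
    idx = np.argsort(np.abs(z))[-k:]   # index set L_k(z)
    out = np.zeros_like(z)
    out[idx] = z[idx]
    return out

# Example: keep the 2 largest-magnitude entries of a small vector.
print(hard_thresholding(np.array([0.3, -2.0, 0.1, 1.5]), 2))   # [ 0.  -2.   0.   1.5]
```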
### Definitions and basic inequalities
Let us first recall the restricted isometry property (RIP) of a matrix and the optimal \(k\)-thresholding operator \(Z_{k}^{\#}(\cdot)\).
Definition 1 [14]: Given a matrix \(A\in\mathbb{R}^{m\times n}\) with \(m<n\), the \(k\)th order restricted isometry constant (RIC) of \(A\), denoted by \(\delta_{k}\), is the smallest nonnegative number \(\delta\) such that
\[(1-\delta)\|u\|_{2}^{2}\leq\|Au\|_{2}^{2}\leq(1+\delta)\|u\|_{2}^{2} \tag{2.1}\]
for all \(k\)-sparse vectors \(u\in\mathbb{R}^{n}\). The matrix \(A\) is said to satisfy the RIP of order \(k\) if \(\delta_{k}<1\).
Definition 2 [56; 58]: Given a vector \(u\in\mathbb{R}^{n}\), let \(w^{*}(u)\) be the solution of the binary optimization problem
\[\min_{w}\left\{\|y-A(u\circ w)\|_{2}^{2}:\ w\in\mathcal{W}^{k}\right\}.\]
Then the \(k\)-sparse vector \(Z_{k}^{\#}(u):=u\circ w^{*}(u)\) is called the optimal \(k\)-thresholding of \(u\), and \(Z_{k}^{\#}(\cdot)\) is called the optimal \(k\)-thresholding operator.
The two lemmas below will be used for the analysis in Sections 3 and 4.
Lemma 1[27]: _Let \(u\in\mathbb{R}^{n}\), \(v\in\mathbb{R}^{m}\), \(W\subseteq N\) and \(t\in N\)._
(i) _If \(\left|W\cup\text{supp}(u)\right|\leq t\), then_
\[\left\|\left[(I-A^{T}A)u\right]_{W}\right\|_{2}\leq\delta_{t}\|u\|_{2}.\]
(ii) _If \(\left|W\right|\leq t\), then_
\[\left\|\left(A^{T}v\right)_{W}\right\|_{2}\leq\sqrt{1+\delta_{t}}\|v\|_{2}.\]
Lemma 2 [46]: _Let \(\{a^{p}\}\subseteq\mathbb{R}\) \((p=0,1,\dots)\) be a nonnegative sequence satisfying_
\[a^{p+1}\leq b_{1}a^{p}+b_{2}a^{p-1}+b_{3}\]
_for \(p\geq 1,\) where \(b_{1},b_{2}\) and \(b_{3}\geq 0\) are constants and \(b_{1}+b_{2}<1.\) Then_
\[a^{p}\leq\theta^{p-1}\left(a^{1}+(\theta-b_{1})a^{0}\right)+\frac{b_{3}}{1-\theta}\]
_for \(p\geq 2,\) where \(0\leq\theta<1\) is a constant given by \(\theta=(b_{1}+\sqrt{b_{1}^{2}+4b_{2}})/2<1.\)_
### Algorithms
Given iterates \(x^{p-1}\) and \(x^{p}\), the heavy-ball search direction is defined as
\[d^{p}=\alpha A^{T}(y-Ax^{p})+\beta(x^{p}-x^{p-1}),\]
where \(\alpha>0\) and \(\beta\geq 0\) are two parameters. We use the optimal \(k\)-thresholding operator \(Z_{k}^{\#}(\cdot)\) to generate the new iterate \(x^{p+1}\), i.e.,
\[x^{p+1}=Z_{k}^{\#}(x^{p}+d^{p}),\]
which is called the heavy-ball-based optimal \(k\)-thresholding (HBOT) algorithm. Combining HBOT and orthogonal projection (i.e., the least squares problem (2.4) below) leads to the heavy-ball-based optimal \(k\)-thresholding pursuit (HBOTP) algorithm. HBOT and HBOTP can be seen as the multi-step extensions of the OT and OTP algorithms in [56; 58]. The two algorithms are formally described as follows.
#### HBOT and HBOTP algorithms.
Input the data \((A,y,k)\) and two initial points \(x^{0}\) and \(x^{1}\). Choose the parameters \(\alpha>0\) and \(\beta\geq 0\).
S1. At \(x^{p}\), set \[u^{p}=x^{p}-\alpha A^{T}(Ax^{p}-y)+\beta(x^{p}-x^{p-1}).\] (2.2)
S2. Solve the optimization problem \[w^{*}=\arg\min_{w}\{\left\|y-A(u^{p}\circ w)\right\|_{2}^{2}:\ \mathbf{e}^{T}w=k,\ w\in\{0,1\}^{n}\}.\] (2.3)
S3. Generate the next iterate \(x^{p+1}\) as follows: For HBOT, let \(x^{p+1}=u^{p}\circ w^{*}\). For HBOTP, let \(S^{p+1}=\mathrm{supp}(u^{p}\circ w^{*})\) and \(x^{p+1}\) be the solution to the least squares problem \[x^{p+1}=\arg\min_{x\in\mathbb{R}^{n}}\{\|y-Ax\|_{2}^{2}:\mathrm{supp}(x)\subseteq S^{p+1}\}.\] (2.4)
Repeat S1-S3 above until a certain stopping criterion is met.
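To make the iteration concrete, the following Python sketch implements S1-S3 for small problem sizes. It is only an illustration of the scheme: the binary subproblem (2.3) is solved here by brute-force enumeration of all \(k\)-subsets, which is practical only for small \(n\), and the synthetic test data at the end are an arbitrary choice of ours.

```python
import numpy as np
from itertools import combinations

def optimal_k_thresholding(u, A, y, k):
    # Exact optimal k-thresholding (2.3) by enumerating all k-subsets (small n only).
    n = len(u)
    best_S, best_val = None, np.inf
    for S in combinations(range(n), k):
        S = list(S)
        v = np.zeros(n)
        v[S] = u[S]
        val = np.linalg.norm(y - A @ v) ** 2
        if val < best_val:
            best_S, best_val = S, val
    return best_S

def hbotp(A, y, k, alpha, beta, x0, x1, max_iter=50):
    # One possible implementation of S1-S3; HBOT is obtained by replacing the
    # least-squares step (2.4) with x_next[S] = u[S].
    x_prev, x = x0.copy(), x1.copy()
    for _ in range(max_iter):
        u = x - alpha * A.T @ (A @ x - y) + beta * (x - x_prev)     # S1, eq. (2.2)
        S = optimal_k_thresholding(u, A, y, k)                      # S2, eq. (2.3)
        x_next = np.zeros_like(x)
        x_next[S], *_ = np.linalg.lstsq(A[:, S], y, rcond=None)     # S3, eq. (2.4)
        x_prev, x = x, x_next
    return x

# Tiny demo with synthetic data (assumption: Gaussian A, 2-sparse ground truth).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 12))
x_true = np.zeros(12)
x_true[[2, 7]] = [1.0, -2.0]
y = A @ x_true
print(hbotp(A, y, k=2, alpha=1.0, beta=0.3, x0=np.zeros(12), x1=np.zeros(12)))
```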
In general, the computational cost of solving the binary optimization problem (2.3) is high [12; 16]. Replacing (2.3) by its convex relaxation
\[\arg\min_{w}\left\{\|y-A(u\circ w)\|_{2}^{2}:\ w\in\mathcal{P}^{k}\right\}\]
yields the following heavy-ball-based relaxed optimal \(k\)-thresholding (HBROT\(\omega\)) and heavy-ball-based relaxed optimal \(k\)-thresholding pursuit (HBROTP\(\omega\)) algorithms, where \(\omega\) represents the number of times such a convex relaxation problem is solved at each iteration (which, as pointed out in [56], can be interpreted as the number of data compression steps within each iteration). When \(\omega=1\), we simply use HBROT and HBROTP to denote the algorithms HBROT1 and HBROTP1, respectively. Clearly, when \(\alpha=1\) and \(\beta=0\), HBROT\(\omega\) and HBROTP\(\omega\) reduce, respectively, to ROT\(\omega\) and ROTP\(\omega\) in [58].
**HBROT\(\omega\) and HBROTP\(\omega\) algorithms.** Input the data \((A,y,k)\), two initial points \(x^{0},x^{1}\) and \(\omega\). Choose the parameters \(\alpha>0\) and \(\beta\geq 0\).
S1. At \(x^{p}\), calculate \(u^{p}\) by (2.2).
S2. Set \(v\gets u^{p}\). Perform the following loop to produce the vectors \(w^{(j)}\) \((j=1,\ldots,\omega)\): **for** \(j=1:\omega\) **do** \[w^{(j)}=\arg\min_{w}\{\|y-A(v\circ w)\|_{2}^{2}:\ \ \mathbf{e}^{T}w=k,\ \ 0\leq w\leq\mathbf{e}\},\] (2.5) and set \(v\gets v\circ w^{(j)}\).
**end**
S3. Let \(x^{\sharp}=\mathcal{H}_{k}(u^{p}\circ w^{(1)}\circ\cdots\circ w^{(\omega)})\). Generate the next iterate \(x^{p+1}\) as follows: For HBROT\(\omega\), let \(x^{p+1}=x^{\sharp}\). For HBROTP\(\omega\), let \(S^{p+1}=\mathrm{supp}(x^{\sharp})\), and \(x^{p+1}\) be the solution to the least squares problem \[x^{p+1}=\arg\min_{x\in\mathbb{R}^{n}}\{\|y-Ax\|_{2}^{2}:\mathrm{supp}(x)\subseteq S^{p+1}\}.\] (2.6)
Repeat S1-S3 above until a certain stopping criterion is met.
The choice of the stopping criterion depends on the application scenario. For instance, one can simply prescribe the maximum number of iterations, \(p_{\max}\), which allows the algorithm to perform a total of \(p_{\max}\) iterations. One can also terminate the algorithm when \(\|y-Ax^{p}\|_{2}\leq\varepsilon\), where \(\varepsilon>0\) is a prescribed tolerance depending on the noise level.
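As an illustration of how S1-S3 of HBROTP\(\omega\) (with \(\omega=1\)) might be organized in code, the Python sketch below solves the convex subproblem (2.5) with a generic SLSQP call and uses the residual-based stopping rule just described. This is only a schematic implementation under our own choices (solver, initial point, tolerance); it is not the implementation used for the experiments reported later.

```python
import numpy as np
from scipy.optimize import minimize

def data_compression(v, A, y, k):
    # Convex relaxation (2.5): min_w ||y - A(v o w)||^2  s.t.  e^T w = k, 0 <= w <= 1.
    n = len(v)
    B = A * v                                   # column scaling: B w = A (v o w)
    obj = lambda w: np.sum((y - B @ w) ** 2)
    grad = lambda w: -2.0 * B.T @ (y - B @ w)
    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - k},)
    res = minimize(obj, np.full(n, k / n), jac=grad, method='SLSQP',
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

def hbrotp(A, y, k, alpha, beta, x0, x1, max_iter=100, tol=1e-8):
    # HBROTP (omega = 1): S1 heavy-ball step, S2 one data compression, S3 hard
    # thresholding followed by least squares on the selected support.
    x_prev, x = x0.copy(), x1.copy()
    for _ in range(max_iter):
        u = x - alpha * A.T @ (A @ x - y) + beta * (x - x_prev)      # S1, eq. (2.2)
        w = data_compression(u, A, y, k)                             # S2, eq. (2.5)
        S = np.argsort(np.abs(u * w))[-k:]                           # supp(H_k(u o w))
        x_next = np.zeros_like(x)
        x_next[S], *_ = np.linalg.lstsq(A[:, S], y, rcond=None)      # S3, eq. (2.6)
        x_prev, x = x, x_next
        if np.linalg.norm(y - A @ x) <= tol:                         # stopping rule
            break
    return x
```

A dedicated quadratic programming solver would typically be faster than the generic SLSQP call used here; the sketch only fixes the structure of one iteration.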
Remark 1: A common feature of the proposed algorithms and existing hard-type thresholding algorithms is that the solutions they generate depend on the input value of \(k\), which reflects how many significant components of the target signal \(x^{*}\) (whose sparsity level is denoted by \(k^{*}\)) the user wishes to reconstruct. In many scenarios, one needs to reconstruct only a few of the largest components in magnitude of the target signal, instead of the whole signal. In such cases, the user is free to set the desired number \(k\) for the proposed algorithms. The quality of reconstruction depends on the input value of \(k\). In fact, the main theorems established in later sections imply that, under the RIP of a certain order \(\widehat{k}\), the solution generated by the algorithms is the best \(k\)-term approximation to the true signal \(x^{*}\) when \(k<k^{*}\), and it coincides with \(x^{*}\) when \(k^{*}\leq k<\widehat{k}\). When \(k\geq\widehat{k}\), there is no guarantee that the proposed algorithms (including existing ones) recover the signal. If the user expects to reconstruct the whole signal, some information from theory and numerical experiments might be useful for the choice of \(k\). For instance, we may choose \(k\) as follows.
1. Prior information on the sparsity level \(k^{*}\) of the signal might be available in some situations. In this case, just set \(k=k^{*}\).
2. It is well known that the signal is very likely to be recovered by a certain algorithm if its sparsity level \(k^{*}\) is lower than half of the spark of the measurement matrix in \(\mathbb{R}^{m\times n}\) [25]. So it makes sense to choose \(k<(m+1)/2\), since the spark is bounded above by \(m+1\).
3. A large body of simulations and applications indicates that many algorithms work well when the sparsity level of the signal is lower than \(m/3\), and such signals can generally be reconstructed by some existing algorithms. Thus it is also reasonable to set \(k\leq m/3\) in the proposed algorithms in order to achieve a better chance for the signal to be recovered.
4. The number \(k\) can also be suggested by experiments, including the phase transition of algorithms, which sheds light on the relation between the factors \((k,m,n)\) and the success of signal recovery by given algorithms.
Remark 2: Since \(x^{p-1},x^{p}\) are two \(k\)-sparse vectors in \(\mathbb{R}^{n}\) and \(A\in\mathbb{R}^{m\times n}\), the computations of \(Ax^{p}\) and \(\beta(x^{p}-x^{p-1})\) in (2.2) need at most \(mk\) and \(2k\) multiplication operations, respectively. Thus S1 in HBROTP\(\omega\) requires at most \(mn+m+mk+2k\) multiplication operations. As pointed out in [58, Section 5.1], S2 and S3 in HBROTP\(\omega\) require \(O(n^{3.5}L+n\log k)+(mk^{2}+k^{3}/3)\) flops, in which \(L\) is the length of the problem data encoding in binary. Since \(k\ll m\), the computational complexity of HBROTP\(\omega\) at each iteration is about \(O(n^{3.5}L+mn+n\log k+mk^{2})\).
## 3 Analysis of HBOT and HBOTP
In this section, we establish the error bounds for HBOT and HBOTP under the RIP of order \(k\) or \(k+1\). Taking the noise into account, the error bound provides an estimate of the distance between the problem solution and the iterates generated by the algorithms. Thus the error bound is an important measure of the quality of the iterates as approximations to the true solution of the linear inverse problem. In noiseless situations, the error bound implies the global convergence of the algorithms under the RIP assumption. Let us first introduce the following property, which is a combination of Lemmas 3.3 and 3.6 in [58].
**Lemma 3**: [58] _Let \(z\) be a \((2k)\)-sparse vector. Then \(\|Az\|_{2}^{2}\geq(1-2\delta_{k}-\delta_{k+s(k)})\|z\|_{2}^{2}\) where_
\[s(k)=\left\{\begin{array}{ll}1,&\mbox{if }\ k\mbox{ is an odd number},\\ 0,&\mbox{if }\ k\mbox{ is an even number}.\end{array}\right. \tag{3.1}\]
Note that Lemma 3.4 in [58] was established for the case where the sparsity level \(k\) is an even number. We now establish a similar result when \(k\) is odd.
**Lemma 4**: _Let \(h,z\in\mathbb{R}^{n}\) be two \(k\)-sparse vectors, and let \(\widehat{w}\in\mathcal{W}^{k}\) be any \(k\)-sparse binary vector satisfying \(\mbox{supp}(h)\subseteq\mbox{supp}(\widehat{w})\). Then_
\[\|[(I-A^{T}A)(h-z)]\circ\widehat{w}\|_{2}\leq\sqrt{5}\delta_{k+s(k)}\|h-z\|_{ 2}, \tag{3.2}\]
_where \(s(k)\) is given by (3.1)._
_Proof_ For given vectors \(h,z,\widehat{w}\) satisfying the conditions of the lemma, from [58, Lemma 3.4], we get
\[\|[(I-A^{T}A)(h-z)]\circ\widehat{w}\|_{2}\leq\sqrt{5}\delta_{k}\|h-z\|_{2} \tag{3.3}\]
for even number \(k\). Therefore, we just need to show that (3.2) also holds when \(k\) is an odd number.
Indeed, assume that \(k\) is an odd number. Taking a \((k+1)\)-sparse binary vector \(\overline{w}\in\mathcal{W}^{k+1}\) such that \(\mbox{supp}(\widehat{w})\subseteq\mbox{supp}(\overline{w})\), we obtain
\[\|[(I-A^{T}A)(h-z)]\circ\widehat{w}\|_{2} =\left\|\left[(I-A^{T}A)(h-z)\right]_{\mbox{supp}(\widehat{w})} \right\|_{2}\] \[\leq\left\|\left[(I-A^{T}A)(h-z)\right]_{\mbox{supp}(\overline{w })}\right\|_{2}\] \[=\|[(I-A^{T}A)(h-z)]\circ\overline{w}\|_{2}. \tag{3.4}\]
As \(h\) and \(z\) are two \(k\)-sparse vectors, they are also \((k+1)\)-sparse vectors. Since \(\mbox{supp}(h)\subseteq\mbox{supp}(\widehat{w})\subseteq\mbox{supp}( \overline{w})\), and since \(k+1\) is even (when \(k\) is odd), applying (3.3) to this case yields
\[\|[(I-A^{T}A)(h-z)]\circ\overline{w}\|_{2}\leq\sqrt{5}\delta_{k+1}\|h-z\|_{2}. \tag{3.5}\]
Combining (3.4) and (3.5), we obtain
\[\|[(I-A^{T}A)(h-z)]\circ\widehat{w}\|_{2}\leq\sqrt{5}\delta_{k+1}\|h-z\|_{2}\]
for odd number \(k\). We conclude that (3.2) holds for any positive integer \(k\). \(\Box\)
The main results for HBOT and HBOTP are summarized as follows.
**Theorem 1**: _Let \(x\in\mathbb{R}^{n}\) be a solution to the system \(y=Ax+\nu\) where \(\nu\) is a noise vector. Assume that the RIC, \(\delta_{k+s(k)},\) of \(A\) and the parameters \(\alpha,\beta\) in HBOT and HBOTP satisfy that \(\delta_{k+s(k)}<\gamma^{*}\) and_
\[0\leq\beta<\frac{1+1/\eta}{1+\sqrt{5}\delta_{k+s(k)}}-1,\ \frac{1+2\beta-1/\eta}{1-\sqrt{5}\delta_{k+s(k)}}<\alpha<\frac{1+1/\eta}{1+ \sqrt{5}\delta_{k+s(k)}}, \tag{3.6}\]
_where \(\gamma^{*}(\approx 0.2274)\) is the unique root of the equation \(5\gamma^{3}+5\gamma^{2}+3\gamma-1=0\) in the interval \((0,1)\), \(s(k)\) is given by (3.1) and \(\eta:=\sqrt{\frac{1+\delta_{k}}{1-2\delta_{k}-\delta_{k+s(k)}}}\). Then the sequence \(\{x^{p}\}\) generated by HBOT or HBOTP obeys_
\[\|x_{S}-x^{p}\|_{2}\leq C_{1}\theta^{p-1}+C_{2}\|\nu^{\prime}\|_{2}, \tag{3.7}\]
_where \(S:=\mathcal{L}_{k}(x)\), \(\nu^{\prime}:=\nu+Ax_{\overline{S}}\), and the quantities \(C_{1},C_{2}\) are defined as_
\[C_{1}=\|x_{S}-x^{1}\|_{2}+(\theta-b)\|x_{S}-x^{0}\|_{2},\ C_{2}=\frac{2+(1+ \delta_{k})\alpha}{(1-\theta)\sqrt{1-2\delta_{k}-\delta_{k+s(k)}}}, \tag{3.8}\]
_and \(\theta:=(b+\sqrt{b^{2}+4\eta\beta})/2<1\) is ensured under the conditions (3.6) and the constant \(b\) is given by_
\[b:=\eta\left(|1+\beta-\alpha|+\sqrt{5}\alpha\delta_{k+s(k)}\right). \tag{3.9}\]
Proof: From (2.2), we have
\[u^{p}-x_{S}=(1-\alpha+\beta)(x^{p}-x_{S})+\alpha(I-A^{T}A)(x^{p}-x_{S})-\beta (x^{p-1}-x_{S})+\alpha A^{T}\nu^{\prime}, \tag{3.10}\]
where \(S=\mathcal{L}_{k}(x)\) and \(\nu^{\prime}=\nu+Ax_{\overline{S}}\). Let \(\widehat{w}\in\mathcal{W}^{k}\) be a \(k\)-sparse binary vector such that \(\text{supp}(x_{S})\subseteq\text{supp}(\widehat{w})\). Then \(x_{S}=x_{S}\circ\widehat{w}\). Since \((x_{S}-u^{p})\circ\widehat{w}\) is a \(k\)-sparse vector and \(y=Ax_{S}+\nu^{\prime}\), we have
\[\|y-A(u^{p}\circ\widehat{w})\|_{2}= \|A(x_{S}-u^{p}\circ\widehat{w})+\nu^{\prime}\|_{2}\] \[\leq \|A[(x_{S}-u^{p})\circ\widehat{w}]\|_{2}+\|\nu^{\prime}\|_{2}\] \[\leq \sqrt{1+\delta_{k}}\|(x_{S}-u^{p})\circ\widehat{w}\|_{2}+\|\nu^{ \prime}\|_{2}, \tag{3.11}\]
where the last inequality is obtained by using (2.1). From (3.10), one has
\[\|(x_{S}-u^{p})\circ\widehat{w}\|_{2}\] \[\leq |1-\alpha+\beta|\cdot\|(x^{p}-x_{S})\circ\widehat{w}\|_{2}+\alpha \|[(I-A^{T}A)(x^{p}-x_{S})]\circ\widehat{w}\|_{2}\] \[+\beta\|(x^{p-1}-x_{S})\circ\widehat{w}\|_{2}+\alpha\|(A^{T}\nu^ {\prime})\circ\widehat{w}\|_{2}. \tag{3.12}\]
Since \(x_{S},x^{p},\widehat{w}\) are \(k\)-sparse vectors and \(\text{supp}(x_{S})\subseteq\text{supp}(\widehat{w})\), by using Lemmas 4 and 1 (ii), we obtain
\[\|[(I-A^{T}A)(x^{p}-x_{S})]\circ\widehat{w}\|_{2}\leq\sqrt{5}\delta_{k+s(k)} \|x^{p}-x_{S}\|_{2} \tag{3.13}\]
and
\[\|(A^{T}\nu^{\prime})\circ\widehat{w}\|_{2}=\left\|(A^{T}\nu^{\prime})_{ \text{supp}(\widehat{w})}\right\|_{2}\leq\sqrt{1+\delta_{k}}\|\nu^{\prime}\|_ {2}. \tag{3.14}\]
Substituting (3.13) and (3.14) into (3.12) yields
\[\|(x_{S}-u^{p})\circ\widehat{w}\|_{2} \leq |1-\alpha+\beta|\cdot\|x^{p}-x_{S}\|_{2}+\alpha\sqrt{5}\delta_{k+ s(k)}\|x^{p}-x_{S}\|_{2}\] \[+\beta\|x^{p-1}-x_{S}\|_{2}+\alpha\sqrt{1+\delta_{k}}\|\nu^{ \prime}\|_{2}\] \[= (|1+\beta-\alpha|+\sqrt{5}\alpha\delta_{k+s(k)})\|x^{p}-x_{S}\|_{ 2}+\beta\|x^{p-1}-x_{S}\|_{2}\] \[+\alpha\sqrt{1+\delta_{k}}\|\nu^{\prime}\|_{2}.\]
It follows from (3.11) that
\[\|y-A(u^{p}\circ\widehat{w})\|_{2}\leq \sqrt{1+\delta_{k}}\left(|1+\beta-\alpha|+\sqrt{5}\alpha\delta_{k+s (k)}\right)\|x^{p}-x_{S}\|_{2}\] \[+\beta\sqrt{1+\delta_{k}}\|x^{p-1}-x_{S}\|_{2}+[1+(1+\delta_{k}) \alpha]\|\nu^{\prime}\|_{2}. \tag{3.15}\]
Since \(x^{p+1}=u^{p}\circ w^{*}\) in HBOT or \(x^{p+1}\) is the optimal solution of (2.4) in HBOTP, the sequence \(\{x^{p}\}\) generated by HBOT or HBOTP satisfies
\[\left\|y-Ax^{p+1}\right\|_{2}\leq\left\|y-A(u^{p}\circ w^{*})\right\|_{2}\leq \left\|y-A(u^{p}\circ w)\right\|_{2} \tag{3.16}\]
for all \(w\in\mathcal{W}^{k}\), where the second inequality follows from (2.3). For \(\widehat{w}\in\mathcal{W}^{k}\), it follows from (3.16) that
\[\left\|y-Ax^{p+1}\right\|_{2}\leq\left\|y-A(u^{p}\circ\widehat{w})\right\|_{ 2}. \tag{3.17}\]
As \(x_{S}-x^{p+1}\) is a \((2k)\)-sparse vector, by using Lemma 3, one has
\[\|y-Ax^{p+1}\|_{2}= \|A(x_{S}-x^{p+1})+\nu^{\prime}\|_{2}\] \[\geq \|A(x_{S}-x^{p+1})\|_{2}-\|\nu^{\prime}\|_{2}\] \[\geq \sqrt{1-2\delta_{k}-\delta_{k+s(k)}}\|x_{S}-x^{p+1}\|_{2}-\|\nu^{ \prime}\|_{2}. \tag{3.18}\]
Combining (3.15), (3.17) and (3.18) yields
\[\|x^{p+1}-x_{S}\|_{2} \leq\eta(|1+\beta-\alpha|+\sqrt{5}\alpha\delta_{k+s(k)})\|x^{p}-x_ {S}\|_{2}+\eta\beta\|x^{p-1}-x_{S}\|_{2}\] \[\quad+\frac{2+(1+\delta_{k})\alpha}{\sqrt{1-2\delta_{k}-\delta_{ k+s(k)}}}\|\nu^{\prime}\|_{2}\] \[=b\|x^{p}-x_{S}\|_{2}+\eta\beta\|x^{p-1}-x_{S}\|_{2}+(1-\theta)C_ {2}\|\nu^{\prime}\|_{2}, \tag{3.19}\]
where \(\eta,b,\theta,C_{2}\) are given exactly as in Theorem 1. Since \(\delta_{k}\leq\delta_{k+s(k)}<\gamma^{*}\), we have
\[\eta\sqrt{5}\delta_{k+s(k)}=\sqrt{5}\delta_{k+s(k)}\sqrt{\frac{1+\delta_{k}}{1 -2\delta_{k}-\delta_{k+s(k)}}}<\sqrt{5}\gamma^{*}\sqrt{\frac{1+\gamma^{*}}{1- 3\gamma^{*}}}=1,\]
where the last equality follows from the fact that \(\gamma^{*}\) is the root of \(5\gamma^{3}+5\gamma^{2}+3\gamma=1\) in \((0,1)\). It implies that \(0<\frac{1+1/\eta}{1+\sqrt{5}\delta_{k+s(k)}}-1\), which shows that the range of \(\beta\) in (3.6) is well defined. Furthermore, the first inequality in (3.6) implies that
\[\frac{1+2\beta-1/\eta}{1-\sqrt{5}\delta_{k+s(k)}}<1+\beta<\frac{1+1/\eta}{1+ \sqrt{5}\delta_{k+s(k)}},\]
which indicates that the range for \(\alpha\) in (3.6) is also well defined. Combining (3.9) with (3.6), we deduce that
\[b= \eta\left(|1+\beta-\alpha|+\sqrt{5}\alpha\delta_{k+s(k)}\right)\] \[= \left\{\begin{aligned} &\eta\left[1+\beta-\alpha(1- \sqrt{5}\delta_{k+s(k)})\right],&\text{ if }\frac{1+2\beta-1/\eta}{1-\sqrt{5}\delta_{k+s(k)}}< \alpha\leq 1+\beta,\\ &\eta\left[-1-\beta+\alpha(1+\sqrt{5}\delta_{k+s(k)})\right],& \text{ if }1+\beta<\alpha<\frac{1+1/\eta}{1+\sqrt{5}\delta_{k+s(k)}}, \end{aligned}\right.\] \[< 1-\eta\beta,\]
which means that the relation (3.19) obeys the conditions of Lemma 2. It follows from Lemma 2 that (3.7) holds with \(\theta=\frac{b+\sqrt{b^{2}+4\eta\beta}}{2}<1\) and with \(C_{1},C_{2}\) given by (3.8). \(\Box\)
The error bound (3.7) indicates that the iterate \(x^{p}\) generated by the algorithms can approximate \(x_{S}\), the significant components of the solution to the linear inverse problem. In particular, we immediately obtain the following convergence result for the algorithms.
Corollary 1: _Let \(x\in\mathbb{R}^{n}\) be a \(k\)-sparse solution to the system \(y=Ax\). Assume that the RIC, \(\delta_{k+s(k)},\) of \(A\) and the parameters \(\alpha,\beta\) in HBOT and HBOTP satisfy the conditions of Theorem 1. Then the sequence \(\{x^{p}\}\) generated by HBOT or HBOTP obeys that \(\|x-x^{p}\|_{2}\leq C_{1}\theta^{p-1},\) where the constant \(C_{1}\) is defined in Theorem 1. Thus the sequence \(\{x^{p}\}\) generated by HBOT or HBOTP converges to \(x\)._
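To get a feel for the admissible parameter ranges in (3.6), the following Python sketch evaluates them for one representative RIC value \(\delta_{k}=\delta_{k+s(k)}=0.1\) (a value chosen purely for illustration, not taken from the analysis above); it verifies \(\delta<\gamma^{*}\) and reports the bound on \(\beta\), the interval for \(\alpha\), and the resulting contraction factor \(\theta<1\).

```python
import numpy as np

delta = 0.1                                     # illustrative RIC value (assumption)
roots = np.roots([5, 5, 3, -1])                 # roots of 5g^3 + 5g^2 + 3g - 1 = 0
gamma_star = roots[np.abs(roots.imag) < 1e-10].real
gamma_star = gamma_star[(gamma_star > 0) & (gamma_star < 1)][0]     # ~ 0.2274
assert delta < gamma_star                       # RIP condition of Theorem 1

eta = np.sqrt((1 + delta) / (1 - 2 * delta - delta))
beta_max = (1 + 1 / eta) / (1 + np.sqrt(5) * delta) - 1
beta = 0.5 * beta_max                           # any beta in [0, beta_max)
alpha_lo = (1 + 2 * beta - 1 / eta) / (1 - np.sqrt(5) * delta)
alpha_hi = (1 + 1 / eta) / (1 + np.sqrt(5) * delta)
alpha = 0.5 * (alpha_lo + alpha_hi)             # any alpha in (alpha_lo, alpha_hi)
b = eta * (abs(1 + beta - alpha) + np.sqrt(5) * alpha * delta)      # eq. (3.9)
theta = (b + np.sqrt(b ** 2 + 4 * eta * beta)) / 2                  # contraction factor
print(f"gamma* = {gamma_star:.4f}, beta_max = {beta_max:.4f}, "
      f"alpha in ({alpha_lo:.4f}, {alpha_hi:.4f}), theta = {theta:.4f}")
```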
## 4 Analysis of HBROT\(\omega\) and HBROTP\(\omega\)
In this section, we establish the error bounds for HBROT\(\omega\) and HBROTP\(\omega\). The analysis is far from trivial, and we need a few technical results before we establish the error bounds. We first recall a helpful lemma concerning the polytope \(\mathcal{P}^{k}\), which is the special case \(\tau=k\) of Lemma 4.2 in [58].
Lemma 5: [58] _Given an index set \(\Lambda\subseteq N\) and a vector \(w\in\mathcal{P}^{k},\) decompose \(w_{\Lambda}\) as the sum of \(k\)-sparse vectors: \(w_{\Lambda}=\sum_{j=1}^{q}w_{\Lambda_{j}}\), where \(q:=\lceil\frac{|\Lambda|}{k}\rceil\), \(\Lambda=\bigcup_{j=1}^{q}\Lambda_{j}\) and \(\Lambda_{1}:=\mathcal{L}_{k}(w_{\Lambda})\), \(\Lambda_{2}:=\mathcal{L}_{k}(w_{\Lambda\setminus\Lambda_{1}})\) and so on. Then_
\[\sum_{j=1}^{q}\|w_{\Lambda_{j}}\|_{\infty}<2.\]
We now give an inequality concerning the norms \(\|\cdot\|_{2},\ \|\cdot\|_{1}\) and \(\|\cdot\|_{\infty}\). This inequality is a modification of Lemma 6.14 in [28], tailored to the needs of the later analysis in this paper.
Lemma 6: _Let \(h\in\mathbb{R}^{r}\setminus\{0\}\) be a vector with \(r\geq 2\), and let \(\zeta_{1}>\zeta_{2}\) be two positive numbers such that \(\|h\|_{1}\leq\zeta_{1}\) and \(\|h\|_{\infty}\leq\zeta_{2}\). Then_
\[\|h\|_{2}\leq\left\{\begin{matrix}g(r),&\text{if}\ \ r\leq t_{0},\\ \min\{g(t_{0}),g(t_{0}+1)\},&\text{if}\ \ r\geq t_{0}+1,\end{matrix}\right. \tag{4.1}\]
_where \(t_{0}:=\lfloor\frac{4\zeta_{1}}{\zeta_{2}}\rfloor\) and_
\[g(j):=\frac{1}{\sqrt{j}}\zeta_{1}+\frac{\sqrt{j}}{4}\zeta_{2},\ \ j\in(0,+\infty), \tag{4.2}\]
_is strictly decreasing in the interval \((0,\frac{4\zeta_{1}}{\zeta_{2}}]\) and strictly increasing in the interval \([\frac{4\zeta_{1}}{\zeta_{2}},+\infty)\)._
Proof: Without loss of generality, we assume that \(h\) is a nonnegative vector. Sort the components of \(h\) into descending order, and denote such ordered components by \(z_{1}\geq z_{2}\geq\cdots\geq z_{r}\geq 0\) and \(z=\left(z_{1},\ldots,z_{r}\right)^{T}\). Thus, \(\|z\|_{q}=\|h\|_{q}\) for \(q\geq 1\). For a given positive integer \(s\) and \(a_{1}\geq a_{2}\geq\cdots\geq a_{s}\geq 0\), from [28, Lemma 6.14], one has
\[\sqrt{a_{1}^{2}+\cdots+a_{s}^{2}}\leq\frac{a_{1}+\cdots+a_{s}}{\sqrt{s}}+ \frac{\sqrt{s}}{4}(a_{1}-a_{s}). \tag{4.3}\]
There are only two cases according to the relation between \(r\) and \(t_{0}\).
**Case 1**. \(r\leq t_{0}\). By using (4.2) and (4.3), we have
\[\|z\|_{2}\leq\frac{\|z\|_{1}}{\sqrt{r}}+\frac{\sqrt{r}}{4}(z_{1}-z_{r})\leq \frac{\|z\|_{1}}{\sqrt{r}}+\frac{\sqrt{r}}{4}\|z\|_{\infty}\leq\frac{1}{\sqrt {r}}\zeta_{1}+\frac{\sqrt{r}}{4}\zeta_{2}=g(r). \tag{4.4}\]
**Case 2**. \(r\geq t_{0}+1\). Let \(t:=\arg\min_{j}\{g(j):j=t_{0},t_{0}+1\}\) and let \(r_{1},r_{2}\) be nonnegative integers such that \(r=r_{1}t+r_{2}\) with \(0\leq r_{2}<t\). Decompose \(z\) as the sum of \(t\)-sparse vectors: \(z=\sum_{j=1}^{r_{1}+1}z_{Q_{j}}\), where \(Q_{j}:=\{(j-1)t+1,\ldots,jt\}\) with \(j=1,\ldots,r_{1}\), and \(Q_{r_{1}+1}:=\{r_{1}t+1,\ldots,r_{1}t+r_{2}\}\).
Firstly, we consider the case \(r_{2}>0\). With the aid of (4.3), we see that
\[\|z_{Q_{j}}\|_{2}\leq\frac{\|z_{Q_{j}}\|_{1}}{\sqrt{t}}+\frac{\sqrt{t}}{4} \left(z_{(j-1)t+1}-z_{jt}\right),\ \ j=1,\ldots,r_{1}, \tag{4.5}\]
and
\[\|z_{Q_{r_{1}+1}}\|_{2}\leq\frac{\|z_{Q_{r_{1}+1}}\|_{1}}{\sqrt{t}}+\frac{ \sqrt{t}}{4}z_{r_{1}t+1}, \tag{4.6}\]
which follows from (4.3) applied with \(a_{1}=z_{r_{1}t+1},\ldots,a_{r_{2}}=z_{r_{1}t+r_{2}}\) and \(a_{r_{2}+1}=\cdots=a_{t}=0\). Merging (4.5) with (4.6), one has
\[\|z\|_{2}=\left\|\sum_{j=1}^{r_{1}+1}z_{Q_{j}}\right\|_{2}\leq\sum_{j=1}^{r_{1 }+1}\|z_{Q_{j}}\|_{2}\leq\frac{1}{\sqrt{t}}\sum_{j=1}^{r_{1}+1}\|z_{Q_{j}}\|_ {1}+\frac{\sqrt{t}}{4}\mu\]
with
\[\mu:=\sum_{j=1}^{r_{1}}\left(z_{(j-1)t+1}-z_{jt}\right)+z_{r_{1}t+1}=z_{1}- \sum_{j=1}^{r_{1}}\left(z_{jt}-z_{jt+1}\right)\leq z_{1},\]
where the inequality results from \(z_{1}\geq z_{2}\geq\cdots\geq z_{r}\). It follows that
\[\|z\|_{2}\leq\frac{1}{\sqrt{t}}\|z\|_{1}+\frac{\sqrt{t}}{4}z_{1}\leq\frac{1}{ \sqrt{t}}\zeta_{1}+\frac{\sqrt{t}}{4}\zeta_{2}=g(t), \tag{4.7}\]
where the second inequality is ensured by \(\|z\|_{1}=\|h\|_{1}\leq\zeta_{1}\) and \(z_{1}=\|h\|_{\infty}\leq\zeta_{2}\).
Secondly, consider the case \(r_{2}=0\), which means \(Q_{r_{1}+1}=\emptyset\) and \(z=\sum_{j=1}^{r_{1}}z_{Q_{j}}\). Hence, by using (4.3), we obtain
\[\|z\|_{2}\leq\sum_{j=1}^{r_{1}}\|z_{Q_{j}}\|_{2}\leq\frac{1}{\sqrt{t}}\sum_{j= 1}^{r_{1}}\|z_{Q_{j}}\|_{1}+\frac{\sqrt{t}}{4}\sum_{j=1}^{r_{1}}\left(z_{(j-1) t+1}-z_{jt}\right). \tag{4.8}\]
For \(z_{1}\geq z_{2}\geq\cdots\geq z_{r}\geq 0\), we have
\[\sum_{j=1}^{r_{1}}\left(z_{(j-1)t+1}-z_{jt}\right)=z_{1}-\sum_{j=1}^{r_{1}-1} \left(z_{jt}-z_{jt+1}\right)-z_{r_{1}}\leq z_{1}=\|z\|_{\infty}. \tag{4.9}\]
Merging (4.8) with (4.9) leads to
\[\|z\|_{2}\leq\frac{1}{\sqrt{t}}\|z\|_{1}+\frac{\sqrt{t}}{4}\|z\|_{ \infty}\leq\frac{1}{\sqrt{t}}\zeta_{1}+\frac{\sqrt{t}}{4}\zeta_{2}=g(t). \tag{4.10}\]
Combining (4.4), (4.7), (4.10) with \(\|z\|_{q}=\|h\|_{q}\ (q\geq 1)\), we obtain the relation (4.1) directly. \(\square\)
Now, we use an example to show that the upper bound of \(\|h\|_{2}\) in Lemma 6 is tighter than that of Lemma 6.14 in [28] in some situations.
Example 1: Let \(h=(1,\epsilon_{1},\ldots,\epsilon_{14},\epsilon_{0})^{T}\in\mathbb{R}^{16}\), where \(\epsilon_{j}\geq\epsilon_{0}\) \((j=1,\ldots,14)\), \(\sum_{j=1}^{14}\epsilon_{j}=1-\epsilon_{0}\) and \(\epsilon_{0}\in(0,1/15]\). Hence, \(\|h\|_{1}=2\) and \(\|h\|_{\infty}=1\). Set \(\zeta_{1}=2\) and \(\zeta_{2}=1\). Then \(t_{0}=\frac{4\zeta_{1}}{\zeta_{2}}=8\). The bound (4.3) gives \(\|h\|_{2}\leq 1.5-\epsilon_{0}\), whereas (4.1) gives \(\|h\|_{2}\leq g(8)=\sqrt{2}\). Since \(1.5-\epsilon_{0}>\sqrt{2}\) for \(\epsilon_{0}\in(0,1/15]\), the upper bound of \(\|h\|_{2}\) given by (4.1) is tighter than that of (4.3) when \(r>t_{0}+1\) and \(\epsilon_{0}=\min_{1\leq i\leq r}|h_{i}|\) is small enough.
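To make this comparison concrete, the two bounds in Example 1 can be checked numerically. The following Python sketch is only illustrative: the particular values \(\epsilon_{j}=(1-\epsilon_{0})/14\) are an assumption we make to instantiate the example, not a choice prescribed in the text.

```python
import numpy as np

def g(j, zeta1, zeta2):
    # The function g(j) in (4.2).
    return zeta1 / np.sqrt(j) + np.sqrt(j) * zeta2 / 4.0

def lemma6_bound(h, zeta1, zeta2):
    # Upper bound on ||h||_2 from (4.1) in Lemma 6.
    r = len(h)
    t0 = int(np.floor(4.0 * zeta1 / zeta2))
    return g(r, zeta1, zeta2) if r <= t0 else min(g(t0, zeta1, zeta2), g(t0 + 1, zeta1, zeta2))

# Example 1 with the illustrative choice eps_j = (1 - eps_0) / 14 for j = 1, ..., 14.
eps0 = 1.0 / 15.0
h = np.concatenate(([1.0], np.full(14, (1.0 - eps0) / 14.0), [eps0]))

zeta1, zeta2 = 2.0, 1.0   # ||h||_1 = 2, ||h||_inf = 1
r = len(h)
bound_43 = np.linalg.norm(h, 1) / np.sqrt(r) + np.sqrt(r) / 4.0 * (h.max() - h.min())  # bound (4.3)
bound_41 = lemma6_bound(h, zeta1, zeta2)                                               # bound (4.1)

print(np.linalg.norm(h), bound_41, bound_43)  # ||h||_2 <= sqrt(2) = bound_41 < bound_43 = 1.5 - eps0
```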
Taking \(h=(\|w_{\Lambda_{1}}\|_{\infty},\ldots,\|w_{\Lambda_{q}}\|_{\infty})^{T}\) with \(q=\lceil\frac{|\Lambda|}{k}\rceil\), by using Lemma 5, we have \(\|h\|_{1}<2\) and \(\|h\|_{\infty}=\|w_{\Lambda_{1}}\|_{\infty}\leq 1\). Hence, by setting \(\zeta_{1}=2\) and \(\zeta_{2}=1\), we get \(t_{0}=8\) in Lemma 6. This results in the following corollary.
Corollary 2: _Under the conditions of Lemma 5, one has \(\left(\sum_{j=1}^{q}\|w_{\Lambda_{j}}\|_{\infty}^{2}\right)^{1/2}\leq\xi_{q}\), where_
\[\xi_{q}=\left\{\begin{array}{cl}1,&\mbox{if}\ \ q=1,\\ \frac{2}{\sqrt{q}}+\frac{\sqrt{q}}{4},&\mbox{if}\ \ 2\leq q<8,\\ \sqrt{2},&\mbox{if}\ \ q\geq 8,\end{array}\right. \tag{4.11}\]
_which is strictly decreasing in the interval \([2,8]\) and \(\max\limits_{q\geq 1}\,\xi_{q}=\xi_{2}=\frac{5}{4}\sqrt{2}\)._
Using Lemmas 5 and 6, we can establish the next lemma.
Lemma 7: _Let \(x\in\mathbb{R}^{n}\) be a vector satisfying \(y=Ax+\nu\) where \(\nu\) is a noise vector. Let \(S=\mathcal{L}_{k}(x)\) and let \(V\subseteq N\) be any given index set such that \(S\subseteq V\). At the iterate \(x^{p}\), the vectors \(w^{(j)},\ j=1,\cdots,\omega,\) are generated by HBROT\(\omega\) or HBROTP\(\omega\). For every \(i\in\{1,\ldots,\omega\}\), we have_
\[\Theta^{i}:= \Big{\|}A\left[(u^{p}-x_{S})\circ w^{(i)}_{H}\right]_{\overline{V }}\Big{\|}_{2}\] \[\leq \sqrt{1+\delta_{k}}\Big{[}\big{(}\xi_{q}|1-\alpha+\beta|+2\alpha \delta_{3k}\big{)}\|x^{p}-x_{S}\|_{2}+\beta\xi_{q}\|x^{p-1}-x_{S}\|_{2}\] \[+2\alpha\sqrt{1+\delta_{k}}\|\nu^{\prime}\|_{2}\Big{]}, \tag{4.12}\]
_where \(w^{(i)}_{H}\) is the Hadamard product of vectors \(w^{(j)}(j=1,\cdots,i)\), i.e.,_
\[w^{(i)}_{H}:=w^{(1)}\circ w^{(2)}\circ\cdots\circ w^{(i)},\ \ i=1,\ldots,\omega, \tag{4.13}\]
_and \(\xi_{q}\) is given by (4.11) with \(q=\lceil\frac{n-|V|}{k}\rceil\)._
Proof: Taking \(w=w^{(1)}\) and \(\Lambda=\overline{V}\) in Lemma 5 and Corollary 2, one has
\[\sum_{j=1}^{q}\|(w^{(1)})_{\Lambda_{j}}\|_{\infty}<2,\ \sqrt{\sum_{j=1}^{q}\|(w^{(1)})_{ \Lambda_{j}}\|_{\infty}^{2}}\leq\xi_{q}, \tag{4.14}\]
where \(q=\lceil\frac{n-|V|}{k}\rceil\) and the definition of \(\Lambda_{j},\ j=1,\ldots,q\), can be found in Lemma 5. Next, we derive the relation (4.12) for given \(i\in\{1,\ldots,\omega\}\). Define the \(k\)-sparse vectors \(z^{(l)}:=[(u^{p}-x_{S})\circ w^{(i)}_{H}]_{\Lambda_{l}},\ l=1,\ldots,q\), where \(u^{p}\) and \(w^{(i)}_{H}\) are given by (2.2) and (4.13), respectively. Since \(w^{(j)}\in\mathcal{P}^{k}(j=1,\ldots,\omega)\), we have
\[\|z^{(l)}\|_{2}\leq\|(w^{(i)}_{H})_{\Lambda_{l}}\|_{\infty}\cdot\|(u^{p}-x_{S} )_{\Lambda_{l}}\|_{2}\leq\|(w^{(1)})_{\Lambda_{l}}\|_{\infty}\cdot\|(u^{p}-x_{ S})_{\Lambda_{l}}\|_{2}, \tag{4.15}\]
where the second inequality follows from (4.13) and \(0\leq w^{(j)}\leq\mathbf{e}\) for \(j=1,\ldots,\omega\). Since \(z^{(l)},\ l=1,\ldots,q\), are \(k\)-sparse vectors, from the definition of \(\Theta^{i}\) in (4.12) and (4.15), we obtain
\[\Theta^{i} =\left\|A\sum_{l=1}^{q}z^{(l)}\right\|_{2}\leq\sum_{l=1}^{q}\|Az^ {(l)}\|_{2}\leq\sqrt{1+\delta_{k}}\sum_{l=1}^{q}\|z^{(l)}\|_{2}\] \[\leq\sqrt{1+\delta_{k}}\sum_{l=1}^{q}\|(w^{(1)})_{\Lambda_{l}}\|_{ \infty}\cdot\|(u^{p}-x_{S})_{\Lambda_{l}}\|_{2}, \tag{4.16}\]
where the second inequality is given by (2.1). Since \(|\Lambda_{l}|\leq k\) and \(|\text{supp}(x^{p}-x_{S})\cup\Lambda_{l}|\leq 3k\) for \(l=1,\ldots,q\), by using (3.10) and Lemma 1, we have
\[\|(u^{p}-x_{S})_{\Lambda_{l}}\|_{2}\leq |1-\alpha+\beta|\cdot\|(x^{p}-x_{S})_{\Lambda_{l}}\|_{2}+\alpha \|[(I-A^{T}A)(x^{p}-x_{S})]_{\Lambda_{l}}\|_{2}\] \[+\beta\|(x^{p-1}-x_{S})_{\Lambda_{l}}\|_{2}+\alpha\|(A^{T}\nu^{ \prime})_{\Lambda_{l}}\|_{2}\] \[\leq |1-\alpha+\beta|\cdot\|(x^{p}-x_{S})_{\Lambda_{l}}\|_{2}+\alpha \delta_{3k}\|x^{p}-x_{S}\|_{2}\] \[+\beta\|(x^{p-1}-x_{S})_{\Lambda_{l}}\|_{2}+\alpha\sqrt{1+\delta_ {k}}\|\nu^{\prime}\|_{2}. \tag{4.17}\]
Substituting (4.17) into (4.16) yields
\[\frac{\Theta^{i}}{\sqrt{1+\delta_{k}}}\leq |1-\alpha+\beta|\cdot\sum_{l=1}^{q}\|(w^{(1)})_{\Lambda_{l}}\|_{ \infty}\cdot\|(x^{p}-x_{S})_{\Lambda_{l}}\|_{2}\] \[+\alpha\delta_{3k}\sum_{l=1}^{q}\|(w^{(1)})_{\Lambda_{l}}\|_{ \infty}\cdot\|x^{p}-x_{S}\|_{2}\] \[+\beta\sum_{l=1}^{q}\|(w^{(1)})_{\Lambda_{l}}\|_{\infty}\cdot\|(x^ {p-1}-x_{S})_{\Lambda_{l}}\|_{2}\] \[+\alpha\sqrt{1+\delta_{k}}\sum_{l=1}^{q}\|(w^{(1)})_{\Lambda_{l}} \|_{\infty}\cdot\|\nu^{\prime}\|_{2}.\]
It follows from Cauchy-Schwarz inequality and (4.14) that
\[\frac{\Theta^{i}}{\sqrt{1+\delta_{k}}}\] \[\leq|1-\alpha+\beta|\sqrt{\sum_{l=1}^{q}\|(w^{(1)})_{\Lambda_{l}}\|_ {\infty}^{2}}\sqrt{\sum_{l=1}^{q}\|(x^{p}-x_{S})_{\Lambda_{l}}\|_{2}^{2}}+2 \alpha\delta_{3k}\|x^{p}-x_{S}\|_{2}\] \[\quad+\beta\sqrt{\sum_{l=1}^{q}\|(w^{(1)})_{\Lambda_{l}}\|_{ \infty}^{2}}\sqrt{\sum_{l=1}^{q}\|(x^{p-1}-x_{S})_{\Lambda_{l}}\|_{2}^{2}}+2 \alpha\sqrt{1+\delta_{k}}\|\nu^{\prime}\|_{2}\] \[\leq|1-\alpha+\beta|\xi_{q}\|x^{p}-x_{S}\|_{2}+2\alpha\delta_{3k} \|x^{p}-x_{S}\|_{2}+\beta\xi_{q}\|x^{p-1}-x_{S}\|_{2}\] \[\quad+2\alpha\sqrt{1+\delta_{k}}\|\nu^{\prime}\|_{2},\]
where the last inequality follows from the relation \(\sum_{l=1}^{q}\|z_{\Lambda_{l}}\|_{2}^{2}=\|z_{\overline{V}}\|_{2}^{2}\leq\|z\|_{2}^{2}\) for any \(z\in\mathbb{R}^{n}\), due to \(\overline{V}=\bigcup_{j=1}^{q}\Lambda_{j}\) and \(\Lambda_{j}\bigcap\Lambda_{l}=\emptyset\) for \(j\neq l\). Thus (4.12) holds. \(\square\)
We now estimate the term \(\|y-A(u^{p}\circ w_{H}^{(\omega)})\|_{2}\) by using Lemma 7.
Lemma 8: _Let \(x\in\mathbb{R}^{n}\) be a vector satisfying \(y=Ax+\nu\) where \(\nu\) is a noise vector. At the iterate \(x^{p}\), the vectors \(u^{p}\) and \(w^{(j)}(j=1,\cdots,\omega)\) are generated by HBROT\(\omega\) or HBROTP\(\omega\). Then_
\[\|y-A(u^{p}\circ w_{H}^{(\omega)})\|_{2}\] \[\leq c_{1,q}\sqrt{1+\delta_{k}}\|x^{p}-x_{S}\|_{2}+\sqrt{1+\delta _{k}}\beta\big{[}\xi_{q}(\omega-1)+1\big{]}\|x^{p-1}-x_{S}\|_{2}\] \[\quad+\big{[}\alpha(2\omega-1)(1+\delta_{k})+1\big{]}\|\nu^{ \prime}\|_{2}, \tag{4.18}\]
_where \(w_{H}^{(\omega)}\) is given by (4.13), \(S:=\mathcal{L}_{k}(x)\), \(q=\lceil\frac{n-k}{k}\rceil\), \(\xi_{q}\) is given by (4.11) and \(c_{1,q}\) is given as_
\[c_{1,q}:=\big{(}\xi_{q}(\omega-1)+1\big{)}|1-\alpha+\beta|+\alpha \big{(}2(\omega-1)\delta_{3k}+\delta_{2k}\big{)}. \tag{4.19}\]
Proof: Let \(\widehat{w}\in\mathcal{W}^{k}\) be a binary vector satisfying \(\mathrm{supp}(x_{S})\subseteq\mathrm{supp}(\widehat{w})\) and \(V=\mathrm{supp}(\widehat{w})\). From Lemma 4.3 in [58], we get
\[\|y-A[u^{p}\circ w_{H}^{(\omega)}]\|_{2}\leq\|y-A(u^{p}\circ \widehat{w})\|_{2}+\sum_{i=1}^{\omega-1}\Big{\|}A\left[(u^{p}-x_{S})\circ w_{ H}^{(i)}\circ(\mathbf{e}-\widehat{w})\right]\Big{\|}_{2}, \tag{4.20}\]
where \(w_{H}^{(i)},\ i=1,\ldots,\omega\), are given by (4.13). As \(V=\mathrm{supp}(\widehat{w})\) and \(|V|=k\), it follows from (4.12) that
\[\Big{\|}A\left[(u^{p}-x_{S})\circ w_{H}^{(i)}\circ(\mathbf{e}- \widehat{w})\right]\Big{\|}_{2}=\left\|A\left[(u^{p}-x_{S})\circ w_{H}^{(i)} \right]_{\overline{\mathrm{supp}(\widehat{w})}}\right\|_{2}\] \[\leq\sqrt{1+\delta_{k}}\Big{[}(\xi_{q}|1-\alpha+\beta|+2\alpha \delta_{3k})\|x^{p}-x_{S}\|_{2}+\beta\xi_{q}\|x^{p-1}-x_{S}\|_{2}\] \[\quad+2\alpha\sqrt{1+\delta_{k}}\|\nu^{\prime}\|_{2}\Big{]}, \tag{4.21}\]
where \(q=\lceil\frac{n-k}{k}\rceil\) and \(i=1,\ldots,\omega-1\). We now estimate the term \(\|y-A(u^{p}\circ\widehat{w})\|_{2}\) in (4.20). Because \(|\text{supp}(x^{p}-x_{S})\cup\text{supp}(\widehat{w})|\leq 2k\), by using (3.12) and Lemma 1, we obtain
\[\|(x_{S}-u^{p})\circ\widehat{w}\|_{2}\] \[\leq|1-\alpha+\beta|\cdot\|x^{p}-x_{S}\|_{2}+\alpha\left\|\left[ (I-A^{T}A)(x^{p}-x_{S})\right]_{\text{supp}(\widehat{w})}\right\|_{2}\] \[\quad+\beta\|x^{p-1}-x_{S}\|_{2}+\alpha\left\|(A^{T}\nu^{\prime}) _{\text{supp}(\widehat{w})}\right\|_{2}\] \[\leq(|1-\alpha+\beta|+\alpha\delta_{2k})\|x^{p}-x_{S}\|_{2}+\beta \|x^{p-1}-x_{S}\|_{2}+\alpha\sqrt{1+\delta_{k}}\|\nu^{\prime}\|_{2}.\]
It follows from (3.11) that
\[\|y-A(u^{p}\circ\widehat{w})\|_{2} \leq\sqrt{1+\delta_{k}}\Big{[}(|1-\alpha+\beta|+\alpha\delta_{2k} )\|x^{p}-x_{S}\|_{2}\] \[\quad+\beta\|x^{p-1}-x_{S}\|_{2}+\alpha\sqrt{1+\delta_{k}}\|\nu^ {\prime}\|_{2}\Big{]}+\|\nu^{\prime}\|_{2}. \tag{4.22}\]
Combining (4.21), (4.22) with (4.20) yields (4.18). \(\square\)
The following property of the hard thresholding operator \(\mathcal{H}_{k}(\cdot)\) is shown in [58, Lemma 4.1].
Lemma 9: [58] _Let \(z,h\in\mathbb{R}^{n}\) be two vectors and \(\|h\|_{0}\leq k\). Then_
\[\|h-\mathcal{H}_{k}(z)\|_{2}\leq\|(z-h)_{S\cup S^{*}}\|_{2}+\|(z-h)_{S^{*} \setminus S}\|_{2},\]
_where \(S:=\text{supp}(h)\) and \(S^{*}:=\text{supp}(\mathcal{H}_{k}(z))\)._
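For readers who wish to experiment with these operators, a minimal Python sketch of \(\mathcal{L}_{k}\) and \(\mathcal{H}_{k}\) is given below, together with a randomized sanity check of the inequality in Lemma 9. We assume the standard definitions (index set of the \(k\) largest-magnitude entries, and the corresponding best \(k\)-term approximation), which we believe matches how these operators are used throughout the paper.

```python
import numpy as np

def L_k(z, k):
    # Index set of the k largest-magnitude entries of z.
    return np.argsort(-np.abs(z))[:k]

def H_k(z, k):
    # Hard thresholding: keep the k largest-magnitude entries, zero out the rest.
    out = np.zeros_like(z)
    idx = L_k(z, k)
    out[idx] = z[idx]
    return out

rng = np.random.default_rng(0)
n, k = 50, 5
for _ in range(1000):
    z = rng.standard_normal(n)
    h = np.zeros(n)
    S = rng.choice(n, size=k, replace=False)   # supp(h)
    h[S] = rng.standard_normal(k)
    S_star = L_k(z, k)                         # supp(H_k(z))
    lhs = np.linalg.norm(h - H_k(z, k))
    rhs = (np.linalg.norm((z - h)[np.union1d(S, S_star)])
           + np.linalg.norm((z - h)[np.setdiff1d(S_star, S)]))
    assert lhs <= rhs + 1e-10                  # the inequality of Lemma 9
```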
Let us state a fundamental property of the orthogonal projection in the following lemma, which can be found in [27, Eq.(3.21)] and [56, p.49], and was extended to the general case in [60, Lemma 4.2].
Lemma 10: [27; 56; 60] _Let \(x\in\mathbb{R}^{n}\) be a vector satisfying \(y=Ax+\nu\) where \(\nu\) is a noise vector. Let \(S^{*}\subseteq N\) be an index set satisfying \(|S^{*}|\leq k\) and_
\[z^{*}=\arg\min_{z\in\mathbb{R}^{n}}\{\|y-Az\|_{2}^{2}:\text{supp}(z)\subseteq S ^{*}\}.\]
_Then_
\[\|z^{*}-x_{S}\|_{2}\leq\frac{1}{\sqrt{1-(\delta_{2k})^{2}}}\|(z^{*}-x_{S})_{ \overline{S^{*}}}\|_{2}+\frac{\sqrt{1+\delta_{k}}}{1-\delta_{2k}}\|\nu^{\prime }\|_{2},\]
_where \(S:=\mathcal{L}_{k}(x)\) and \(\nu^{\prime}:=\nu+Ax_{\overline{S}}\)._
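The vector \(z^{*}\) in Lemma 10, which is also the quantity computed in the pursuit step of HBROTP\(\omega\) (see the proof of Theorem 2(ii) below), is an ordinary least-squares problem restricted to the columns indexed by \(S^{*}\). A minimal sketch, assuming the support set is given, is:

```python
import numpy as np

def project_onto_support(y, A, S_star):
    # z* = argmin_z ||y - A z||_2 subject to supp(z) contained in S_star,
    # computed by least squares on the submatrix A[:, S_star].
    z = np.zeros(A.shape[1])
    coef, *_ = np.linalg.lstsq(A[:, S_star], y, rcond=None)
    z[S_star] = coef
    return z

# Tiny usage example with random data.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40))
y = rng.standard_normal(20)
z_star = project_onto_support(y, A, [3, 7, 11])
```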
We now establish the error bounds for HBROT\(\omega\) and HBROTP\(\omega\).
Theorem 2: _Suppose that \(n>3k\) and denote \(\sigma:=\lceil\frac{n-2k}{k}\rceil\). Let \(x\in\mathbb{R}^{n}\) be a vector satisfying \(y=Ax+\nu\) where \(\nu\) is a noise vector. Denote_
\[t_{k}:=\frac{\sqrt{1+\delta_{k}}}{\sqrt{1-\delta_{2k}}},\ \ z_{k}:=\sqrt{1- \delta_{2k}^{2}}. \tag{4.23}\]
1. _Assume that the_ \((3k)\)_-th order RIC,_ \(\delta_{3k}\)_, of the matrix_ \(A\) _and the nonnegative parameters_ \((\alpha,\beta)\) _satisfy_ \(\delta_{3k}<\gamma^{*}(\omega)\) _and_ \[\beta<\frac{1-d_{1}}{1+d_{1}+d_{2}},\ \frac{(d_{0}+d_{2}+2)\beta+d_{0}}{d_{0}-d_{1}+1}< \alpha<\frac{d_{0}+2-(d_{2}-d_{0})\beta}{d_{0}+d_{1}+1},\] (4.24) _where_ \(\gamma^{*}(\omega)\) _is the unique root of the equation_ \(G_{\omega}(\gamma)=1\) _in the interval_ \((0,1)\)_, where_ \[G_{\omega}(\gamma):=(2\omega+1)\gamma\sqrt{\frac{1+\gamma}{1-\gamma}}+\gamma,\] (4.25) _and the constants_ \(d_{0},d_{1},d_{2}\) _are given as_ \[\left\{\begin{aligned} d_{0}&:=t_{k}(\omega \xi_{\sigma}+1),\\ d_{1}&:=t_{k}(2\omega\delta_{3k}+\delta_{2k})+\delta_{3k},\\ d_{2}&:=t_{k}[\xi_{\sigma}(\omega-1)+1]\frac{2 \omega\delta_{3k}+\delta_{2k}}{2(\omega-1)\delta_{3k}+\delta_{2k}}.\end{aligned}\right.\] (4.26) _Then, the sequence_ \(\{x^{p}\}\) _produced by HBROT_\(\omega\) _obeys_ \[\|x^{p}-x_{S}\|_{2}\leq\theta_{1}^{p-1}\left[\|x^{1}-x_{S}\|_{2}+(\theta_{1}- b_{1})\|x^{0}-x_{S}\|_{2}\right]+\frac{b_{3}}{1-\theta_{1}}\|\nu^{\prime}\|_{2}\] (4.27) _with_ \(\theta_{1}:=\frac{b_{1}+\sqrt{b_{1}^{2}+4b_{2}}}{2}\)_. The fact_ \(\theta_{1}<1\) _is ensured under (_4.24_) and_ \(b_{1},b_{2},b_{3}\) _are given as_ \[b_{1}:= t_{k}c_{\sigma}+\left(|1+\beta-\alpha|+\alpha\delta_{3k}\right), \ \ b_{2}:=\beta t_{k}[\xi_{\sigma}(\omega-1)+1]\frac{c_{\sigma}}{c_{1, \sigma}}+\beta,\] \[b_{3}:= \frac{\alpha(2\omega-1)(1+\delta_{k})+2}{\sqrt{1-\delta_{2k}}} \cdot\frac{2\delta_{3k}}{2(\omega-1)\delta_{3k}+\delta_{2k}}\cdot\frac{c_{ \sigma}}{c_{\sigma}-c_{1,\sigma}}+\alpha\sqrt{1+\delta_{k}},\] (4.28) _where_ \(c_{1,\sigma}\) _and_ \(\xi_{\sigma}\) _are given by (_4.19_) and (_4.11_), respectively, and_ \[c_{\sigma}:=(\omega\xi_{\sigma}+1)|1-\alpha+\beta|+\alpha(2\omega\delta_{3k}+ \delta_{2k}).\] (4.29)
2. _Suppose that the_ \((3k)\)_-th order RIC,_ \(\delta_{3k}\)_, of the matrix_ \(A\) _and the nonnegative parameters_ \((\alpha,\beta)\) _satisfy_ \(\delta_{3k}<\gamma^{\sharp}(\omega)\) _and_ \[\beta<\frac{z_{k}-d_{1}}{1+d_{1}+d_{2}},\ \frac{(d_{0}+d_{2}+2)\beta+d_{0}+1-z_{k}}{d_{0}-d_{1}+1}< \alpha<\frac{d_{0}+1+z_{k}-(d_{2}-d_{0})\beta}{d_{0}+d_{1}+1},\] (4.30) _where the constants_ \(d_{0},d_{1},d_{2}\) _are given by (_4.26_) and_ \(\gamma^{\sharp}(\omega)\) _is the unique root of the equation_ \(\frac{1}{\sqrt{1-\gamma^{2}}}G_{\omega}(\gamma)=1\) _in the interval_ \((0,1)\)_, where_ \(G_{\omega}(\gamma)\) _is given by (_4.25_). Then, the sequence_ \(\{x^{p}\}\) _produced by HBROTP_\(\omega\) _obeys_ \[\|x^{p}-x_{S}\|_{2} \leq\theta_{2}^{p-1}\left[\|x^{1}-x_{S}\|_{2}+(\theta_{2}-\frac{b_ {1}}{z_{k}})\|x^{0}-x_{S}\|_{2}\right]\] \[\quad+\frac{1}{1-\theta_{2}}\left(\frac{b_{3}}{z_{k}}+\frac{\sqrt{ 1+\delta_{k}}}{1-\delta_{2k}}\right)\|\nu^{\prime}\|_{2}\] (4.31) _with_ \(\theta_{2}:=\frac{b_{1}+\sqrt{b_{1}^{2}+4b_{2}z_{k}}}{2z_{k}}\)_. The fact_ \(\theta_{2}<1\) _is ensured under (_4.30_) and the constants_ \(b_{i}(i=1,2,3)\) _and_ \(z_{k}\) _are given by (_4.28_) and (_4.23_), respectively._
Proof: Let \(x^{\sharp}=\mathcal{H}_{k}(u^{p}\circ w_{H}^{(\omega)})\) be generated by the Algorithms, where \(w_{H}^{(\omega)}\) is given by (4.13). By using Lemma 9, we have
\[\|x_{S}-x^{\sharp}\|_{2}\leq\|(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X\cup S}\|_{2 }+\|(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X\setminus S}\|_{2}, \tag{4.32}\]
where \(X=\operatorname{supp}(x^{\sharp})\). Using (3.10) and the triangle inequality, we have that
\[\|(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X\setminus S}\|_{2}=\|[(u^ {p}-x_{S})\circ w_{H}^{(\omega)}]_{X\setminus S}\|_{2}\leq\|(u^{p}-x_{S})_{X \setminus S}\|_{2}\] \[\leq|1-\alpha+\beta|\cdot\big{\|}(x^{p}-x_{S})_{X\setminus S} \big{\|}_{2}+\alpha\|[(I-A^{T}A)(x^{p}-x_{S})]_{X\setminus S}\|_{2}\] \[\quad+\beta\|(x^{p-1}-x_{S})_{X\setminus S}\|_{2}+\alpha\|(A^{T} \nu^{\prime})_{X\setminus S}\|_{2},\]
where the first equality is ensured by \((x_{S})_{X\setminus S}=0\) and the first inequality is due to (4.13) and \(0\leq w^{(j)}\leq\mathbf{e}\) for \(j=1,\ldots,\omega\). Since \(|X\setminus S|\leq k\) and \(|\operatorname{supp}(x^{p}-x_{S})\cup(X\setminus S)|\leq 3k\), by using Lemma 1, we see that
\[\|(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X\setminus S}\|_{2}\leq (|1+\beta-\alpha|+\alpha\delta_{3k})\|x^{p}-x_{S}\|_{2}\] \[+\beta\|x^{p-1}-x_{S}\|_{2}+\alpha\sqrt{1+\delta_{k}}\|\nu^{ \prime}\|_{2}. \tag{4.33}\]
Denote
\[\Theta_{1}:=\|A(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X\cup S}\|_{2},\ \ \Theta_{2}:=\|A(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{\overline{X\cup S}}\|_{2}. \tag{4.34}\]
As \(|X\cup S|\leq 2k\), by using (2.1), we obtain
\[\Theta_{1}\geq\sqrt{1-\delta_{2k}}\|(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X \cup S}\|_{2}. \tag{4.35}\]
For any given \(\zeta\in(0,1)\), we consider the following two cases associated with \(\Theta_{1}\) and \(\Theta_{2}\).
**Case 1.**\(\Theta_{2}\leq\zeta\Theta_{1}\). Since \(y=Ax_{S}+\nu^{\prime}\), by the triangle inequality and (4.34), we have
\[\|y-A(u^{p}\circ w_{H}^{(\omega)})\|_{2}=\|A(u^{p}\circ w_{H}^{( \omega)}-x_{S})-\nu^{\prime}\|_{2}\] \[=\|A(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X\cup S}+A(u^{p}\circ w_ {H}^{(\omega)}-x_{S})_{\overline{X\cup S}}-\nu^{\prime}\|_{2}\] \[\geq\Theta_{1}-\Theta_{2}-\|\nu^{\prime}\|_{2}\] \[\geq(1-\zeta)\Theta_{1}-\|\nu^{\prime}\|_{2}. \tag{4.36}\]
Merging (4.35), (4.36) with (4.18) yields
\[\|(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X\cup S}\|_{2}\] \[\leq\frac{1}{(1-\zeta)\sqrt{1-\delta_{2k}}}(\|y-A(u^{p}\circ w_{H }^{(\omega)})\|_{2}+\|\nu^{\prime}\|_{2})\] \[\leq\frac{t_{k}c_{1,q_{1}}}{1-\zeta}\|x^{p}-x_{S}\|_{2}+\frac{ \beta t_{k}}{1-\zeta}\big{[}\xi_{q_{1}}(\omega-1)+1\big{]}\|x^{p-1}-x_{S}\|_{2}\] \[\quad+\frac{\alpha(2\omega-1)(1+\delta_{k})+2}{(1-\zeta)\sqrt{1- \delta_{2k}}}\|\nu^{\prime}\|_{2}, \tag{4.37}\]
where \(q_{1}=\lceil\frac{n-k}{k}\rceil=\sigma+1\) and \(t_{k},c_{1,q_{1}}\) are given in (4.23) and (4.19), respectively.
**Case 2.**\(\Theta_{2}>\zeta\Theta_{1}\). From (4.34) and (4.35), we obtain
\[\|(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X\cup S}\|_{2}\leq\frac{1}{\zeta\sqrt{1- \delta_{2k}}}\|A[(u^{p}-x_{S})\circ w_{H}^{(\omega)}]_{\overline{X\cup S}}\|_{2}. \tag{4.38}\]
Taking \(V=X\cup S\) and \(i=\omega\) in (4.12), one has
\[\|A[(u^{p}-x_{S})\circ w_{H}^{(\omega)}]_{\overline{X\cup S}}\|_{2}\] \[\leq\sqrt{1+\delta_{k}}\left[c_{2,q_{2}}\|x^{p}-x_{S}\|_{2}+\beta \xi_{q_{2}}\|x^{p-1}-x_{S}\|_{2}+2\alpha\sqrt{1+\delta_{k}}\|\nu^{\prime}\|_{2}\right] \tag{4.39}\]
with \(q_{2}=\lceil\frac{n-|X\cup S|}{k}\rceil\geq\sigma\) which is due to \(|X\cup S|\leq 2k\), and \(c_{2,q_{2}}\) is defined as
\[c_{2,q_{2}}:=\xi_{q_{2}}|1-\alpha+\beta|+2\alpha\delta_{3k}. \tag{4.40}\]
Substituting (4.39) into (4.38), we get
\[\|(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X\cup S}\|_{2} \leq\frac{t_{k}}{\zeta}[c_{2,q_{2}}\|x^{p}-x_{S}\|_{2}+\beta\xi_{ q_{2}}\|x^{p-1}-x_{S}\|_{2}\] \[\quad+2\alpha\sqrt{1+\delta_{k}}\|\nu^{\prime}\|_{2}]. \tag{4.41}\]
From (4.11), we see that \(\xi_{q}\) is decreasing in \([2,n]\). For \(q_{1}=\sigma+1\) and \(q_{2}\geq\sigma\geq 2\), we have \(\xi_{q_{1}},\xi_{q_{2}}\leq\xi_{\sigma}\). It follows from (4.19) and (4.40) that \(c_{1,q_{1}}\leq c_{1,\sigma}\) and \(c_{2,q_{2}}\leq c_{2,\sigma}\). Combining (4.37) and (4.41) leads to
\[\|(u^{p}\circ w_{H}^{(\omega)}-x_{S})_{X\cup S}\|_{2}\] \[\leq t_{k}\max\left\{\frac{c_{1,\sigma}}{1-\zeta},\frac{c_{2, \sigma}}{\zeta}\right\}\|x^{p}-x_{S}\|_{2}\] \[\quad+\beta t_{k}\max\left\{\frac{\xi_{\sigma}(\omega-1)+1}{1- \zeta},\frac{\xi_{\sigma}}{\zeta}\right\}\|x^{p-1}-x_{S}\|_{2}\] \[\quad+\frac{1}{\sqrt{1-\delta_{2k}}}\max\left\{\frac{\alpha(2 \omega-1)(1+\delta_{k})+2}{1-\zeta},\frac{2\alpha(1+\delta_{k})}{\zeta}\right\} \|\nu^{\prime}\|_{2} \tag{4.42}\]
for any \(\zeta\in(0,1)\).
Next, we select a suitable parameter \(\zeta\in(0,1)\) such that the right-hand side of (4.42) is as small as possible. For \(\delta_{2k}\leq\delta_{3k}\) and \(\xi_{\sigma}<2\) in (4.11), we have
\[\frac{c_{2,\sigma}}{c_{1,\sigma}}=\frac{\xi_{\sigma}|1-\alpha+\beta|+2\alpha \delta_{3k}}{[\xi_{\sigma}(\omega-1)+1]|1-\alpha+\beta|+\alpha[2(\omega-1) \delta_{3k}+\delta_{2k}]}\leq\frac{2\delta_{3k}}{2(\omega-1)\delta_{3k}+ \delta_{2k}}. \tag{4.43}\]
It is easy to check that
\[\min_{\zeta\in(0,1)}\max\left\{\frac{c_{1,\sigma}}{1-\zeta},\frac{c_{2,\sigma} }{\zeta}\right\}=c_{1,\sigma}+c_{2,\sigma}=c_{\sigma}, \tag{4.44}\]
where \(c_{\sigma}\) is given by (4.29), and the minimum in (4.44) is attained at
\[\zeta^{*}=\frac{c_{2,\sigma}}{c_{1,\sigma}+c_{2,\sigma}}=\frac{\xi_{\sigma}|1- \alpha+\beta|+2\alpha\delta_{3k}}{(\omega\xi_{\sigma}+1)|1-\alpha+\beta|+ \alpha(2\omega\delta_{3k}+\delta_{2k})}. \tag{4.45}\]
That is,
\[\max\left\{\frac{c_{1,\sigma}}{1-\zeta^{*}},\frac{c_{2,\sigma}}{\zeta^{*}} \right\}=c_{\sigma}. \tag{4.46}\]
Moreover, noting that \(\xi_{\sigma}<2\) and \(\delta_{2k}\leq\delta_{3k}\), we have \(\zeta^{*}\geq\frac{\xi_{\sigma}}{\omega\xi_{\sigma}+1}\). In particular, by taking \(\zeta=\zeta^{*}\) in (4.42), we deduce that
\[\max\left\{\frac{\xi_{\sigma}(\omega-1)+1}{1-\zeta^{*}},\frac{\xi_{\sigma}}{ \zeta^{*}}\right\}=\frac{\xi_{\sigma}(\omega-1)+1}{1-\zeta^{*}}=\frac{\xi_{ \sigma}(\omega-1)+1}{c_{1,\sigma}}c_{\sigma},\]
and
\[\max\left\{\frac{\alpha(2\omega-1)(1+\delta_{k})+2}{1-\zeta^{*}}, \ \frac{2\alpha(1+\delta_{k})}{\zeta^{*}}\right\}\] \[=\frac{1}{\zeta^{*}}\max\left\{[\alpha(2\omega-1)(1+\delta_{k})+2 ]\frac{\zeta^{*}}{1-\zeta^{*}},\ 2\alpha(1+\delta_{k})\right\}\] \[=\frac{c_{\sigma}}{c_{2,\sigma}}\max\left\{[\alpha(2\omega-1)(1+ \delta_{k})+2]\frac{c_{2,\sigma}}{c_{1,\sigma}},\ 2\alpha(1+\delta_{k})\right\}\] \[\leq\frac{c_{\sigma}}{c_{2,\sigma}}\max\left\{[\alpha(2\omega-1) (1+\delta_{k})+2]\frac{2\delta_{3k}}{2(\omega-1)\delta_{3k}+\delta_{2k}},\ 2\alpha(1+\delta_{k})\right\}\] \[=\left[\alpha(2\omega-1)(1+\delta_{k})+2\right]\frac{2\delta_{3k} }{2(\omega-1)\delta_{3k}+\delta_{2k}}\cdot\frac{c_{\sigma}}{c_{2,\sigma}}, \tag{4.47}\]
where the second equality is given by (4.45), the inequality above follows from (4.43), and the last equality holds owing to \(\delta_{2k}\leq\delta_{3k}\). Merging (4.42) with (4.46)-(4.47), we obtain
\[\|(u^{p}\circ w^{(\omega)}_{H}-x_{S})_{X\cup S}\|_{2}\] \[\leq t_{k}c_{\sigma}\|x^{p}-x_{S}\|_{2}+\beta t_{k}\frac{\xi_{ \sigma}(\omega-1)+1}{c_{1,\sigma}}c_{\sigma}\|x^{p-1}-x_{S}\|_{2}\] \[\quad+\frac{\alpha(2\omega-1)(1+\delta_{k})+2}{\sqrt{1-\delta_{2 k}}}\cdot\frac{2\delta_{3k}}{2(\omega-1)\delta_{3k}+\delta_{2k}}\cdot\frac{c_{ \sigma}}{c_{2,\sigma}}\|\nu^{\prime}\|_{2}. \tag{4.48}\]
Combining (4.33), (4.48) with (4.32), we have
\[\|x_{S}-x^{\sharp}\|_{2}\leq b_{1}\|x^{p}-x_{S}\|_{2}+b_{2}\|x^{p-1}-x_{S}\|_{ 2}+b_{3}\|\nu^{\prime}\|_{2}, \tag{4.49}\]
where the constants \(b_{1},b_{2},b_{3}\) are given by (4.28).
Next, we estimate \(\|x^{p+1}-x_{S}\|_{2}\) for HBROT\(\omega\) and HBROTP\(\omega\) based on the relation (4.49).
(i) Since \(x^{p+1}=x^{\sharp}\) in HBROT\(\omega\), (4.49) becomes
\[\|x^{p+1}-x_{S}\|_{2}\leq b_{1}\|x^{p}-x_{S}\|_{2}+b_{2}\|x^{p-1}-x_{S}\|_{2}+b _{3}\|\nu^{\prime}\|_{2}. \tag{4.50}\]
Now, we consider the conditions of Lemma 2. Merging (4.43) with (4.44) produces
\[\frac{c_{\sigma}}{c_{1,\sigma}}\leq\frac{2\omega\delta_{3k}+\delta_{2k}}{2( \omega-1)\delta_{3k}+\delta_{2k}}.\]
It follows from (4.28) and (4.29) that
\[b_{1}+b_{2} \leq t_{k}c_{\sigma}+(|1+\beta-\alpha|+\alpha\delta_{3k})\] \[\quad+\left\{t_{k}\frac{2\omega\delta_{3k}+\delta_{2k}}{2(\omega -1)\delta_{3k}+\delta_{2k}}[\xi_{\sigma}(\omega-1)+1]+1\right\}\beta=F(\alpha, \beta), \tag{4.51}\]
where
\[F(\alpha,\beta):= (d_{0}+1)|1-\alpha+\beta|+d_{1}\alpha+(d_{2}+1)\beta,\] \[= \left\{\begin{array}{ll}-(d_{0}-d_{1}+1)\alpha+(d_{0}+d_{2}+2) \beta+d_{0}+1,&\text{if}\ \ \alpha\leq 1+\beta,\\ (d_{0}+d_{1}+1)\alpha+(d_{2}-d_{0})\beta-(d_{0}+1),&\text{if}\ \ \alpha>1+\beta, \end{array}\right. \tag{4.52}\]
where the constants \(d_{0},d_{1},d_{2}\) are given by (4.26).
Based on the facts that \(\delta_{k}\leq\delta_{2k}\leq\delta_{3k}<\gamma^{*}(\omega)\) and that the function \(G_{\omega}(\gamma)\) in (4.25) is strictly increasing in the interval \((0,1)\), by using (4.26), we have
\[d_{1}\leq[(2\omega+1)t_{k}+1]\delta_{3k}\leq G_{\omega}(\delta_{3k})<G_{\omega }(\gamma^{*}(\omega))=1, \tag{4.53}\]
which shows that the range of \(\beta\) in (4.24) is well defined. From the first inequality in (4.24), we see that
\[\frac{(d_{0}+d_{2}+2)\beta+d_{0}}{d_{0}-d_{1}+1}<1+\beta<\frac{d_{0}+2-(d_{2} -d_{0})\beta}{d_{0}+d_{1}+1}, \tag{4.54}\]
which implies that the range of \(\alpha\) in (4.24) is also well defined. Merging (4.52)-(4.54) with the second inequality in (4.24), we see that if \(\frac{(d_{0}+d_{2}+2)\beta+d_{0}}{d_{0}-d_{1}+1}<\alpha\leq 1+\beta\), then
\[F(\alpha,\beta)<-(d_{0}-d_{1}+1)\frac{(d_{0}+d_{2}+2)\beta+d_{0}}{d_{0}-d_{1}+ 1}+(d_{0}+d_{2}+2)\beta+d_{0}+1=1,\]
and if \(1+\beta<\alpha<\frac{d_{0}+2-(d_{2}-d_{0})\beta}{d_{0}+d_{1}+1}\), then
\[F(\alpha,\beta)<(d_{0}+d_{1}+1)\frac{d_{0}+2-(d_{2}-d_{0})\beta}{d_{0}+d_{1}+1 }+(d_{2}-d_{0})\beta-(d_{0}+1)=1.\]
It follows from (4.51) that \(b_{1}+b_{2}<1\). Hence, applying Lemma 2 to the relation (4.50), we conclude that (4.27) holds with \(\theta_{1}=\frac{b_{1}+\sqrt{b_{1}^{2}+4b_{2}}}{2}<1\).
(ii) Since \(x^{p+1}\) is given by (2.6) and \(S^{p+1}=\text{supp}(x^{\sharp})\) in HBROTP\(\omega\), by setting \(S^{*}=S^{p+1}\) and \(z^{*}=x^{p+1}\) in Lemma 10, we have
\[\|x^{p+1}-x_{S}\|_{2} \leq\frac{1}{z_{k}}\left\|(x^{\sharp}-x_{S})_{\overline{S^{p+1}}} \right\|_{2}+\frac{\sqrt{1+\delta_{k}}}{1-\delta_{2k}}\|\nu^{\prime}\|_{2}\] \[\leq\frac{1}{z_{k}}\|x^{\sharp}-x_{S}\|_{2}+\frac{\sqrt{1+\delta _{k}}}{1-\delta_{2k}}\|\nu^{\prime}\|_{2}, \tag{4.55}\]
where \(z_{k}\) is given in (4.23) and the first inequality follows from the fact \((x^{p+1})_{\overline{S^{p+1}}}=(x^{\sharp})_{\overline{S^{p+1}}}=0\). Combining (4.55) with (4.49), we have
\[\|x^{p+1}-x_{S}\|_{2}\leq\frac{b_{1}}{z_{k}}\|x^{p}-x_{S}\|_{2}+ \frac{b_{2}}{z_{k}}\|x^{p-1}-x_{S}\|_{2}+\left(\frac{b_{3}}{z_{k}}+\frac{ \sqrt{1+\delta_{k}}}{1-\delta_{2k}}\right)\|\nu^{\prime}\|_{2}. \tag{4.56}\]
Similar to the analysis in Part (i), we need to show that \(\frac{b_{1}}{z_{k}}+\frac{b_{2}}{z_{k}}<1\).
From the conditions of Theorem 2(ii), we have \(\delta_{2k}\leq\delta_{3k}<\gamma^{\sharp}(\omega)\). Since the function \(G_{\omega}(\gamma)\) in (4.25) is strictly increasing in \((0,1)\), one has
\[d_{1}\leq G_{\omega}(\delta_{3k})<G_{\omega}(\gamma^{\sharp}(\omega))=\sqrt{1- (\gamma^{\sharp}(\omega))^{2}}<\sqrt{1-(\delta_{2k})^{2}}=z_{k},\]
where the first inequality is given by (4.53), the first equality follows from the fact that \(\gamma^{\sharp}(\omega)\) is the root of \(\frac{1}{\sqrt{1-\gamma^{2}}}G_{\omega}(\gamma)=1\) in \((0,1)\) and the last equality is given by (4.23). It follows that the range of \(\beta\) in (4.30) is well defined. From the first inequality in (4.30), we derive \[\frac{(d_{0}+d_{2}+2)\beta+d_{0}+1-z_{k}}{d_{0}-d_{1}+1}<1+\beta<\frac{d_{0}+1 +z_{k}-(d_{2}-d_{0})\beta}{d_{0}+d_{1}+1},\] (4.57) which means that the range of \(\alpha\) in (4.30) is well defined. Combining (4.52), (4.57) with the second inequality in (4.30) leads to \[F(\alpha,\beta)< \left\{\begin{array}{l}-(d_{0}-d_{1}+1)\frac{(d_{0}+d_{2}+2) \beta+d_{0}+1-z_{k}}{d_{0}-d_{1}+1}+(d_{0}+d_{2}+2)\beta+d_{0}+1,\\ \text{if}\;\;\frac{(d_{0}+d_{2}+2)\beta+d_{0}+1-z_{k}}{d_{0}-d_{1}+1}<\alpha \leq 1+\beta,\\ (d_{0}+d_{1}+1)\frac{d_{0}+1+z_{k}-(d_{2}-d_{0})\beta}{d_{0}+d_{1}+1}+(d_{2}- d_{0})\beta-(d_{0}+1),\\ \text{if}\;\;1+\beta<\alpha<\frac{d_{0}+1+z_{k}-(d_{2}-d_{0})\beta}{d_{0}+d_{1 }+1},\end{array}\right.\] \[= z_{k}.\] It follows from (4.51) that \(\frac{b_{1}}{z_{k}}+\frac{b_{2}}{z_{k}}<1\). Therefore, by Lemma 2, it follows from (4.56) that (4.31) holds with \(\theta_{2}=\frac{b_{1}+\sqrt{b_{1}^{2}+4b_{2}z_{k}}}{2z_{k}}<1\).
Remark 3(i): When \(\nu=0\) and \(x\) is a \(k\)-sparse vector, from (4.27) and (4.31), we observe that the sequence \(\{x^{p}\}\) generated by HBROT\(\omega\) or HBROTP\(\omega\) converges to \(x\).
(ii) The condition \(n>3k\) in Theorem 2 can be removed. If so, the constant \(\xi_{\sigma}\) will be replaced by \(\max\limits_{q\geq 1}\xi_{q}=\frac{5}{4}\sqrt{2}\) (see Corollary 2). In addition, if \(n>9k\), then \(\sigma=\lceil\frac{n-2k}{k}\rceil\geq 8\). In this case, we see from (4.11) that \(\xi_{\sigma}\) in Theorem 2 can be replaced by \(\min\limits_{q\geq 2}\xi_{q}=\sqrt{2}\).
(iii) When \(\omega=1\), HBROT\(\omega\) and HBROTP\(\omega\) reduce to HBROT and HBROTP, respectively. In this case, the RIP bounds in Theorem 2 are reduced to \(\delta_{3k}<\gamma^{*}(1)\approx 0.2118\) for HBROT and \(\delta_{3k}<\gamma^{\sharp}(1)\approx 0.2079\) for HBROTP.
(iv) It is generally difficult to compute the RIC of the matrix \(A\), and (4.30) is only a sufficient condition for the theoretical performance of HBROTP\(\omega\). In practical implementations, the parameters \((\alpha,\beta)\) in HBROTP may simply be set as \(0\leq\beta<1/4\) and \(\alpha\geq 1+\beta\) to roughly meet the conditions (4.30).
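The thresholds \(\gamma^{*}(\omega)\) and \(\gamma^{\sharp}(\omega)\) appearing in Theorem 2 and in part (iii) of the remark above are defined only implicitly, but they are easy to evaluate numerically since both defining functions are strictly increasing on \((0,1)\). A short bisection sketch is given below; for \(\omega=1\) it should return values close to the 0.2118 and 0.2079 quoted in (iii).

```python
import numpy as np

def G(gamma, omega):
    # G_omega(gamma) defined in (4.25).
    return (2 * omega + 1) * gamma * np.sqrt((1 + gamma) / (1 - gamma)) + gamma

def root_on_unit_interval(f, tol=1e-12):
    # Bisection for the unique root of an increasing function f on (0, 1).
    lo, hi = 1e-12, 1 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega = 1
gamma_star = root_on_unit_interval(lambda t: G(t, omega) - 1)                        # G_omega(gamma) = 1
gamma_sharp = root_on_unit_interval(lambda t: G(t, omega) / np.sqrt(1 - t ** 2) - 1)  # G_omega(gamma)/sqrt(1-gamma^2) = 1
print(gamma_star, gamma_sharp)   # approximately 0.2118 and 0.2079 for omega = 1
```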
## 5 Numerical experiments
Sparse signal and image recovery from measurements \(y=Ax+\nu\), where \(x\) denotes the signal/image to recover, is a typical linear inverse problem. In this section, we provide experimental results for the proposed HBROTP algorithm and compare its performance with several existing methods. The experiments in Sections 5.1 and 5.2 are performed on a server with an Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50 GHz processor and 256 GB memory, while the others are performed on a PC with an Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz processor and 16 GB memory. All involved convex optimization problems are solved by CVX [30] with the
solver _'Mosek'_ [2]. The six algorithms HBROTP, ROTP2, PGROTP, \(\ell_{1}\)-min, OMP and PLB are compared mainly via phase transitions on synthetic data, together with the reconstruction, deblurring and denoising of a few real images.
### Phase transition
The first experiment is carried out to compare the performances of the algorithms except PLB through the phase transition curve (PTC) [5; 6] and the average recovery time. All sparse vectors \(x^{*}\in\mathbb{R}^{n}\) and matrices \(A\in\mathbb{R}^{m\times n}\) are randomly generated, and the positions of the nonzero elements of \(x^{*}\) are chosen uniformly at random. In addition, all columns of \(A\) are normalized, and the entries of \(A\) and the nonzeros of \(x^{*}\) are independent and identically distributed random variables following \(\mathcal{N}(0,1)\). In this experiment, we consider both accurate measurements \(y=Ax^{*}\) and inaccurate measurements \(y=Ax^{*}+\epsilon h\) with fixed \(n=1000\), where \(\epsilon=5\times 10^{-3}\) is the noise level and \(h\in\mathbb{R}^{m}\) is a normalized standard Gaussian noise vector. We let HBROTP start from \(x^{1}=x^{0}=0\) with fixed parameters \(\alpha=5\) and \(\beta=0.2\), while the other algorithms start from \(x^{0}=0\). The maximum number of iterations of HBROTP, ROTP2 and PGROTP is set to \(50\), while OMP is run for exactly \(k\) iterations and \(\ell_{1}\)-min is solved by _'Mosek'_ directly. Given the random data \((A,x^{*})\) or \((A,x^{*},h)\), the recovery is counted as a _'success'_ when the criterion
\[\|x^{p}-x^{*}\|_{2}/\|x^{*}\|_{2}\leq 10^{-3}\]
is satisfied, where \(x^{p}\) is the solution generated by the algorithms.
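For completeness, the random problem instances and the success criterion described above can be reproduced with a few lines of Python; the function names below are ours, and the recovery algorithms themselves are not reproduced here.

```python
import numpy as np

def make_instance(m, n, k, noise_level=0.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Gaussian matrix with normalized columns.
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=0)
    # k-sparse vector: uniformly random support, N(0,1) nonzero entries.
    x_star = np.zeros(n)
    x_star[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    # Accurate (noise_level = 0) or inaccurate measurements y = A x* + eps * h.
    h = rng.standard_normal(m)
    y = A @ x_star + noise_level * h / np.linalg.norm(h)
    return A, x_star, y

def is_success(x_rec, x_star, tol=1e-3):
    # Recovery criterion used in this experiment.
    return np.linalg.norm(x_rec - x_star) / np.linalg.norm(x_star) <= tol
```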
Denote \(\kappa=m/n\) and \(\rho=k/m\), where \(\kappa\) is often called the sampling rate or the compression ratio. In the \((\kappa,\rho)\)-space, the region below the PTC is called the _'success'_ recovery region, where the solution of the SLI problem can be exactly or approximately recovered, while the region above the PTC corresponds to the _'failure'_ region. Thus the wider the region below the PTC, the better the performance of the algorithm. We now briefly describe the mechanism for plotting the PTC, which is taken as the classical \(50\%\) logistic regression curve; more detailed information can be found in [5; 6]. To generate the PTCs, \(13\) groups of \(m=\lceil\kappa\cdot n\rceil\) are considered, where the sampling rate \(\kappa\) ranges from \(0.1\) to \(0.7\) with stepsize \(0.05\). For any given \(m\), the bisection method is used to produce an approximate recovery phase transition region \([k_{\min},k_{\max}]\) for each algorithm, in which the success rate of recovery is at least \(90\%\) for \(k<k_{\min}\) and at most \(10\%\) for \(k>k_{\max}\). The interval \([k_{\min},k_{\max}]\) is then equally divided into \(\min\{k_{\max}-k_{\min},50\}\) parts, and \(10\) problem instances are tested for each \(k\) to estimate the recovery success rate of the given algorithm. The PTCs are then obtained directly from the logistic regression model in [5; 6].
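One way to organize the computation just described is sketched below. The helpers `make_instance` and `is_success` from the previous sketch are assumed to be in scope, `recover` is a placeholder for any of the tested algorithms, the success rate is assumed to be monotone in \(k\), and the tolerances are illustrative rather than the exact ones used for the figures.

```python
import numpy as np

def success_rate(recover, m, n, k, trials=10, noise_level=0.0, seed=0):
    # Empirical recovery success rate over random problem instances.
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(trials):
        A, x_star, y = make_instance(m, n, k, noise_level, rng)
        wins += is_success(recover(A, y, k), x_star)
    return wins / trials

def transition_interval(recover, m, n, trials=10):
    # Bisection-style search for k_min (success rate at least 90% up to k_min)
    # and k_max (success rate at most 10% beyond k_max).
    def largest_k_with_rate_at_least(target):
        lo, hi = 1, m
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if success_rate(recover, m, n, mid, trials) >= target:
                lo = mid
            else:
                hi = mid - 1
        return lo
    return largest_k_with_rate_at_least(0.9), largest_k_with_rate_at_least(0.1)
```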
The PTCs for the tested algorithms are shown in Fig. 1(a) and (b), which correspond to the accurate measurements and to the inaccurate measurements with noise level \(\epsilon=5\times 10^{-3}\), respectively. The results indicate that HBROTP has the highest PTC for \(\kappa\leq 0.5\), in which case its recovery capability is superior to that of the other algorithms in this experiment. However, the PTCs indicate that ROTP2, PGROTP and OMP may perform relatively better than HBROTP for larger \(\kappa\). The comparison of Fig. 1(a) and (b) demonstrates that all algorithms except \(\ell_{1}\)-min remain robust for signal recovery when the measurements are slightly inaccurate. Overall, the performance of HBROTP is very comparable to that of the existing methods in this experiment.
In the intersection of the recovery regions of multiple algorithms, we compare the average CPU time for signal recovery. Specifically, for each given \(\kappa\), we test 10 problem instances for each algorithm on the mesh \((\kappa,\rho)\), where \(\rho\) ranges from 0.02 to 1 with stepsize 0.02 until the success rate of recovery drops below 90%. The ratios of the average computational time of ROTP2, PGROTP, \(\ell_{1}\)-min and OMP against that of HBROTP are displayed in Fig. 2 (a)-(d), respectively. Fig. 2 (a) and (b) show that HBROTP is at least 1.6 times faster than ROTP2 in most areas and slower than PGROTP except in the region \([0.1,0.2]\times[0.02,0.1]\). On the other hand, from Fig. 2 (a)-(d), we observe that the ROT-type algorithms, including HBROTP, ROTP2 and PGROTP, take relatively more time than \(\ell_{1}\)-min and OMP, since they require solving quadratic convex optimization problems.
### Image reconstruction
In this section, we compare the performances of several algorithms on the reconstruction of three images (_Lena_, _Peppers_ and _Baboon_) of size \(512\times 512\). Only accurate measurements are used in the experiment, and the measurement matrices are \(m\times n\) normalized standard Gaussian matrices with \(n=512\) and \(m=\lceil\kappa\cdot n\rceil\), where \(\kappa\) is the sampling rate. The discrete wavelet transform with the _'sym8'_ wavelet is used to establish the sparse representation of the images. The input sparsity level is set to \(k=\lceil n/10\rceil\) for HBROTP, ROTP2 and PGROTP, and the parameters of HBROTP are set as \(\alpha=5\) and \(\beta=0.2\). The peak signal-to-noise ratio (PSNR) is used to compare the reconstruction quality of images, which is defined by
\[\mathrm{PSNR}:=10\cdot\log_{10}(V^{2}/\mathrm{MSE}),\]
where \(MSE\) denotes the mean-squared error between the reconstructed and original image, and \(V\) represents the maximum fluctuation in the original image data type (\(V=255\) is used in our experiments). Clearly, the larger the value of PSNR, the higher the reconstruction quality.
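In code, this quality measure is a one-liner; a minimal sketch (with \(V=255\) for 8-bit images, as used in our experiments) is:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    # PSNR = 10 * log10(V^2 / MSE), with MSE the mean-squared error.
    diff = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
    return 10.0 * np.log10(peak ** 2 / np.mean(diff ** 2))
```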
Figure 1: The 50% success rate phase transition curves for algorithms.
The results in terms of PSNR with sampling rates \(\kappa=0.3,0.4,0.5\) are summarized in Tab. 1, from which we see that HBROTP is always superior to OMP and inferior to \(\ell_{1}\)-min in reconstruction quality. Among the ROT-type algorithms with \(\kappa=0.4,0.5\), the PSNR values of HBROTP exceed those of ROTP2 and PGROTP by at least 1.88 dB for _Lena_ and 1.32 dB for _Peppers_, respectively. In most of the other cases, ROTP2 and PGROTP obtain better results than HBROTP in reconstruction quality, and for _Baboon_ their performance is comparable or superior to that of \(\ell_{1}\)-min. Meanwhile, the visual quality of the images reconstructed by HBROTP with \(\kappa=0.3,0.4,0.5\) is compared in Fig. 3.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline & \(\kappa\) & HBROTP & ROTP2 & PGROTP & \(\ell_{1}\)-min & OMP \\ \hline & 0.3 & 32.60 & 32.34 & 33.12 & 33.63 & 31.37 \\ Lena & 0.4 & 34.37 & 32.49 & 31.75 & 35.10 & 32.95 \\ & 0.5 & 35.63 & 33.11 & 31.93 & 37.04 & 34.34 \\ \hline & 0.3 & 31.31 & 32.33 & 33.27 & 33.03 & 30.17 \\ Peppers & 0.4 & 33.10 & 31.78 & 31.60 & 34.08 & 31.66 \\ & 0.5 & 34.23 & 32.04 & 31.10 & 35.90 & 33.38 \\ \hline & 0.3 & 28.70 & 31.35 & 32.33 & 29.90 & 28.35 \\ Baboon & 0.4 & 29.12 & 30.06 & 30.00 & 30.05 & 28.53 \\ & 0.5 & 29.37 & 30.06 & 30.07 & 30.20 & 28.78 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of PSNR (dB) for algorithms with different sampling rates.
Figure 2: The ratios of average CPU time of the algorithms.
It can be seen that the reconstruction quality improves significantly for all three images as the sampling rate \(\kappa\) increases from 0.3 to 0.5, and the best visual results are achieved around \(\kappa=0.5\).
### Image deblurring and denoising
In this section, we compare the performances of HBROTP and PLB on image deblurring and denoising. In our experiments, several images including _Boats_, _Cameraman_, _Clock_, _Goldhill_ and _Shepp-Logan_ of size \(128\times 128\) are expressed as vectors in \(\mathbb{R}^{n}\) with \(n=16384\) through concatenating their columns. For a given image \(z\), the corresponding blurred noisy image \(y\in\mathbb{R}^{n}\) is obtained by (1.1), in which \(\Phi\in\mathbb{R}^{n\times n}\) is the blurring matrix generated by a Gaussian kernel _fspecial('Gaussian',11,0.6)_ in Matlab with periodic boundary condition (see Chapter 4 in [29]), and \(\nu\) is a Gaussian white noise vector with mean 0 and standard deviation \(\hat{\sigma}\). The sparse representation of \(z\) is expressed as \(z=\Psi x\), where \(\Psi\) is taken as the synthesis operator generated by the linear B-splines [1; 11], denoted \(\Psi_{1}\), or the discrete wavelet
Figure 3: Performance of HBROTP for three images with different sampling rates.
matrix generated by the _'sym8'_ wavelet, denoted \(\Psi_{2}\). Thus the image deblurring and denoising can be achieved by solving the corresponding SLI problem (1.2).
For HBROTP, the discrete wavelet transform, i.e., \(\Psi=\Psi_{2}\) is used to achieve the sparse representation of the image, and the parameters in this algorithm are set as \(k=\lceil 0.4n\rceil\), \(\alpha=1\) and \(\beta=0.8\). For PLB, we use \(\mathrm{PLB}_{i}\) to represent PLB with \(\Psi=\Psi_{i}\) for \(i=1,2\), and the parameters \((\mu,d,\delta)\) are given as follows: \(\mu=0.05\) is determined experimentally in terms of PSNR; the dimension of Krylov subspace is set as \(d=11\) according to the suggestion in [1]; \(\delta\) is the same as that of [11]. The stopping criterion of the algorithm is given by
\[\|x^{p+1}-x^{p}\|_{2}/\|x^{p+1}\|_{2}\leq 10^{-4}.\]
The results in terms of CPU time and PSNR for HBROTP and PLB on image deblurring and denoising with two different standard deviations \(\hat{\sigma}=2,4\) are given in Tab. 2. In the case \(\hat{\sigma}=2\), the PSNR values of HBROTP exceed those of \(\mathrm{PLB}_{1}\) and \(\mathrm{PLB}_{2}\) by at least \(1.6\) dB for all images except _Boats_ and _Goldhill_. As the noise intensity increases, the differences in PSNR values between HBROTP and \(\mathrm{PLB}_{i}(i=1,2)\) are enlarged to at least \(2.2\) dB for all images when \(\hat{\sigma}=4\). This experiment shows that HBROTP can be stronger than PLB on image deblurring and denoising, and that HBROTP is more stable than PLB in noisy situations. However, solving the quadratic subproblem (2.5) causes the HBROTP method to consume more time than \(\mathrm{PLB}_{1}\) and \(\mathrm{PLB}_{2}\). Moreover, \(\mathrm{PLB}_{1}\) is faster than \(\mathrm{PLB}_{2}\) since the synthesis operator \(\Psi_{1}\) is sparser than the discrete wavelet matrix \(\Psi_{2}\). Finally, the deblurring/denoising effects of HBROTP and \(\mathrm{PLB}_{1}\) on _Cameraman_ and _Shepp-Logan_ with \(\hat{\sigma}=2\) are shown in Fig. 4, from which it can be observed that both HBROTP and \(\mathrm{PLB}_{1}\) successfully recover the two images in high quality.
## 6 Conclusions
New algorithms that combine the optimal \(k\)-thresholding technique with the heavy-ball acceleration are proposed in this paper. Such algorithms can be seen as accelerated
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{
\begin{tabular}{c} Standard \\ deviation \\ \end{tabular} } & \multirow{2}{*}{Images} & \multicolumn{4}{c|}{PSNR(dB)} & \multicolumn{4}{c|}{CPU time(seconds)} \\ \cline{3-8} & & \(\mathrm{PLB}_{1}\) & \(\mathrm{PLB}_{2}\) & HBROTP & \(\mathrm{PLB}_{1}\) & \(\mathrm{PLB}_{2}\) & HBROTP \\ \hline \multirow{6}{*}{\(\hat{\sigma}=2\)} & Barbara & 35.34 & 35.34 & 37.75 & 0.80 & 4.31 & 2291 \\ & Boats & 35.53 & 35.52 & 35.34 & 0.83 & 3.30 & 2049 \\ & Cameraman & 35.58 & 35.57 & 37.26 & 0.83 & 2.80 & 1383 \\ & Clock & 35.66 & 35.65 & 38.12 & 0.59 & 2.13 & 1305 \\ & Goldhill & 35.34 & 35.33 & 35.05 & 0.70 & 2.81 & 2319 \\ & Shepp-Logan & 35.54 & 35.51 & 38.72 & 0.86 & 3.28 & 1764 \\ \hline \multirow{6}{*}{\(\hat{\sigma}=4\)} & Barbara & 30.74 & 30.74 & 33.86 & 0.94 & 4.03 & 2071 \\ & Boats & 30.80 & 30.80 & 33.06 & 0.38 & 3.11 & 2411 \\ \cline{1-1} & Cameraman & 30.79 & 30.79 & 33.53 & 0.78 & 4.14 & 1370 \\ \cline{1-1} & Clock & 30.90 & 30.90 & 33.44 & 0.53 & 3.28 & 1737 \\ \cline{1-1} & Goldhill & 30.76 & 30.75 & 33.02 & 0.80 & 3.95 & 2444 \\ \cline{1-1} & Shepp-Logan & 30.93 & 30.92 & 33.70 & 0.61 & 4.66 & 1381 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of PSNR (dB) and CPU time (in seconds) of HBROTP and PLB on image deblurring and denoising with different standard deviation \(\hat{\sigma}\).
versions of the optimal \(k\)-thresholding methods. The solution error bounds and the convergence of the proposed algorithms have been established mainly under the RIP of the measurement matrix. The numerical performance of the proposed HBROTP algorithm has been evaluated through phase transitions, average runtime and image processing. The experimental results indicate that HBROTP is a robust signal recovery method, especially when the sampling rate is relatively low (e.g., \(\kappa\leq 0.5\)), and that it is generally faster than the standard ROTP method thanks to the heavy-ball acceleration technique.
|
2306.04078 | Parametrically driven pure-Kerr temporal solitons in a chip-integrated
microcavity | The discovery that externally-driven nonlinear optical resonators can sustain
ultrashort pulses corresponding to coherent optical frequency combs has enabled
landmark advances in applications from telecommunications to sensing. The main
research focus has hitherto been on resonators with purely cubic (Kerr-type)
nonlinearity that are externally-driven with a monochromatic continuous wave
laser -- in such systems, the solitons manifest themselves as unique attractors
whose carrier frequency coincides with that of the external driving field.
Recent experiments have, however, shown that a qualitatively different type of
temporal soliton can arise via parametric down-conversion in resonators with
simultaneous quadratic and cubic nonlinearity. In contrast to conventional
solitons in pure-Kerr resonators, these parametrically driven solitons come in
two different flavours with opposite phases, and they are spectrally centred at
half of the frequency of the driving field. Here, we theoretically predict and
experimentally demonstrate that parametrically driven solitons can also arise
in resonators with pure Kerr nonlinearity under conditions of bichromatic
driving. In this case, the solitons arise through four-wave mixing mediated
phase-sensitive amplification, come with two distinct phases, and have a
carrier frequency in between the two external driving fields. Our experiments
are performed in an integrated silicon nitride microcavity, and we observe
frequency comb spectra in good agreement with theoretical predictions. In
addition to representing a fundamental discovery of a new type of temporal
dissipative soliton, our results constitute the first unequivocal realisation
of parametrically driven soliton frequency combs in a microcavity platform
compatible with foundry-ready mass fabrication. | Grégory Moille, Miriam Leonhardt, David Paligora, Nicolas Englebert, François Leo, Julien Fatome, Kartik Srinivasan, Miro Erkintalo | 2023-06-07T00:39:53Z | http://arxiv.org/abs/2306.04078v1 | # Parametrically driven pure-Kerr temporal solitons in a chip-integrated microcavity
###### Abstract
The discovery that externally-driven nonlinear optical resonators can sustain ultrashort pulses corresponding to coherent optical frequency combs has enabled landmark advances in applications from telecommunications to sensing. The main research focus has hitherto been on resonators with purely cubic (Kerr-type) nonlinearity that are externally-driven with a monochromatic continuous wave laser - in such systems, the solitons manifest themselves as unique attractors whose carrier frequency coincides with that of the external driving field. Recent experiments have, however, shown that a qualitatively different type of temporal soliton can arise via parametric down-conversion in resonators with simultaneous quadratic and cubic nonlinearity. In contrast to conventional solitons in pure-Kerr resonators, these _parametrically driven solitons_ come in two different flavours with opposite phases, and they are spectrally centred at half of the frequency of the driving field. Here, we theoretically predict and experimentally demonstrate that parametrically driven solitons can also arise in resonators with pure Kerr nonlinearity under conditions of bichromatic driving. In this case, the solitons arise through four-wave mixing mediated phase-sensitive amplification, come with two distinct phases, and have a carrier frequency in between the two external driving fields. Our experiments are performed in an integrated silicon nitride microcavity, and we observe frequency comb spectra in good agreement with theoretical predictions. In addition to representing a fundamental discovery of a new type of temporal dissipative soliton, our results constitute the first unequivocal realisation of parametrically driven soliton frequency combs in a microcavity platform compatible with foundry-ready mass fabrication.
## I Introduction
The injection of monochromatic continuous wave (CW) laser light into dispersive optical resonators with purely Kerr-type \(\chi^{(3)}\) nonlinearity can lead to the generation of localized structures known as dissipative Kerr cavity solitons (CSs) [1; 2]. These CSs correspond to ultrashort pulses of light that can persist within the resonator [Fig. 1(a)], indefinitely maintaining constant shape and energy [3]. While first observed in macroscopic optical fiber ring resonators [1], CSs have attracted particular attention in the context of monolithic Kerr microcavities [2], where they underpin the generation of coherent and broadband optical frequency combs [4; 5; 6]. By offering a route to coherent frequency comb generation in chip-integrated, foundry-ready platforms, CSs have enabled groundbreaking advances in applications including telecommunications [7; 8], artificial intelligence [9; 10], astronomy [11; 12], frequency synthesis [13], microwave generation [14; 15], and distance measurements [16; 17].
The conventional CSs that manifest themselves in resonators with pure Kerr nonlinearity sit atop a CW background, and they gain their energy through four-wave-mixing (FWM) interactions with that background [1]. In the frequency domain, the solitons are (to first order) centred around the frequency of the external CW laser that drives the resonator [Fig. 1(a)]. They are (barring some special exceptions [18; 19; 20; 21; 22]) unique attracting states: except for trivial time translations, all the CSs that exist for given system parameters are identical. These features can be disadvantageous or altogether prohibitive for selected applications: noise on the external CW laser can degrade the coherence of nearby comb lines, removal of the CW background may require careful spectral filtering, whilst applications that require coexistence of distinguishable binary elements [23; 24; 25; 26; 27] are fundamentally beyond reach. Interestingly, recent experiments reveal that qualitatively different types of CSs can exist in resonators that display a quadratic \(\chi^{(2)}\) in addition to a cubic \(\chi^{(3)}\) nonlinearity [Fig. 1(b)]; in particular, degenerate optical parametric oscillators driven at \(2\omega_{0}\) can support CSs at \(\omega_{0}\) [28; 29]. In this configuration, the solitons are _parametrically driven_ through the quadratic down-conversion of the externally-injected field, which endows them with fundamental differences compared to the conventional CSs emerging in monochromatically-driven, pure-Kerr resonators. Specifically, _parametrically driven cavity solitons_ (PDCSs) are spectrally separated from the driving frequency (e.g. \(\omega_{0}\) versus \(2\omega_{0}\)), and they come in two binary forms with opposite phase. These traits render PDCSs of interest for an altogether new range of applications.
Optical PDCSs have so far been generated only via the quadratic \(\chi^{(2)}\) nonlinearity, which is not intrinsically available in integrated (foundry-ready) resonator platforms, such as silicon [30] or silicon nitride [31; 32; 33]. However, it is well-known that phase-sensitive amplification analogous to \(\chi^{(2)}\) parametric down-conversion can also be realised in pure Kerr resonators when driven with two lasers with different carrier frequencies [34; 35; 36; 37; 38], allowing e.g. for novel random number generators [23; 24; 25] and coherent optical Ising machines [26; 27]. A natural question that arises is: is it possible to generate PDCSs in foundry-ready, pure-Kerr resonators with bichromatic driving? Whilst a related question has been theoretically explored in the context of _diffractive_ Kerr-only resonators [39], the presence of _dispersion_ substantially changes the physics of the problem. The impact of bichromatic driving in the dynamics of conventional Kerr CSs has also been considered [40; 41; 42; 43; 44; 45], but the possibility of using the scheme to generate temporal PDCSs remains unexplored.
Here, we theoretically predict and experimentally demonstrate that a dispersive resonator with pure Kerr nonlinearity can support PDCSs in the presence of bichromatic driving [Fig. 1(c)]. We reveal that, under appropriate conditions, a signal field with carrier frequency in between two spectrally-separated driving fields obeys the damped, parametrically driven nonlinear Schrodinger equation (PDNLSE) that admits PDCS solutions, and we unveil the system requirements for the practical excitation of such solutions. Our experiments are performed in a 23 \(\mu\)m-radius, chip-integrated silicon nitride microring resonator whose dispersion is judiciously engineered to facilitate PDCS generation at 253 THz (1185 nm) when bichromatically pumping at 314 THz (955 nm) and 192 THz (1560 nm). We observe PDCS frequency comb spectra that are in good agreement with numerical simulations, as well as clear signatures of the anticipated \(\mathbb{Z}_{2}\) symmetry, i.e., coexistence of two PDCSs with opposite phase. By revealing a fundamentally new pathway for the generation of coherent PDCS frequency combs far from any pump frequency, in a platform that has direct compatibility with foundry-ready fabrication, our work paves the way for integrated, low-noise frequency comb generation in new spectral regions, as well as photonic integration of applications requiring combs with a binary degree of freedom.
## II Results
We first summarise the main points that lead to the prediction of PDCSs in bichromatically-driven Kerr resonators [for full details, see Methods]. To this end, we consider a resonator made out of a dispersive, \(\chi^{(3)}\) nonlinear waveguide that is driven with two coherent CW fields with angular frequencies \(\omega_{\pm}\) [see Fig. 1(c)]. The dispersion of the resonator is described by the integrated dispersion [6] at the cavity resonance \(\omega_{0}^{\prime}\) (apostrophes highlight resonance frequencies throughout the article) closest to the frequency \(\omega_{0}=(\omega_{+}+\omega_{-})/2\):
\[D_{\text{int}}(\mu)=\omega_{\mu}^{\prime}-\omega_{0}^{\prime}-\mu D_{1}=\sum_ {k\geq 2}\frac{D_{k}}{k!}\mu^{k}. \tag{1}\]
Here, \(\mu\) is a relative mode number with respect to the resonance \(\omega_{0}^{\prime}\) and \(D_{1}/(2\pi)\) is the cavity free-spectral range (FSR) at \(\omega_{0}^{\prime}\). The terms \(D_{k}\) with \(k>1\) account for deviations of the resonance frequencies \(\omega_{\mu}^{\prime}\) from an equidistant grid defined by \(\omega_{0}^{\prime}+\mu D_{1}\).
Under particular conditions [see Methods], the evolution of the slowly-varying electric field envelope centred at \(\omega_{0}\) can be shown to be (approximately) governed by the PDNLSE, with the parametric driving ensuing from non-degenerate FWM driven by the intracavity fields at the pump frequencies \([\omega_{+}+\omega_{-}\rightarrow\omega_{\mu}+\omega_{-\mu}\), see Fig. 1(d)]. (Note: in stark contrast to standard Kerr CSs, for which only _one_ comb line is externally driven, _all_ of the components of a PDCS frequency comb are separately driven via non-degenerate FWM.) Because the PDNLSE is well-known to admit PDCS solutions [46; 28; 47], it follows that the system may support such solitons with a carrier frequency \(\omega_{0}\) in between the two driving frequencies, provided however that the system parameters - particularly resonator dispersion - are conducive for soliton existence.
The resonator dispersion must meet three key conditions for PDCS excitation to be viable [Methods]. First, for solitons to exist, the dispersion around the degenerate FWM frequency \(\omega_{0}\) must be anomalous, i.e., \(D_{2}>0\) in Eq. (1). Second, the effective detuning [see Methods] between the degenerate FWM frequency (\(\omega_{0}\)) and the closest cavity resonance (\(\omega_{0}^{\prime}\)) must be within the range of soliton existence, essentially requiring that the degenerate FWM process \(\omega_{+}+\omega_{-}\to 2\omega_{0}\) (approximately) satisfies linear phase-matching [Fig. 1(e)]. This second condition can be written as \(\delta\omega=(\omega_{+}^{\prime}+\omega_{-}^{\prime})/2-\omega_{0}^{\prime}=[ D_{\text{int}}(p)+D_{\text{int}}(-p)]/2\approx 0\),
where \(\pm p\) correspond to the modes excited by the driving lasers at \(\omega_{\pm}\). Given that \(D_{2}>0\), this requires at least one higher-even-order dispersion coefficient (e.g. \(D_{4}\)) to be negative. Third, the intracavity field amplitudes at the driving frequencies, \(|E_{\pm}|\), must remain (approximately) homogeneous and stationary to ensure a constant parametric driving strength for the PDCS field \(E_{0}\) centred at \(\omega_{0}\) [Fig. 1(f)]. This final condition can be met by ensuring dispersion at the driving frequencies is (i) normal (or driving amplitudes small), such that the corresponding intracavity fields do not undergo pattern forming (modulation) instabilities [48], and (ii) such that the temporal walk-off between the driving frequencies \(\omega_{\pm}\) and the signal frequency \(\omega_{0}\) is sufficiently large so as to mitigate pump depletion in the vicinity of the soliton
that would otherwise break the homogeneity of the fields at \(\omega_{\pm}\) [Fig. 1(f)]. As will be demonstrated below, all of these conditions can be met through judicious dispersion engineering that is within the reach of contemporary microphotonic fabrication.
**Simulations.** Before discussing our experiments, we present results from numerical simulations that illustrate the salient physics. Our simulations are based upon a full iterative "Ikeda" map of the system without any approximations [Methods], and they consider a toy resonator with 25 GHz FSR and minimal dispersion necessary for PDCS existence [see Fig. 2(a)]. Specifically, we assume a quartic dispersion with \(D_{2}=2\pi\times 4.1\) kHz and \(D_{4}=-2\pi\times 33\) mHz, yielding \(D_{\mathrm{int}}(p)+D_{\mathrm{int}}(-p)\approx 0\) for pump frequency shift \(\Omega_{\mathrm{p}}=2\pi\times 30.4\) THz (corresponding to mode number \(p=1217\)). We assume for simplicity that the two driving fields are coincident on their respective linear cavity resonances (zero detuning), and both carry CW laser power of about 140 mW [see Methods for other parameters]. Because the group-velocity dispersion at the pump frequencies is _normal_, modulational instabilities are suppressed and the intracavity fields converge to stable homogeneous states with equal circulating CW power of about 43 W, thus yielding an effective parametric driving strength and detuning within the regime of PDCS existence [see Methods].
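As a simple consistency check, the minimal Python sketch below (ours, not part of the original analysis) evaluates the quartic integrated dispersion quoted above and scans for the pump mode number at which \(\delta\omega(p)=[D_{\mathrm{int}}(p)+D_{\mathrm{int}}(-p)]/2\) crosses zero; with the rounded coefficients it returns a mode number close to the quoted \(p=1217\).

```python
import numpy as np

# Quartic toy dispersion quoted above (angular units, rad/s)
D2 = 2 * np.pi * 4.1e3      # anomalous dispersion at the signal frequency
D4 = -2 * np.pi * 33e-3     # negative quartic term enables phase matching

def D_int(mu):
    """Integrated dispersion, Eq. (1), truncated at fourth order."""
    return D2 * mu**2 / 2 + D4 * mu**4 / 24

def delta_omega(p):
    """Frequency mismatch of degenerate FWM for pump modes +p and -p."""
    return 0.5 * (D_int(p) + D_int(-p))

p_grid = np.arange(1, 3000)
p_match = p_grid[np.argmin(np.abs(delta_omega(p_grid)))]
print(p_match)                       # ~1220 with these rounded coefficients, close to the quoted 1217
print(p_match * 25e9 / 1e12, "THz")  # ~30.5 THz pump shift for a 25 GHz FSR, cf. the quoted 30.4 THz
```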
Figure 2(b) shows the evolution of the numerically simulated intracavity intensity profile with an initial condition consisting of two hyperbolic secant pulses with opposite phases. As can be seen, after a short transient, the field reaches a steady-state that is indicative of two pulses circulating around the resonator. The pulses sit atop a rapidly oscillating background that is due to the beating between the quasi-homogeneous fields at the pump frequencies [Fig. 2(c)]. Correspondingly, the spectrum of the simulation output [Fig. 2(d)] shows clearly the presence of a hyperbolic secant-shaped feature that sits in between the strong quasi-monochromatic components at the pump frequencies. In accordance with PDCS theory [see Methods], there is no significant CW peak at the parametric signal frequency \(\omega_{0}\) at which the solitons are
Figure 1: **Comparison of platforms and schematic illustration of PDCS generation in Kerr resonators.** (a) Conventional Kerr CSs [1; 2; 4] form around the input frequency \(\omega_{0}\) in dispersive resonators with \(\chi^{(3)}\) Kerr nonlinearity; for given parameters, all solitons in the resonator are identical. (b) Parametric down-conversion of an input field at \(2\omega_{0}\) can yield PDCSs at \(\omega_{0}\) in a resonator with combined \(\chi^{(2)}\) and \(\chi^{(3)}\) nonlinearity [28; 29]. Here, the solitons come in two forms with opposite phase [28]: the complex soliton electric field \(E_{\pm}(\tau)\propto\pm E_{0}(\tau)\exp(i\omega_{0}\tau)\), where \(\pm E_{0}(\tau)\) is the slowly-varying envelope [real part visualised in (a)–(c)]. (c) In the bichromatically driven Kerr resonator configuration studied in this work, PDCSs arise in between two input frequencies \(\omega_{\pm}\). (d) Cartoon of the non-degenerate FWM process (and corresponding energy flow diagram) through which the intracavity fields at \(\omega_{\pm}\) provide coherent parametric driving to _all_ of the PDCS comb lines around \(\omega_{0}\). Note that this energy flow is in contrast to the standard Kerr CS case in (a), where only the mode \(\omega_{0}\) is driven. The shaded curves in the background of (d) depict cavity modes. (e) PDCSs arise under conditions close to linear phase-matching of degenerate FWM, which in terms of cavity modes occurs when the frequency deviation \(\delta\omega=(\omega_{+}^{\prime}+\omega_{-}^{\prime})/2-\omega_{0}^{\prime}\approx 0\), with \(\omega_{\pm}^{\prime}\) the driven cavity modes and \(\omega_{0}^{\prime}\) the mode closest to \(\omega_{0}\). (f) Illustrative slowly-varying electric field amplitudes around the parametric signal frequency \(\omega_{0}\) (\(E_{0}\), green) and the pump frequencies \(\omega_{\pm}\) (\(E_{\pm}\), blue and red). The fields \(E_{\pm}\) must be approximately CW to ensure homogeneous parametric driving strength, calling for (i) sufficient dispersive walk-off to mitigate pump depletion and (ii) suppression of modulation instabilities. (g) Because the full intracavity amplitude consists of a superposition of the \(E_{\pm}\) and \(E_{0}\) fields, the PDCS manifests itself as a localized structure amidst a rapidly oscillating background.
spectrally centred. To highlight the phase disparity of the steady-state pulses, we apply a numerical filter to remove the quasi-monochromatic intracavity components around the pump frequencies, and plot in Fig. 2(e) the real part of the complex intracavity electric field envelope. The simulation results in Fig. 2(e) are compared against the real parts of the exact, analytical PDCS solutions [Methods], and we clearly observe excellent agreement.
The results in Fig. 2(a)-(e) corroborate the fundamental viability of our scheme. However, they were obtained assuming a completely symmetric dispersion profile with no odd-order terms, which may be difficult to realise even with state-of-the-art microphotonic fabrication (including the resonators considered in our experiments). We find, however, that PDCSs can exist even in the presence of odd-order-dispersion, albeit in a perturbed form. This point is highlighted in Figs. 2(f)-(h), which show results from simulations with all parameters as in Fig. 2(a)-(e) except an additional non-zero third-order dispersion term \(D_{3}=-2\pi\times 58\) Hz. As for conventional (externally-driven) Kerr CSs [5; 49; 50; 51], we find that third-order dispersion causes the solitons to emit dispersive radiation at a spectral position determined by the phase-matching condition \(D_{\text{int}}(\mu_{\text{DW}})\approx(\omega_{0}-\omega_{0}^{\prime})\) [Fig. 2(g)]. This emission results in the solitons experiencing constant drift in the temporal domain, and endows them with oscillatory tails [Fig. 2(h)]. Yet, as can clearly be seen, the PDCSs continue to exist
Figure 2: **Illustrative simulations of PDCSs in dispersive Kerr resonators.** (a)–(e) Simulation results obtained for a 25 GHz toy resonator with \(D_{2}=2\pi\times 4.1\) kHz and \(D_{4}=-2\pi\times 33\) mHz, yielding the integrated dispersion in (a). The shaded gray region highlights a region of anomalous (A) dispersion sandwiched between regions of normal (N) dispersion. Note that, because \(D_{\text{int}}\) is completely symmetric in this example, the frequency mismatch \(\delta\omega(p)=D_{\text{int}}(p)\), such that the pump frequencies satisfying linear phase-matching \(\delta\omega\approx 0\) can be directly read off the graph: the dashed vertical lines in (a) indicate those pump frequencies and were used in the simulations. (b) Dynamical evolution of two hyperbolic secant pulses with opposite phase. The colormap depicts instantaneous power in Watts. (c) Temporal intensity profile around one of the steady-state solitons at the output of the simulation in (b). (d) Optical spectrum corresponding to the output of the simulation in (b). (e) The red dashed curve shows the real part of the analytical PDCS solution [see Methods] while the solid blue curve shows the real part of the simulated intracavity field about zero frequency shift. The simulation result was obtained by first spectrally filtering out the intracavity fields at the pump frequencies (green dashed area in (d) indicates the filter passband). The orange curve shows the (mean-subtracted) total field amplitude for reference. (f)–(h) Simulation results with parameters as in (a)–(e) but with an additional third-order dispersion term \(D_{3}=-2\pi\times 58\) Hz, yielding the integrated dispersion (left axis) and corresponding group-velocity dispersion \(D_{2}\) (right axis) shown in (f). (g) PDCS spectrum in the presence of third-order dispersion; vertical red dashed line indicates the predicted dispersive wave position. The black vertical dash-dotted line in (f) and (g) indicate the zero-dispersion point that demarcates regions of normal (N) and anomalous (A) dispersion. (h) The blue and orange curves are as in (e) but with third-order dispersion. No analytical solution exist in the presence of third-order dispersion.
in two distinct forms with near-opposite phase. It is worth noting that, for the parameters considered in Fig. 2(f)-(h), the low-frequency driving field experiences anomalous group-velocity dispersion; however, the intracavity intensity at that frequency is below the modulation instability threshold [48], thus allowing the corresponding field to remain quasi-homogeneous (the modulation on the total intensity profile arises solely from the linear beating between the different fields).
**Experiments.** For experimental demonstration [see Fig. 3(a) and Methods], we use a microring resonator made from a 690 nm-thick, 850 nm-wide silicon nitride layer embedded in fused silica, fabricated in a commercial foundry. The ring exhibits a radius of 23 \(\mu\)m, thus yielding a free-spectral range of about 1 THz. We use two external cavity diode lasers to drive the resonator: one tunable in the telecommunications C-band (from 186 THz to 198 THz, i.e., from 1613 nm to 1515 nm) and the other tunable from 306 THz to 330 THz (980 nm to 910 nm). Both driving fields are optically amplified and combined using a wavelength-division multiplexer (WDM) before being coupled into the resonator via a pulley scheme that ensures efficient coupling at all the relevant frequencies [52]. At the output of the resonator, 90% of the signal is routed to an optical spectrum analyzer for analysis. The remaining 10% is passed through a bandpass filter to remove spectral components around the driving frequencies, thus allowing us to isolate the parametrically-generated signal field for characterisation.
The orange curve in Fig. 3(b) depicts an estimate of the resonator's integrated dispersion around a cavity mode at 253 THz, obtained through a combination of finite-element-modelling and fitting to our experimental observations [see Methods]. This data is _consistent_ with experimentally measured resonance frequencies (blue circles), yet we caution that our inability to probe the resonances around 253 THz prevents unequivocal evaluation of the dispersion at that frequency. The estimated dispersion can be seen to be such that the requisite phase-matching for generating a PDCS at 253 THz (\(\delta\omega\approx 0\)) can be satisfied, provided that the pump lasers are configured to drive cavity modes at 314 THz and 192 THz [Fig. 3(c)].
In our experiments, we set the on-chip driving power for both driving fields to be about 150 mW and tune the high-frequency pump to the cavity mode at 314 THz. We then progressively tune the low-frequency pump to the cavity mode at 192 THz (from blue to red), maintaining the high-frequency pump at a fixed frequency. As the low-frequency pump tunes into resonance, we initially observe non-degenerate parametric oscillation characterised by the generation of two CW components symmetrically detuned about 253 THz. These CW components progressively shift closer to each other as the pump tunes into the resonance, concomitant with the formation of a frequency comb around the degenerate FWM frequency \(\omega_{0}\) [see Fig. 3(d)]. To characterise the comb noise, we performed a heterodyne beat measurement using a helper laser at 230 THz within the vicinity of a single comb line. Initially, no beat note is observed, which is characteristic of an unstable, non-solitonic state within the resonator. Remarkably, as the 192 THz driving field is tuned further into resonance, we observe that the parametric signals reach degeneracy, concomitant with the emergence of a broadband comb state with smooth spectral envelope [Fig. 3(e)] and a heterodyne beat note (comparable with the helper laser linewidth of 250 kHz) that is considerably narrower than the 300 MHz microcavity linewidth [Fig. 3(f) and Supplementary Figure 1].
The emergence of the smooth comb state [Fig. 3(e)] is associated with an abrupt drop in the photodetector signal recorded around 253 THz, giving rise to a noticeable step-like feature [Fig. 3(g)]. Similar steps are well-known signatures of conventional CSs in monochromatically-driven Kerr resonators [4]. Moreover, as shown in Fig. 3(e), the smooth spectral envelope observed in the step-region is in very good agreement with the spectrum of a 24 fs (full-width at half-maximum) PDCS derived from numerical modelling that uses estimated experimental parameters [see Supplementary Figure 2]. The simulations faithfully reproduce the main features of the experimentally observed spectrum, including a strong dispersive wave peak at about 210 THz. We note that the prominent dip at about 275 THz arises due to the frequency-dependence of the pulley coupler [52], which was taken into account _ad hoc_ when estimating the spectrum of the out-coupled PDCS [shown as blue curve in Fig. 3 - see also Methods and Supplementary Figure 2].
It is interesting to note that, in addition to the frequency comb around the degenerate FWM frequency 253 THz, frequency combs arise also around both of the pump frequencies. These combs originate from FWM interactions between the pump fields and the comb lines around 253 THz, in a manner similar to spectral extension [42; 43; 44] and two-dimensional frequency comb [53] schemes studied in the context of conventional Kerr CSs. The combs around the pump frequencies share the line spacing with the comb around 253 THz, but there is a constant offset between the pump and PDCS combs. In our experiments, this comb offset is directly observable in the optical spectrum [inset of Fig. 3(e)] and found to be about 50 GHz \(\pm\) 2 GHz (uncertainty defined by the optical spectrum analyzer resolution), which is in good agreement with the value of 49 GHz predicted by our modelling [see Methods]. All in all, given the considerable uncertainties in key experimental parameters (particularly dispersion and detunings), we find the level of agreement between the simulations and experiments remarkable.
The results shown in Fig. 3 are strongly indicative of PDCS generation in our experiments. Further confirmation is provided by observations of low-noise combs with
complex spectral structures that afford a straightforward interpretation in terms of multi-PDCS states [Fig. 4]. Specifically, whilst a single PDCS circulating in the resonator is expected to yield a smooth spectral envelope, the presence of two (or more) PDCSs results in a spectral interference pattern whose details depend upon the soliton's relative temporal delay and - importantly - phase.
Figures 4(a) and (b) show selected examples of multi-soliton comb spectra measured in our experiments. Also shown as solid curves are spectral envelopes corresponding to fields with two linearly superposed, temporally delayed PDCSs [Figs. 4(c) and (d) and Methods]. We draw particular attention to the fact that, in the measured data shown in Fig. 4(b), the comb component at the degen
Figure 3: **Experimental observation of pure-Kerr temporal PDCSs in an on-chip microcavity.** (a) Experimental setup. EDFA, erbium-doped fiber amplifier; TA, semiconductor taper amplifier; WDM, wavelength division multiplexer; OSA, optical spectrum analyzer; BPF, band-pass filter; LO, local oscillator; ESA, electrical spectrum analyzer; Osc., oscilloscope. (b) Orange curve shows integrated dispersion around 253 THz used to simulate our experiments while blue circles show the dispersion fitted from experimental data [see Methods]. (c) Linear phase-mismatch for the degenerate FWM process computed from the integrated dispersion data in (b). The fit uncertainties for (b) and (c) are smaller than the circle markers shown [Methods]. (d) Experimentally measured spectra as the low-frequency pump (P1) tunes into resonance from the blue (high-frequency pump P2 kept fixed). (e) As the low-frequency pump tunes in sufficiently, the frequency comb spectrum abruptly transitions into a smooth envelope. This transition is indicative of a PDCS comb; the experimentally measured comb spectrum is in good agreement with a numerically simulated PDCS spectrum (blue curve, see Methods). The inset in (e) highlights the offset between the PDCS frequency comb and the comb around the P2 pump frequency. The arrows across (d) and (e) highlight the red-shift of the pump P1. (f) Heterodyne beat note observed in the PDCS regime (instrument resolution bandwidth is 10 kHz). (g) Photodetector signal as the low-frequency pump is tuned across a resonance, revealing a step feature that coincides with the emergence of the smooth PDCS comb envelope in (e).
erate FWM frequency \(\omega_{0}\) is suppressed by about 40 dB compared to neighbouring lines, which is in stark contrast with results in Fig. 4(a), in which the degenerate FWM component is dominant. This suppression is indicative of a relative phase shift of \(0.992\pi\) between the two solitons [Fig. 4(d)] - a clear signature of PDCSs.
## Discussion
We have shown theoretically, numerically, and experimentally that dispersive resonators with a purely Kerr-type \(\chi^{(3)}\) nonlinearity can support parametrically driven cavity solitons under conditions of bichromatic driving. Our theoretical analysis has revealed the salient conditions that the system dispersion must meet to allow for PDCS persistence, with approximation-free numerical simulations confirming the fundamental viability of the scheme. Experimentally, we realise suitable dispersion conditions in a chip-integrated silicon nitride microresonator, observing low-noise frequency comb states that evidence PDCS generation. Significantly, our measurements show spectral interference patterns that indicate the co-existence of two localized structures with opposite phase - a defining feature of PDCSs.
Our work fundamentally predicts and demonstrates that dispersive Kerr resonators can support a new type of dissipative structure - the PDCS - in addition to conventional Kerr CSs. We envisage that studying the rich nonlinear dynamics [54, 55, 56], interactions [57, 58, 59], and characteristics (including quantum [60, 61, 62, 63]) of pure-Kerr PDCSs will draw substantial future research interest, echoing the extensive exploration of conventional Kerr CS dynamics over the past decade [54, 55, 56, 57, 58, 59, 60, 61, 62, 63]. In this context, to the best of our knowledge, the results reported in our work represent the first prediction and observation of dispersive wave emission by PDCSs in any physical system.
From a practical vantage, our scheme offers a route to generate PDCS frequency combs in foundry-ready, chip-integrated platforms with characteristics that are fundamentally different from those associated with conventional Kerr CSs. For example, forming between the two input frequencies, PDCSs could permit comb generation at spectral regions where direct pump lasers may not be available. Moreover, the lack of a dominating CW component at the PDCS carrier frequency alleviates the need for careful spectral shaping, and could result in fundamental advantages to noise characteristics. We emphasise that PDCSs are underpinned by phase-sensitive amplification [34], which can theoretically offer a sub-quantum-limited (squeezed) noise figure [64, 65, 66, 67, 68]. Finally, the fact that PDCSs come in two forms with opposite phase opens the doors to a new range of applications that require a binary degree of freedom, including all-optical random number generation and realisations of coherent optical Ising machines. Whilst the potential of PDCSs for such applications has been noted earlier [28], our work provides for the first time a route for chip-integrated realisations with potential to CMOS-compatible mass manufacturing.
Figure 4: **Observations of multi-soliton interference.** (a, b) Comb spectra corresponding to coherent states with two PDCSs simultaneously circulating in the resonator. The red and green shaded curves in (a) and (b) depict out-coupled spectral envelopes of the two-soliton fields whose real part is shown in (c) and (d), respectively, created by linearly superposing two PDCS solutions with different relative delay and phase [Methods]. In (c), the solitons are in-phase and have a relative temporal separation of 533 fs, whilst in (d) the solitons are out-of-phase (relative phase \(0.992\pi\)) and have a temporal separation of 525 fs. Insets in (a) and (b) show a zoomed-in view of the measured comb spectra around the degenerate FWM frequency 253 THz, highlighting how the degenerate FWM component at 253 THz is (a) maximised and (b) minimised.
## Methods
Simulation models.We first describe the theoretical models that describe the dynamics of bichromatically driven Kerr resonators and that underpin simulation results in our work. Our starting point is a polychromatic Ikeda-like map, which we will use to derive an extended mean-field Lugiato-Lefever equation that has been used in previous studies [42; 43; 44; 45; 69; 40]. To this end, we consider a Kerr resonator made out of a dispersive waveguide [with length \(L\) and propagation constant \(\beta(\omega)\)] that is driven with two coherent fields with angular frequencies \(\omega_{\pm}\) [see Fig. 1(c)]. The evolution of the electric field envelope [referenced against the degenerate FWM frequency \(\omega_{0}=(\omega_{+}+\omega_{-})/2\)] during the \(m\)th transit around the resonator is governed by the generalized nonlinear Schrodinger equation:
\[\frac{\partial E^{(m)}(z,\tau)}{\partial z}=i\hat{\beta}_{\mathrm{S}}\left(i\frac{\partial}{\partial\tau}\right)E^{(m)}+i\gamma|E^{(m)}|^{2}E^{(m)}. \tag{2}\]
Here \(z\) is a coordinate along the waveguide that forms the resonator, \(\tau\) is time in a reference frame that moves with the group-velocity of light at \(\omega_{0}\), \(\gamma\) is the Kerr nonlinearity coefficient and the dispersion operator
\[\hat{\beta}_{\mathrm{S}}\left(i\frac{\partial}{\partial\tau}\right)=\sum_{k \geq 2}\frac{\beta_{k}}{k!}\left(i\frac{\partial}{\partial\tau}\right)^{k}, \tag{3}\]
with \(\beta_{k}=d^{k}\beta/d\omega^{k}|_{\omega_{0}}\) the Taylor series expansion coefficients of \(\beta(\omega)\) around \(\omega_{0}\). Note that the single electric field envelope \(E^{(m)}(z,\tau)\) contains all the frequency components pertinent to the nonlinear interactions, including the fields at the pump frequencies \(\omega_{\pm}\) and the signal frequency at \(\omega_{0}\). Note also that the Taylor series expansion coefficients \(\beta_{k}\) are linked to the resonance frequency expansion coefficients in Eq. (1) as \(D_{k}\approx-D_{1}^{k+1}L\beta_{k}/(2\pi)\) [6], such that
\[D_{\mathrm{int}}(\mu)\approx-\frac{D_{1}L}{2\pi}\hat{\beta}_{\mathrm{S}}(\mu D _{1}). \tag{4}\]
The Ikeda map consists of Eq. (2) together with a boundary equation that describes the coupling of light into the resonator. Considering bichromatic driving, the boundary equation reads [see also Supplementary Note 1]:
\[E^{(m+1)}(0,\tau) =\sqrt{1-2\alpha}E^{(m)}(L,\tau)e^{-i\delta_{0}}\] \[+\sqrt{\theta_{+}}E_{\mathrm{in},+}e^{-i\Omega_{\mathrm{p}}\tau+ imb_{+}}\] \[+\sqrt{\theta_{-}}E_{\mathrm{in},-}e^{i\Omega_{\mathrm{p}}\tau+ imb_{-}}. \tag{5}\]
Here \(\alpha\) is half of the fraction of power dissipated by the intra-cavity field over one round trip, \(\delta_{0}=2\pi k-\beta(\omega_{0})L\) is the linear phase detuning of the reference frequency \(\omega_{0}\) from the closest cavity resonance (with order \(k\)), \(E_{\mathrm{in},\pm}\) are the complex amplitudes of the driving fields at \(\omega_{\pm}\), respectively, \(\Omega_{\mathrm{p}}=pD_{1}\) with \(p\) a positive integer represents the angular frequency shifts of the pumps from the reference frequency \(\omega_{0}\), and \(\theta_{\pm}\) are the power transmission coefficients that describe the coupling of the driving fields into the resonator. The coefficients \(b_{\pm}\) allow us to introduce the phase detunings \(\delta_{\pm}\) that describe the detunings of the pump frequencies from the cavity resonances closest to them (thus accounting for the fact that the frequency shift \(\omega_{0}-\omega_{\pm}\) may not be an exact integer multiple of \(D_{1}\)):
\[b_{\pm}=\delta_{\pm}-\delta_{0}+\hat{\beta}_{\mathrm{S}}(\pm\Omega_{\mathrm{p }})L. \tag{6}\]
Note that the phase detunings \(\delta\) described above are related to the frequency detunings of the corresponding carrier frequency \(\omega\) from the closest cavity resonances at \(\omega^{\prime}\) as \(\delta\approx 2\pi(\omega^{\prime}-\omega)/D_{1}\).
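The structure of these simulations can be sketched as follows. The Python outline below is purely illustrative (grid size, step counts and the number of round trips are placeholders rather than the values used for the figures, while the physical parameters follow the 25 GHz toy resonator described in Methods): the boundary condition of Eq. (5) is applied once per round trip, and Eq. (2) is integrated over the resonator length with a standard split-step Fourier scheme.

```python
import numpy as np

# Illustrative placeholder numerics; physical parameters follow the toy resonator
N, T = 2**13, 40e-12                        # grid size and temporal window [s]
tau = (np.arange(N) - N // 2) * (T / N)     # fast time grid [s]
w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)  # angular-frequency grid [rad/s]

L, gamma = 8.3e-3, 1.2e-3                   # round-trip length [m], Kerr coefficient [1/(W m)]
alpha = theta = np.pi / 5000                # loss and coupling (critically coupled, finesse 5000)
delta0, b = 0.0, 0.0                        # zero detunings and symmetric dispersion -> b = 0
Omega_p = 2 * np.pi * 30.4e12               # pump frequency shift [rad/s]
Ein = np.sqrt(0.14)                         # driving amplitude [sqrt(W)] for each pump (~140 mW)
beta2, beta4 = -5e-27, 1.6e-54              # dispersion coefficients [s^2/m], [s^4/m]
beta_op = beta2 * w**2 / 2 + beta4 * w**4 / 24   # dispersion operator of Eq. (3) in the Fourier domain

def roundtrip(E, m, n_steps=20):
    """One Ikeda-map iteration: boundary condition Eq. (5), then split-step
    integration of the NLSE, Eq. (2), over the resonator length (schematic)."""
    E = (np.sqrt(1 - 2 * alpha) * E * np.exp(-1j * delta0)
         + np.sqrt(theta) * Ein * np.exp(-1j * Omega_p * tau + 1j * m * b)
         + np.sqrt(theta) * Ein * np.exp(+1j * Omega_p * tau - 1j * m * b))
    h = L / n_steps
    for _ in range(n_steps):
        E = np.fft.ifft(np.exp(1j * beta_op * h) * np.fft.fft(E))  # dispersion step
        E *= np.exp(1j * gamma * np.abs(E) ** 2 * h)               # Kerr step
    return E

E = 1e-3 * (np.random.randn(N) + 1j * np.random.randn(N))  # noise seed
for m in range(500):                                        # iterate round trips
    E = roundtrip(E, m)
```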
Before proceeding, we note that, in our specific configuration, only two out of the three detuning terms introduced above (\(\delta_{0}\) and \(\delta_{\pm}\)) are independent. This is because the degenerate FWM frequency is completely determined by the pump frequencies viz. \(\omega_{0}=(\omega_{+}+\omega_{-})/2\); therefore, the signal detuning \(\delta_{0}\) can be written in terms of the pump detunings \(\delta_{\pm}\) as [see Supplementary Note 2]:
\[\delta_{0}=\frac{\delta_{+}+\delta_{-}+L[\hat{\beta}_{\mathrm{S}}(\Omega_{ \mathrm{p}})+\hat{\beta}_{\mathrm{S}}(-\Omega_{\mathrm{p}})]}{2}. \tag{7}\]
Substituting this expression for \(\delta_{0}\) into Eq. (6) yields \(b_{\pm}=\pm b\), where
\[b=\frac{\delta_{+}-\delta_{-}+L[\hat{\beta}_{\mathrm{S}}(\Omega_{\mathrm{p}})- \hat{\beta}_{\mathrm{S}}(-\Omega_{\mathrm{p}})]}{2}. \tag{8}\]
It can be shown [see Supplementary Note 3] that this coefficient describes the offset, \(\Delta f\), between the frequency combs forming around \(\omega_{0}\) and \(\omega_{\pm}\) viz.
\[\Delta f=\frac{|b|D_{1}}{(2\pi)^{2}}. \tag{9}\]
PDCS theory.All of the simulations presented in our work use the full Ikeda-like map defined by Eqs. (2) and (5). However, the system's ability to sustain PDCSs can be inferred more readily from the mean-field limit, obtained under the assumption that the intracavity envelope \(E^{(m)}(z,\tau)\) evolves slowly over a single round trip (i.e., the cavity has a high finesse, and the linear and nonlinear phase shifts are all small). In this case, the Ikeda-like map described above can be averaged into the generalized Lugiato-Lefever mean-field equation similar to the one used, e.g., in refs. [42; 43; 44; 45]. We write the equation in normalized form as [see Supplementary Note 4]:
\[\frac{\partial E(t,\tau)}{\partial t} =\left[-1+i(|E|^{2}-\Delta_{0})+i\hat{\beta}\left(i\frac{\partial} {\partial\tau}\right)\right]E \tag{10}\] \[+S_{+}e^{-i\Omega_{\mathrm{p}}\tau+iat}+S_{-}e^{i\Omega_{\mathrm{ p}}\tau-iat}.\]
Here \(t\) is a slow time variable that describes the evolution of the intracavity field over consecutive round trips (and is thus directly related to the index \(m\) of the Ikeda-like map), \(S_{\pm}=E_{\mathrm{in},\pm}\sqrt{\gamma L\theta_{\pm}/\alpha^{3}}\) are the normalized strengths of the driving fields, \(\Delta_{0}=\delta_{0}/\alpha\) is the normalized detuning of the signal field, and the normalized dispersion operator \(\hat{\beta}\) is defined as Eq. (3) but with normalized Taylor series coefficients \(\beta_{k}\to d_{k}=[2\alpha/(|\beta_{2}|L)]^{k/2}\beta_{k}L/\alpha\). Finally, the coefficient
\[a=\frac{b}{\alpha}=\frac{\Delta_{+}-\Delta_{-}+[\hat{\beta}(\Omega_{\mathrm{p}})- \hat{\beta}(-\Omega_{\mathrm{p}})]}{2}, \tag{11}\]
where \(\Delta_{\pm}=\delta_{\pm}/\alpha\) are the normalized detunings of the external driving fields. To avoid notational clutter, we use the symbol \(\Omega_{\mathrm{p}}\) to represent pump frequency shifts both in our dimensional and normalized equations.
We now make the assumption that the intracavity fields \(E_{\pm}\) at the pump frequencies are homogeneous and stationary. (Note: this assumption is not used in any of our simulations.) To this end, we substitute the ansatz
\[E(t,\tau) =E_{0}(t,\tau) \tag{12}\] \[+E_{+}e^{-i\Omega_{\mathrm{p}}\tau+iat}+E_{-}e^{i\Omega_{\mathrm{ p}}\tau-iat}\]
into Eq. (10). We then assume further that the (soliton) spectrum around the degenerate FWM frequency (the Fourier transform of \(E_{0}(t,\tau)\)) does not exhibit significant overlap with the pump frequencies. This allows us to separate terms that oscillate with different frequencies, yielding the following equation for the signal field:
\[\frac{\partial E_{0}(t,\tau)}{\partial t} = \left[-1+i(|E_{0}|^{2}-\Delta_{\rm eff})+i\hat{\beta}\left(i\frac{ \partial}{\partial\tau}\right)\right]E_{0} \tag{13}\] \[+ 2iE_{+}E_{-}E_{0}^{*},\]
where the effective detuning \(\Delta_{\rm eff}=\Delta_{0}-2(Y_{+}+Y_{-})\) with \(Y_{\pm}=|E_{\pm}|^{2}\) includes both linear and nonlinear (cross-phase modulation) phase shifts. Equation (13) has the precise form of the parametrically-driven nonlinear Schrodinger equation [70] with effective detuning \(\Delta_{\rm eff}\) and parametric driving coefficient \(\nu=2iE_{+}E_{-}\). Accordingly, assuming that the resonator group-velocity dispersion is anomalous at the signal frequency (\(\beta_{2}<0\)), the equation admits exact (parametrically-driven) soliton solutions of the form [28]:
\[E_{0}(\tau)=\sqrt{2}\zeta{\rm sech}(\zeta\tau)e^{i(\phi+\theta)}, \tag{14}\]
where \(\cos(2\phi)=1/|\nu|\), \(\zeta=\sqrt{\Delta_{\rm eff}+|\nu|\sin(2\phi)}\), and \(\theta={\rm arg}[iE_{+}E_{-}]\). It should be clear from the last term of Eq. (13) that _all_ of the frequency components of \(E_{0}\) are parametrically driven. This is particularly evident when expanding the field as a Fourier series, \(E_{0}(t,\tau)=\sum_{n}c_{n}(t)e^{-inD_{1}\tau}\): the equation of motion for each modal amplitude \(c_{n}\) will include a parametric driving term \(2iE_{+}E_{-}c_{-n}^{*}\).
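For completeness, the exact solution of Eq. (14) is straightforward to evaluate; the short sketch below (ours, illustrative only) constructs the PDCS envelope for the normalised driving strength and detuning quoted later for the Fig. 2 simulations, and makes explicit that \(+E_{0}\) and \(-E_{0}\) are both valid solutions.

```python
import numpy as np

def pdcs(tau, Delta_eff, nu_abs, theta=np.pi / 2):
    """Analytical PDCS envelope of Eq. (14); theta = arg(i E+ E-), equal to pi/2 for real pump fields."""
    phi = 0.5 * np.arccos(1.0 / nu_abs)                    # cos(2*phi) = 1/|nu|
    zeta = np.sqrt(Delta_eff + nu_abs * np.sin(2 * phi))
    return np.sqrt(2) * zeta / np.cosh(zeta * tau) * np.exp(1j * (phi + theta))

tau = np.linspace(-15, 15, 2001)            # normalized fast time
E0 = pdcs(tau, Delta_eff=1.2, nu_abs=1.37)  # values quoted for the Fig. 2 simulations
# Both E0 and -E0 solve the PDNLSE: the two opposite-phase soliton states.
```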
Of course, the viability of sustaining the PDCS solution described by Eq. (14) in an actual bichromatically-driven Kerr resonator system is contingent on the applicability of the assumptions outlined above. As described in the main text, the assumption that the intracavity fields \(E_{\pm}\) at the pump frequencies are homogeneous and stationary leads to the requirements of dispersive walk-off and suppression of modulation instabilities. The requirement for phase-matching of the degenerate FWM process ensues from the fact that stable PDCS solutions generically exist only if the effective detuning \(\Delta_{\rm eff}\) is sufficiently small [70]. Indeed, recalling Eq. (7), we have
\[\Delta_{\rm eff}=\frac{\Delta_{+}+\Delta_{-}+\hat{\beta}(\Omega_{\rm p})+\hat {\beta}(-\Omega_{\rm p})}{2}-2\left(Y_{+}+Y_{-}\right). \tag{15}\]
Considering typical parameters, \(\Delta_{\rm eff}\) and \(|\nu|=2\sqrt{Y_{+}Y_{-}}\) are of the order of unity for stable solitons to exist [28; 70], while the detunings \(\Delta_{\pm}\) can be assumed small to ensure that sufficient intracavity powers \(Y_{\pm}\) can be attained without excessive driving powers \(X_{\pm}=|\hat{S}_{\pm}|^{2}\). This implies, then, that the pump frequency shift \(\Omega_{\rm p}\) must satisfy \([\hat{\beta}(\Omega_{\rm p})+\hat{\beta}(-\Omega_{\rm p})]\approx 0\). Unpeeling the normalization, and converting to the integrated dispersion defined as Eq. (1) of the main text, shows that this condition is equivalent with the linear phase-matching of degenerate FWM: \(D_{\rm int}(p)+D_{\rm int}(-p)\approx 0\).
Resonator used in experiments.The chip-integrated microring resonator used in our experiments was fabricated in a commercially-available foundry service. The resonators are made of a 690 nm-thick layer of silicon nitride that is fully embedded in fused silica. The ring has a width of 850 nm and a radius of 23 \(\mu\)m, thus yielding a round trip length \(L=144.5\)\(\mu\)m. Light is coupled into the ring via a 460 nm-wide integrated bus waveguide, with a 32 \(\mu\)m-long pulley-coupler ensuring good coupling at all the different frequencies of interest (\(\omega_{0}\), \(\omega_{\pm}\)). The resonator has intrinsic and loaded \(Q\)-factors of \(1.5\times 10^{6}\) and \(0.75\times 10^{6}\), respectively, corresponding to a finesse of \(\mathcal{F}\approx 3000\) and a resonance linewidth of \(\Delta f_{\rm r}\approx 300\) MHz. The chip has an input-to-output insertion loss of about 5.6 dB at 980 nm and 8.4 dB at 1550 nm.
Resonator dispersion and thermal nonlinearity.The theoretically estimated resonator dispersion [orange curve shown in Fig. 3(b)] was obtained in two steps. We first calculated the theoretical resonance frequencies using finite-element modelling, and then slightly modified that data [see Supplementary Figure 3 for a comparison of the two integrated dispersion curves] to match the PDCS simulations to experimentally obtained spectra. Experimentally, we characterized the dispersion at various spectral regions by measuring the resonance frequencies using a set of widely tunable lasers and a high-resolution wavemeter. Unfortunately, the unavailability of a suitable laser around the degenerate FWM frequency (253 THz) prevented us from directly probing the dispersion at that frequency.
Because we are not able to probe the dispersion around 253 THz, it is not possible to unequivocally compare experimentally measured dispersion with our theoretical estimate. This is because the integrated dispersion \(D_{\rm int}\) depends upon the precise resonance frequency \(\omega_{0}^{\prime}\) and the free-spectral range \([D_{1}/(2\pi)]\) at \(\omega_{0}^{\prime}\), which we are unable to probe experimentally. To nonetheless show that our measurements at different spectral regions are _consistent_ with our theoretical estimate, we can use nonlinear least-squares to fit our experimental data to the theoretical data, and in doing so obtain experimental estimates for \(\omega_{0}^{\prime}\) and \(D_{1}\), which then allows us to compute the integrated dispersion. The blue circles in Fig. 3(b) were obtained using this procedure. The fitting also provides the one-standard-deviation errors for the parameter estimates, \(\Delta\omega_{0}^{\prime}\) and \(\Delta D_{1}\), which then allows us to compute the fitting errors for \(\Delta D_{\rm int}(\mu)\) and \(\Delta\delta\omega(\mu)\). We find that the maximum (across relative mode order \(\mu\)) error in the estimated \(D_{\rm int}\) is \(\max[\Delta D_{\rm int}(\mu)/(2\pi)]\approx 0.50\) GHz, yielding \(\max[\Delta\delta\omega(\mu)/(2\pi)]\approx 0.35\) GHz. These errors are smaller than the markers used in Figs. 3(b) and (c), which is why errorbars are not shown.
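A schematic version of this fitting step is sketched below (our illustration with synthetic stand-in data; the model function and the assumed 10 MHz noise level are placeholders, not the actual finite-element dispersion or measurement uncertainty): the theoretical integrated dispersion is held fixed while \(\omega_{0}^{\prime}\) and \(D_{1}\) are adjusted by nonlinear least squares to match the measured resonance frequencies.

```python
import numpy as np
from scipy.optimize import least_squares

def D_int_model(mu):
    """Stand-in for the fixed theoretical integrated dispersion [rad/s]."""
    return 2 * np.pi * (4.1e3 * mu**2 / 2 - 33e-3 * mu**4 / 24)

# Synthetic "measured" resonances at the probed mode numbers (assumed 10 MHz noise)
mu_meas = np.concatenate([np.arange(-1250, -1150), np.arange(1150, 1250)])
w0_true, D1_true = 2 * np.pi * 253e12, 2 * np.pi * 1.0e12
w_meas = (w0_true + mu_meas * D1_true + D_int_model(mu_meas)
          + 2 * np.pi * 10e6 * np.random.randn(mu_meas.size))

def residuals(params):
    w0, D1 = params
    return w_meas - (w0 + mu_meas * D1 + D_int_model(mu_meas))

fit = least_squares(residuals, x0=[2 * np.pi * 253e12, 2 * np.pi * 1.0e12])
w0_fit, D1_fit = fit.x
D_int_exp = w_meas - w0_fit - mu_meas * D1_fit   # experimental estimate of D_int
# One-standard-deviation parameter errors follow from the Jacobian at the solution (fit.jac).
```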
Due to the resonator's small size, it exhibits a strong thermal nonlinearity [71]. We leverage this effect to achieve self-stabilization, such that the input lasers can remain free-running but still maintain near-constant detunings. In addition, the thermal nonlinearity causes the resonance frequencies to shift over several GHz as the pump laser(s) are tuned into resonance [see e.g. Fig. 3(g)], which we suspect is key to achieving phase-matched operation (and thus PDCS generation). We also note that the thermal nonlinearity may influence the resonator dispersion directly [72]; whilst this effect is generally weak (and under-examined), it is possible that it also influences the precise phase-matching conditions, thus playing a role in our experiments. A detailed study on the impact of the thermal nonlinearity on PDCS generation is beyond the scope of our present work.
Simulation parameters.The simulations in Fig. 2 assume a critically-coupled (\(\alpha=\theta\)) resonator with a round trip length \(L\approx 8.3\) mm, nonlinearity coefficient \(\gamma=1.2\) W\({}^{-1}\)km\({}^{-1}\), and finesse \(\mathcal{F}=\pi/\alpha=5000\). The driving fields are positioned at an angular frequency shift \(\pm\Omega_{\rm p}=2\pi\times 30.4\) THz with respect to the degenerate FWM frequency,
corresponding to relative mode number \(p=1217\). The dispersion coefficients are \(\beta_{2}=-5\) ps\({}^{2}\)/km, \(\beta_{3}=0.45\) ps\({}^{3}\)/km and \(\beta_{4}=1.6\times 10^{-3}\) ps\({}^{4}\)/km, corresponding to \(D_{2}/(2\pi)=4.06\) kHz, \(D_{3}/(2\pi)=-57.90\) Hz and \(D_{4}/(2\pi)=-0.03\) Hz.
The above parameters yield an effective (normalised) driving strength \(|\nu|=1.37\) and detuning \(\Delta_{\rm eff}=1.2\), which are known to be in the regime of soliton existence [28]. As a matter of fact, the above parameters were found by looking for the driving powers and frequency shifts that yield these particular values for the driving strength and detuning.
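The correspondence between the fibre-style coefficients \(\beta_{k}\) and the resonance-frequency coefficients \(D_{k}\) quoted above can be verified directly from the relation \(D_{k}\approx-D_{1}^{k+1}L\beta_{k}/(2\pi)\); the short check below (ours) reproduces the quoted values for the 25 GHz toy resonator.

```python
import numpy as np

D1 = 2 * np.pi * 25e9                        # angular free-spectral range [rad/s]
L = 8.3e-3                                   # round-trip length [m]
beta = {2: -5e-27, 3: 4.5e-40, 4: 1.6e-54}   # beta_k in SI units [s^k/m]

for k, bk in beta.items():
    Dk = -D1 ** (k + 1) * L * bk / (2 * np.pi)   # D_k ~ -D1^(k+1) L beta_k / (2 pi)
    print(f"D{k}/(2*pi) = {Dk / (2 * np.pi):.3g} Hz")
# -> ~4.07e3 Hz, ~-57.6 Hz, ~-0.032 Hz, consistent with the quoted D2, D3 and D4
```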
The simulations in Fig. 3 and 4 use experimental values quoted in the main text or in the resonator description above, with the addition that the nonlinearity coefficient was set to \(\gamma=1\) W\({}^{-1}\)m\({}^{-1}\). The pump detunings were chosen such that, in Fig. 3, the effective driving strength \(|\nu|=1.28\) and \(\Delta_{\rm eff}=6\), and in Fig. 4, \(|\nu|=1.15\) and \(\Delta_{\rm eff}=5\). The effective detunings were coarsely tuned so as to match the simulations to the experimentally measured spectra. The simulation outcomes are not sensitive to the particular values of the driving strength \(\nu\) used.
With the parameters used to obtain the simulation results in Fig. 3, the coefficient \(b\) defined in Eq. (8) was \(b=-0.307\), yielding a comb frequency offset of \(\Delta f=49\) GHz from Eq. (9).
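This value follows directly from Eq. (9): taking the \(\approx 1\) THz free-spectral range of the experimental resonator, a one-line check gives

```python
import numpy as np

b, D1 = -0.307, 2 * np.pi * 1.0e12            # Eq. (8) coefficient, angular FSR [rad/s]
delta_f = abs(b) * D1 / (2 * np.pi) ** 2      # Eq. (9)
print(delta_f / 1e9)                          # ~48.9 GHz, consistent with the quoted 49 GHz
```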
Frequency-dependent coupling.All the simulations reported in our manuscript have been obtained using the model defined by Eqs. (2) and (5). However, as explained in the main text [see also Supplementary Figure 2], when comparing against experimentally measured spectra [Figs. 3 and 4], the simulation outputs were post-processed to account for the frequency-dependent coupling, thus providing an estimate for the out-coupled spectrum. This was achieved by multiplying the simulated intracavity spectra with the frequency-dependent coupling coefficient [Supplementary Figure 2] obtained from rigorous coupled-mode simulations [52]. These coupled-mode simulations assumed the coupler length to be 31.25 \(\mu\)m, which was found to provide a better agreement with our experiments compared to the design value of 32 \(\mu\)m. This discrepancy is reasonable in terms of fabrication tolerances given the high sensitivity to the phase mismatch between the ring and waveguide modes and that any small discrepancy in the side-wall angle or waveguide width could cause a smaller effective pulley. However we note that the obtained length is well within fabrication tolerance of deep-UV stepper fabrication. Note that the frequency-dependent coupling was not included explicitly in our numerical simulation model for the sake of simplicity.
Multi-soliton states.Because of pump depletion and finite dispersive walk-off, the PDCSs carve a depletion region onto the intracavity fields at the pump frequencies [see Fig. 1(f)]. These depletion regions are the time-domain manifestations of the frequency combs that form around the pump frequencies, and they give rise to long-range soliton interactions. Compounded by the system's periodic boundary conditions, stable multi-soliton states only exist at selected relative delays (or not at all) in our simulations. On the other hand, it is well-known (from studies of conventional Kerr CSs) that experimental systems exhibit imperfections (e.g. avoided mode crossings) which, along with oscillatory tails from dispersive waves, force multi-soliton states to only manifest themselves at some prescribed relative delays [58; 59]. Because the PDCSs in our simulations exhibit long-range coupling, it is not possible to obtain a simulation of a multi-soliton state with the same relative delays as in our experiments, unless one has access to full details of the experimental system (including dispersion that captures possible avoided mode crossings), which we do not have.
Because of the above, the theoretical PDCS fields in Fig. 4(c) and (d) were created from a single steady-state PDCS - obtained via simulations of Eqs. (2) and (5). Specifically, the two-soliton fields were obtained by linearly adding together two replicas of the single steady-state PDCS state, with the relative delay (\(\Delta\tau\)) and phase (\(\Delta\phi\)) between the replicas inferred from nonlinear least squares fitting to the experimentally observed spectral interference pattern. For both in- and out-of-phase states, our fitting algorithm yields two possible configurations (\(\Delta\tau,\Delta\phi\)) that identically minimise the sum of the squared residuals. For the in-phase configuration, these are (533 fs, \(1\times 10^{-3}\pi\)) and (467 fs, \(3\times 10^{-4}\pi\)), and for the out-of-phase configuration we have (525 fs, \(0.99\pi\)) and (475 fs, \(1.01\pi\)). In Figs. 4(c) and (d), we plot the configurations associated with the larger delay. The one-standard-deviation errors for the fits are all smaller than (0.4 fs, \(0.01\pi\)).
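The construction of these two-soliton spectra can be illustrated with the short sketch below (ours; a simple sech pulse stands in for the simulated steady-state PDCS): two replicas separated by a delay \(\Delta\tau\) and phase \(\Delta\phi\) are linearly superposed, and the relative phase determines whether the comb line at the degenerate FWM frequency is maximised or suppressed.

```python
import numpy as np

N, T = 2**14, 10e-12                      # grid size and temporal window [s]
tau = (np.arange(N) - N // 2) * (T / N)
E_s = 1.0 / np.cosh(tau / 14e-15)         # stand-in single PDCS (~24 fs FWHM)

def two_soliton_spectrum(dtau, dphi):
    """Spectrum of two superposed soliton replicas with relative delay dtau and phase dphi."""
    E = E_s + np.roll(E_s, int(round(dtau / (T / N)))) * np.exp(1j * dphi)
    return np.abs(np.fft.fftshift(np.fft.fft(E))) ** 2

S_in = two_soliton_spectrum(533e-15, 0.0)              # in-phase pair, cf. Fig. 4(a)/(c)
S_out = two_soliton_spectrum(525e-15, 0.992 * np.pi)   # out-of-phase pair, cf. Fig. 4(b)/(d)
ratio = S_in[N // 2] / S_out[N // 2]                   # central (degenerate FWM) comb line
print(10 * np.log10(ratio))                            # ~38 dB suppression for the out-of-phase pair
```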
## Acknowledgements
G. M. and K. S. acknowledge support from the NIST-on-a-chip program. J. F. acknowledges the CNRS (IRP WALL-IN project).
## Author contributions
G. M. performed all the experiments and assisted in the interpretation of the results. M. L. and D. P contributed to the theoretical development of the scheme and performed initial simulations to confirm the fundamental viability of the scheme. N. E. and F. L. provided guidance on parametrically-driven soliton theory. J. F. assisted in the interpretation of Kerr cavity physics. K. S. supervised and obtained funding for the experiments. M. E. developed the theory, performed the simulations, and wrote the manuscript with input from all the authors.
## Data availability
The data that support the plots within this paper and other findings of this study are available from M.E. upon reasonable request.
## Competing financial interests
The authors declare no competing financial interests.
## References
|
2305.05636 | Nanocavity enhanced photon coherence of solid-state quantum emitters
operating up to 30 K | Solid-state emitters such as epitaxial quantum dots have emerged as a leading
platform for efficient, on-demand sources of indistinguishable photons, a key
resource for many optical quantum technologies. To maximise performance, these
sources normally operate at liquid helium temperatures ($\sim 4~\mathrm{K}$),
introducing significant size, weight and power requirements that can be
impractical for proposed applications. Here we experimentally resolve the two
distinct temperature-dependent phonon interactions that degrade
indistinguishability, allowing us to demonstrate that coupling to a photonic
nanocavity can greatly improve photon coherence at elevated temperatures up to
$30~\mathrm{K}$ that are compatible with compact cryocoolers. We derive a
polaron model that fully captures the temperature-dependent influence of
phonons observed in our experiments, providing predictive power to further
increase the indistinguishability and operating temperature of future devices
through optimised cavity parameters. | Alistair J. Brash, Jake Iles-Smith | 2023-05-09T17:29:19Z | http://arxiv.org/abs/2305.05636v2 | Towards Generating Indistinguishable Photons from Solid-State Quantum Emitters at Elevated Temperatures
###### Abstract
Indistinguishable photons are a key resource for many optical quantum technologies. Efficient, on-demand single photon sources have been demonstrated using single solid-state quantum emitters, typically epitaxially grown quantum dots in III-V semiconductors. To achieve the highest performance, these sources are typically operated at liquid helium temperatures (\(\sim 4\) K), introducing significant size, weight and power (SWAP) considerations that are often impractical for emerging applications such as satellite quantum communications. Here we experimentally verify that coupling a solid-state emitter to a photonic nanocavity can greatly improve photon coherence at higher temperatures where SWAP requirements can be much lower. Using a theoretical model that fully captures the phonon-mediated processes that compromise photon indistinguishability as temperature increases, we reproduce our experimental results and demonstrate the potential to further increase the operating temperature in future generations of optimised devices.
## 1 Introduction
Single, indistinguishable photons are a vital building block for many proposed optical quantum technologies such as optical quantum computing [1, 2, 3], long range secure quantum networks [4, 5] and optical quantum metrology [6]. Devices based upon III-V semiconductor quantum dots (QDs) coupled to micro-/nano-photonic structures have emerged as a leading single photon source (SPS), owing to their potential to generate single photons "on-demand" with high efficiency, purity and indistinguishability [7, 8, 9, 10]. Furthermore, these methods may be extended to producing more complex entangled graph states [11, 12, 13], with mutual indistinguishability of the photons comprising the state essential to achieve high fidelities. Beyond QDs, the cavity-emitter concept has also been applied to realise photon sources using quantum emitters in other solid-state hosts such as diamond [14], silicon [15] and 2D materials [16]. At present, III-V QDs offer the
most attractive platform due to their large dipole moment and relatively weak phonon coupling at low temperatures, enabling high brightness and indistinguishabilities.
Owing to a desire to minimise interactions with phonons, studies of indistinguishable photon emission from QD-based SPSs have generally focused on temperatures around 4K in either open- or closed-cycle helium cryostat systems. Whilst significantly smaller and less complex than the mK dilution refrigerator systems that house superconducting circuits for quantum computing research, these systems still have significant associated size, weight and power (SWAP) costs. The importance of SWAP requirements becomes particularly clear when considering potential usage cases for optical quantum technologies, for instance the tight space and thermal constraints of data centres, or the SWAP-critical environment of satellite communications. An alternative approach is to use a device such as a compact Stirling cryocooler, which are often specified for satellite instruments due to SWAP and maintenance considerations. In a proof-of-concept demonstration with a QD sample, the mean base temperature of such a cryocooler was found to be 28.8 K [17]. As such, with a view to future applications, it is highly desirable to increase the temperature at which QD SPSs can generate indistinguishable photons into the region where such cryocoolers operate.
### Real and Virtual Phonon Processes
For self-assembled III-V semiconductor QDs, the dominant influence of phonons on the spectrum of a QD two level system (TLS) comprising a ground state and the lowest-energy exciton state (s-shell) is electron-phonon coupling through the deformation potential [18, 19]. This interaction occurs between QD-confined electrons and longitudinal acoustic (LA) phonons of the bulk semiconductor material, exhibiting a continuum of phonon states up to a cut-off energy governed by the QD size [17, 20, 21], typically on the order of a few meV [19, 22, 23, 24]. A detailed theoretical treatment of this coupling is described in section 2.1. The phonon coupling gives rise to two different processes which are shown in Fig. 1, namely virtual and real phonon-mediated transitions [21, 25]. In the case of the real transitions (linear in phonon operators), the system decays from excited to ground state, with the emitted photon energy (red arrow in Fig. 1(a)) reduced or increased by the corresponding emission or absorption of a phonon (purple curly arrow in Fig. 1(a)). At 4 K, there are very few phonons to absorb and therefore phonon emission processes dominate, giving rise to a broad, asymmetric phonon sideband (PSB). With increasing temperature, both phonon emission and absorption become more probable, but the difference between the two probabilities reduces, leading to a sideband whose area increases and asymmetry decreases with temperature [21], as shown in Fig. 1(b). In the absence of a photonic structure, the fraction of light emitted through the PSB is given by \((1-B^{2})\), where \(B^{2}\) is termed the Franck-Condon factor.
Meanwhile, virtual processes correspond to virtual transitions (quadratic in phonon operators) between the QD excited state and higher energy electronic states (e.g. p-shell
- dashed green arrow in Fig. 1(a)) [21, 23, 25]. The effect of these transitions is to produce a temperature-dependent pure dephasing effect, leading to homogeneous broadening of the zero phonon line (ZPL), the well-known Lorentzian spectrum associated with a TLS as shown in Fig. 1(c). The width of the ZPL is governed by its coherence time \(T_{2}\):
\[\frac{1}{T_{2}}=\frac{1}{2T_{1}}+\frac{1}{T_{2}^{*}}, \tag{1}\]
where \(T_{1}\) is the transition radiative lifetime and \(T_{2}^{*}\) is the dephasing time associated with the pure dephasing rate. From Eq. 1 it can be seen that in the absence of any pure dephasing, the coherence time reaches a maximum value \(T_{2}=2T_{1}\), often termed _radiatively limited_. In this limit, photons emitted through the ZPL are perfectly indistinguishable, highlighting the importance of achieving radiatively limited coherence. Since photons emitted into the phonon sideband are completely distinguishable in frequency, the contributions of both types of phonon process can be combined into a general expression for the visibility of two photon interference for photons emitted
Figure 1: Influence of phonons on the optical transitions of a QD TLS: (a) Energy level diagram of the QD TLS comprising ground \(|0\rangle\) and exciton \(|X\rangle\) states. Direct decay of the exciton to the ground state results in the familiar zero phonon line with a probability given by the Frank-Condon factor \(B^{2}\). Real transitions corresponding to emission/absorption of a phonon during exciton relaxation lead to emission of a photon with distinguishable frequency, forming a phonon sideband with relative area \((1-B^{2})\). Meanwhile, virtual transitions to higher energy states \(|p\rangle\) occur through scattering of thermal phonons, broadening the ZPL. (b) Log-linear theoretical spectrum of the QD, showing the narrow ZPL and broad PSB. The PSB area and symmetry both increase noticeably at 30 K (red line) compared to 4 K (blue line). Spectra are produced using the experimental parameters found in this work. (c) Linear-linear close-up of the ZPL, showing thermal broadening at 30 K (red line) compared to 4 K (blue line). The ZPL at 4 K is already significantly radiatively broadened by the inclusion of a Purcell factor of 43.
from a single QD [26]:
\[V=B^{4}\frac{T_{2}}{2T_{1}}, \tag{2}\]
where \(V=1\) and \(V=0\) correspond to completely indistinguishable and distinguishable photons respectively. A spectral filter whose width and centre frequency matches the ZPL can remove the PSB, increasing to \(V=T_{2}/2T_{1}\) at the cost of a minimum reduction in efficiency of \((1-B^{2})\).
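To make the trade-off concrete, the snippet below (illustrative numbers only; the lifetime and coherence time used here are placeholders, not measurements from this work) evaluates Eq. (2) for a Franck-Condon factor of \(B^{2}\approx 0.9\), typical of InGaAs QDs as noted below, and compares the unfiltered visibility with the ZPL-filtered value and its minimum efficiency penalty.

```python
B2 = 0.9          # Franck-Condon factor (typical for InGaAs QDs, see text)
T1 = 1.0e-9       # radiative lifetime [s] (placeholder)
T2 = 1.6e-9       # coherence time [s] (placeholder, some pure dephasing present)

V_unfiltered = B2 ** 2 * T2 / (2 * T1)   # Eq. (2): PSB photons included  -> ~0.65
V_filtered = T2 / (2 * T1)               # ZPL spectrally filtered        -> ~0.80
efficiency_penalty = 1 - B2              # minimum loss from removing the PSB -> 10%
print(V_unfiltered, V_filtered, efficiency_penalty)
```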
### Phonon Processes in Quantum Dots
The influence of phonon processes on the emission properties of III-V QDs is well studied. Whilst it was established that phonon broadening of the ZPL was essentially negligible at \(\sim 4\) K, it rapidly becomes significant as \(T\) increases, leading to a broadening which exceeds the radiative limit by more than a factor of 10 by 50 K [25]. However, studies mainly focused on achieving the radiative limit in the low temperature regime where phonon broadening could be neglected, with this ultimately being successful through material quality improvements removing other unwanted environmental effects such as charge noise [27]. With essentially radiatively limited ZPL emission in the low temperature limit, attention turned to PSB processes as the limit to photon indistinguishability [26, 28]. For InGaAs QDs, a typical value of \(B^{2}\) is around 0.9, limiting \(V\) to 0.81. To overcome this, QDs were integrated with optical micro-/nano-cavities, where the combination of Purcell enhancement and spectral filtering can remove some of the sideband photons with lower losses than simple spectral filtering [26, 29]. It is important to note however that even with such cavity coupling, there remains a fundamental trade-off between efficiency and indistinguishability, even for ideal cavity parameters [26].
Several studies have considered the temperature-dependent coherence of photons emitted by QDs in the absence of any significant Purcell enhancement, with all studies observing a rapid decrease in indistinguishability as temperature is increased [25, 30, 31, 32]. Theoretical modelling has revealed that both real and virtual phonon processes contribute to this trend [21, 23]. A strategy to reduce these temperature-dependent effects is to couple the QD to an optical cavity. In addition to the aforementioned filtering of the PSB photons, for appropriate parameters, the cavity also induces a Purcell enhancement (\(F_{P}\)) of the QD emission rate (\(F_{P}/T_{1}\)). From Eq. 1, it can be seen that this enhancement reduces the degradation of the coherence time (\(T_{2}\)) for a given pure dephasing rate (\(1/T_{2}^{*}\)), offering the potential to suppress the influence of the virtual phonon transitions. Measurements of a QD-micropillar device with a Purcell factor of 20 exhibited significantly weaker degradation of the emitted photon coherence in the 9 - 20 K range [29], supporting this prediction.
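This argument follows directly from Eq. (1): writing \(T_{1}=T_{1,\mathrm{bulk}}/F_{P}\) and keeping the pure dephasing rate fixed, \(T_{2}/(2T_{1})=1/(1+2T_{1}/T_{2}^{*})\), so a larger Purcell factor keeps the ratio close to unity for the same dephasing. The sketch below (ours; the dephasing time is an assumed placeholder, not a value extracted from the present data) illustrates this for \(F_{P}=1\), the micropillar value of 20, and the \(F_{P}=43\) of the device studied in this work.

```python
def indistinguishability(F_P, T1_bulk=1.0e-9, T2_star=2.0e-9):
    """T2/(2*T1) from Eq. (1), with Purcell-enhanced lifetime T1 = T1_bulk / F_P."""
    T1 = T1_bulk / F_P
    T2 = 1.0 / (1.0 / (2 * T1) + 1.0 / T2_star)
    return T2 / (2 * T1)

for F_P in (1, 20, 43):
    print(F_P, round(indistinguishability(F_P), 3))
# -> 0.5, 0.952, 0.977 for the assumed (placeholder) dephasing time of 2 ns
```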
In this work, the photon coherence of a QD-nanocavity device with large Purcell enhancement (\(F_{P}=43\)) is studied over the range 4 - 30 K. By using a novel technique based on time-domain measurement of the first-order correlation function under weak
resonant excitation, we simultaneously resolve the real and virtual phonon contributions in a single experiment. Owing to the large Purcell enhancement, \(T_{2}/2T_{1}\) at 25 K is only 7.5 % lower than at 4 K. A theoretical model based upon the polaron master equation (ME) formalism fully reproduces the experimental results and provides predictive power for the performance of a future optimised cavity-QD system.
## 2 Methods
### Theoretical Model
In this section we will outline the theoretical models used in the analysis of the coherence properties of the QD sample. We start by considering a two level system, with ground and single exciton states \(|0\rangle\) and \(|X\rangle\) respectively, and exciton energy \(\hbar\omega_{X}\). The system is driven by a monochromatic continuous wave laser, with frequency \(\omega_{L}\) and Rabi frequency \(\Omega\), which in the dipole and rotating wave approximation can be described by the time-dependent system Hamiltonian [33]:
\[H_{\mathrm{S}}(t)\approx\hbar\omega_{X}\sigma^{\dagger}\sigma+\frac{\hbar \Omega}{2}\left(\sigma e^{i\omega_{\mathrm{L}}t}+\sigma^{\dagger}e^{-i\omega_{ \mathrm{L}}t}\right), \tag{3}\]
where \(\sigma=|0\rangle\,\langle X|\) is the system dipole operator, and \(\sigma_{x}=\sigma^{\dagger}+\sigma\).
The QD optical properties are strongly influenced by interactions with two environments: a low-Q cavity mode, which induces strongly Purcell enhanced emission, and a phonon environment which describes the lattice vibrations of the surrounding material. In both cases, we can describe the environments as a collection of bosonic modes, with the Hamiltonian of the system and environment of the form:
\[H(t)=H_{0}(t)+H_{\mathrm{I}}^{\mathrm{EM}}+H_{\mathrm{I}}^{\mathrm{Ph}}, \tag{4}\]
\[H_{0}(t)=H_{\mathrm{S}}(t)+\sum_{\mathbf{k}}\hbar\nu_{\mathbf{k}}b_{\mathbf{k}}^{\dagger}b_{\mathbf{k}}+\sum_{j}\hbar\omega_{j}a_{j}^{\dagger}a_{j}, \tag{5}\]
\[H_{\mathrm{I}}^{\mathrm{EM}}=\sum_{j}(f_{j}\sigma^{\dagger}a_{j}+f_{j}^{*}\sigma a_{j}^{\dagger}), \tag{6}\]
\[H_{\mathrm{I}}^{\mathrm{Ph}}=\sigma^{\dagger}\sigma\sum_{\mathbf{k}}g_{\mathbf{k}}(b_{\mathbf{k}}^{\dagger}+b_{-\mathbf{k}})+\sigma^{\dagger}\sigma\sum_{\mathbf{k},\mathbf{k}^{\prime}}\tilde{g}_{\mathbf{k},\mathbf{k}^{\prime}}(b_{\mathbf{k}}^{\dagger}+b_{-\mathbf{k}})(b_{\mathbf{k}^{\prime}}^{\dagger}+b_{-\mathbf{k}^{\prime}}), \tag{7}\]
where we have introduced the bosonic annihilation operators \(a_{j}\) and \(b_{\mathbf{k}}\) associated with the normal modes of the electromagnetic and vibrational environments respectively. The coupling to the optical environment is assumed to be of rotating-wave form, and is fully characterised by the spectral density \(\mathcal{J}(\omega)=\sum_{j}|f_{j}|^{2}\delta(\omega-\omega_{j})\), which for the low-\(Q\) cavity studied here takes the form:
\[\mathcal{J}(\omega)=\frac{1}{\pi}\frac{2g^{2}\kappa}{(\omega-\omega_{c})^{2}+ (\kappa/2)^{2}}, \tag{8}\]
where \(g\) is the light-matter coupling strength, \(\kappa\) is the cavity linewidth, and \(\omega_{c}\) is the cavity resonance frequency.
The electron-phonon interaction, \(H_{\mathrm{I}}^{\mathrm{Ph}}\), contains two contributions. The first is linear in phonon operators, and corresponds to real phonon processes [25], that is, the
processes that involve the exchange of energy between the electronic states and the phonon environment. The strength of this interaction is determined by the matrix elements [34]\(g_{\mathbf{k}}=M_{e,\mathbf{k}}^{11}+M_{h,\mathbf{k}}^{11}\) for electrons (\(e\)) and holes (\(h\)), where for deformation potential coupling we have [35]:
\[M_{a,\mathbf{k}}^{ij}=\sqrt{\frac{\nu_{\mathbf{k}}}{2\varrho c_{s}^{2}\mathcal{ V}}}D_{a}\int\psi_{ia}^{*}(\mathbf{r})\psi_{ja}(\mathbf{r})\mathrm{d}^{3}r, \tag{9}\]
which is the matrix element corresponding to the phonon induced transition between the \(i^{\mathrm{th}}\)- and \(j^{\mathrm{th}}\)-electronic state. Here, \(\varrho\) is the mass density, \(c_{s}\) is the speed of sound in the material, and \(\mathcal{V}\) is the phonon normalization volume. The matrix element depends on the wave function \(\psi_{i,e/h}(\mathbf{r})\) of the confined electron/hole and the corresponding deformation potential \(D_{a}\).
The second term, which is quadratic in phonon operators, describes virtual phonon transitions between the first exciton state (\(s\)-shell) and higher lying excited states (\(p\)-shell) of the QD [25]. Intuitively, we may understand this term as a virtual scattering of a phonon with wavevector \(\mathbf{k}\) into \(\mathbf{k}^{\prime}\). This scattering process imparts a random phase kick to the exciton, the cumulative effect of which is a temperature dependent broadening of the zero phonon line [25] and consequently a loss of photon coherence [23, 36]. This is governed by the effective coupling strength \(\tilde{g}_{\mathbf{k},\mathbf{k}^{\prime}}=\sum_{a=e,h}\sum_{j>1}M_{a,\mathbf{ k}}^{1j}M_{a,\mathbf{k}^{\prime}}^{j1}[\omega_{j}^{a}-\omega_{1}^{a}]^{-1}\), where \(\omega_{j}^{e/h}\) is the energy of the \(j^{\mathrm{th}}\)-electron/hole state. For a detailed derivation and discussion of the quadratic coupling term, we refer the reader to Refs. [23, 25, 36].
It is important to note that while historically the linear electron-phonon coupling has been referred to as a pure-dephasing interaction [37], it does not lead to a temperature-dependent homogeneous broadening of the zero phonon line in the limit of weak driving [38]. For such processes, one must include the virtual phonon processes governed by the quadratic interaction.
#### 2.1.1 Polaron transformation and master equation
In order to accurately describe the optical properties of a QD, we use the polaron framework [26], where a unitary transformation \(\mathcal{U}=\exp(\sigma^{\dagger}\sigma\otimes S)\), with \(S=\sum_{\mathbf{k}}\nu_{\mathbf{k}}^{-1}g_{\mathbf{k}}(b_{\mathbf{k}}^{ \dagger}-b_{-\mathbf{k}})\), is applied to the system-environment Hamiltonian [39, 40, 41]. This leads to a displaced representation of the phonon environment, providing an optimized basis for a perturbative description of the QD dynamics [41]. Importantly, this transformation naturally captures the non-Markovian relaxation behavior of the phonon environment during exciton recombination [26, 42, 43]. In the polaron frame, we obtain the second-order master equation for the time evolution of the reduced state of the QD:
\[\frac{\partial\rho(t)}{\partial t}=-i\left[\frac{\Omega_{\mathrm{R}}}{2} \sigma_{x},\rho(t)\right]+\mathcal{K}[\rho(t)]+\frac{\Gamma}{2}\mathcal{L}_{ \sigma}[\rho(t)]+\frac{\gamma(T)}{2}\mathcal{L}_{\sigma^{\dagger}\sigma}[\rho (t)], \tag{10}\]
where \(\mathcal{L}_{O}[\rho]=2O\rho O^{\dagger}-\{O^{\dagger}O,\rho\}\) is the Lindblad dissipator. In Eq. 10, we have transformed the system into a rotating frame with respect to the laser frequency \(\omega_{\mathrm{L}}\), which is assumed to be resonant with the polaron shifted transition frequency
\(\tilde{\omega}_{\rm X}=\omega_{\rm X}-\sum_{\bf k}\nu_{\bf k}^{-1}|g_{\bf k}|^{2}\). The Rabi frequency, \(\Omega_{\rm R}=\Omega B\), is renormalised by the Franck-Condon factor, which may be written as
\[B=\exp(-\frac{1}{2}\int_{0}^{\infty}{\rm d}\nu\ \frac{J(\nu)}{\nu^{2}}\coth( \frac{\nu}{2k_{\rm B}T})), \tag{11}\]
where \(T\) is the temperature and \(k_{\rm B}\) Boltzmann's constant. Note we have taken the continuum limit of the phonon modes by introducing the phonon spectral density, \(J(\nu)=\alpha\nu^{3}\exp(-\nu^{2}/\nu_{c}^{2})\), where \(\alpha\) is the electron-phonon coupling strength and \(\nu_{c}\) is the phonon cut-off frequency [35].
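As an illustrative aside (not part of the original analysis), the Franck-Condon factor in Eq. 11 can be evaluated numerically for the super-Ohmic spectral density just introduced. The sketch below assumes \(\hbar=1\) units with frequencies in ps\({}^{-1}\), uses the coupling parameters \(\alpha\) and \(\nu_{c}\) quoted later in Section 3, and defines `KB_OVER_HBAR` as \(k_{\rm B}/\hbar\) in ps\({}^{-1}\) per kelvin.

```python
import numpy as np
from scipy.integrate import quad

ALPHA = 0.046          # electron-phonon coupling strength (ps^2), value fitted in Sec. 3
NU_C = 1.35            # phonon cut-off frequency (ps^-1), value fitted in Sec. 3
KB_OVER_HBAR = 0.1309  # k_B / hbar in ps^-1 per kelvin

def J(nu):
    """Super-Ohmic phonon spectral density J(nu) = alpha * nu^3 * exp(-nu^2 / nu_c^2)."""
    return ALPHA * nu**3 * np.exp(-(nu / NU_C) ** 2)

def franck_condon_B(T):
    """Franck-Condon factor B(T) of Eq. 11 (coth(x) written as 1/tanh(x))."""
    integrand = lambda nu: J(nu) / nu**2 / np.tanh(nu / (2.0 * KB_OVER_HBAR * T))
    integral, _ = quad(integrand, 1e-6, 20.0 * NU_C)
    return np.exp(-0.5 * integral)

for T in (4.0, 15.0, 30.0):
    B = franck_condon_B(T)
    print(f"T = {T:4.1f} K : B = {B:.3f}, unfiltered ZPL fraction B^2 = {B**2:.3f}")
```

In the absence of cavity filtering, \(B^{2}\) is exactly the ZPL fraction (see Sec. 2.1.2), so this single integral already captures much of the temperature dependence discussed in the results below.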
There are three dissipative mechanisms to consider in Eq. 10. The second term in Eq. 10 is the polaron-frame dissipator, \({\cal K}[\rho(t)]=-(\Omega/2)^{2}(\Gamma_{0}^{x}\left[\sigma_{x},\sigma_{x}\rho(t)\right]+[\sigma_{y},(\Gamma_{s}^{y}\sigma_{z}+\Gamma_{c}^{y}\sigma_{y})\rho(t)]+{\rm h.c.})\), where the terms \(\Gamma_{0}^{a}=\int_{0}^{\infty}\Lambda_{aa}(\tau)d\tau\), \(\Gamma_{c}^{a}=\int_{0}^{\infty}\Lambda_{aa}(\tau)\cos(\eta\tau)d\tau\), \(\Gamma_{s}^{a}=\int_{0}^{\infty}\Lambda_{aa}(\tau)\sin(\eta\tau)d\tau\) may be understood as the rates at which transitions occur between the eigenstates of the system (i.e. the dressed states) induced by phonons [41]. These rates are set by the energy splitting of the system, and by the correlation functions of the phonon environment in the polaron frame, \(\Lambda_{xx}(\tau)=B^{2}(e^{\varphi(\tau)}+e^{-\varphi(\tau)}-2)\quad\mbox{and}\quad\Lambda_{yy}(\tau)=B^{2}(e^{\varphi(\tau)}-e^{-\varphi(\tau)})\), where \(\varphi(\tau)=\int_{0}^{\infty}\nu^{-2}J(\nu)(\cos(\nu\tau)\coth(\nu/2k_{\rm B}T)-i\sin(\nu\tau))\,{\rm d}\nu\). The overall contribution of these phonon-assisted transitions is scaled by the driving strength \(\Omega^{2}\) [41].
The third term in Eq. 10 gives the pure dephasing due to virtual phonon processes with rate [23, 44],
\[\gamma(T)=\frac{\alpha\mu}{4\nu_{c}^{4}}\int_{0}^{\infty}{\rm d}\nu\ \nu^{10}e^{-\nu^{2}/\nu_{c}^{2}}\left(\coth^{2}\left(\frac{\nu}{2k_{\rm B}T} \right)-1\right), \tag{12}\]
where \(\mu\) depends on the deformation potential coupling strength and spacing of the QD energy levels. This dephasing rate is strongly temperature dependent and decays rapidly to zero for low temperatures. Physically this corresponds to an absence of phonons present to drive virtual transitions.
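The steep temperature dependence of Eq. 12 can be made concrete with a short numerical sketch (again an illustration, not taken from the original analysis): since the prefactor \(\alpha\mu/4\nu_{c}^{4}\) is temperature independent, the ratio \(\gamma(T)/\gamma(30\,\mathrm{K})\) depends only on the integral.

```python
import numpy as np
from scipy.integrate import quad

NU_C = 1.35            # phonon cut-off frequency (ps^-1)
KB_OVER_HBAR = 0.1309  # k_B / hbar in ps^-1 per kelvin

def virtual_dephasing_integral(T):
    """Temperature-dependent integral of Eq. 12, using coth^2(x) - 1 = 1/sinh^2(x)."""
    integrand = lambda nu: nu**10 * np.exp(-(nu / NU_C) ** 2) / np.sinh(
        nu / (2.0 * KB_OVER_HBAR * T)) ** 2
    return quad(integrand, 1e-6, 20.0 * NU_C)[0]

ref = virtual_dephasing_integral(30.0)
for T in (4.0, 10.0, 20.0, 30.0):
    ratio = virtual_dephasing_integral(T) / ref
    print(f"T = {T:4.1f} K : gamma(T) / gamma(30 K) = {ratio:.4f}")
```

Consistent with the statement above, the virtual-phonon contribution is essentially frozen out at liquid-helium temperatures and switches on steeply towards 30 K.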
The final term in Eq. 10 describes the optical emission through the cavity mode. Though in principle this emission rate \(\Gamma\) will be temperature dependent [45, 46], for typical QD phonon parameters and for the cavity parameters of the current sample, the corresponding change in lifetime is \(\sim 3\) ps between 0-30 K, which is comparable to the uncertainty in the lifetime measurements. We therefore neglect this effect, such that the spontaneous emission rate is \(\Gamma\approx F_{\rm P}\Gamma_{0}\), where we have assumed the QD transition is on resonance with the cavity mode resulting in a Purcell factor \(F_{\rm P}=4g^{2}/\kappa\), and we have introduced the bulk emission rate \(\Gamma_{0}\).
#### 2.1.2 Coherent and incoherent scattering in the polaron frame
We are interested in understanding the impact that phonon coupling has on the optical properties of the QD. The emission spectrum is defined as:
\[S(\omega)=H(\omega)S_{0}(\omega), \tag{13}\]
where \(H(\omega)=8\pi g^{2}\kappa/[(\omega-(\omega_{\rm C}-\omega_{\rm X}))^{2}+( \kappa/2)^{2}]\) is the cavity filter function [26]. The spectrum of the QD is related to the first-order correlation function through
\(S_{0}(\omega)=\mathrm{Re}\left[\int_{0}^{\infty}g^{(1)}(\tau)e^{-i\omega\tau}\ \mathrm{d}\tau\right]\), which in the polaron frame, can be divided into two contributions:
\[g^{(1)}(\tau)=g^{(1)}_{\mathrm{opt}}(\tau)+g^{(1)}_{\mathrm{PSB}}(\tau). \tag{14}\]
The first contribution is associated with purely optical processes, and takes the form \(g^{(1)}_{\mathrm{opt}}(\tau)=B^{2}\lim_{t\to\infty}\langle\sigma^{\dagger}(t+\tau)\sigma(t)\rangle\), which leads to the ZPL in the emission spectrum [42], and can be calculated using the quantum regression theorem [33]. Under CW driving, the optical contribution can be further sub-divided into a coherent and an incoherent contribution: the coherent scattering is defined by the steady state \(g^{(1)}_{\mathrm{coh}}=\lim_{\tau\to\infty}g^{(1)}_{\mathrm{opt}}(\tau)\), with the incoherent scattering naturally following as \(g^{(1)}_{\mathrm{inc}}(\tau)=g^{(1)}_{\mathrm{opt}}(\tau)-g^{(1)}_{\mathrm{coh}}\). In addition to direct optical scattering, there are also processes where a phonon is emitted or absorbed during the photon emission process, which leads to a broad spectral feature termed the phonon sideband [42]. This is captured by \(g^{(1)}_{\mathrm{PSB}}(\tau)=(\mathcal{G}(\tau)-B^{2})g^{(1)}_{\mathrm{opt}}(\tau)\), where \(\mathcal{G}(\tau)=B^{2}\exp(\varphi(\tau))\) is the phonon correlation function.
To compare with experiment, we are interested in the fractions of light emitted into the ZPL and through the coherent scattering channel [43]. We therefore consider the partial powers, defined as the integral over the filtered spectrum associated with each emission channel; for example, the power through the PSB is given by \(\mathcal{P}_{\mathrm{PSB}}=\int_{-\infty}^{\infty}H(\omega)S_{\mathrm{PSB}}(\omega)\ \mathrm{d}\omega\), where \(S_{\mathrm{PSB}}(\omega)=\mathrm{Re}[\int_{0}^{\infty}g^{(1)}_{\mathrm{PSB}}(\tau)e^{-i\omega\tau}\ \mathrm{d}\tau]\). This allows us to define the filtered ZPL fraction as \(\mathcal{F}_{\mathrm{ZPL}}=\mathcal{P}_{\mathrm{opt}}/\mathcal{P}_{\mathrm{Tot}}\), where \(\mathcal{P}_{\mathrm{Tot}}=\mathcal{P}_{\mathrm{opt}}+\mathcal{P}_{\mathrm{PSB}}\) is the total power emitted. In the absence of any spectral filtering from the cavity, the ZPL fraction reduces to the Franck-Condon factor \(\mathcal{F}_{\mathrm{ZPL}}=B^{2}\). To calculate the fraction of light emitted through coherent scattering processes we consider only photons emitted through the ZPL, such that \(\mathcal{F}_{\mathrm{coh}}=\mathcal{P}_{\mathrm{coh}}/\mathcal{P}_{\mathrm{opt}}\), where \(\mathcal{P}_{\mathrm{coh}}=\pi H(0)g^{(1)}_{\mathrm{coh}}\).
### Sample Characterisation
Fig. 2(a) shows a schematic of the experimental setup. The sample comprises self-assembled InGaAs QDs embedded within a suspended 170 nm thick GaAs membrane. The membrane incorporates n and p-doped GaAs layers, as well as AlGaAs tunnelling barriers, forming a p-i-n diode that can tune the QD emission by several meV using the quantum-confined Stark effect (QCSE). Using electron beam lithography and chemical etching nanofabrication techniques, H1 photonic crystal cavities (PhCCs) are fabricated, consisting of a single point defect in a lattice of air holes (see inset in Fig. 2(a)). The device under study here comprises the neutral exciton state (\(|X\rangle\)) of a QD, weakly coupled to a resonant H1 PhCC (linewidth \(2\hbar\kappa=2.51\) meV) that induces a significant Purcell enhancement. Further details of the sample and device under study may be found in Ref. [9].
The sample is located within a liquid helium bath cryostat at a base temperature of \(T=4.2\) K. A feedback loop incorporating a resistive heater and a calibrated temperature sensor in the sample holder allows the temperature to be varied up to 50 K. The sample is excited by a tuneable single mode laser, with the emission
separated from the laser by the use of orthogonal polarisers, producing a typical signal-to-background ratio of 100:1 for resonant excitation. The emission from the sample is then analysed either in the frequency domain with a grating spectrometer or in the time domain by a Mach-Zehnder interferometer that records the absolute value of the first-order correlation function (\(|g^{(1)}(\tau)|\)). Full details of the time domain measurement are presented in Section 2.3.
Fig. 2(b) shows a typical spectrum of the device under study with the heater switched off. The narrow ZPL and broad asymmetric PSB are both clearly visible when plotted on a logarithmic scale. To verify the Purcell enhanced lifetime of the QD transition under study, a pump-probe measurement is performed, plotting the ZPL intensity as a function of the separation of two resonant \(\pi\)-pulses in Fig. 2(c) according to the method described in Ref. [9]. An exponential fit to this data produces a value
Figure 2: (a) Schematic of the experiment: BS - beam splitter, CCD - charge-coupled device (camera), LP - linear polarizer aligned either parallel (\(\parallel\)) or perpendicular (\(\perp\)) to input laser polarisation, SM - single mode fiber, SPAD - single photon avalanche diode, \(\Delta\phi\) - phase shift, \(\tau\) - path length difference. (b) Experimental log-linear spectrum of the QD-cavity device under study at 4 K, showing the zero phonon line and phonon sideband. (c) Pump-probe measurement of the cavity-enhanced QD radiative lifetime (green diamonds) fitted with an exponential decay (solid green line). (d) Measurement of the ZPL energy shift as a function of temperature (red circles) with a fit of a Bose-Einstein model according to eq. 15 (solid red line). (e) Measurement of the ZPL energy shift as a function of the bias voltage applied to the sample diode (blue triangles) with a quadratic fit (solid blue line).
of \(T_{1}=22.9\pm 1.2\) ps, in excellent agreement with the value of \(22.7\pm 0.9\) ps previously measured in Ref. [9] that corresponds to a Purcell enhancement of \(F_{P}=43\).
To begin to investigate the behaviour of this device, the redshift of the ZPL with temperature is first characterised by fitting temperature-dependent spectra. The results are plotted in Fig. 2(d) and show the characteristic non-linear behaviour where the redshift increases exponentially beyond an activation energy. The data agrees very well with a fit to a Bose-Einstein type model derived in refs. [47, 48]:
\[\Delta(T)=-SE_{ph}\left(\coth\left(\frac{E_{ph}}{2k_{\rm B}T}\right)-1\right), \tag{15}\]
where \(S\) is a dimensionless coupling constant and the coth term describes the coupling of electrons to phonons of energy \(E_{ph}\). The fit gives values of \(S=0.6\) and \(E_{ph}=8.0\) meV, comparable to previous values found in studies of InGaAs QDs [48].
To independently study the influence of temperature on the emission properties of the cavity-QD system, it is necessary to compensate for the redshift of the QD with increasing \(T\), such that the cavity remains resonant with the QD and maintains a constant Purcell enhancement. To achieve this, Fig. 2(e) shows a plot of the ZPL energy as a function of the bias voltage applied to the p-i-n diode. We observe a characteristic quadratic shift with voltage [49] over a total range of around 2 meV. As the QD-cavity resonance condition lies close to the centre of this range at the base temperature, we are able to compensate over 1 meV of redshift by increasing the applied voltage as the temperature increases.
### Experimental Method
To investigate the coherence of the emitted photons as a function of temperature, we make a time-domain measurement of the first order correlation function \(g^{(1)}(\tau)\) using a similar method to that described in Ref. [43]. This is performed using a Mach-Zehnder interferometer as shown in Fig. 2(a). At each point in time (\(\tau\)), the phase between the two arms (\(\Delta\phi\)) is scanned, producing a set of interference fringes. The contrast (\(v\)) of these fringes is then evaluated according to
\[v=\frac{I_{max}-I_{min}}{I_{max}+I_{min}}, \tag{16}\]
by using a generalised peak fitting routine to find the intensity at the local maxima (\(I_{max}\)) and minima (\(I_{min}\)). The maximum resolvable contrast (defined as \(1-\epsilon\)) is limited by factors including imperfect mode overlap at the second beamsplitter, imperfect polarisation matching between the interferometer arms, and detector dark counts. As such, this varies depending upon experimental conditions but is around 0.95. The measured fringe visibility as a function of \(\tau\) can then be related to \(g^{(1)}(\tau)\) by [43]:
\[v(\tau)=(1-\epsilon)\frac{|g^{(1)}(\tau)|}{g^{(1)}(0)}, \tag{17}\]
demonstrating that once the interferometer imperfections are accounted for by the \((1-\epsilon)\) term, \(v(\tau)\) corresponds to the absolute value of the normalised coarse grained first-order correlation function.
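A minimal sketch of this contrast extraction is given below, using synthetic fringes rather than measured data and a simple global maximum/minimum in place of the generalised peak-fitting routine; the value of \(\epsilon\) is set to 0.05 to match the quoted maximum resolvable contrast of about 0.95.

```python
import numpy as np

EPSILON = 0.05  # interferometer imperfection; maximum resolvable contrast is 1 - EPSILON

def fringe_contrast(intensities):
    """Fringe contrast v = (Imax - Imin) / (Imax + Imin), Eq. 16 (global extrema used here)."""
    i_max, i_min = float(np.max(intensities)), float(np.min(intensities))
    return (i_max - i_min) / (i_max + i_min)

def normalised_g1(v):
    """|g1(tau)| / g1(0) recovered from the measured contrast via Eq. 17."""
    return v / (1.0 - EPSILON)

# Synthetic phase scan at one delay tau, with an underlying coherence of 0.8
phase = np.linspace(0.0, 4.0 * np.pi, 400)
scan = 1.0 + (1.0 - EPSILON) * 0.8 * np.cos(phase)
print(normalised_g1(fringe_contrast(scan)))  # ~0.8
```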
## 3 Results
Figs. 3(a,b) show example measurements of fringe contrast as a function of time for temperatures of 15 K (a) and 30 K (b) respectively. The QD, laser and cavity are all mutually resonant, with this condition maintained as the temperature is increased by increasing the applied bias according to Figs. 2(d,e). The measurements are equivalent to a Fourier transform of the spectrum and exhibit three-stage dynamics: a fast initial decay associated with the real PSB transitions [43], an exponential decay with time constant \(T_{2}\) corresponding to incoherent radiative decay of the ZPL, and a plateau at long timescales from coherent scattering. The coherently scattered photons inherit the coherence time of the laser [50, 51], which is sufficiently long to appear flat on this scale. To extract the phonon parameters required by the polaron model, we fit the short-time dynamics of the \(g^{(1)}\) function at \(T=30\) K to the full correlation function derived in Sec. 2.1.2. By focusing on times \(\leq 10\) ps, we can consider only real phonon processes and neglect any virtual dephasing or optical decay. This allows us to perform a two-parameter fit to the \(g^{(1)}\) function, extracting an electron-phonon coupling strength \(\alpha=0.046\) ps\({}^{2}\) and phonon cut-off frequency \(\nu_{c}=1.35\) ps\({}^{-1}\). These parameters agree closely with those independently extracted in a previous study on the same device [43] and are used for all other theoretical curves that follow.
By evaluating the mean values of the plateaus in the data (\(100-500\) fs, \(5-10\) ps and \(200-1000\) ps), the amplitudes of each component (\(A\)) can be found as visualised by the arrows in Fig. 3(b). From this, the ZPL fraction can be found directly as
\[\mathcal{F}_{\mathrm{ZPL}}=\frac{A_{inc}+A_{coh}}{A_{PSB}+A_{inc}+A_{coh}}. \tag{18}\]
Fig. 3(c) compares the theoretical predictions for \(\mathcal{F}_{\mathrm{ZPL}}\) with the full experimental data-set of fringe contrast measurements from \(4-30\) K. In this range, \(\mathcal{F}_{\mathrm{ZPL}}\) varies in an almost linear manner [21], reducing from \(0.94\pm 0.01\) at 4 K to \(0.71\pm 0.01\) at 30 K due to the increasing probability of the real phonon transitions at elevated temperatures.
Whilst the ZPL fraction is invariant with excitation conditions [24, 42, 43], the coherent fraction is very sensitive to both the driving strength (the phonon renormalised Rabi frequency - \(\Omega_{\mathrm{R}}\)) and the emitter coherence [52]:
\[\mathcal{F}_{coh}=\frac{A_{coh}}{A_{inc}+A_{coh}}=\frac{T_{2}}{2T_{1}}\frac{1 }{1+\Omega_{\mathrm{R}}^{2}T_{1}T_{2}}. \tag{19}\]
Therefore, Eq. 19 illustrates that by maintaining constant values of \(\Omega_{\mathrm{R}}\) and \(T_{1}\), the coherently scattered fraction can be a sensitive probe of the QD coherence time \(T_{2}\). \(T_{1}\) is kept constant at the value of 22.9 ps measured in Fig. 2(c) by the aforementioned technique of balancing the QD redshift with temperature (Fig. 2(d)) with an equivalent
blueshift from an increased applied bias (Fig. 2(e)), keeping the QD resonant with the laser and cavity. Meanwhile, the Rabi frequency is calibrated at the beginning of each measurement by recording a series of Mollow triplet [53] spectra at different excitation powers. Plotting half of the Mollow side-peak splitting (equal to \(\Omega_{\mathrm{R}}\)) vs. the square root of the laser power (\(P^{1/2}\)) allows for a linear fit linking laser power to Rabi frequency. To give high sensitivity through a large coherent fraction, a Rabi energy of \(\hbar\Omega_{\mathrm{R}}=5.11\)\(\upmu\)eV is used throughout.
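The calibration step can be summarised by a small sketch: fit \(\hbar\Omega_{\rm R}\) against \(\sqrt{P}\) and invert the fit for the power that yields the target Rabi energy. The power values and splittings below are purely hypothetical placeholders, not measured data.

```python
import numpy as np

power = np.array([1.0, 2.0, 4.0, 8.0, 16.0])               # laser powers (hypothetical units)
half_splitting_uev = np.array([1.3, 1.8, 2.55, 3.6, 5.1])  # half Mollow splitting = hbar*Omega_R (hypothetical)

# Linear fit of the half splitting against sqrt(power)
slope, intercept = np.polyfit(np.sqrt(power), half_splitting_uev, 1)

target_uev = 5.11  # desired hbar*Omega_R used throughout the measurements
required_power = ((target_uev - intercept) / slope) ** 2
print(f"power needed for hbar*Omega_R = {target_uev} ueV: {required_power:.1f} (in the input power units)")
```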
Applying this approach, Fig. 3(d) shows the coherent fraction as a function of temperature, evaluated according to Eq. 19. The dashed horizontal line corresponds to the theoretical maximum coherent fraction of \(0.940\pm 0.007\), evaluated from the
Figure 3: (a-b) Experimental fringe contrast measurement of the first order correlation function (\(g^{(1)}(\tau)\)) for temperatures of (a) 15 and (b) 30 K. Solid lines are from the Polaron model, using independently measured values aside from fitting to extract the phonon parameters \(\alpha=0.0446\) ps\({}^{2}\), \(\nu_{c}=1.35\) ps\({}^{-1}\) and \(\mu=0.005293\) ps\({}^{2}\). (c-f) Coherence measures extracted from the temperature-dependent \(g^{(1)}(\tau)\) measurements: (c) ZPL fraction, (d) Coherent fraction, (e) \(T_{2}/2T_{1}\) and (f) Pure dephasing rate \(h\gamma\) as a function of temperature with results of the Polaron model (solid lines). Dashed lines in (d,e) indicate the “ideal” values for \(T_{2}=2T_{1}\) with the grey shading in (d) representing the uncertainty. The dashed line in (f) indicates the small additional non-thermal pure dephasing implied by the measurements. For all data without visible error bars, errors are comparable to the symbol size.
RHS of Eq. 19 by taking \(T_{2}=2T_{1}\). The experimental values begin at \(0.906\pm 0.014\) at 4 K, falling to \(0.758\pm 0.016\) at 30 K as dephasing of the ZPL becomes more significant. Whilst the value at 4 K is not quite transform-limited, we note that our measurement technique is a particularly stringent test of coherence as it is sensitive to any dephasing within the experiment duration (seconds). Most previous studies have used two photon interference methods that exclude any processes on timescales greater than the nanosecond separation between subsequent photons [29, 30, 31, 47]. When the timescale is extended in such measurements, a small decay in visibility is often observed [7], including in previous two photon interference measurements on this sample [9]. This effect likely originates from charge or spin noise [27], phenomena which may also explain the small non-thermal dephasing observed at low temperatures here.
With the measurement of coherent fraction, it is now possible to rearrange Eq. 19 to find
\[\frac{T_{2}}{2T_{1}}=\frac{\mathcal{F}_{coh}}{1-2\Omega_{\mathrm{R}}^{2}T_{1} ^{2}\mathcal{F}_{coh}}. \tag{20}\]
Using this equation with the previously found values of \(T_{1}\), \(\mathcal{F}_{coh}\) and \(\Omega_{\mathrm{R}}\), Fig. 3(e) shows \(T_{2}/2T_{1}\) as a function of temperature. At 4 K, \(T_{2}/2T_{1}=0.961\pm 0.014\), decreasing to \(T_{2}/2T_{1}=0.796\pm 0.018\) by 30 K. It is also then possible to extract the pure dephasing rate \(\gamma=1/T_{2}^{*}\) from Eq. 1, with the results plotted in Fig. 3(f). To extract the prefactor \(\mu\) for the virtual phonon dephasing described by Eq. 12, we fit to this experimental data, adding an additional constant value (3.5\(\upmu\)eV - dashed line in Fig. 3(f)) to describe the small non-thermal dephasing implied by Fig. 3(d/e). The extracted value is \(\mu=0.00529\) ps\({}^{2}\).
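As a consistency check, Eq. 20 (together with the standard relation \(1/T_{2}=1/2T_{1}+1/T_{2}^{*}\)) can be evaluated directly from the quoted quantities. The sketch below uses \(T_{1}=22.9\) ps, \(\hbar\Omega_{\rm R}=5.11\)\(\upmu\)eV (with \(\hbar\approx 658.2\)\(\upmu\)eV ps) and the measured coherent fractions at 4 K and 30 K; it reproduces the \(T_{2}/2T_{1}\) values stated above.

```python
import numpy as np

T1 = 22.9                     # ps, Purcell-enhanced lifetime from Fig. 2(c)
HBAR_UEV_PS = 658.2           # hbar in ueV*ps
OMEGA_R = 5.11 / HBAR_UEV_PS  # Rabi frequency in ps^-1 (hbar*Omega_R = 5.11 ueV)

def t2_over_2t1(f_coh):
    """Invert Eq. 19 for T2 / (2*T1), i.e. Eq. 20."""
    return f_coh / (1.0 - 2.0 * OMEGA_R**2 * T1**2 * f_coh)

def pure_dephasing_rate(f_coh):
    """1/T2* in ps^-1, assuming the standard relation 1/T2 = 1/(2*T1) + 1/T2*."""
    t2 = 2.0 * T1 * t2_over_2t1(f_coh)
    return 1.0 / t2 - 1.0 / (2.0 * T1)

for temperature, f_coh in [(4.0, 0.906), (30.0, 0.758)]:
    ratio = t2_over_2t1(f_coh)
    rate_per_ns = 1e3 * pure_dephasing_rate(f_coh)
    print(f"T = {temperature:4.1f} K : T2/2T1 = {ratio:.3f}, 1/T2* = {rate_per_ns:.2f} ns^-1")
```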
## 4 Discussion
In the results section, the temperature dependence of the ZPL fraction and ZPL coherence (\(T_{2}/2T_{1}\)) was measured in the range of 4 - 30 K. In this range, it was found that the ZPL fraction decayed almost linearly from 0.937 to 0.7, whilst \(T_{2}/2T_{1}\) decreased from 0.961 to 0.798 with the gradient increasing at higher \(T\). Fig. 4(a) presents a comparison of our measurements of \(T_{2}/2T_{1}\) vs. \(T\) to previous results obtained by measuring two photon interference visibility through a spectral filter that removes the PSB component. Thoma et al. [30] and Gerhardt et al. [31] (grey triangles and blue inverted triangles respectively in Fig. 4(a)) consider QDs without any significant Purcell enhancement; it is therefore unsurprising that their values for \(T_{2}/2T_{1}\) rapidly fall away from those measured here (green diamonds) as \(T\) increases. For comparison, at \(T=30\) K, Thoma et al. [30] measure \(T_{2}/2T_{1}=0.39\), half the value measured here.
Meanwhile, Grange et al. [29] (red circles) measured values ranging between 0.99 and 0.96 in the range 9 - 18 K for a QD micropillar system with a Purcell factor of 20, compared to 43 for the device studied here. Whilst direct comparisons are difficult due to the smaller temperature range, the lower local gradient in \(T_{2}/2T_{1}\) despite the lower Purcell factor suggests that the underlying thermal ZPL broadening may be lower than for the QD studied here. Furthermore, the pure dephasing rates extracted in Fig. 3(f) appear significantly larger than in previous studies, with a value of 11.8 \(\upmu\)eV at \(T=25\) K compared to \(\sim 4\)\(\upmu\)eV obtained at the same temperature in four-wave mixing studies [25]. Meanwhile, Ref. [23] extracted a value of \(\mu\) from two photon interference data that is an order of magnitude smaller than in this work. We note that the theoretical thermal
Figure 4: Comparison of (a) \(T_{2}/2T_{1}\) and (b) ZPL fraction from this study (green diamonds) with prior work (red circles, blue triangles and grey inverted triangles) and the Polaron model (lines). For the results from prior studies, \(T_{2}/2T_{1}\) values are equated to two photon interference visibilities measured through a narrow spectral filter that removes the PSB. The polaron model as shown in Fig. 3(c-f) is the light green solid line, whilst the dark green solid line is the same parameters but with the additional non-thermal dephasing removed for comparison. The dark green dotted line shows the results of the Polaron model with the same parameters but without any Purcell enhancement. The dark green dashed line uses the same phonon parameters but reduces \(\kappa\) to increase the Purcell factor to \(F_{P}=200\), chosen to be just below the onset of strong QD-cavity coupling.
dephasing rate \(\gamma(T)\) given by Eq. 12 contains both the cut-off frequency \(\nu_{c}\) and the prefactor \(\mu\) that varies with the QD energy level spacing. As both of these quantities depend upon the QD size and shape, significant variation in the thermal dephasing rate is not unexpected. Whilst detailed consideration of QD structure is beyond the scope of this work, it may provide an interesting direction for further study.
Exploiting the excellent agreement between the polaron model (solid green lines in Fig. 4) and experimental results, we now model two additional scenarios: an optimised QD-cavity device with \(F_{P}\) increased to 200 by reducing \(\kappa\), and a bare QD without any Purcell enhancement. These models use the QD and phonon parameters found from fitting the experimental data, varying only the cavity parameters and setting any non-thermal dephasing to zero. Considering first the case without Purcell enhancement (dotted green lines in Fig. 4), we note that in Fig. 4(a) the ZPL coherence falls rapidly, reaching \(T_{2}/2T_{1}=0.11\) at \(T=30\) K. This illustrates the importance of the Purcell enhancement - our QD-cavity device improves on this value by more than a factor of 7. In addition, it is noticeable that without Purcell enhancement, \(T_{2}/2T_{1}\) falls much faster with increasing temperature than the measurements of Refs. [30, 31], providing further evidence that the underlying thermal dephasing rate of this QD appears significantly greater than in previous studies.
Meanwhile, the dashed line in Fig. 4 shows the same model but for an optimised cavity with \(F_{P}=200\) by reducing \(\kappa\). This value is chosen to maximise the Purcell factor whilst ensuring that the cavity-QD system does not enter the strong coupling regime where photon coherence begins to decrease again [26]. For these parameters, the increased Purcell enhancement significantly improves \(T_{2}/2T_{1}\) from 0.83 to 0.92 at \(T=30\) K when compared to the model for the sample cavity parameters. The magnitude of this difference continues to increase with temperature. When considering the fraction of light emitted into the ZPL (Fig. 4(b)), a small difference (\(\sim 0.04\) at 30 K) is observed between the sample parameters and the "no Purcell" model. This is due to the photonic spectral density of the cavity (Eq. 8) removing some of the PSB contribution according to Eq. 13. The effect is relatively small as the half-width of the cavity (\(\kappa\)) is comparable to the phonon cut-off frequency \(\nu_{c}\). For the optimised system with reduced \(\kappa\), the ZPL fraction at 30 K increases significantly from 0.70 to 0.83 due to the five-fold reduction in cavity linewidth. Whilst it seems intuitive that further reducing \(\kappa\) will continue to be advantageous in this way, the onset of strong QD-cavity coupling ultimately degrades the photon coherence, leading to a fundamental trade-off between indistinguishability and efficiency [26]. Unlike Fig. 4(a), it is not possible to easily compare ZPL fraction with previous studies as two photon interference measurements cannot easily isolate the PSB contribution.
## 5 Conclusion
In conclusion, we have demonstrated a QD-nanocavity device that exploits a large Purcell enhancement to achieve a high degree of photon coherence at elevated
temperatures. Our novel experimental approach based upon time-domain measurement of the first-order correlation function is able to distinguish between contributions from real and virtual phonon-mediated transitions in a single measurement. Exploiting this, at a temperature of 30 K that is compatible with the operational temperature of compact cryocoolers, we measure a ZPL coherence of \(T_{2}/2T_{1}=0.80\) with a ZPL fraction of 0.71, compared to the \(T_{2}/2T_{1}=0.11\) predicted by our model in the absence of Purcell enhancement. We note that these experimental results are achieved despite the studied QD device exhibiting significantly stronger thermal dephasing than was observed in previous QD studies, a result that indicates that the QD size/shape may play a role in determining the magnitude of phonon dephasing. We have also developed a theoretical model based upon the polaron framework that fully reproduces our experimental results. The excellent agreement between theory and experiment provides predictive power, allowing us to simulate an optimised cavity-QD device that can achieve \(T_{2}/2T_{1}=0.92\) with a ZPL fraction of 0.83, while fully accounting for electron-phonon processes using experimentally measured parameters.
Whilst indistinguishability requirements are application specific, we note that experiments have successfully demonstrated the quantum interference phenomenon of boson sampling with a QD source exhibiting indistinguishabilities in the range \(0.5-0.7\)[54], suggesting that even our current device could perform such experiments at 30 K when combined with a spectral filter to remove some of the PSB. We believe that the theoretical and experimental methods developed here can support the development of a new generation of cavity-QD quantum light sources, meeting both the photon coherence and SWAP requirements of emerging optical quantum technologies. Furthermore, with some adaptations to the specifics of phonon interactions in different materials, our methods can readily be applied to other emerging solid-state quantum emitter systems in materials such as diamond [14], silicon [15] and 2D materials [16].
## Acknowledgements
The authors acknowledge Edmund Clarke and Ben Royal for sample growth and nanofabrication. The authors also thank Mark Fox, Catherine Philips, Maksym Sich and Scott Dufferwiel for insightful conversations. A.J.B. gratefully acknowledges the support of the EPSRC (UK) through the Quantum Technology Fellowship EP/W027909/1 and Programme Grant EP/N031776/1, in addition to support from Research England through the National Productivity Investment Fund.
|
2306.13504 | Existence and Uniqueness of Solutions of the Koopman--von Neumann
Equation on Bounded Domains | The Koopman--von Neumann equation describes the evolution of a complex-valued
wavefunction corresponding to the probability distribution given by an
associated classical Liouville equation. Typically, it is defined on the whole
Euclidean space. The investigation of bounded domains, particularly in
practical scenarios involving quantum-based simulations of dynamical systems,
has received little attention so far. We consider the Koopman--von Neumann
equation associated with an ordinary differential equation on a bounded domain
whose trajectories are contained in the set's closure. Our main results are the
construction of a strongly continuous semigroup together with the existence and
uniqueness of solutions of the associated initial value problem. To this end, a
functional-analytic framework connected to Sobolev spaces is proposed and
analyzed. Moreover, the connection of the Koopman--von Neumann framework to
transport equations is highlighted. | Marian Stengl, Patrick Gelß, Stefan Klus, Sebastian Pokutta | 2023-06-23T14:03:50Z | http://arxiv.org/abs/2306.13504v2 | # Existence and Uniqueness of Solutions of the Koopman-von Neumann Equation on Bounded Domains
###### Abstract
The Koopman-von Neumann equation describes the evolution of a complex-valued wavefunction corresponding to the probability distribution given by an associated classical Liouville equation. Typically, it is defined on the whole Euclidean space. The investigation of bounded domains, particularly in practical scenarios involving quantum-based simulations of dynamical systems, has received little attention so far. We consider the Koopman-von Neumann equation associated with an ordinary differential equation on a bounded domain whose trajectories are contained in the set's closure. Our main results are the construction of a strongly continuous semigroup together with the existence and uniqueness of solutions of the associated initial value problem. To this end, a functional-analytic framework connected to Sobolev spaces is proposed and analyzed. Moreover, the connection of the Koopman-von Neumann framework to transport equations is highlighted.
**Keywords:** dynamical systems, transfer operators, evolution equations, Koopman-von Neumann mechanics, Perron-Frobenius-Sobolev space
**MSC:** 35A05, 35F10, 37C30, 46E35, 47D06
## 1 Introduction
Quantum computing has the potential to enhance the way information is processed and enables us to solve problems that are beyond the capabilities of classical computers. Since the introduction of this paradigm by Benioff [1], Manin [2], and in particular Feynman [13] in the early 1980s, the interest in quantum computation and simulation has been growing continuously. Recent years have seen rapid progress not only in terms of technical realizations but also by opening up new applications including cryptography [14, 15, 16], financial modeling [17, 18], materials science [1, 19, 20], and machine learning [10, 11].
Quantum computers are also of interest to simulate dynamical systems modeled by ordinary differential equations (ODEs). This problem class covers a variety of complex physical, chemical, and biological systems. As quantum algorithms are written in terms of unitary operations, the realization of such an algorithm is a challenging task. While the solution of high-dimensional systems of _linear_ ODEs has been extensively studied and various quantum algorithms have been developed in the past decades [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], the simulation of _nonlinear_ ODEs is a more challenging task that has only been addressed recently [14, 21, 22, 23, 24].
One possible approach towards the analysis of dynamical systems is the use of _transfer operators_. In this context, the Perron-Frobenius and the Koopman operators [25, 11, 12, 26, 27, 28] are of particular interest, as they enable the analysis of the global behavior of complex dynamical systems. Instead of solving nonlinear ODEs through numerical integration, transfer operators describe how probability densities and observables, respectively, are propagated forward in time by using linear and infinite-dimensional operators and their corresponding generators. Transfer operator theory has been successfully applied in various research areas including fluid dynamics [29, 28, 25], molecular dynamics [26, 27, 28, 29], and ergodic theory [25, 26, 27, 28, 29, 30, 31, 32]. It lays the foundation for a wide range of data-driven methods which can be used, e.g., for the detection of metastable or coherent sets as well as model reduction [15, 22, 23, 24, 25, 26, 27, 28], but also for control [22, 23, 24, 25, 26, 27, 28, 29].
For the operator-based numerical simulation and analysis of dynamical systems on a quantum computer, the _Koopman-von Neumann_ (KvN) framework is of particular interest. In this formulation, the conservation of the probability distribution, expressed by the classical Liouville equation, is recast as a Schrodinger equation on a Hilbert space. The solution of this equation yields a complex-valued wave function \(\psi\), whose propagation is described by a semigroup of unitary operators. Using Born's rule, one can extract the probability density \(\rho=|\psi|^{2}\). This probability density then satisfies the Liouville equation, see [22, 23, 24]. In its original formulation, Koopman and von Neumann only considered the case of Hamiltonian dynamics [25, 26, 27], but it has been extended to general dynamical systems, see [22, 23, 24]. The KvN framework thus establishes a connection between classical mechanics and quantum mechanics. See also [20] for a detailed comparison of the different operators and their properties. If the KvN Hamiltonian is sparse, the quantum-based simulation is exponentially more efficient than the Euler discretization of the Liouville equation [22]. All these considerations motivate an in-depth analysis of the mathematical theory behind the KvN equation.
The primary goal of this paper is the mathematical investigation of the KvN framework. We consider dynamical systems whose trajectories are completely contained in a bounded, closed, and sufficiently regular set for all times. We will derive a mathematically rigorous existence theory for the KvN equation.
In addition to the mathematical analysis, our goal is also to bridge the gap between mathematicians, quantum physicists, and (quantum) computer scientists. To this end, key concepts such as semigroup theory, function spaces, transfer operators and their connections to the KvN framework will be introduced in detail and many additional references are provided for the interested reader. We will in particular highlight the relationship with the transport equation literature.
The rest of this paper is organized as follows: In Section 2 we will introduce selected tools from functional analysis including semigroup theory in Subsection 2.1, function spaces in Subsection 2.2, and transfer operators in Subsection 2.3. A function space framework related to Sobolev spaces will be introduced and analyzed in Section 3. In Section 4 these results will be used to derive the existence of a strongly continuous semigroup associated with the KvN generator and hence to prove the existence and uniqueness of solutions.
## 2 Notation and Preliminaries
For the following abstract notions in functional analysis, we refer to [1]. Let \(X\) be a real (or complex) Banach space. Its _(topological) dual space_ is denoted by \(X^{*}\) and is defined as the set of all bounded linear operators taking values in \(\mathbb{R}\) or \(\mathbb{C}\). The _dual pairing_ of elements \(x^{*}\in X^{*}\) and \(x\in X\) is defined by \(\langle x^{*},x\rangle_{X^{*},X}:=x^{*}(x)\). If the corresponding spaces are clear from the context, we may just write \(\langle x^{*},x\rangle\). Let \(Y\) be another (real or complex) Banach space. The set of all bounded linear operator from \(X\) to \(Y\) is denoted by \(\mathcal{L}(X,Y)\) and forms a Banach space when equipped with the _operator norm_
\[\|A\|_{\mathcal{L}(X,Y)}:=\sup_{\begin{subarray}{c}x\in X\\ \|x\|_{X}=1\end{subarray}}\|Ax\|_{Y}\]
for \(A\in\mathcal{L}(X,Y)\). Clearly, we have \(X^{*}=\mathcal{L}(X,\mathbb{R})\) for real, and \(X^{*}=\mathcal{L}(X,\mathbb{C})\) for complex Banach spaces. If \(Y=X\) we will sometimes simply write \(\mathcal{L}(X):=\mathcal{L}(X,X)\). The identity on \(X\) is denoted by \(\mathrm{id}_{X}\). The _dual_ of an operator \(A\in\mathcal{L}(X,Y)\) is the uniquely determined operator \(A^{*}\in\mathcal{L}(Y^{*},X^{*})\) with
\[\langle A^{*}y^{*},x\rangle_{X^{*},X}=\langle y^{*},Ax\rangle_{Y^{*},Y},\quad \text{for all }x\in X,y^{*}\in Y^{*}.\]
Let \(Y\subseteq X\) be a subspace equipped with its own norm \(\|\cdot\|_{Y}\). If \((Y,\|\cdot\|_{Y})\) is a real (respectively complex) Banach space and there exists a constant \(C>0\) with \(\|y\|_{X}\leq C\|y\|_{Y}\) for all \(y\in Y\), then we say that \(Y\)_embeds continuously_ into \(X\) and we write \(Y\hookrightarrow X\).
Finally, let \(H\) be a complex Hilbert space with scalar product \((\cdot,\cdot)_{H}\). Then, according to the Riesz representation theorem [1, Section 6.1], the canonical map \(\Lambda\colon H\to H^{*}\) defined by \(u\mapsto(v\mapsto(u,v)_{H})\) is an isometric, conjugate linear map, i.e., \(\Lambda(u_{0}+u_{1})=\Lambda u_{0}+\Lambda u_{1}\) and \(\Lambda(\alpha u)=\bar{\alpha}\Lambda u\) for all \(u,u_{0},u_{1}\in H\) and \(\alpha\in\mathbb{C}\).
### Semigroup Theory
The major goal of this work is to prove the existence of solutions of the KvN equation, which can be classified as a linear evolution equation. Their treatment is tightly connected to semigroup theory, of which we now introduce the following notions and results.
For more details, we refer to [10, 11]. A _one-parameter semigroup_ (on \(X\)) or just _semigroup_ is a family of operators \((T(t))_{t\geq 0}\subseteq\mathcal{L}(X)\) such that the following properties hold:
* \(T(0)=\mathrm{id}_{X}\),
* \(T(s+t)=T(s)T(t)\) for all \(s,t\geq 0\).
A semigroup is called a _\(C_{0}\)-semigroup_ if for all \(x\in X\) the mapping \(t\mapsto T(t)x\) is continuous on \([0,\infty)\) and it is called a semigroup of _contractions_, if it holds that \(\|T(t)\|_{\mathcal{L}(X)}\leq 1\) for all \(t\geq 0\). The _(infinitesimal) generator_ of \((T(t))_{t\geq 0}\) is defined by
\[Ax:=\lim_{t\searrow 0}\frac{1}{t}(T(t)-\mathrm{id}_{X})x\]
for all \(x\in\mathcal{D}(A)\) with
\[\mathcal{D}(A):=\left\{x\in X:\lim_{t\searrow 0}\frac{1}{t}(T(t)-\mathrm{id}_{X})x \text{ exists}\right\}.\]
This set is called the _domain_ of \(A\). In general, \(\mathcal{D}(A)\) is not a closed subspace of \(X\) and \(A\) is an _unbounded operator_. Such an operator is called _closed_, if its graph
\[\mathrm{gph}(A):=\{(x,y)\in X\times X:x\in\mathcal{D}(A)\text{ and }y=Ax\}\]
is a closed subset of \(X\times X\). If \(A\colon\mathcal{D}(A)\subseteq X\to X\) is a closed operator, then the space \(\mathcal{D}(A)\) equipped with the _graph norm_
\[\|x\|_{A}:=\|x\|_{X}+\|Ax\|_{X}\]
is a Banach space and \(A\in\mathcal{L}(\mathcal{D}(A),X)\) is a bounded linear operator. For a complex Hilbert space \(H\), an operator \(A\colon\mathcal{D}(A)\subseteq H\to H\) is called _dissipative_, if all \(u\in\mathcal{D}(A)\) satisfy \(\operatorname{Re}(Au,u)_{H}\leq 0\).
Consider the following _Cauchy problem_
\[\partial_{t}u =Au\text{ for all }t\geq 0, \tag{1}\] \[u(0) =u_{0}.\]
Of particular interest for us is the following result.
**Theorem 1** (Solution of the Cauchy problem).: _Let \(A\colon\mathcal{D}(A)\subseteq X\to X\) be the infinitesimal generator of a \(C_{0}\)-semigroup \((T(t))_{t\geq 0}\) with \(T(t)\in\mathcal{L}(X)\) for all \(t\geq 0\), then the mapping \(u\colon t\mapsto T(t)u_{0}\) is the unique solution \(u\in C^{1}([0,\infty),X)\cap C([0,\infty),\mathcal{D}(A))\) of (1)._
For more details, see, e.g., [10, Chapter 4, Theorem 1.3] as well as [11, Proposition 2.2.2(i)]. The existence and uniqueness of solutions of (1) can thus be guaranteed if the existence of a \(C_{0}\)-semigroup for a \(A\colon\mathcal{D}(A)\subseteq X\to X\) can be established. There are several results that guarantee the existence of a \(C_{0}\)-semigroup for a given unbounded operator \(A\) with domain \(\mathcal{D}(A)\), e.g., the Hille-Yosida theorem [10, Chapter 2, Generation Theorem 3.5] and Lumer-Phillips theorem [10, Chapter 2, Theorem 3.15]. For our case, the following corollary suffices.
**Corollary 2** (Corollary of Lumer-Phillips Theorem, see [10, Corollary 4.4]).: _Let \(H\) be a complex Hilbert space and \(A\) a densely defined closed linear operator on \(H\). If both \(A\) and \(A^{*}\) are dissipative, then \(A\) is the infinitesimal generator of a \(C_{0}\)-semigroup of contractions on \(H\)._
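Before turning to function spaces, it may help to see the semigroup picture in the simplest possible setting. The following Python sketch (a finite-dimensional illustration only, not part of the analysis below) takes a bounded generator \(A\), i.e., a matrix, for which the \(C_{0}\)-semigroup is the matrix exponential \(T(t)=e^{tA}\); it checks the semigroup property and that \(t\mapsto T(t)u_{0}\) solves the Cauchy problem (1).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative bounded generator (a damped rotation); any square matrix works here.
A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])
u0 = np.array([1.0, 0.0])

def T(t):
    """Matrix-exponential semigroup T(t) = exp(t * A)."""
    return expm(t * A)

# Semigroup property T(s + t) = T(s) T(t)
s, t = 0.3, 0.7
print(np.allclose(T(s + t), T(s) @ T(t)))                          # True

# u(t) = T(t) u0 satisfies u'(t) = A u(t); compare a finite difference with A u(t)
t, h = 1.0, 1e-6
finite_difference = (T(t + h) @ u0 - T(t) @ u0) / h
print(np.allclose(finite_difference, A @ (T(t) @ u0), atol=1e-4))  # True
```

In the infinite-dimensional setting considered in this paper the generator is unbounded, and it is exactly the results quoted above (Theorem 1 and Corollary 2) that replace the matrix exponential.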
### Function Spaces
We will frequently use several results regarding function spaces, which are introduced in this subsection. For further details, we refer to [1, 1, 2] as well as to [1, Chapter 1, SS2].
Throughout this work, let \(\Omega\subseteq\mathbb{R}^{d}\), \(d\in\mathbb{N}\), be a bounded, open domain with _Lipschitz boundary_. The latter means that for every \(x\in\partial\Omega\) there exists a neighborhood \(Q\) such that after a change of coordinates the intersection \(Q\cap\partial\Omega\) can be identified with the graph of a Lipschitz continuous function on a neighborhood of \(\mathbb{R}^{d-1}\). By \(\lambda^{d}\) we denote the _Lebesgue measure_ and by \(\mathcal{H}^{d-1}\) the \((d-1)\)-dimensional Hausdorff measure. Since \(\Omega\) has a Lipschitz boundary, the _outward unit normal_\(\nu\colon\partial\Omega\to\mathbb{R}^{d}\) is well-defined up to a Hausdorff null set.
#### 2.2.1 Continuous and Differentiable Functions
The set of all _continuous_ functions on \(\Omega\) is denoted by \(C(\Omega)\) and the set of all such functions that are _continuous to the boundary_ is denoted by \(C(\bar{\Omega})\). As \(\Omega\subseteq\mathbb{R}^{d}\) is bounded, its closure is compact. Hence, all functions in \(C(\bar{\Omega})\) are bounded and uniformly continuous on \(\bar{\Omega}\). For \(k\in\mathbb{N}\), \(k\geq 1\), let \(C^{k}(\Omega)\) and \(C^{k}(\bar{\Omega})\) denote the space of all \(k\)_-times differentiable_ functions such that all partial derivatives of order up to \(k\) are in \(C(\Omega)\) or \(C(\bar{\Omega})\), respectively. For \(k=0\), we define \(C^{0}(\Omega)=C(\Omega)\) (and \(C^{0}(\bar{\Omega})=C(\bar{\Omega})\)). The spaces
\[C^{\infty}(\Omega):=\bigcap_{k\in\mathbb{N}}C^{k}(\Omega)\text{ and }C^{\infty}(\bar{\Omega}):=\bigcap_{k\in\mathbb{N}}C^{k}(\bar{ \Omega})\]
are the sets of _smooth functions_ on \(\Omega\) or \(\bar{\Omega}\), respectively. The _support_ of a function \(u\colon\Omega\to\mathbb{R}\) is defined as
\[\operatorname{supp}u:=\overline{\{x\in\Omega:u(x)\neq 0\}},\]
where the closure is taken in \(\mathbb{R}^{d}\). A function is said to be _compactly supported_ (in \(\Omega\)), if its support is contained in \(\Omega\). The set of all _smooth compactly supported_ functions is denoted by
\(C_{0}^{\infty}(\Omega)\). For a Banach space \(X\), we analogously define the spaces \(C^{k}(\Omega,X)\) and \(C_{(0)}^{\infty}(\Omega,X)\) as the set of all \(k\)-times differentiable and smooth (as well as compactly supported) functions, respectively, on \(\Omega\) with values in \(X\).
#### 2.2.2 Lebesgue Spaces
In the remainder of the text, we always identify a function \(u\colon\Omega\to\mathbb{R}\) with its equivalence class \([u]=\left\{v\colon\Omega\to\mathbb{R}:\;\lambda^{d}(\{x\in\Omega:u(x)\neq v(x) \})=0\right\}\) and write \(\mathrm{d}x=\mathrm{d}\lambda^{d}(x)\) as a shorthand. Given the measure space \((\Omega,\mathcal{L}(\Omega),\lambda^{d})\) with Lebesgue \(\sigma\)-algebra \(\mathcal{L}(\Omega)\), the \(L^{p}\) space for \(p\in[1,\infty)\) can be defined as the set of equivalence classes of Lebesgue measurable functions
\[L^{p}(\Omega)=\left\{u\colon\Omega\to\mathbb{R}:\;\int_{\Omega}|u|^{p}\mathrm{ d}x<\infty\right\}.\]
For \(p=\infty\), we define the space
\[L^{\infty}(\Omega):=\{u\colon\Omega\to\mathbb{R}:\;\exists\,C>0\text{ with }|u|\leq C\;\lambda^{d}\text{-a.e. on }\Omega\}.\]
The above spaces are Banach spaces when equipped with the norms
\[\|u\|_{L^{p}(\Omega)}:=\left(\int_{\Omega}|u(x)|^{p}\mathrm{d}x\right)^{1/p} \text{for }p\in[1,\infty)\text{ and }\|u\|_{L^{\infty}(\Omega)}:=\inf_{ \begin{subarray}{c}N\subseteq\Omega,\\ \lambda^{d}(N)=0\end{subarray}}\sup_{x\in\Omega\setminus N}|u(x)|,\]
respectively. For \(p=2\), the norm is induced by the inner product
\[(u,v)_{L^{2}(\Omega)}:=\int_{\Omega}uv\,\mathrm{d}x,\quad\text{for all }u,v\in L^{2}(\Omega)\]
and therefore \(L^{2}(\Omega)\) is a Hilbert space. The space \(L^{p}(\Omega,\mathbb{R}^{m})\), \(m\in\mathbb{N}\), is the set of functions \(u\colon\Omega\to\mathbb{R}^{m}\) whose components are in \(L^{p}(\Omega)\). For \(p,q\in(1,\infty)\) with \(\frac{1}{p}+\frac{1}{q}=1\), the dual of \(L^{p}(\Omega)\) is isomorphic to \(L^{q}(\Omega)\). Moreover, we have \((L^{1}(\Omega))^{*}=L^{\infty}(\Omega)\), but in general \((L^{\infty}(\Omega))^{*}\neq L^{1}(\Omega)\), see [1, Theorems 2.44-2.46, Remark 2.47].
#### 2.2.3 Sobolev Space and \(H(\mathrm{div})\) Space
The _distributional derivative_ of \(u\in L^{p}(\Omega)\) (with respect to the \(j\)-th coordinate) is the linear functional \(\partial_{j}u\colon C_{0}^{\infty}(\Omega)\to\mathbb{R}\) defined by
\[\langle\partial_{j}u,\varphi\rangle:=-\int_{\Omega}u\,\partial_{j}\varphi\, \mathrm{d}x\ =-(u,\partial_{j}\varphi)_{L^{2}(\Omega)}\]
for \(j=1,\ldots,d\) and for all \(\varphi\in C_{0}^{\infty}(\Omega)\). (Here, the use of the dual pairing is some slight abuse of notation.) By definition, the distributional derivative _always_ exists. The Sobolev space (for \(p=2\)) is defined as
\[H^{1}(\Omega):=\left\{u\in L^{2}(\Omega):\partial_{j}u\in L^{2}(\Omega)\text{ for all }j=1,\ldots,d\right\}\]
and is a Hilbert space when equipped with the inner product
\[(u,v)_{H^{1}(\Omega)}:=(u,v)_{L^{2}(\Omega)}+(\nabla u,\nabla v)_{L^{2}( \Omega;\mathbb{R}^{d})}\text{ for all }u,v\in H^{1}(\Omega).\]
In this case, the distributional derivative is called _weak derivative_. Sobolev functions exhibit a range of practical properties, one of which is a simple product rule highlighted in the following theorem.
**Theorem 3** (Product rule, cf. [1, Sec. 4.25]).: _Given two functions \(u\in H^{1}(\Omega)\) and \(f\in C^{1}(\bar{\Omega})\), it holds that_
\[\nabla(uf)=f\nabla u+u\nabla f\in L^{2}(\Omega;\mathbb{R}^{d}).\]
There exist other versions of the product rule for Sobolev functions, but within the scope of this work Theorem 3 suffices. Another important feature of Sobolev functions pertaining to boundary conditions is the existence of _traces_. This is covered in the next theorem.
**Theorem 4** (Traces of Sobolev functions, cf. [15, Theorem 4.6]).: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary. Then, there exists a linear, continuous operator \(\operatorname{tr}:H^{1}(\Omega)\to L^{2}(\partial\Omega)\) that satisfies \(\operatorname{tr}(u)=u|_{\partial\Omega}\) for all \(u\in C^{1}(\bar{\Omega})\)._
Of particular importance are functions that are zero on the boundary. This space is defined by \(H^{1}_{0}(\Omega)=\ker\operatorname{tr}\). Under the conditions of Theorem 4, one can show that this space is the completion of \(C^{\infty}_{0}(\Omega)\) with respect to the \(H^{1}\)-norm, cf. [1, Theorem A8.10 and 3.29]. Its dual space is referred to as \(H^{-1}(\Omega)\). The trace operator is _not_ surjective and its range is the space \(H^{\frac{1}{2}}(\partial\Omega)\) equipped with the quotient norm
\[\|z\|_{H^{1/2}(\partial\Omega)}=\inf_{\begin{subarray}{c}u\in H^{1}(\Omega),\\ \operatorname{tr}(u)=z\end{subarray}}\|u\|_{H^{1}(\Omega)}.\]
The dual of this space is denoted by \(H^{-1/2}(\partial\Omega)\).
Similar to the definition of the distributional derivative, we define the _distributional divergence_ of a vector field \(u\in L^{p}(\Omega;\mathbb{R}^{d})\) by
\[\langle\operatorname{div}u,\varphi\rangle:=-\int_{\Omega}u\cdot\nabla\varphi \,\mathrm{d}x. \tag{2}\]
Analogously, we are then interested in the existence of _weak divergences_. This motivates the definition of the space
\[H(\operatorname{div},\Omega):=\left\{u\in L^{2}(\Omega;\mathbb{R}^{d}): \operatorname{div}u\in L^{2}(\Omega)\right\}.\]
Hence, we have \(u\in H(\operatorname{div},\Omega)\) if and only if \(u\in L^{2}(\Omega;\mathbb{R}^{d})\) and there exists a \(y\in L^{2}(\Omega)\) such that
\[(u,\nabla\varphi)_{L^{2}(\Omega;\mathbb{R}^{d})}=-(y,\varphi)_{L^{2}(\Omega)} \text{ for all }\varphi\in C^{\infty}_{0}(\Omega).\]
This space is a Hilbert space when equipped with the inner product
\[(u,v)_{H(\operatorname{div},\Omega)}:=(u,v)_{L^{2}(\Omega;\mathbb{R}^{d})}+( \operatorname{div}(u),\operatorname{div}(v))_{L^{2}(\Omega)}.\]
It can then be shown that a function whose components are in \(H^{1}(\Omega)\) is indeed in \(H(\operatorname{div},\Omega)\). Returning to the product rule described in Theorem 3, we would like to highlight the following frequently encountered case: Take a function \(F\in C^{1}(\bar{\Omega};\mathbb{R}^{d})\) with entries \(F_{j}\in C^{1}(\bar{\Omega})\). By Theorem 3, all components of \(uF\) are in \(H^{1}(\Omega)\) and its weak divergence reads
\[\operatorname{div}(uF)=\sum_{j=1}^{d}\partial_{j}(uF_{j})=F\cdot\nabla u+( \operatorname{div}F)u. \tag{3}\]
We will also refer to (3) as a _product rule_.
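As a quick symbolic sanity check of the product rule (3), the following sketch verifies the identity for smooth data in \(d=2\); this is an illustration only, since (3) of course holds for the much larger class \(u\in H^{1}(\Omega)\), \(F\in C^{1}(\bar{\Omega};\mathbb{R}^{d})\).

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u = sp.sin(x1) * x2**2                  # smooth scalar function u
F = sp.Matrix([x1 * x2, sp.cos(x2)])    # smooth vector field F = (F1, F2)

# div(u F) computed directly
div_uF = sp.diff(u * F[0], x1) + sp.diff(u * F[1], x2)
# F . grad(u) + (div F) u, the right-hand side of (3)
rhs = F[0] * sp.diff(u, x1) + F[1] * sp.diff(u, x2) \
      + (sp.diff(F[0], x1) + sp.diff(F[1], x2)) * u

print(sp.simplify(div_uF - rhs))  # 0, i.e. div(uF) = F.grad(u) + (div F) u
```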
Analogously to the Sobolev space case, one can ask for an extension of the trace. It turns out that it only makes sense to ask for an extension of the _trace in normal direction_.
**Theorem 5** (Traces of \(H(\operatorname{div})\) functions, see [11]).: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary, then there exists a bounded linear operator \(\operatorname{tr}_{\nu}\colon H(\operatorname{div},\Omega)\to H^{-1/2}( \partial\Omega)\) such that for all \(p\in C^{1}(\bar{\Omega};\mathbb{R}^{d})\) it holds that \(\operatorname{tr}_{\nu}(p)=p|_{\partial\Omega}\cdot\nu\), where \(\nu\colon\partial\Omega\to\mathbb{R}^{d}\) denotes the outward unit normal of \(\Omega\)._
One should note that the values of this trace are of fairly low regularity as they are only distributions in general. An interesting feature is the extension of Green's formula.
**Theorem 6** (Green's formula, see [12]).: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary. Then it holds that_
\[\left\langle\operatorname{tr}_{\nu}p,\operatorname{tr}u\right\rangle_{H^{-1/2}( \partial\Omega),H^{1/2}(\partial\Omega)}=(\operatorname{div}p,u)_{L^{2}( \Omega)}+(p,\nabla u)_{L^{2}(\Omega;\mathbb{R}^{d})}\]
_for all \(u\in H^{1}(\Omega)\) and \(p\in H(\operatorname{div},\Omega)\)._
Theorem 6 facilitates an extension of Green's formula from \(C^{1}\) functions to Sobolev and \(H(\operatorname{div})\) functions. Moreover, it involves the normal trace, which is in general only a distribution.
### Derivation of the Koopman-von Neumann Generator
In this subsection, we return to the ODE setting from the introduction. Consider the following autonomous initial value problem (IVP)
\[\dot{x} =F(x), \tag{4}\] \[x(0) =x_{0}\]
for a vector field \(F\colon\bar{\Omega}\to\mathbb{R}^{d}\). Within the scope of this work, we do not need the vector field to be defined on the whole Euclidean space as we are only interested in solutions with their trajectories contained in \(\bar{\Omega}\). In what follows, we make the standing smoothness assumption that
\[F\in C^{1}(\bar{\Omega}) \tag{5}\]
throughout. It then holds that \(\operatorname{div}F\in L^{\infty}(\Omega)\) and \(\operatorname{div}F\in C(\bar{\Omega})\). The latter implies that \(\operatorname{div}F\) and \(F\) themselves are uniformly continuous on \(\bar{\Omega}\).
Let us briefly discuss the existence and uniqueness of solutions of (4). Assumption (5) guarantees that \(F\) is globally Lipschitz continuous (on \(\bar{\Omega}\)). Hence, there exists a Lipschitz continuous extension to \(\mathbb{R}^{d}\) according to Kirszbraun's theorem with the same Lipschitz constant, see [13]. However, this extension does not need to be unique, but it suffices to apply the standard existence and uniqueness theory for dynamical systems. This guarantees for every such extension the existence of a unique solution \(x\colon[0,\infty)\to\mathbb{R}^{d}\).
These solutions may, however, depend on the chosen extension, if they ever leave \(\bar{\Omega}\). To prevent this, we need to ensure that trajectories starting in \(\bar{\Omega}\) remain in \(\bar{\Omega}\) for all \(t>0\). In other words, we require the set \(\bar{\Omega}\) to be _(positively) invariant_ with respect to (4). Solutions with this property are also called _viable_. There are necessary and sufficient criteria to guarantee the viability of solutions, which can be found in the theorems by Nagumo and Bony-Brezis, see [1, Theorem 16.5], [1, Chapter 4, Section 2, Theorem 2], and [20, Chapter III, §10, XV]. Here, we propose the following _no-outflow condition_: For all \(x\in\partial\Omega\) we require
\[F(x)\cdot\nu(x)\leq 0. \tag{6}\]
This resembles the results by Bony, see the aforementioned references as well as [14, 12]. However, the outward unit normal therein has a different definition compared to the one for Lipschitz boundaries, see [1, 13].
There is another interpretation of (6). As it turns out, the KvN equation resembles a transport equation. For the latter, see for instance [1]. Therefore, we need to impose a boundary condition on the so-called _inflow boundary_. This will be explained in more detail in Section 4. Condition (6) will only be occasionally used in the article. Therefore, it is _not_ a standing assumption and will be explicitly stated, where needed.
Assuming (5) and (6) hold, the ODE in (4) has a unique solution on \([0,\infty)\) that takes only values in \(\bar{\Omega}\) for each initial value \(x_{0}\in\bar{\Omega}\). Consider the associated _semiflow_, cf. [1, Section 10 and Remark (10.2)(c)] and [16, Theorem 6.1], \(\Phi\colon[0,\infty)\times\bar{\Omega}\to\bar{\Omega}\) with \(\Phi_{t}(x_{0})=x(t)\), where \(x\) is the solution of (4). Clearly, \(\Phi\) has the semigroup properties \(\Phi_{0}=\operatorname{id}_{\bar{\Omega}}\) and \(\Phi_{t+s}=\Phi_{t}\circ\Phi_{s}\).
#### 2.3.1 Perron-Frobenius and Koopman Operators
Next, we introduce transfer operators. Our exposition is predominantly based on the corresponding sections in [13]. The Perron-Frobenius operator for a given time \(t\geq 0\) is the bounded linear operator \(\mathcal{P}(t)\colon L^{1}(\Omega)\to L^{1}(\Omega)\) defined by
\[\int_{A}(\mathcal{P}(t)\rho)(x)\,\mathrm{d}x=\int_{\Phi_{t}^{-1}(A)}\rho(x)\, \mathrm{d}x\text{ for all }A\in\mathcal{B}(\Omega),\]
where \(\mathcal{B}(\Omega)\) denotes the Borel \(\sigma\)-algebra on \(\Omega\). Here, the map \(\Phi_{t}\) is nonsingular as it is Lipschitz continuous. Using the properties of the semiflow \(\Phi\), it can be shown that the family \((\mathcal{P}(t))_{t\geq 0}\) is a semigroup of bounded linear operators on \(L^{1}(\Omega)\). This operator is in fact connected to the propagation of probability densities in the Liouville equation.
In addition, we consider the _Koopman operator_\(\mathcal{K}(t)\colon L^{\infty}(\Omega)\to L^{\infty}(\Omega)\) defined by
\[\mathcal{K}(t)f:=f\circ\Phi_{t}.\]
Again, the family \((\mathcal{K}(t))_{t\geq 0}\) is a semigroup of bounded linear operators on \(L^{\infty}(\Omega)\). The application of the Koopman operator is connected to the time evolution of an observable along a trajectory. It can be shown that the Koopman operator is in fact the dual of the Perron-Frobenius operator with \((L^{1}(\Omega))^{*}=L^{\infty}(\Omega)\), see [13]. Hence, we have
\[\langle\mathcal{K}(t)f,\rho\rangle_{L^{\infty}(\Omega),L^{1}(\Omega)}=\langle f,\mathcal{P}(t)\rho\rangle_{L^{\infty}(\Omega),L^{1}(\Omega)}\]
for all \(f\in L^{\infty}(\Omega),\rho\in L^{1}(\Omega)\) and \(t\geq 0\).
The generators of the Perron-Frobenius and Koopman operators are given by
\[\mathcal{L}^{*}\rho :=\lim_{t\searrow 0}\frac{1}{t}(\mathcal{P}(t)\rho-\rho)=-\, \mathrm{div}(\rho F)\quad\text{and}\] \[\mathcal{L}f :=\lim_{t\searrow 0}\frac{1}{t}(\mathcal{K}(t)f-f)=F\cdot\nabla f,\]
respectively, for sufficiently smooth functions \(f\) and \(\rho\). For now, we do not specify the function spaces and keep it at the formal level. These operators will be called _Perron-Frobenius generator_ and _Koopman generator_ to distinguish them from the operators in the associated semigroup.
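As a simple illustration of these formulas, consider the one-dimensional case \(d=1\) with \(\Omega=(0,1)\) and \(F(x)=x\). Then
\[\mathcal{L}f=xf^{\prime},\qquad\mathcal{L}^{*}\rho=-(x\rho)^{\prime}=-x\rho^{\prime}-\rho,\]
and integrating by parts against test functions \(f\in C_{0}^{\infty}(\Omega)\) formally recovers the duality \((\mathcal{L}f,\rho)_{L^{2}(\Omega)}=(f,\mathcal{L}^{*}\rho)_{L^{2}(\Omega)}\) at the level of the generators.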
#### 2.3.2 Koopman-von Neumann Operator
The aim of KvN mechanics is the formulation of a quantum system whose solution \(\psi\) satisfies \(\rho=|\psi|^{2}\) according to Born's rule. In other words, a quantum mechanical system is formulated, whose probability distribution is associated with the solution of the dynamical system in (4). Originally, this framework was proposed as a link between classical and quantum mechanics. Hence, the infinitesimal generator was first proposed for Hamiltonian systems, see [14, 15, 16, 17]. In recent years, this framework was extended to general ODEs, see [18, 19, 20].
In this paper, we introduce the KvN generator as
\[\mathcal{L}^{*}_{\mathrm{KvN}}:=\frac{1}{2}(\mathcal{L}^{*}-\mathcal{L}). \tag{7}\]
This notion diverges slightly from the literature by representing the KvN generator as the skew-symmetric component of the Koopman or Perron-Frobenius generator, respectively. However, our approach makes the skew-symmetry of the generator immediately evident. By using the product rule, we formally derive
\[\mathcal{L}^{*}_{\mathrm{KvN}}\psi=-F\cdot\nabla\psi-\frac{1}{2}(\mathrm{div }\,F)\psi\]
for a sufficiently smooth function \(\psi\). From this we can see that the operator resembles a special case of a _transport equation_. To the best of the authors' knowledge, this connection has not been highlighted yet. Throughout this article, we will hence occasionally point out similarities between our approach and the existing literature on transport equations.
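For completeness, the formal computation behind this expression combines (7) with the product rule (3):
\[\mathcal{L}^{*}_{\mathrm{KvN}}\psi=\frac{1}{2}\left(\mathcal{L}^{*}\psi-\mathcal{L}\psi\right)=\frac{1}{2}\left(-\operatorname{div}(\psi F)-F\cdot\nabla\psi\right)=\frac{1}{2}\left(-F\cdot\nabla\psi-(\operatorname{div}F)\psi-F\cdot\nabla\psi\right)=-F\cdot\nabla\psi-\frac{1}{2}(\operatorname{div}F)\psi.\]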
**Remark 1**.: _In the case of Hamiltonian systems, it is worth mentioning that the Perron-Frobenius and Koopman generators coincide. Given a classical Hamiltonian \(H\colon\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\), the equations of motion can be written as_
\[\dot{q}_{j}=\frac{\partial H}{\partial p_{j}},\quad\dot{p}_{j}=-\frac{\partial H }{\partial q_{j}},\]
_with \(j=1,\ldots,d\), where the vector \(q\) contains the generalized coordinates and \(p\) the momenta. Defining \(x=\begin{bmatrix}q\\ p\end{bmatrix}\in\mathbb{R}^{2d}\), we obtain \(F=\begin{bmatrix}\nabla_{p}H\\ -\nabla_{q}H\end{bmatrix}\). The Koopman generator can hence be written as_
\[\mathcal{L}f=\sum_{j=1}^{d}\left(\frac{\partial H}{\partial p_{j}}\frac{ \partial f}{\partial q_{j}}-\frac{\partial H}{\partial q_{j}}\frac{\partial f }{\partial p_{j}}\right)\]
_and the Perron-Frobenius generator as_
\[\mathcal{L}^{*}\rho=-\sum_{j=1}^{d}\left(\frac{\partial H}{\partial p_{j}} \frac{\partial\rho}{\partial q_{j}}-\frac{\partial H}{\partial q_{j}}\frac{ \partial\rho}{\partial p_{j}}\right).\]
_Compare to [14, Ex. 7.6.1 and 7.8.1] for more details. That is, \(\mathcal{L}\) is a skew-adjoint operator with \(\mathcal{L}=-\mathcal{L}^{*}\) and we have \(\mathcal{L}^{*}_{\mathrm{KvN}}=\mathcal{L}^{*}\)._
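As a concrete instance (given here for illustration only), take the harmonic oscillator with one degree of freedom, \(H(q,p)=\frac{1}{2}(q^{2}+p^{2})\). Then \(F=\begin{bmatrix}p\\ -q\end{bmatrix}\), \(\operatorname{div}F=0\), and
\[\mathcal{L}f=p\,\partial_{q}f-q\,\partial_{p}f,\qquad\mathcal{L}^{*}\rho=-p\,\partial_{q}\rho+q\,\partial_{p}\rho=-\mathcal{L}\rho,\]
so that indeed \(\mathcal{L}^{*}_{\mathrm{KvN}}=\mathcal{L}^{*}=-\mathcal{L}\).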
The main task of this paper is to demonstrate the existence of a strongly continuous semigroup of unitary operators on the set of complex \(L^{2}\) functions. To this end, we wish to apply the results from Subsection 2.1, which requires a suitable framework. This is discussed in the subsequent sections. In the remainder of this subsection, however, we give an example to demonstrate why this is in fact a nontrivial task.
One of the necessary conditions for the existence of a strongly continuous semigroup is that the generator is a densely defined operator with a closed graph. Hence, we need to select an appropriate function space setting.
As we aim at a quantum-physical interpretation of the system, we choose \(X=L^{2}(\Omega)\). For now, we stick to real-valued functions. Since the Koopman generator involves a gradient, we might hence try to use the Sobolev space \(H^{1}_{0}(\Omega)\) as the domain of the KvN generator. This motivates the following framework. Consider \(\mathcal{L}\colon H^{1}_{0}(\Omega)\to L^{2}(\Omega)\) defined by
\[\mathcal{L}\psi:=F\cdot\nabla\psi.\]
In this setting, the Perron-Frobenius generator \(\mathcal{L}^{*}\colon L^{2}(\Omega)\to H^{-1}(\Omega)\), defined as the dual operator, reads for arbitrary \(\varphi\in L^{2}(\Omega)\) and \(\psi\in H^{1}_{0}(\Omega)\)
\[\langle\mathcal{L}^{*}\varphi,\psi\rangle_{H^{-1}(\Omega),H^{1}_{0}(\Omega)} =(\varphi,\mathcal{L}\psi)_{L^{2}(\Omega)}=(\varphi,F\cdot\nabla\psi)_{L^{2}( \Omega)}=-\langle\operatorname{div}(\varphi F),\psi\rangle_{H^{-1}(\Omega),H^{ 1}_{0}(\Omega)}.\]
Here, \(\operatorname{div}\) is the distributional divergence, see (2). Clearly, if \(\psi,\varphi\in H^{1}_{0}(\Omega)\), then, using the product rule (3) and the regularity assumption (5) on \(F\), we have
\[\operatorname{div}(\varphi F)=F\cdot\nabla\varphi+(\operatorname{div}F) \varphi\in L^{2}(\Omega).\]
This means that
\[\mathcal{L}^{*}\varphi=-F\cdot\nabla\varphi-(\operatorname{div}F)\varphi \text{ for }\varphi\in H^{1}_{0}(\Omega) \tag{8}\]
and the restriction \(\mathcal{L}^{*}\colon H^{1}_{0}(\Omega)\to L^{2}(\Omega)\) is a bounded linear operator. As we have the continuous embeddings \(H^{1}_{0}(\Omega)\hookrightarrow L^{2}(\Omega)\hookrightarrow H^{-1}(\Omega)\), we can define the KvN generator as \(\mathcal{L}^{*}_{\text{KvN}}\colon H^{1}_{0}(\Omega)\to H^{-1}(\Omega)\) by \(\mathcal{L}^{*}_{\text{KvN}}=\frac{1}{2}(\mathcal{L}^{*}-\mathcal{L})\). This yields
\[\langle\mathcal{L}^{*}_{\text{KvN}}\varphi,\psi\rangle_{H^{-1}( \Omega),H^{1}_{0}(\Omega)} =\frac{1}{2}\left((\varphi,\mathcal{L}\psi)_{L^{2}(\Omega)}-( \mathcal{L}\varphi,\psi)_{L^{2}(\Omega)}\right)\] \[=\frac{1}{2}\left((F\cdot\nabla\varphi,\psi)_{L^{2}(\Omega)}-(F \cdot\nabla\psi,\varphi)_{L^{2}(\Omega)}\right).\]
By construction, this formulation of the KvN generator provides a bounded skew-symmetric operator.
To apply semigroup theory, however, we need to deal with unbounded operators on \(L^{2}(\Omega)\) taking values in \(L^{2}(\Omega)\) that are defined on a dense subspace. This motivates in combination with (8) the definition of the KvN generator as the unbounded operator \(\mathcal{L}^{*}_{\text{KvN}}\colon H^{1}_{0}(\Omega)\subseteq L^{2}(\Omega) \to L^{2}(\Omega)\) defined by
\[\mathcal{L}^{*}_{\text{KvN}}\psi=-F\cdot\nabla\psi-\frac{1}{2}(\operatorname{ div}F)\psi, \tag{9}\]
which is the formula that can be found in the existing literature on KvN mechanics. Unfortunately, this approach is incompatible with the semigroup framework. The reason lies in the lack of a closed graph of the KvN generator, even in seemingly trivial cases.
To see this, take the vanishing vector field \(F=0\). Then, the KvN generator reads \(\mathcal{L}^{*}_{\text{KvN}}=0\) and its graph is \(\operatorname{gph}(\mathcal{L}^{*}_{\text{KvN}})=H^{1}_{0}(\Omega)\times\{0\}\). But as \(H^{1}_{0}(\Omega)\subseteq L^{2}(\Omega)\) is dense, the completion of this set is \(L^{2}(\Omega)\times\{0\}\) and therefore the graph cannot be closed. Thus, a different domain space is required.
## 3 Perron-Frobenius-Sobolev Space
As demonstrated in Subsection 2.3, the closedness of the KvN generator may suffer from the behavior of the vector field. In fact, if one used anything other than a closed subspace of \(L^{2}(\Omega)\) as the domain, the graph would not be closed. Therefore, we want to incorporate the vector field into the definition of the domain. This motivates the following modification of the Sobolev space.
**Definition 7** (Perron-Frobenius-Sobolev Spaces).: Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary. The _Perron-Frobenius-Sobolev space_ is defined as
\[H(\mathcal{L}^{*},\Omega):=\left\{\psi\in L^{2}(\Omega):\psi F\in H( \operatorname{div},\Omega)\right\}.\]
It is a Hilbert space when equipped with the inner product
\[(\psi,\varphi)_{H(\mathcal{L}^{*},\Omega)}:=(\psi,\varphi)_{L^{2}(\Omega)}+( \operatorname{div}(\psi F),\operatorname{div}(\varphi F))_{L^{2}(\Omega)}\text { for all }\psi,\varphi\in H(\mathcal{L}^{*},\Omega).\]
Due to (5), the condition in Definition 7 is equivalent to \(\psi F\) having a weak divergence, i.e., there exists a \(y\in L^{2}(\Omega)\) such that
\[(\psi F,\nabla\varphi)_{L^{2}(\Omega;\mathbb{R}^{d})}=-(y,\varphi)_{L^{2}( \Omega)}\text{ for all }\varphi\in C^{\infty}_{0}(\Omega).\]
The name of this space is derived from the requirement that the Perron-Frobenius generator is well-defined with values in \(L^{2}(\Omega)\). In fact, the norm is equivalent to the graph norm of the Perron-Frobenius generator (with constants independent of the space dimensions). This guarantees the completeness of the space. We refer to this space as _PFS space_ and to its elements as _PFS functions_.
Transport equations have previously been investigated using semigroup theory, see, e.g., [1], where the domain \(\{\psi\in L^{2}(\Omega)\colon F\cdot\nabla\psi\in L^{2}(\Omega)\}\) was considered. This is essentially equivalent to \(H(\mathcal{L}^{*},\Omega)\) under assumption (5), as the product rule (3) explains. In this work,
we will stick to Definition 7 as we inherit more modern tools known from the analysis of \(H(\operatorname{div})\) spaces.
Next, we discuss a few properties of the PFS space. By assuming (5) and using the product rule, we obtain the inclusions \(H^{1}(\Omega)\subseteq H(\mathcal{L}^{*},\Omega)\subseteq L^{2}(\Omega)\). In fact, depending on \(F\) and the space dimension, either one of these inclusions can be an equality. To see this, consider the following two cases: Firstly, for \(d=1\), let \(\Omega\subseteq\mathbb{R}\) be a finite, open interval and set \(F(x)=1\) for all \(x\in\bar{\Omega}\). Then, for a function \(\psi\in H(\mathcal{L}^{*},\Omega)\), we have \(\operatorname{div}(\psi F)=\psi^{\prime}\in L^{2}(\Omega)\). This proves that \(H(\mathcal{L}^{*},\Omega)\subseteq H^{1}(\Omega)\) in this case. Secondly, for arbitrary dimensions \(d\in\mathbb{N}\), \(d\geq 1\), consider the function \(F\) with \(F(x)=0\) for all \(x\in\Omega\). Then, we have \(\operatorname{div}(\psi F)=0\in L^{2}(\Omega)\) for all \(\psi\in L^{2}(\Omega)\) and hence \(L^{2}(\Omega)=H(\mathcal{L}^{*},\Omega)\). In conclusion, the Perron-Frobenius-Sobolev space can be interpreted as an intermediate concept between Sobolev and Lebesgue spaces.
### Trace Operator and Green's Formula for the Perron-Frobenius-Sobolev Functions
Trace operators are a vital tool to treat boundary conditions for Sobolev functions. To declare them on the PFS space, we use the normal trace known for \(H(\operatorname{div})\) functions, see Theorem 5. By definition it holds that \(\psi F\in H(\operatorname{div},\Omega)\) for every \(\psi\in H(\mathcal{L}^{*},\Omega)\). Hence, the normal trace of the latter is well-defined with \(\operatorname{tr}_{\nu}(\psi F)\in H^{-1/2}(\partial\Omega)\).
**Theorem 8** (Trace for Perron-Frobenius-Sobolev Functions).: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary. The map \(\operatorname{tr}_{F\nu}\colon H(\mathcal{L}^{*},\Omega)\to H^{-1/2}( \partial\Omega)\) defined by \(\psi\mapsto\operatorname{tr}_{\nu}(\psi F)\) is a bounded linear operator with_
\[\|\operatorname{tr}_{F\nu}\|_{\mathcal{L}\big{(}H(\mathcal{L}^{*},\Omega),H^{ -1/2}(\partial\Omega)\big{)}}\leq\max(1,\|F\|_{L^{\infty}(\Omega;\mathbb{R}^{ d})})\|\operatorname{tr}_{\nu}\|_{\mathcal{L}\big{(}H(\operatorname{div}, \Omega),H^{-1/2}(\partial\Omega)\big{)}}.\]
Proof.: Take an arbitrary \(\psi\in H(\mathcal{L}^{*},\Omega)\). Then \(\psi F\in H(\operatorname{div},\Omega)\) is well-defined and we have
\[\begin{split}\|\operatorname{tr}_{F\nu}(\psi)\|_{H^{-1/2}(\partial\Omega)}&=\|\operatorname{tr}_{\nu}(\psi F)\|_{H^{-1/2}(\partial\Omega)}\leq\|\operatorname{tr}_{\nu}\|_{\mathcal{L}\big(H(\operatorname{div},\Omega),H^{-1/2}(\partial\Omega)\big)}\|\psi F\|_{H(\operatorname{div},\Omega)}\\&=\|\operatorname{tr}_{\nu}\|_{\mathcal{L}\big(H(\operatorname{div},\Omega),H^{-1/2}(\partial\Omega)\big)}\left(\|\psi F\|_{L^{2}(\Omega;\mathbb{R}^{d})}^{2}+\|\operatorname{div}(\psi F)\|_{L^{2}(\Omega)}^{2}\right)^{\frac{1}{2}}\\&\leq\|\operatorname{tr}_{\nu}\|_{\mathcal{L}\big(H(\operatorname{div},\Omega),H^{-1/2}(\partial\Omega)\big)}\max(1,\|F\|_{L^{\infty}(\Omega;\mathbb{R}^{d})})\|\psi\|_{H(\mathcal{L}^{*},\Omega)}.\qed\end{split}\]
As mentioned above, traces are relevant when incorporating boundary conditions. Of particular interest are homogeneous boundary conditions, meaning the trace being zero. To this end, we define the following space.
**Definition 9**.: (Perron-Frobenius-Sobolev functions with vanishing trace) Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary. The space \(H_{0}(\mathcal{L}^{*},\Omega)\) is defined by
\[H_{0}(\mathcal{L}^{*},\Omega):=\ker(\operatorname{tr}_{F\nu})=\{\psi\in H( \mathcal{L}^{*},\Omega):\operatorname{tr}_{F\nu}(\psi)=0\}.\]
As this space is the kernel of a linear and bounded operator, it is a closed subspace of \(H(\mathcal{L}^{*},\Omega)\) and hence again a Hilbert space with respect to the same norm.
Next Green's formula is reestablished for PFS and Sobolev functions.
**Theorem 10** (Green's Formula, first version).: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary. For all \(\psi\in H(\mathcal{L}^{*},\Omega)\) and \(\varphi\in H^{1}(\Omega)\) it holds that_
\[\langle\operatorname{tr}_{F\nu}(\psi),\operatorname{tr}(\varphi) \rangle_{H^{-1/2}(\partial\Omega),H^{1/2}(\partial\Omega)} =(\operatorname{div}(\psi F),\varphi)_{L^{2}(\Omega)}+(\psi, \operatorname{div}(\varphi F))_{L^{2}(\Omega)}\] \[\quad-(\operatorname{div}(F)\psi,\varphi)_{L^{2}(\Omega)}.\]
Proof.: By applying Theorem 6 to \(\varphi\in H^{1}(\Omega)\) and \(\psi F\in H(\operatorname{div},\Omega)\), we obtain
\[\langle\operatorname{tr}_{\nu}(\psi F),\operatorname{tr}(\varphi)\rangle_{H^{- 1/2}(\partial\Omega),H^{1/2}(\partial\Omega)}=(\operatorname{div}(\psi F), \varphi)_{L^{2}(\Omega)}+(\psi F,\nabla\varphi)_{L^{2}(\Omega;\mathbb{R}^{d})}. \tag{10}\]
With the product rule in (3), we get \(\operatorname{div}(\varphi F)=F\cdot\nabla\varphi+\operatorname{div}(F)\varphi\) and subsequently obtain from (10):
\[(\operatorname{div}(\psi F),\varphi)_{L^{2}(\Omega)}+(\psi F, \nabla\varphi)_{L^{2}(\Omega;\mathbb{R}^{d})} =(\operatorname{div}(\psi F),\varphi)_{L^{2}(\Omega)}+(\psi, \operatorname{div}(\varphi F))_{L^{2}(\Omega)}\] \[\quad-(\operatorname{div}(F)\psi,\varphi)_{L^{2}(\Omega)}.\]
Together with \(\operatorname{tr}_{\nu}(\psi F)=\operatorname{tr}_{F\nu}(\psi)\), this proves the assertion.
Due to the symmetry of the left-hand side of the equation in Theorem 10, one is tempted to interchange the roles of \(\psi\) and \(\varphi\) therein to extend this result to two functions in \(H(\mathcal{L}^{*},\Omega)\). To do so, the density of \(H^{1}(\Omega)\) in \(H(\mathcal{L}^{*},\Omega)\) is required. Clearly, the Sobolev space is dense in \(L^{2}(\Omega)\) with respect to its norm. However, it is not clear yet whether the same holds true for the PFS space. In the following theorem, we address the density of smooth functions in the PFS space.
**Theorem 11** (Density of Smooth Functions).: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain. Then it holds that:_
1. _The space_ \(C^{\infty}(\Omega)\cap H(\mathcal{L}^{*},\Omega)\) _is dense in_ \(H(\mathcal{L}^{*},\Omega)\) _with respect to the_ \(H(\mathcal{L}^{*},\Omega)\)_-norm._
2. _If, moreover,_ \(\Omega\) _has a Lipschitz boundary, then the space_ \(C^{\infty}(\bar{\Omega})\) _is dense in_ \(H(\mathcal{L}^{*},\Omega)\) _with respect to the_ \(H(\mathcal{L}^{*},\Omega)\)_-norm._
3. _If, moreover,_ \(\Omega\) _has a Lipschitz boundary, then the space_ \(C^{\infty}_{0}(\Omega)\) _is dense in_ \(H_{0}(\mathcal{L}^{*},\Omega)\) _with respect to the_ \(H(\mathcal{L}^{*},\Omega)\)_-norm._
The statement in Theorem 11 (i) is a direct counterpart to the Meyers-Serrin theorem, see [14]. Due to the technical nature and the multiple steps involved, the proof of Theorem 11 is included in the appendix. We will just use it here to propose the following extension of Theorem 10.
**Theorem 12** (Trace and Green's formula, final version).: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary. Then the following statements hold:_
1. _The bilinear form_ \(a\colon H(\mathcal{L}^{*},\Omega)\times H^{1}(\Omega)\to\mathbb{R}\) _defined by_ \((\psi,\varphi)\mapsto\langle\operatorname{tr}_{F\nu}(\psi),\operatorname{tr} (\varphi)\rangle\) _can be continuously extended to_ \(H(\mathcal{L}^{*},\Omega)\times H(\mathcal{L}^{*},\Omega)\)_. Then we may write_ \(a(\psi,\varphi)=\int_{\partial\Omega}\psi\varphi F\nu\,\mathrm{d}\mathcal{H}^ {d-1}\) _for all_ \(\psi,\varphi\in H(\mathcal{L}^{*},\Omega)\) _and get_ \[\int_{\partial\Omega}\psi\varphi F\nu\,\mathrm{d}\mathcal{H}^{d-1}=( \operatorname{div}(\psi F),\varphi)_{L^{2}(\Omega)}+(\psi,\operatorname{div}( \varphi F))_{L^{2}(\Omega)}-(\operatorname{div}(F)\psi,\varphi)_{L^{2}(\Omega)}.\]
2. _For all_ \(\psi\in H_{0}(\mathcal{L}^{*},\Omega)\) _and_ \(\varphi\in H(\mathcal{L}^{*},\Omega)\)_, we obtain_ \[(\operatorname{div}(F)\psi,\varphi)_{L^{2}(\Omega)}=(\operatorname{div}(\psi F ),\varphi)_{L^{2}(\Omega)}+(\psi,\operatorname{div}(\varphi F))_{L^{2}(\Omega)}.\]
3. _If the no-outflow condition (_6_) is fulfilled, then there exists a bounded linear operator_ \(\operatorname{tr}\colon H(\mathcal{L}^{*},\Omega)\to L^{2}(\partial\Omega,|F \nu|\,\mathrm{d}\mathcal{H}^{d-1})\) _with_ \(\operatorname{tr}(\psi)=\psi|_{\partial\Omega}\) _for all_ \(\psi\in C^{1}(\bar{\Omega})\)_, where_ \[L^{2}(\partial\Omega,|F\nu|\,\mathrm{d}\mathcal{H}^{d-1}):=\left\{u\colon \partial\Omega\to\mathbb{R}:\int_{\partial\Omega}|u|^{2}|F\nu|\mathrm{d} \mathcal{H}^{d-1}<\infty\right\}.\]
Proof.: ad (i). We define the operator \(A\colon H^{1}(\Omega)\to H(\mathcal{L}^{*},\Omega)^{*}\) by
\[\langle A\varphi,\psi\rangle:=a(\psi,\varphi)=\langle\operatorname{tr}_{F\nu}( \psi),\operatorname{tr}(\varphi)\rangle.\]
By Theorem 10, we obtain
\[|\langle A\varphi,\psi\rangle| \leq|(\psi,\operatorname{div}(\varphi F))_{L^{2}(\Omega)}+( \operatorname{div}(\psi F),\varphi)_{L^{2}(\Omega)}-(\operatorname{div}(F)\psi, \varphi)_{L^{2}(\Omega)}\big{|}\] \[\leq\|\psi\|_{L^{2}(\Omega)}\|\operatorname{div}(\varphi F)\|_{L^{2 }(\Omega)}+\|\operatorname{div}(\psi F)\|_{L^{2}(\Omega)}\|\varphi\|_{L^{2}( \Omega)}\] \[\quad+\|\operatorname{div}F\|_{L^{\infty}(\Omega)}\|\psi\|_{L^{2} (\Omega)}\|\varphi\|_{L^{2}(\Omega)}\] \[\leq(1+\|\operatorname{div}F\|_{L^{\infty}(\Omega)})\|\psi\|_{H( \mathcal{L}^{*},\Omega)}\|\varphi\|_{H(\mathcal{L}^{*},\Omega)}.\]
This guarantees that \(\|A\varphi\|_{H(\mathcal{L}^{*},\Omega)^{*}}\leq(1+\|\operatorname{div}F\|_{L^{ \infty}(\Omega)})\|\varphi\|_{H(\mathcal{L}^{*},\Omega)}\) for all \(\varphi\in H^{1}(\Omega)\). As \(C^{\infty}(\bar{\Omega})\subseteq H(\mathcal{L}^{*},\Omega)\) is dense according to Theorem 11 (ii), we obtain that \(H^{1}(\Omega)\) is dense in \(H(\mathcal{L}^{*},\Omega)\). Hence, there exists a unique, continuous linear extension \(A\) on \(H(\mathcal{L}^{*},\Omega)\), see [13, Theorem 3.1-3].
ad (ii). Let \(\psi\in H_{0}(\mathcal{L}^{*},\Omega)\) and \(\varphi\in H(\mathcal{L}^{*},\Omega)\). Then we get \(\operatorname{tr}_{F\nu}(\psi)=0\) and hence \(\int_{\partial\Omega}\psi\varphi F\nu\,\mathrm{d}\mathcal{H}^{d-1}=0\). The application of Theorem 12 (i) yields the assertion.
ad (iii). We apply Theorem 12 (i) with \(\varphi=\psi\in C^{1}(\bar{\Omega})\). This yields
\[\int_{\partial\Omega}\psi^{2}|F\nu|\,\mathrm{d}\mathcal{H}^{d-1} =-\int_{\partial\Omega}\psi^{2}F\nu\,\mathrm{d}\mathcal{H}^{d-1}\] \[=(\operatorname{div}(F)\psi,\psi)_{L^{2}(\Omega)}-2(\operatorname {div}(\psi F),\psi)_{L^{2}(\Omega)}\] \[\leq\|\operatorname{div}(F)\|_{L^{\infty}(\Omega)}\|\psi\|_{L^{ 2}(\Omega)}^{2}+2\|\operatorname{div}(\psi F)\|_{L^{2}(\Omega)}\|\psi\|_{L^{2} (\Omega)}\] \[\leq\left(\|\operatorname{div}F\|_{L^{\infty}(\Omega)}^{2}+4 \right)^{\frac{1}{2}}\|\psi\|_{H(\mathcal{L}^{*},\Omega)}\|\psi\|_{L^{2}( \Omega)}\] \[\leq\left(\|\operatorname{div}F\|_{L^{\infty}(\Omega)}^{2}+4 \right)^{\frac{1}{2}}\|\psi\|_{H(\mathcal{L}^{*},\Omega)}^{2}.\]
This implies that \(\|\psi|_{\partial\Omega}\|_{L^{2}(\partial\Omega,|F\nu|\mathrm{d}\mathcal{H}) }\leq(\|\operatorname{div}(F)\|_{L^{\infty}(\Omega)}^{2}+4)^{\frac{1}{4}}\| \psi\|_{H(\mathcal{L}^{*},\Omega)}\) for all \(\psi\in C^{1}(\bar{\Omega})\). As Theorem 11 (ii) guarantees \(C^{1}(\bar{\Omega})\) to be dense in \(H(\mathcal{L}^{*},\Omega)\), the trace operator can be extended to \(H(\mathcal{L}^{*},\Omega)\).
It should be mentioned that extensions of the trace operator are well-known for the transport equation, see, e.g., [20, Chapitre 2, Section 4] and [1, Proposition 2.1]. Here, however, our analysis benefits from the use of the normal trace operator defined for \(H(\operatorname{div})\) functions. Moreover, to our best knowledge, the density results in Theorem 11 have not been shown before.
With these results at hand, we close the discussion of the Perron-Frobenius-Sobolev space and draw our attention to the promised existence and uniqueness for the KvN equation.
## 4 Existence and Uniqueness for the Koopman-von Neumann Equation
As semigroup theory is defined for complex Banach spaces, we define the spaces \(L^{2}_{\mathbb{C}}(\Omega)\), \(H_{\mathbb{C}}(\mathcal{L}^{*},\Omega)\), \(H_{\mathbb{C},0}(\mathcal{L}^{*},\Omega)\) to be the sets of complex-valued functions on \(\Omega\) such that both the real and imaginary parts are in \(L^{2}(\Omega)\), \(H(\mathcal{L}^{*},\Omega)\), and \(H_{0}(\mathcal{L}^{*},\Omega)\), respectively. In what follows, we want to apply semigroup theory on \(X:=L^{2}_{\mathbb{C}}(\Omega)\) equipped with the inner product
\[(u,v)_{L^{2}_{\mathbb{C}}(\Omega)}=\int_{\Omega}\bar{u}v\,\mathrm{d}x,\]
where \(\bar{u}\) is the (pointwise) conjugate of \(u\). The differential operators \(\nabla\) and \(\operatorname{div}\) are both separately applied to the real part and imaginary part. For clarification, we mention that the vector field \(F\colon\bar{\Omega}\to\mathbb{R}^{d}\) does _not_ take values in the complex numbers.
### Koopman-von Neumann Generator
For the definition of the KvN generator, we return to the spaces associated with the real-valued functions. By construction, the Perron-Frobenius generator \(\mathcal{L}^{*}\colon H(\mathcal{L}^{*},\Omega)\to L^{2}(\Omega)\) is well-defined and continuous. Together with (9) this admits the following definition of the KvN generator.
**Definition 13**.: Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary. The _Koopman-von Neumann generator_\(\mathcal{L}^{*}_{\mathrm{KvN}}\colon H(\mathcal{L}^{*},\Omega)\to H(\mathcal{L}^{ *},\Omega)^{*}\) is defined by
\[\begin{split}\langle\mathcal{L}^{*}_{\mathrm{KvN}}\psi,\varphi \rangle_{H(\mathcal{L}^{*},\Omega)^{*},H(\mathcal{L}^{*},\Omega)}&= \frac{1}{2}\left((\mathcal{L}^{*}\psi,\varphi)_{L^{2}(\Omega)}-(\psi,\mathcal{ L}^{*}\varphi)_{L^{2}(\Omega)}\right)\\ &=-\frac{1}{2}(\operatorname{div}(\psi F),\varphi)_{L^{2}(\Omega) }+\frac{1}{2}(\psi,\operatorname{div}(\varphi F))_{L^{2}(\Omega)}.\end{split} \tag{11}\]
Clearly, we have \(\langle\mathcal{L}^{*}_{\mathrm{KvN}}\psi,\varphi\rangle=-\langle\mathcal{L}^ {*}_{\mathrm{KvN}}\varphi,\psi\rangle\). This guarantees that the generator is skew-symmetric. From this, we deduce for the dual of the KvN operator that \(\left(\mathcal{L}^{*}_{\mathrm{KvN}}\right)^{*}=-\mathcal{L}^{*}_{\mathrm{ KvN}}\). Here, we used the reflexivity of the PFS space.
For semigroup theory, we aim to formulate the KvN generator as an operator taking values in \(L^{2}(\Omega)\). For this sake, we use Theorem 12.
**Theorem 14** (Koopman-von Neumann generator on Perron-Frobenius-Sobolev spaces).: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary. Then the following holds:_
1. _For all_ \(\psi,\varphi\in H(\mathcal{L}^{*},\Omega)\)_, we have_ \[\begin{split}\langle\mathcal{L}^{*}_{\mathrm{KvN}}\psi,\varphi \rangle_{H(\mathcal{L}^{*},\Omega)^{*},H(\mathcal{L}^{*},\Omega)}& =-(\operatorname{div}(\psi F),\varphi)_{L^{2}(\Omega)}+\frac{1}{2 }(\operatorname{div}(F)\psi,\varphi)_{L^{2}(\Omega)}\\ &\quad+\frac{1}{2}\int_{\partial\Omega}\psi\varphi F\nu\, \mathrm{d}\mathcal{H}^{d-1}.\end{split}\]
2. _The restriction_ \(\mathcal{L}^{*}_{\mathrm{KvN}}\colon H_{0}(\mathcal{L}^{*},\Omega)\to L^{2}(\Omega)\) _is a bounded linear operator with_ \[\mathcal{L}^{*}_{\mathrm{KvN}}\psi=-\operatorname{div}(\psi F)+\frac{1}{2} \operatorname{div}(F)\psi.\]
Proof.: ad 1. Let \(\psi,\varphi\in H(\mathcal{L}^{*},\Omega)\). By Theorem 12 (i), we have
\[(\psi,\operatorname{div}(\varphi F))_{L^{2}(\Omega)}=(\operatorname{div}(F) \psi,\varphi)_{L^{2}(\Omega)}+\int_{\partial\Omega}\psi\varphi F\nu\,\mathrm{d }\mathcal{H}^{d-1}-(\operatorname{div}(\psi F),\varphi)_{L^{2}(\Omega)}.\]
Plugging this into the definition of the KvN generator in (11) yields the desired result.
ad 2. The linearity of the KvN generator is clear by definition. If \(\psi\in H_{0}(\mathcal{L}^{*},\Omega)\), we have \(\int_{\partial\Omega}\psi\varphi F\nu\,\mathrm{d}\mathcal{H}^{d-1}=0\) for all \(\varphi\in H(\mathcal{L}^{*},\Omega)\). This yields \(\mathcal{L}^{*}_{\mathrm{KvN}}\psi=-\operatorname{div}(\psi F)+\frac{1}{2}\operatorname{div}(F)\psi\in L^{2}(\Omega)\). Moreover, the estimate
\[\begin{split}\|\mathcal{L}^{*}_{\mathrm{KvN}}\psi\|_{L^{2}( \Omega)}&\leq\|\operatorname{div}(\psi F)\|_{L^{2}(\Omega)}+ \frac{1}{2}\|\operatorname{div}(F)\|_{L^{\infty}(\Omega)}\|\psi\|_{L^{2}( \Omega)}\\ &\leq\sqrt{1+\frac{1}{4}\|\operatorname{div}(F)\|_{L^{\infty}( \Omega)}^{2}}\|\psi\|_{H(\mathcal{L}^{*},\Omega)}\end{split}\]
shows the boundedness of the KvN generator.
Here, we make two important observations: On the one hand, the KvN generator on \(H(\mathcal{L}^{*},\Omega)\) has a part involving the trace. Hence, it cannot take values in \(L^{2}(\Omega)\) if the trace of \(\psi\) does not vanish. It is thus necessary to restrict the generator to the subspace \(H_{0}(\mathcal{L}^{*},\Omega)\). On the other hand, the KvN equation takes the form of a _transport equation_. In the latter context, the boundary is usually divided into three parts: \(\Gamma_{\mp}:=\{x\in\partial\Omega:F\cdot\nu\lessgtr 0\}\) and \(\Gamma_{0}:=\{x\in\partial\Omega:F\cdot\nu=0\}\). The first two sets are referred to as _inflow_ and _outflow boundary_, whereas the third is sometimes called _characteristic boundary_, see [1, 1] or _solid wall_, see [1, Subsection 2.2.1]. Existence theory for this class of equations has been derived in, e.g., [1]. It is necessary to provide a condition on the inflow boundary, but it is in general _not_ possible to enforce one on the other parts. Here, we need to force the trace \(\psi F\cdot\nu\) to take the value zero on the entire boundary for the reasons outlined above. This is only feasible under the condition in (6), which is already relevant to guarantee that the trajectories of (4) are contained in \(\bar{\Omega}\), cf. Subsection 2.3. In a physical sense, this setting can be interpreted as a particle being confined to the closed domain.
### Koopman-von Neumann Equation
Now, we turn to the evolution equation (1) driven by the KvN operator. First, we define the KvN generator for complex-valued functions \(\mathcal{L}^{*}_{\mathrm{KvN}}\colon H_{\mathbb{C},0}(\mathcal{L}^{*},\Omega) \to L^{2}_{\mathbb{C}}(\Omega)\) by
\[\mathcal{L}^{*}_{\mathrm{KvN}}\psi:=\mathcal{L}^{*}_{\mathrm{KvN}}(\mathrm{Re} \,\psi)+\mathrm{i}\mathcal{L}^{*}_{\mathrm{KvN}}(\mathrm{Im}\,\psi).\]
Here, we use the same notation for the real and complex case. Then, the inner product with \(\varphi\in L^{2}_{\mathbb{C}}(\Omega)\) reads
\[(\mathcal{L}^{*}_{\mathrm{KvN}}\psi,\varphi)_{L^{2}_{\mathbb{C}}(\Omega)}=- \int_{\Omega}\mathrm{div}(\bar{\psi}F)\varphi\,\mathrm{d}x+\frac{1}{2}\int_{ \Omega}\mathrm{div}(F)\bar{\psi}\varphi\,\mathrm{d}x.\]
Based on the discussion in the previous subsection, we formulate the following evolution equation
\[\partial_{t}\psi =\mathcal{L}^{*}_{\mathrm{KvN}}\psi, \text{for all }t\geq 0\text{ in }\Omega,\] \[\psi F\cdot\nu =0, \text{for all }t\geq 0\text{ on }\partial\Omega, \tag{12}\] \[\psi(0) =\psi_{0}, \text{in }\Omega.\]
Our goal is to use Corollary 2. In what follows, we verify its assumptions.
**Lemma 15** (Densely defined, closed graph).: _The KvN generator is a densely defined, closed, linear operator \(\mathcal{L}^{*}_{\mathrm{KvN}}\colon\mathcal{D}(\mathcal{L}^{*}_{\mathrm{KvN}}) \subseteq L^{2}_{\mathbb{C}}(\Omega)\to L^{2}_{\mathbb{C}}(\Omega)\) with \(\mathcal{D}(\mathcal{L}^{*}_{\mathrm{KvN}}):=H_{\mathbb{C},0}(\mathcal{L}^{*},\Omega)\)._
Proof.: The proof is only shown for the real-valued case \(\mathcal{L}^{*}_{\mathrm{KvN}}\colon\mathcal{D}(\mathcal{L}^{*}_{\mathrm{KvN}}) \subseteq L^{2}(\Omega)\to L^{2}(\Omega)\) with \(\mathcal{D}(\mathcal{L}^{*}_{\mathrm{KvN}})=H_{0}(\mathcal{L}^{*},\Omega)\). The complex case is derived by using the following arguments on real and imaginary parts, respectively.
As \(C^{\infty}_{0}(\Omega)\) is dense in \(L^{2}(\Omega)\), we obtain the density of \(H_{0}(\mathcal{L}^{*},\Omega)\) in \(L^{2}(\Omega)\). Thus, it is left to show the closedness of \(\mathrm{gph}(\mathcal{L}^{*}_{\mathrm{KvN}})\) in \(L^{2}(\Omega)\times L^{2}(\Omega)\). Let a sequence \((\psi_{n},y_{n})_{n\in\mathbb{N}}\subseteq\mathrm{gph}(\mathcal{L}^{*}_{ \mathrm{KvN}})\) be given that converges in \(L^{2}(\Omega)\times L^{2}(\Omega)\) to \((\psi,y)\). We have to show that \((\psi,y)\in\mathrm{gph}(\mathcal{L}^{*}_{\mathrm{KvN}})\). To show \(\psi\in H(\mathcal{L}^{*},\Omega)\) we use Definition 7 and take an arbitrary \(\varphi\in C^{\infty}_{0}(\Omega)\). For all \(n\in\mathbb{N}\), we have
\[(\psi_{n}F,\nabla\varphi)_{L^{2}(\Omega;\mathbb{R}^{d})} =-(\mathrm{div}(\psi_{n}F),\varphi)_{L^{2}(\Omega)}=(\mathcal{L}^ {*}_{\mathrm{KvN}}\psi_{n},\varphi)_{L^{2}(\Omega)}-\frac{1}{2}(\mathrm{div}( F)\psi_{n},\varphi)_{L^{2}(\Omega)}\] \[=(y_{n},\varphi)_{L^{2}(\Omega)}-\frac{1}{2}(\mathrm{div}(F)\psi _{n},\varphi)_{L^{2}(\Omega)}.\]
The convergence of \(\psi_{n}\to\psi\) and \(y_{n}\to y\) in \(L^{2}(\Omega)\) yields
\[(\psi F,\nabla\varphi)_{L^{2}(\Omega;\mathbb{R}^{d})} =\lim_{n\to\infty}(\psi_{n}F,\nabla\varphi)_{L^{2}(\Omega;\mathbb{ R}^{d})}=\lim_{n\to\infty}\left((y_{n},\varphi)_{L^{2}(\Omega)}-\frac{1}{2}( \mathrm{div}(F)\psi_{n},\varphi)_{L^{2}(\Omega)}\right)\] \[=(y,\varphi)_{L^{2}(\Omega)}-\frac{1}{2}(\mathrm{div}(F)\psi, \varphi)_{L^{2}(\Omega)}.\]
This implies that \(\psi\in H(\mathcal{L}^{*},\Omega)\) with \(-\mathrm{div}(\psi F)=y-\frac{1}{2}\,\mathrm{div}(F)\psi\). Moreover, we estimate
\[\|\,\mathrm{div}(\psi_{n}F)-\mathrm{div}(\psi F)\|_{L^{2}(\Omega)} =\left\|\frac{1}{2}\,\mathrm{div}(F)(\psi_{n}-\psi)-(y_{n}-y) \right\|_{L^{2}(\Omega)}\] \[\leq\frac{1}{2}\|\,\mathrm{div}(F)\|_{L^{\infty}(\Omega)}\|\psi_{n }-\psi\|_{L^{2}(\Omega)}+\|y_{n}-y\|_{L^{2}(\Omega)}\to 0\]
as \(n\to\infty\). Together with \(\psi_{n}\to\psi\) in \(L^{2}(\Omega)\) this yields \(\psi_{n}\to\psi\) in \(H(\mathcal{L}^{*},\Omega)\). Since \(H_{0}(\mathcal{L}^{*},\Omega)\subseteq H(\mathcal{L}^{*},\Omega)\) is a closed subspace and \(\psi_{n}\in H_{0}(\mathcal{L}^{*},\Omega)\) for all \(n\in\mathbb{N}\), we obtain \(\psi\in H_{0}(\mathcal{L}^{*},\Omega)\). Moreover, we get
\[\mathcal{L}^{*}_{\mathrm{KvN}}\psi=-\,\mathrm{div}(\psi F)+\frac{1}{2}\, \mathrm{div}(F)\psi=y.\]
Hence, we deduce \((\psi,y)\in\mathrm{gph}(\mathcal{L}^{*}_{\mathrm{KvN}})\).
Next, we show that the KvN operator is dissipative.
**Theorem 16** (Dissipativity).: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary. The KvN operator \(\mathcal{L}^{*}_{\mathrm{KvN}}:H_{\mathrm{C},0}(\mathcal{L}^{*},\Omega) \subseteq L^{2}_{\mathbb{C}}(\Omega)\to L^{2}_{\mathbb{C}}(\Omega)\) is dissipative. In particular, we have \(\mathrm{Re}\left((\mathcal{L}^{*}_{\mathrm{KvN}}\psi,\psi)_{L^{2}_{\mathbb{C} }(\Omega)}\right)=0\)._
Proof.: Let \(\psi\in H_{\mathrm{C},0}(\mathcal{L}^{*},\Omega)\). Using the aforementioned skew-symmetry, we find
\[(\mathcal{L}^{*}_{\mathrm{KvN}}\psi,\psi)_{L^{2}_{\mathbb{C}}( \Omega)} =\int_{\Omega}\overline{(\mathcal{L}^{*}_{\mathrm{KvN}}\psi)} \psi\,\mathrm{d}x\] \[=(\mathcal{L}^{*}_{\mathrm{KvN}}(\mathrm{Re}(\psi)),\mathrm{Re}( \psi))_{L^{2}(\Omega)}+(\mathcal{L}^{*}_{\mathrm{KvN}}(\mathrm{Im}(\psi)), \mathrm{Im}(\psi))_{L^{2}(\Omega)}\] \[\quad+\mathrm{i}(\mathcal{L}^{*}_{\mathrm{KvN}}(\mathrm{Re}( \psi)),\mathrm{Im}(\psi))_{L^{2}(\Omega)}-\mathrm{i}(\mathcal{L}^{*}_{\mathrm{ KvN}}(\mathrm{Im}(\psi)),\mathrm{Re}(\psi))_{L^{2}(\Omega)}\] \[=2\mathrm{i}(\mathcal{L}^{*}_{\mathrm{KvN}}(\mathrm{Re}(\psi)), \mathrm{Im}(\psi))_{L^{2}(\Omega)}.\]
This implies that \(\mathrm{Re}\left((\mathcal{L}^{*}_{\mathrm{KvN}}\psi,\psi)_{L^{2}_{\mathbb{C} }(\Omega)}\right)=0\), which completes the proof.
With these results at hand, we are ready to prove the main result of this work.
**Theorem 17** (Existence and uniqueness of solutions of (12)).: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a bounded, open domain with Lipschitz boundary and let \(\psi_{0}\in H_{\mathrm{C},0}(\mathcal{L}^{*},\Omega)\) be given. Then the KvN generator induces a \(C_{0}\)-semigroup of contractions \((T(t))_{t\geq 0}\) and (12) has a unique solution \(\psi\in C^{1}([0,\infty),L^{2}_{\mathbb{C}}(\Omega))\cap C([0,\infty),H_{ \mathrm{C},0}(\mathcal{L}^{*},\Omega))\) defined by \(\psi(t)=T(t)\psi_{0}\). Moreover, for all \(t\geq 0\) it holds that \(\|\psi(t)\|_{L^{2}_{\mathbb{C}}(\Omega)}=\|\psi_{0}\|_{L^{2}_{\mathbb{C}}( \Omega)}\)._
Proof.: We verify the conditions of Corollary 2. As shown in Lemma 15, the KvN operator is densely defined in \(L^{2}_{\mathbb{C}}(\Omega)\) and has a closed graph in \(L^{2}_{\mathbb{C}}(\Omega)\times L^{2}_{\mathbb{C}}(\Omega)\). From Theorem 16, we get that \(\mathcal{L}^{*}_{\mathrm{KvN}}\) is dissipative. For \(\psi,\varphi\in H_{\mathrm{C},0}(\mathcal{L}^{*},\Omega)\) we have
\[\begin{split}(\mathcal{L}^{*}_{\mathrm{KvN}}\psi,\varphi)_{L^{2}_{\mathbb{C}}(\Omega)}&=(\mathcal{L}^{*}_{\mathrm{KvN}}\,\mathrm{Re}\,\psi,\mathrm{Re}\,\varphi)_{L^{2}(\Omega)}+(\mathcal{L}^{*}_{\mathrm{KvN}}\,\mathrm{Im}\,\psi,\mathrm{Im}\,\varphi)_{L^{2}(\Omega)}\\&\quad+\mathrm{i}(\mathcal{L}^{*}_{\mathrm{KvN}}\,\mathrm{Re}\,\psi,\mathrm{Im}\,\varphi)_{L^{2}(\Omega)}-\mathrm{i}(\mathcal{L}^{*}_{\mathrm{KvN}}\,\mathrm{Im}\,\psi,\mathrm{Re}\,\varphi)_{L^{2}(\Omega)}\\&=-(\mathrm{Re}\,\psi,\mathcal{L}^{*}_{\mathrm{KvN}}\,\mathrm{Re}\,\varphi)_{L^{2}(\Omega)}-(\mathrm{Im}\,\psi,\mathcal{L}^{*}_{\mathrm{KvN}}\,\mathrm{Im}\,\varphi)_{L^{2}(\Omega)}\\&\quad-\mathrm{i}(\mathrm{Re}\,\psi,\mathcal{L}^{*}_{\mathrm{KvN}}\,\mathrm{Im}\,\varphi)_{L^{2}(\Omega)}+\mathrm{i}(\mathrm{Im}\,\psi,\mathcal{L}^{*}_{\mathrm{KvN}}\,\mathrm{Re}\,\varphi)_{L^{2}(\Omega)}\\&=-(\psi,\mathcal{L}^{*}_{\mathrm{KvN}}\varphi)_{L^{2}_{\mathbb{C}}(\Omega)}.\end{split}\]
This shows also for the complex case that \((\mathcal{L}^{*}_{\mathrm{KvN}})^{*}=-\mathcal{L}^{*}_{\mathrm{KvN}}\). By Theorem 16, we obtain that \(\mathcal{L}^{*}_{\mathrm{KvN}}\) and its dual are both dissipative. Then, Corollary 2 guarantees the existence of a contractive \(C_{0}\)-semigroup \((T(t))_{t\in[0,\infty)}\). By Theorem 1, the function \(\psi\colon[0,\infty)\to H_{\mathrm{C},0}(\mathcal{L}^{*},\Omega)\) and \(t\mapsto T(t)\psi_{0}\) is the unique solution of (12) and satisfies \(\psi\in C^{1}([0,\infty),L^{2}_{\mathbb{C}}(\Omega))\cap C([0,\infty),H_{ \mathrm{C},0}(\mathcal{L}^{*},\Omega))\). Moreover, we deduce
\[\frac{d}{dt}\|\psi(\cdot)\|^{2}_{L^{2}_{\mathbb{C}}(\Omega)}(t) =(\partial_{t}\psi(t),\psi(t))_{L^{2}_{\mathbb{C}}(\Omega)}+(\psi(t ),\partial_{t}\psi(t))_{L^{2}_{\mathbb{C}}(\Omega)}\] \[=(\mathcal{L}^{*}_{\mathrm{KvN}}\psi(t),\psi(t))_{L^{2}_{\mathbb{ C}}(\Omega)}+(\psi(t),\mathcal{L}^{*}_{\mathrm{KvN}}\psi(t))_{L^{2}_{\mathbb{C}}( \Omega)}\] \[=2\,\mathrm{Re}\left((\mathcal{L}^{*}_{\mathrm{KvN}}\psi(t),\psi( t))_{L^{2}_{\mathbb{C}}(\Omega)}\right)\] \[=0\]
and hence \(\|\psi(t)\|^{2}_{L^{2}_{\mathbb{C}}(\Omega)}=\|\psi_{0}\|^{2}_{L^{2}_{\mathbb{C}} (\Omega)}\) for all \(t\geq 0\).
## 5 Conclusion and Outlook
In this article, we have proven the existence and uniqueness of solutions of the KvN equation associated with an autonomous initial value problem on a bounded, open domain with Lipschitz boundary. To this end, we have introduced and analyzed an extension of Sobolev spaces and derived its properties. The analysis turned out to be closely related to the one for the transport equation.
For nonautonomous ODEs, a transition to time-dependent KvN generators is required. We are confident that this goal can be achieved for sufficiently regular vector fields. Particularly, the density of smooth functions pointed out in Theorem 11 will very likely be useful for this purpose. The latter might also be a valuable addition to the theory of transport equations.
As there is a great variety of dynamical systems, we must expect a great variety in the properties of the KvN semigroup, too. For this reason, its spectral properties as well as the ones for the generator are of great interest.
This connection with transport equations opens up new avenues pertaining to the numerical solution of the KvN equation. Aiming at a realization on a quantum computer, _splitting methods_ in combination with numerical methods that preserve conservation laws, such as finite volume methods, are of particular interest. These aspects, however, will be analyzed in future work.
## Acknowledgments
The authors thank Sebastian Knebel and Arwed Steuer for their constructive feedback on an earlier version of this manuscript.
## Funding and Competing Interests
This study was funded within the _Einstein Research Unit Perspectives of a quantum digital transformation: Near-term quantum computational devices and quantum processors_ and the _QuantERA II Programme_ that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 101017733. The authors have no competing interests to declare that are relevant to the content of this article.
|
2307.04351 | MD-HIT: Machine learning for materials property prediction with dataset
redundancy control | Materials datasets are usually featured by the existence of many redundant
(highly similar) materials due to the tinkering material design practice over
the history of materials research. For example, the materials project database
has many perovskite cubic structure materials similar to SrTiO$_3$. This sample
redundancy within the dataset makes the random splitting of machine learning
model evaluation to fail so that the ML models tend to achieve over-estimated
predictive performance which is misleading for the materials science community.
This issue is well known in the field of bioinformatics for protein function
prediction, in which a redundancy reduction procedure (CD-Hit) is always
applied to reduce the sample redundancy by ensuring no pair of samples has a
sequence similarity greater than a given threshold. This paper surveys the
overestimated ML performance in the literature for both composition based and
structure based material property prediction. We then propose a material
dataset redundancy reduction algorithm called MD-HIT and evaluate it with
several composition and structure based distance threshold sfor reducing data
set sample redundancy. We show that with this control, the predicted
performance tends to better reflect their true prediction capability. Our
MD-hit code can be freely accessed at https://github.com/usccolumbia/MD-HIT | Qin Li, Nihang Fu, Sadman Sadeed Omee, Jianjun Hu | 2023-07-10T05:23:43Z | http://arxiv.org/abs/2307.04351v1 | # MD-HIT: Machine learning for materials property prediction with dataset redundancy control +
###### Abstract
Materials datasets are usually characterized by the existence of many redundant (highly similar) materials due to the tinkering material design practice over the history of materials research. For example, the Materials Project database has many perovskite cubic structure materials similar to SrTiO\({}_{3}\). This sample redundancy within the dataset makes random-split evaluation of machine learning models unreliable, so that the ML models tend to achieve over-estimated predictive performance, which is misleading for the materials science community. This issue is well known in the field of bioinformatics for protein function prediction, in which a redundancy reduction procedure (CD-Hit [1]) is always applied to reduce the sample redundancy by ensuring that no pair of samples has a sequence similarity greater than a given threshold. This paper surveys the overestimated ML performance in the literature for both composition based and structure based material property prediction. We then propose a material dataset redundancy reduction algorithm called MD-HIT and evaluate it with several composition based and structure based distance thresholds for reducing dataset sample redundancy. We show that with this control, the predicted performance tends to better reflect the models' true prediction capability. Our MD-HIT code can be freely accessed at [https://github.com/usccolumbia/MD-HIT](https://github.com/usccolumbia/MD-HIT)
material property prediction; materials discovery; data redundancy; deep learning; machine learning
## 1 Introduction
Density functional theory (DFT) level accuracy of material property prediction [2] and >0.95 \(R^{2}\) for thermal conductivity prediction [3] with less than a hundred training samples have been routinely reported recently by an increasing list of machine learning algorithms in the materials informatics community. In [4], an AI model was shown to be able to predict the formation energy of a hold-out test set containing 137 entries from their structure and composition with a mean absolute error (MAE) of 0.064 eV/atom, which significantly outperforms DFT computations for the same task (discrepancies of >0.076 eV/atom). In another related work in Nature Communications by the same group [5], a mean absolute error (MAE) of 0.07 eV/atom was achieved for composition-only formation energy prediction using deep transfer learning, which is comparable to the MAE of DFT computation. Pasini et al. [6] reported that their multitasking neural networks can estimate the material properties (total energy, charge density and magnetic moment) for a specific configuration hundreds of times faster than first-principles DFT calculations while achieving comparable accuracy. In [7], the authors claimed their graph neural network models can predict the formation energies,
band gaps, and elastic moduli of crystals with better than DFT accuracy over a much larger data set. In [8], Faber et al. showed numerical evidence that ML model predictions deviate from DFT less than DFT deviates from experiment for all nine properties that they evaluated over the QM9 molecule dataset. They also claimed that the out-of-sample prediction errors with respect to the hybrid DFT reference were on par with, or close to, chemical accuracy. In [9], Tian et al. reported that current ML models can achieve accurate property prediction (formation energy, band gap, bulk and shear moduli) using composition alone without using structure information, especially for compounds close to the thermodynamic convex hull. However, this good performance may be partially due to the over-represented redundancy in their test samples, which were obtained with a 6:2:2 random selection from matminer datasets without redundancy control. To illustrate this point, Figure 1 shows the formation energy and band gap landscapes over the MP composition space, generated by mapping the Magpie features of all unique MP compositions to 2D space using t-SNE and then plotting the property surface. Both figures show that there are a large number of local areas with smooth or similar property values. Random splitting of samples in those areas into training and test sets may lead to information leakage and over-estimation of the prediction performance.
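For readers who wish to reproduce such a landscape, the following minimal sketch (our own illustration; the column names `formula` and `target` are placeholders) maps Magpie composition features to 2D with t-SNE and colors the points by the property value:

```python
# Minimal sketch: embed Magpie composition features in 2D with t-SNE and color by a property.
import pandas as pd
import matplotlib.pyplot as plt
from pymatgen.core import Composition
from matminer.featurizers.composition import ElementProperty
from sklearn.manifold import TSNE


def plot_property_landscape(df: pd.DataFrame) -> None:
    """df is expected to have a 'formula' column and a 'target' property column."""
    magpie = ElementProperty.from_preset("magpie")
    features = [magpie.featurize(Composition(f)) for f in df["formula"]]
    X = pd.DataFrame(features).fillna(0.0).to_numpy()
    emb = TSNE(n_components=2, init="pca", perplexity=30, random_state=0).fit_transform(X)
    plt.scatter(emb[:, 0], emb[:, 1], c=df["target"], s=2, cmap="viridis")
    plt.colorbar(label="property value")
    plt.tight_layout()
    plt.show()
```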
Despite these encouraging successes, the DFT-level accuracy reported for these ML models of material property prediction should be cautiously interpreted, as they all report average performance evaluated over mostly randomly held-out samples that come from unexpectedly highly redundant datasets. Materials databases such as the Materials Project and OQMD are characterized by the existence of many redundant (highly similar) materials due to the tinkering material design practice over the history of materials research. For example, the Materials Project database has many perovskite cubic structure materials similar to SrTiO\({}_{3}\). This sample redundancy within the dataset makes random-split evaluation of machine learning models unreliable, so that the ML models tend to achieve over-estimated predictive performance, which is misleading for the materials science community. This issue is well known in the area of ecology [10] and in bioinformatics for protein function prediction, in which a redundancy reduction procedure (CD-Hit [1]) is required to reduce the sample redundancy by ensuring that no pair of samples has a sequence similarity greater than a given threshold, e.g., 95% sequence identity. In a recent work in 2023, it was also shown that excellent benchmark scores may not imply good generalization performance [11].
Figure 1: Landscape of material properties. In many continuous landscape areas, there exist crowded samples with similar properties, which makes it trivial to predict the property if a query sample is located in these areas with multiple neighbors in the training set.

The over-estimation of the ML performance for materials has been investigated in a few studies. In [12], Meredig et al. examined the extrapolation performance of ML methods for materials discovery. They found that traditional ML metrics (even with cross-validation (CV)) overestimate model performance for materials discovery and introduced the leave-one-(material)-cluster-out cross-validation (LOCO CV) to objectively evaluate the extrapolation performance of ML models. They especially highlighted that materials scientists often intend to extrapolate with trained ML models, rather than interpolate, to find new functional materials, and that sampling in materials training data is typically highly non-uniform. So the high interpolation performance of ML models trained with datasets with high sample redundancy (e.g., due to doping) does not indicate a strong capability to discover new materials (out-of-domain (OOD) samples). They showed that current ML models have much greater difficulty generalizing from the training clusters to a distinct test cluster. They suggested the use of uncertainty quantification (UQ) on top of ML models to evaluate and explore candidates in new regions of the design space. Stanev et al. [13] also discussed this generalization issue across different superconductor families. In [14], Xiong et al. proposed K-fold forward cross-validation (FCV) as a new way of evaluating exploration performance in materials property prediction by first sorting the samples by their property values before CV splitting. They showed that the prediction performance of current ML models is actually very low as measured by their proposed FCV evaluation method and exploratory prediction accuracy. A similar study for thermal conductivity prediction [15] also showed that when ML models are trained on samples with low property values, they are usually not good at predicting samples with high property values, indicating weak extrapolation capability. These studies show the need for material property model developers to focus more on extrapolative prediction performance rather than average interpolation performance over test samples with high similarity to training samples due to dataset redundancy.
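As a schematic of the forward cross-validation protocol described above (our own sketch, not the reference implementation of [14]), the split simply sorts the samples by the target value before forming consecutive folds:

```python
# Schematic k-fold forward cross-validation: train on folds with lower property values,
# test on the next (higher-value) fold to probe extrapolation rather than interpolation.
import numpy as np
from sklearn.metrics import mean_absolute_error


def forward_cv(X, y, model_factory, k=5):
    order = np.argsort(y)                      # sort samples by property value
    folds = np.array_split(order, k)
    maes = []
    for i in range(1, k):
        train_idx = np.concatenate(folds[:i])  # all folds with smaller property values
        test_idx = folds[i]                    # the next fold with larger property values
        model = model_factory()
        model.fit(X[train_idx], y[train_idx])
        maes.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))
    return maes
```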
The material dataset redundancy issue has also been studied recently from the point of view of training efficient ML models or achieving sample efficiency. In [16], Magar and Farimani proposed an adaptive sampling strategy to generate/sample informative samples for training machine learning models with the smallest amount of data. They assumed that informative samples for a model are those with the highest K (e.g., 250) MAEs in the test set, which are iteratively added to the initial training set of 1,000 samples. Another selection approach is to add samples similar to the data points of the training set having the maximum MAE during training. They showed that their sampling algorithms can create smaller training sets that obtain better performance than the baseline CGCNN model trained with all training samples. This approach can be used with active learning to build high performance ML models in a data efficient way. In a more recent work [17], Li et al. studied the redundancy in large material datasets and found that a significant degree of redundancy is present across multiple large datasets for various material properties and that up to 95% of the data can be removed from ML model training with little impact on prediction performance for test sets sampled randomly from the same distribution. They further showed that the redundant data is due to over-represented material types and does not help improve the low performance on out-of-distribution samples. They proposed a pruning algorithm similar to [16], which first splits the training set into A and B, then trains an ML model on A and evaluates the prediction errors on the samples in B. After that, the samples in B with low MAEs are pruned and the remaining samples are merged and split into A and B again, and so on. Both approaches rely on the iterative training of ML models and are specific to a given material property. They also proposed an uncertainty quantification based active learning method to generate sample-efficient training sets for model training. While these works recognize the possibility of building data-efficient training sets, they did not address how redundancy leads to the over-estimated ML model performance commonly seen in the literature. Moreover, all approaches for building informative training sets are material property specific, making it difficult to generate a single non-redundant benchmark dataset for benchmarking material property prediction algorithms across all material properties. Another limitation of these methods is that they yield different similarity thresholds when applied to different datasets, so the resulting non-redundant datasets have different minimum distances among their samples.
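A schematic rendering of the pruning loop of [17] summarized above is given below; the helper functions and the `keep_mae` threshold are placeholders rather than the exact settings used in that work:

```python
# Schematic of the iterative pruning procedure: held-out samples that are already predicted
# accurately are treated as redundant and dropped from the training pool.
import numpy as np


def prune_training_set(X, y, fit_fn, predict_fn, keep_mae=0.05, rounds=5, seed=0):
    rng = np.random.default_rng(seed)
    kept = np.arange(len(y))
    for _ in range(rounds):
        idx = rng.permutation(kept)
        half = len(idx) // 2
        part_a, part_b = idx[:half], idx[half:]
        model = fit_fn(X[part_a], y[part_a])
        errors = np.abs(predict_fn(model, X[part_b]) - y[part_b])
        kept = np.concatenate([part_a, part_b[errors > keep_mae]])  # drop "easy" samples
    return np.sort(kept)
```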
Since material property prediction research is now pivoting toward developing highly accurate ML models that are generalizable and transferable between different materials (including materials of different families), healthy evaluation of ML algorithms is needed to recognize the limitations of existing ML models and to drive essential progress toward new models. Within this context, reducing the dataset redundancy of both training and test sets can avoid the over-estimation of ML model performance, ameliorate the training bias towards samples in crowded areas, and push model developers to focus on improving extrapolation performance instead of only interpolation performance.
In this paper, we argue the importance of redundancy control in training and test set selection to achieve objective performance evaluation. Neglecting this has led to many overestimated ML performance reports in the literature for both composition based and structure based material property prediction. We then conduct ML experiments to show that the over-estimated models usually fail for samples that are distant from the training samples (lack of extrapolation performance). We then developed two redundancy reduction algorithms (MD-HIT-composition and MD-HIT-structure) with open-sourced code for reducing the dataset redundancy of both composition datasets and structure datasets. These two algorithms are based on composition and structure based distance metrics, which are used to only admit samples whose distance to the already selected samples is above a defined threshold. After this data redundancy control, the dataset can then be split randomly into training, validation, and test sets to achieve objective performance evaluation. We show that with this dataset redundancy control, the predicted performance tends to reflect the models' true prediction capability.
## 2 Method
### MD-HIT-composition algorithm for redundancy reduction of composition datasets
The early version of the CD-HIT algorithm [1] in bioinformatics was originally developed to handle large-scale sequence datasets efficiently. It employs a clustering approach to group similar sequences together based on a defined sequence identity threshold. Within each cluster, only one representative sequence, called the "centroid," is retained, while the rest of the highly similar sequences are considered duplicates and removed. However, the clustering approach is still inefficient for datasets with hundreds of thousands of sequences. The next generation of CD-HIT further improved the efficiency by using a greedy algorithm [18]. Both of our MD-HIT-composition and MD-HIT-structure redundancy reduction algorithms are greedy incremental algorithms designed based on this idea. In our case, MD-HIT starts the selection process with a seed material (by default H2O). It then sorts the remaining materials by the number of atoms rather than by formula length and one-by-one classifies each of them as a redundant or representative material based on its similarities to the representatives already selected. The composition similarities are estimated using the ElMD (Element Movers' Distance) package, which provides the options to choose linear, chemically derived, and machine-learned similarity measures. By default, we used the mendeleev similarity and the magpie similarity [19] for our non-redundant composition dataset generation. The magpie distance is defined as the Euclidean distance between the widely used Magpie composition feature vectors [19] of two materials. In the matminer materials informatics package, there are several other material composition descriptors that can also be used. Here we focused on the ElMD and the magpie feature based distance functions for redundancy control of composition datasets for materials property prediction.
The complete composition similarity metrics can be found in Table 1.
### MD-HIT-Structure algorithm for redundancy reduction of structure datasets
The MD-HIT-structure algorithm uses the same greedy adding approach as MD-HIT-composition, except that it uses a structure-based distance metric. However, due to the varying numbers of atoms in different crystals, it is non-trivial to compare the similarity of two given structures, since most structure descriptors have different dimensions for structures with different numbers of atoms. Here we chose two structure distances for redundancy reduction. The first is a distance metric based on XRD features calculated from crystal structures. We compute the XRD pattern using the Pymatgen XRDCalculator module, apply a Gaussian smoothing operation, and then sample 900 points evenly distributed between 0 and 90 degrees, which leads to XRD features of a fixed dimension of 900.
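A rough sketch of how such a fixed-length XRD descriptor can be built is given below; the smoothing width, the nearest-bin assignment, and the order of the binning and smoothing steps are assumptions and may not match the exact MD-HIT settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from pymatgen.analysis.diffraction.xrd import XRDCalculator


def xrd_feature(structure, n_points=900, sigma=2.0):
    """900-dimensional XRD descriptor: diffraction peaks placed on an evenly
    spaced 0-90 degree 2-theta grid and smoothed with a Gaussian kernel."""
    pattern = XRDCalculator().get_pattern(structure, two_theta_range=(0, 90))
    grid = np.linspace(0, 90, n_points)
    feature = np.zeros(n_points)
    for two_theta, intensity in zip(pattern.x, pattern.y):
        feature[np.argmin(np.abs(grid - two_theta))] += intensity  # nearest bin
    return gaussian_filter1d(feature, sigma=sigma)  # Gaussian smoothing
```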
We also selected the OrbitalFieldMatrix (OFM) feature to calculate the distances between two structures. This feature has also been used in [16] to select informative samples for ML model training. It is a set of descriptors that encode the electronic structure of a material, providing information about the distribution of electrons in different atomic orbitals within a crystal structure. These features give a comprehensive representation of the electronic structure and bonding characteristics of materials and are of fixed dimension (1024).
\begin{table}
\begin{tabular}{|c|c|} \hline
**Category** & **Metric** \\ \hline \multirow{4}{*}{Linear} & mendeleev \\ & petti \\ & atomic \\ & mod\_petti \\ \hline \multirow{4}{*}{Chemically Derived} & oliynyk\_sc \\ & jarvis\_sc \\ & magpie \\ & magpie\_sc \\ \hline \multirow{5}{*}{Machine Learnt} & cgcnn \\ & element \\ \cline{1-1} & mat2vec \\ \cline{1-1} & matscholar \\ \cline{1-1} & megnet16 \\ \hline \end{tabular}
\end{table}
Table 1: Composition similarity categories and metrics
Similar to MD-HIT-composition, the MD-HIT-structure algorithm also starts the selection process with a seed material (H2O by default) placed in the non-redundant set. It then sorts the remaining materials in the candidate set by the number of atoms (rather than by formula length) and classifies them one by one as redundant or representative based on their distances (the Euclidean distance of XRD features or OFM features) to the representatives already selected into the non-redundant set. Redundant samples are discarded while non-redundant ones are added to the non-redundant set until the candidate set is empty.
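The OFM variant of this structure distance can be sketched with matminer's `OrbitalFieldMatrix` featurizer as below; the featurizer options are assumptions chosen to yield the 1024-dimensional vectors described earlier and may differ from the exact settings used.

```python
import numpy as np
from matminer.featurizers.structure import OrbitalFieldMatrix

# With period_tag=False the orbital field matrix is 32x32, i.e. 1024 values per
# structure once flattened (assumed to match the OFM features described above).
ofm_featurizer = OrbitalFieldMatrix(period_tag=False)


def ofm_distance(structure_a, structure_b):
    """Euclidean distance between the flattened OFM vectors of two structures."""
    vec_a = np.ravel(ofm_featurizer.featurize(structure_a))
    vec_b = np.ravel(ofm_featurizer.featurize(structure_b))
    return float(np.linalg.norm(vec_a - vec_b))
```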
### Composition based materials property prediction algorithms
We evaluate two state-of-the-art composition-based material property prediction algorithms, Roost [20] and CrabNet (the Compositionally Restricted Attention-Based network) [21], to study the impact of dataset redundancy on their performance. The Roost algorithm is a machine learning approach specifically designed for material property prediction from the material composition. It utilizes a graph neural network framework to learn relationships between material compositions and their corresponding properties. CrabNet is a transformer self-attention based model for composition-only material property prediction. It matches or exceeds current best-practice methods on nearly all of 28 benchmark datasets.
### Structure based material property prediction algorithms
We evaluate two state-of-the-art structure-based material property prediction algorithms, ALIGNN (Atomistic Line Graph Neural Network) [22] and DeeperGATGNN [23], to study the impact of dataset redundancy on their performance. The ALIGNN model addresses a major limitation of most current graph neural network (GNN) models used for atomistic predictions, which rely only on atomic distances while overlooking bond angles. Bond angles play a crucial role in distinguishing various atomic structures, and small deviations in bond angles can significantly impact several material properties. ALIGNN is a GNN architecture that conducts message passing on both the interatomic bond graph and its corresponding line graph specifically designed for bond angles. It has achieved state-of-the-art performance on most benchmark problems of Matbench [24]. The DeeperGATGNN algorithm is a global-attention-based graph neural network that uses differentiable group normalization and residual connections to build deep graph neural networks without performance degradation. It has achieved superior results on a set of material property prediction tasks.
### Evaluation criteria
We use the following performance metrics for evaluating the impact of dataset redundancy on model performance: Mean Absolute Error (MAE), R-squared (\(R^{2}\)), and Root Mean Squared Error (RMSE).

Mean Absolute Error (MAE):
\[\text{MAE}=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y}_{i}| \tag{1}\]
R-squared (\(R^{2}\)):
\[R^{2}=1-\frac{\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i=1}^{n}(y_{i}-\bar {y})^{2}}\]
Where \(y_{i}\) represents the observed or true values, \(\hat{y}_{i}\) represents the predicted values, and \(\bar{y}\) represents the mean of the observed values. The summation symbol \(\sum\) is used to calculate the sum of values, and \(n\) represents the number of data points in the dataset.
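For completeness, the two reported metrics can be computed directly, for example with NumPy (equivalent functions are also available in scikit-learn as `mean_absolute_error` and `r2_score`):

```python
import numpy as np


def mae(y_true, y_pred):
    """Mean Absolute Error, Eq. (1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))


def r2(y_true, y_pred):
    """Coefficient of determination R^2."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```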
## 3 Results
### Datasets generation
We downloaded 125,619 CIF structures from the Materials Project database, which contain 89,354 unique compositions. For compositions that correspond to multiple polymorphs, we use the average property value as the default value for that composition, except for formation energy, for which we use the minimum value. We also dropped mp-101974 (HeSiO2), which causes issues when calculating its Magpie features. We then removed all formulas with more than 50 atoms and obtained a non-duplicate composition dataset with 86,741 samples. We then use different similarity (distance)
thresholds to generate non-redundant datasets. For the mendeleev similarity, we use distance thresholds of 0.5, 0.8, 1, 1.5, 2, 2.5 and 3 to generate seven non-redundant datasets, with dataset sizes ranging from 86,740 to 3,177. Similarly, we generate eight matscholar non-redundant datasets, whose percentages of the total range from 50.82% to 2.33%. We also applied the MD-HIT-structure algorithm to all the 125,619 CIF structures and used different thresholds to generate seven XRD non-redundant datasets and eight OFM non-redundant datasets. The details of all non-redundant datasets obtained after this redundancy removal with the MD-HIT algorithms are shown in Table 2.
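The per-composition property aggregation described above (average for most properties, minimum for formation energy) can be sketched with pandas; the column names below are placeholders for the corresponding Materials Project fields.

```python
import pandas as pd


def aggregate_by_composition(df: pd.DataFrame) -> pd.DataFrame:
    """Collapse polymorphs to one row per formula: mean band gap,
    minimum formation energy (column names are placeholders)."""
    return (
        df.groupby("formula", as_index=False)
          .agg(band_gap=("band_gap", "mean"),
               formation_energy=("formation_energy", "min"))
    )
```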
To visually understand the effect of redundancy removal, Figure 2 shows the material distribution t-SNE maps of the full dataset and two non-redundant datasets. For each dataset, we calculate the Magpie composition descriptors and project them into two dimensions using t-SNE.
\begin{table}
\begin{tabular}{|c c c c c c|} \hline \multicolumn{3}{|c|}{**Mendeleev-nr**} & \multicolumn{3}{c|}{**Matscholar-nr**} \\ \hline Threshold & Percentage of Total & Dataset size & Threshold & Percentage of Total & Dataset size \\ \hline
0 & 100.00\% & 86740 & 0 & 100.00\% & 86740 \\
0.5 & 46.74\% & 40544 & 0.1 & 50.82\% & 44081 \\
0.8 & 32.23\% & 27958 & 0.12 & 42.56\% & 36917 \\
1 & 24.52\% & 21268 & 0.15 & 32.31\% & 28022 \\
1.5 & 14.65\% & 12706 & 0.2 & 17.86\% & 15494 \\
2 & 8.81\% & 7643 & 0.25 & 9.76\% & 8462 \\
2.5 & 5.68\% & 4930 & 0.3 & 5.50\% & 4775 \\
3 & 3.66\% & 3177 & 0.35 & 3.60\% & 3124 \\ & & & 0.4 & 2.33\% & 2020 \\ \hline \multicolumn{3}{|c|}{**XRD-nr**} & \multicolumn{3}{c|}{**OFM-nr**} \\ \hline Threshold & Percentage of Total & Dataset size & Threshold & Percentage of Total & Dataset size \\ \hline
0 & 100.00\% & 123108 & 0 & 100.00\% & 123108 \\
0.5 & 50.65\% & 62350 & 0.15 & 46.45\% & 57183 \\
0.6 & 37.12\% & 45703 & 0.2 & 39.32\% & 48409 \\
0.8 & 16.98\% & 20901 & 0.45 & 18.48\% & 22748 \\
0.9 & 11.15\% & 13729 & 0.7 & 10.66\% & 13120 \\ \hline \end{tabular}
\end{table}
Table 2: Generation of non-redundant datasets
Figure 2: Distribution of whole and non-redundant MP composition datasets. (a) whole dataset with 86,740 samples. (b) non-redundant dataset using Matscholar distance with 44,081 samples. (c) Non-redundant dataset with 4,930 samples using Mendeleev distance. All maps are generated using t-SNE with Magpie composition descriptors
### Composition based material property prediction with redundancy control
To examine the material property prediction performance of ML models on datasets with Mendeleev-distance and Matscholar-distance based redundancy control, we conducted a series of experiments to explore how the degree of redundancy affects ML performance for formation energy and band gap prediction. The non-redundant datasets derived from the whole MP composition dataset of 86,740 samples using different thresholds were divided into training, validation, and testing sets with a ratio of 8:1:1. Figures 3 and 4 compare the performance of Roost and CrabNet for formation energy and band gap prediction on datasets of different sizes, filtered by Mendeleev distance thresholds of 0, 0.5, 0.8, 1, 1.5, 2, 2.5 and 3 and Matscholar distance thresholds of 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35 and 0.4.
Figure 3(a) shows the prediction performance (MAE and \(R^{2}\)) of Roost and CrabNet for formation energy prediction evaluated over the whole dataset and six non-redundant datasets. The performance of both models deteriorates with increasing thresholds, which correspond to lower data redundancy, as evidenced by the decreasing \(R^{2}\) and increasing MAE scores. For band gap prediction (Figure 3(b)), the \(R^{2}\) scores of both models decrease gradually with increasing threshold. While the MAE scores exhibit a general upward trend, they do not increase monotonically with the threshold; instead, they exhibit abrupt jumps at certain points. This could be due to outliers in the band gap-targeted datasets, which also reflects the greater challenge of band gap prediction.
Figure 4 shows the ML performance over the matscholar-controlled non-redundant datasets. In Figure 4(a), we found that the correlations between the prediction performance of both Roost and CrabNet and the thresholds (or data redundancy) are much stronger than those shown in Figure 3(a), indicating that the matscholar distance tends to generate more evenly distributed non-redundant datasets than the Mendeleev distance. However, these consistent trends of MAE and \(R^{2}\) do not hold for the band gap prediction performance shown in Figure 4(b), in which the \(R^{2}\) curves are similar to those in Figure 3(b) while the band gap prediction performance varies considerably across thresholds. We checked this phenomenon by running multiple experiments for each threshold and obtained similar results. One possible reason is that a large percentage of band gap samples have zero values. Overall, we found that removing dataset redundancy allows us to obtain more objective estimates of ML model performance. Through experiments, we observe that without redundancy reduction, a significant portion of test samples are concentrated in crowded areas with low prediction errors. This occurs because the model may rely heavily on the information from these redundant samples during learning while disregarding other, more diverse data features. Excessive sample redundancy can thus lead to deceptively optimistic results on the test set.
Figure 3: Performance of ML models with material properties using Mendeleev distance controlled dataset redundancy. (a) The \(R^{2}\) (blue lines) and MAE (orange lines) results of two models trained on filtered formation energy-targeted datasets using thresholds 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0. (b) The \(R^{2}\) (blue lines) and MAE (orange lines) results of two models trained on filtered band gap-targeted datasets using thresholds 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0.
### Structure based material property prediction with redundancy control
To investigate redundancy control of structure-based material datasets, we downloaded the whole Materials Project database of 123,108 crystal structures along with their formation energies per atom and band gaps. We then use the XRD and OFM features of the crystal structures to define the similarity between pairs of structures, and control the structure redundancy using thresholds on the minimum XRD/OFM distance between any pair of samples. For the XRD-based non-redundant datasets, we used thresholds of 0.5, 0.6, 0.8, and 0.9. We then evaluated the material property prediction performance of two state-of-the-art graph neural network algorithms, DeeperGATGNN and ALIGNN. The results are shown in Figure 5(a) for formation energy prediction and Figure 5(b) for band gap prediction.
First, we found that the XRD distance provides good control of data redundancy: the MAEs of both algorithms gradually increase with increasing XRD thresholds, which correspond to lower dataset redundancy (Figure 5(a)). Simultaneously, the \(R^{2}\) scores decrease as the thresholds go up. For the band gap prediction results in Figure 5(b), the degree of dataset redundancy also affects the performance of both algorithms, though with a more complex effect than for formation energy prediction. The \(R^{2}\) scores of both algorithms drop with increasing thresholds. However, while the MAEs of DeeperGATGNN go up overall with increasing thresholds, the MAEs of ALIGNN on the non-redundant datasets with thresholds 0.8 and 0.9 are actually lower than on the dataset with threshold 0.6, even though the corresponding \(R^{2}\) scores are lower. This discrepancy indicates that the band gap prediction problem has higher nonlinearity, and that outlier band gap values may also play a role here. This phenomenon is also observed in the composition-based results in Figures 3 and 4.
We further evaluated how OFM-controlled data redundancy affects the algorithms' performance. Figures 6(a) and (b) show how the performance in terms of MAE and \(R^{2}\) changes with decreasing redundancy (increasing thresholds). Both algorithms behave consistently for formation energy prediction (Figure 6(a)): the \(R^{2}\) scores decrease in general with increasing thresholds while the MAE scores increase. This indicates that the OFM distance metric can be used as a good redundancy control method for crystal structure datasets. However, for band gap prediction, Figure 6(b) shows a surprising result: the \(R^{2}\) scores go down with increasing threshold, as expected, for both algorithms, but the MAE scores also go down, which is unexpected since lower redundancy should make property prediction harder. To investigate this issue, we counted the percentage of near-zero band gap (<0.01 eV) samples in the test sets of the five datasets with thresholds 0, 0.15, 0.2, 0.45, and 0.7. While the whole redundant dataset contains only 48.64% near-zero band gap samples, our MD-HIT algorithm happens to pick higher percentages of near-zero band gap samples (64.09%, 67.81%, 84.52%, and 92.43% for thresholds 0.15, 0.2, 0.45, and 0.7, respectively), which makes the prediction much easier and explains why the MAEs drop. To further illustrate this data bias, we plotted scatter plots of the band gaps predicted by DeeperGATGNN over the whole dataset and two non-redundant datasets. We can clearly see the dominance (92.43%) of near-zero samples in the non-redundant dataset with threshold 0.7, which makes the prediction much easier compared to the whole dataset. This data bias may be reduced by choosing a different seed structure rather than
Figure 4: Performance of ML models with material properties using Matscholar distance controlled dataset redundancy. (a) The \(R^{2}\) (blue lines) and MAE (orange lines) results of two models trained on filtered formation energy-targeted datasets using thresholds 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4. (b) The \(R^{2}\) (blue lines) and MAE (orange lines) results of two models trained on filtered band gap-targeted datasets using thresholds 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4.
the SrTiO\({}_{3}\) used in this experiment. It also shows the importance of watching for data bias, which can easily lead to over-estimated ML model performance in material property prediction.
Figure 5: Property prediction performances of ML models based on XRD distance controlled dataset redundancy. (a) The \(R^{2}\) (blue lines) and MAE (orange lines) results of two models trained on filtered formation energy-targeted datasets using thresholds 0.5, 0.6, 0.8, 0.9. (b) The \(R^{2}\) (blue lines) and MAE (orange lines) results of two models trained on filtered band gap-targeted datasets using thresholds 0.5, 0.6, 0.8, 0.9.
Figure 6: Property prediction performances of ML models based on OFM distance controlled dataset redundancy. (a) The \(R^{2}\) (blue lines) and MAE (orange lines) results of two models trained on filtered formation energy-targeted datasets using thresholds 0.15, 0.2, 0.45, 0.7. (b) The \(R^{2}\) (blue lines) and MAE (orange lines) results of two models trained on filtered band gap-targeted datasets using OFM thresholds 0.15, 0.2, 0.45, 0.7.
## 4 Conclusion
Large material databases such as the Materials Project usually contain a high degree of redundancy, which causes biased ML models and over-estimated performance evaluations due to the redundancy between randomly selected test samples and the remaining training samples. The DFT-level accuracy claimed in the literature, averaged over all data samples, deviates from the common needs of materials scientists, who usually want to discover new materials that are different from the known training samples; this makes it important to evaluate and report extrapolation rather than interpolation performance for material property prediction. Here we propose and develop two material dataset redundancy reduction algorithms based on a greedy strategy inspired by the CD-HIT algorithm from bioinformatics. We use two composition distance metrics and two structure distance metrics with thresholds to control the sample redundancy of our composition and structure datasets. Our benchmark results with two composition-based and two structure-based material property prediction algorithms on two material properties (formation energy and band gap) show that the prediction performance of current ML models tends to degrade when redundant samples are removed, leading to a more realistic measure of the prediction performance of current ML material property models. The availability of our easy-to-use open-source MD-HIT-composition and MD-HIT-structure code makes it easy for researchers to conduct objective evaluations and report realistic performance of their ML models for material property prediction. It should also be noted that the current multi-threaded implementation of our MD-HIT algorithms is still slow, and further improvements are highly desirable.
## 5 Data and Code Availability
The source code and the non-redundant datasets can be freely accessed at [https://github.com/usccolumbia/MD-HIT](https://github.com/usccolumbia/MD-HIT)
## 6 Contribution
Conceptualization, J.H.; methodology, J.H., Q.L., S.L., E.S., Y.Z.; software, J.H., S.S., Y.S., S.O.; resources, J.H.; writing--original draft preparation, J.H., S.S., Y.S., S.O., S.L., E.S., Y.Z.; writing--review and editing, J.H.; visualization, J.H. and S.S.; supervision, J.H.; funding acquisition, J.H.
## Acknowledgement
Qin Li would like to thank the State Key Laboratory of Public Big Data, Guizhou University, for its computing support.
|
2305.15425 | Language Model Tokenizers Introduce Unfairness Between Languages | Recent language models have shown impressive multilingual performance, even
when not explicitly trained for it. Despite this, there are concerns about the
quality of their outputs across different languages. In this paper, we show how
disparity in the treatment of different languages arises at the tokenization
stage, well before a model is even invoked. The same text translated into
different languages can have drastically different tokenization lengths, with
differences up to 15 times in some cases. These disparities persist even for
tokenizers that are intentionally trained for multilingual support.
Character-level and byte-level models also exhibit over 4 times the difference
in the encoding length for some language pairs. This induces unfair treatment
for some language communities in regard to the cost of accessing commercial
language services, the processing time and latency, as well as the amount of
content that can be provided as context to the models. Therefore, we make the
case that we should train future language models using multilingually fair
subword tokenizers. | Aleksandar Petrov, Emanuele La Malfa, Philip H. S. Torr, Adel Bibi | 2023-05-17T14:17:57Z | http://arxiv.org/abs/2305.15425v2 | # Language Model Tokenizers Introduce Unfairness Between Languages
###### Abstract
Recent language models have shown impressive multilingual performance, even when not explicitly trained for it. Despite this, there are concerns about the quality of their outputs across different languages. In this paper, we show how disparity in the treatment of different languages arises at the tokenization stage, well before a model is even invoked. The same text translated into different languages can have drastically different tokenization lengths, with differences up to 15 times in some cases. These disparities persist across the 17 tokenizers we evaluate, even if they are intentionally trained for multilingual support. Character-level and byte-level models also exhibit over 4 times the difference in the encoding length for some language pairs. This induces unfair treatment for some language communities in regard to the cost of accessing commercial language services, the processing time and latency, as well as the amount of content that can be provided as context to the models. Therefore, we make the case that we should train future language models using multilingually fair subword tokenizers.
## 1 Introduction
Language models are becoming increasingly important in natural language processing tasks, as they are capable of understanding and generating human-like language. They have been deployed in numerous applications such as virtual assistants Chen et al. (2021); Ouyang et al. (2022), chatbots Kuhail et al. (2023); Lee et al. (2023), machine translation Stahlberg (2020); Ranathunga et al. (2023), and text summarization Kryscinski et al. (2019); Xu et al. (2020). As general-purpose technologies, it is also projected that Large Language Models (LLMs) will have a significant impact on the economy and the labour market Teubner et al. (2023); Eloundou et al. (2023).
Such LLMs are often trained using large swaths of internet content regardless of language. Hence, these models often end up being multilingual, even if not by design. ChatGPT OpenAI (2022) is a prominent recent example Bang et al. (2023); Jiao et al. (2023); Johnson (2023). This is good: in line with the economic benefits of LLMs and LLM-derived technology, equal access is crucial, with multilingual support serving as a key component.
However, this multilingualism is currently treated as a curious emergent phenomenon rather than a carefully designed, controlled and managed process. Less attention is given to ensuring comparable performance in languages other than English. This is a problem as modern LLMs rely not only on data scraped from the internet but also on carefully crafted fine-tuning, _e.g._, via reinforcement learning with human feedback Christiano et al. (2017); Ouyang et al. (2022). Therefore, as long as this human-driven fine-tuning focuses on a handful of languages, the performance of LLMs will be generally lower in non-target languages, a problem especially pronounced for low-resource languages Virtanen et al. (2019); Ahuja et al. (2023).
Such disparities can have severe real-world implications. Providing access to the same technology in different languages but moderation and safety tools only for some has resulted in dire societal consequences before Stecklow (2018); Facebook (2021); Leung (2022). Differing cost of access could also reinforce inequality in opportunities for economic mobility and social participation Lythreatis et al. (2022). LLMs might currently be on a track towards exacerbating such inequalities across language communities. Therefore, as LLM multilingualism emerges, we should also pay commensurate attention to ensuring comparable performance and accessibility across the supported languages, regardless of whether they are supported by design or by chance.
This work demonstrates how the unequal treatment of languages arises at the tokenization stage, well before the language model sees any data at all. For instance, the tokenizer employed by ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023) uses about 1.6 times more tokens to encode the same text in Italian as it does in English, 2.6 times for Bulgarian and 3 times for Arabic. As some commercial providers charge per token, Arabic language users and businesses would be charged 3 times more than English users for the same task. For other languages, such as Shan -- the native language of people from the Shan State in Myanmar-- that difference can be as high as 15 times. Unicode character and byte-level tokenization have been proposed as a way to universally represent all languages in LLMs. However, we show that they too result in drastically different encoding lengths across languages. Notably, byte-level representation of the same text is over 4 times longer for Burmese or Tibetan than Chinese.
Across 17 tokenizers that power popular language models such as ChatGPT and GPT-4, regardless whether they use a word, subword, character, or byte-level tokenization, there always exists disadvantaged languages that are at least 4 times less efficient than others. We discuss three fairness implications of these differences in tokenization:
1. **Cost:** Commercial services charge users per token or Unicode character. In either case, these discrepancies lead to users of some languages paying at least 4 times more for the same task as users of English.
2. **Latency:** The number of tokens has a direct effect on the processing time for a task. Some languages can require twice the time to process the same content as English. This may be critical for real-time applications like emergency services.
3. **Long-term dependency modelling:** Many models have a fixed-size context. Users of languages that are more token-efficient can therefore use these systems to process or generate texts that may be more than an order of magnitude longer than users of other languages. This may lead to significant discrepancies in the quality of service depending on the language used.
Therefore, we make the case for _multilingual tokenization parity_: tokenizers should produce similar encoded lengths for the same content across languages. This is particularly important for commercial systems that have _per token_ or _per character_ pricing models. Hence, we advocate for multilingually fair tokenizers for the next generation of language models.
## 2 Background on Tokenization
To enable automatic processing of language, it must first be represented in a suitable form. The current practice is to use _tokenization_ which is the process of turning natural language into sequences of _tokens_ coming from a finite and pre-determined set called _vocabulary_(Webster and Kit, 1992). Each token is typically associated with an integer value. Language models process such sequences of integers, rather than sequences of characters or words. In this section, we offer a brief overview of the contemporary tokenization methods. For further details, we recommend the comprehensive survey by Mielke et al. (2021).
Word tokenization. The simplest tokenization method is splitting at white spaces, where each word is assigned its own token (Bengio et al., 2000). This approach, however, requires that all possible words are in the vocabulary, which is not possible in practice. Therefore word tokenization often fails to handle cases like "won't", words spelled with accented characters like "naïve" or "açaí", spelling mistakes, and named entities like "Cottonshopeburnfoot" (Sun et al., 2020). This makes it unsuitable for representing _open vocabularies_, where the words encountered are not limited to a predetermined set. Furthermore, languages that do not use spaces to separate words, such as Chinese, Japanese and Burmese, pose additional challenges for this approach (Shao et al., 2018).
Subword tokenization. Hence, most current models use _subword tokenization_, where
complex words are broken down into multiple tokens. Subword tokenization can efficiently handle complex terms by breaking them down into parts, _e.g._, "Cottonshopeburnfoot" → "Cotton"+"shop"+"e"+"burn"+"foot". This approach can represent novel words, including misspelled ones, in an open vocabulary setting.
Subword vocabularies are usually data-based approaches which use large corpora to learn which subword sequences occur frequently in practice. Schuster and Nakajima (2012) introduced one of the first subword tokenizers, WordPiece, as a way to handle Japanese and Korean. Sennrich et al. (2016) proposed using Byte-Pair Encoding (BPE) (Gage, 1994) for learning subwords by merging the most frequently occurring pairs. BPE has since been widely used for most of the popular tokenizers. Kudo (2018) proposed an alternative approach via gradually pruning a large vocabulary. It removes tokens that are less likely to improve the performance of a simple unigram language model. Both methods rely on pre-tokenization (splitting on whitespaces, when available), which is not an invertible process. SentencePiece (Kudo and Richardson, 2018) addresses this de-tokenization ambiguity by treating whitespace as a special symbol, including it in the vocabulary, and supports both methods. SentencePiece with BPE is by far the most popular tokenization method for the models considered in this paper.
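As a concrete illustration, a small BPE vocabulary can be learned and applied with the HuggingFace `tokenizers` library; the corpus file, vocabulary size, and pre-tokenizer below are placeholders, and the shown output is only an example of what the learned merges might produce.

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Learn a small subword vocabulary by repeatedly merging frequent symbol pairs.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()  # split on whitespace/punctuation first
trainer = BpeTrainer(vocab_size=1000, special_tokens=["[UNK]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)  # placeholder corpus file

print(tokenizer.encode("Cottonshopeburnfoot").tokens)
# e.g. ['Cotton', 'shop', 'e', 'burn', 'foot'] if those subwords were learned
```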
Unicode support. Even if subword tokenization ensures that individual characters are in the vocabulary, this still leaves the question of which characters are to be included. A simple solution is to take the ASCII characters. However, this means that words in other scripts or with accented letters will fall outside the vocabulary. A common workaround is to represent strings outside the vocabulary as a special UNK token. However, if there are too many UNK tokens in an input, the performance of the model tends to deteriorate (Pfeiffer et al., 2021). Therefore, it is desirable that the number of UNK tokens in the input is kept as low as possible. A simple and commonly used solution is to base the vocabulary building on Unicode.
Unicode is a computing industry standard for representing text characters (The Unicode Consortium, 2022). Unicode supports virtually all languages (including many ancient ones, emojis and special characters) by assigning every grapheme, modifier, punctuation mark, control character or formatting character one of 1,114,112 integer _codepoints_. The codepoints can be represented in binary as the variable-width encoding UTF-8, which encodes every codepoint with one to four bytes, or the fixed-width UTF-32 which encodes all codepoints with four bytes (see Figure 1).
UTF-8 can therefore represent any string in any language as a string of bytes. As each byte can take only one out of 256 values, 256 tokens are sufficient to encode all texts. In practice this is usually combined with the BPE tokenizer: the corpus is first encoded as UTF-8 bytes and then BPE is run on top of it. As most common characters occur frequently in the training corpus, BPE assigns them dedicated tokens. If the model encounters a character that did not exist in the training corpus (_e.g._, the medium skin tone waving hand emoji), it can still represent it byte-by-byte (F0+9F+91+8B for the waving hand and F0+9F+8F+BD for the skin tone modifier). This allows the vocabulary to efficiently represent frequently occurring words and rare characters. For example, the sentence "I love açaí" could be tokenized as "I "+"love "+"a"+C3+A7+"a"+C3+AD.
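These byte sequences can be checked directly in Python (3.8+), which is also what a byte-fallback tokenizer sees for characters that are missing from its vocabulary:

```python
# UTF-8 byte sequences of the characters discussed above.
for char in ["ç", "í", "👋", "🏽"]:
    print(char, char.encode("utf-8").hex(" ").upper())
# ç C3 A7
# í C3 AD
# 👋 F0 9F 91 8B   (waving hand)
# 🏽 F0 9F 8F BD   (medium skin tone modifier)
```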
Byte-level and character-level tokenization. If we can represent any input with just 256 characters, then why bother with subword tokens? A key consideration is sequence length. This is because transformers (Vaswani et al., 2017), the currently predominant deep learning architecture for language models, have attention layers with a quadratic complexity in the input length. Hence, as the number of characters is much larger than the
Figure 1: Comparison of variable width Unicode encoding (UTF-8) and fixed width encoding (UTF-32). Image adapted from (The Unicode Consortium, 2022).
number of subword tokens, working on the character level has traditionally been considered computationally inefficient. However, Chung et al. (2016), Lee et al. (2017), Gao et al. (2020), Clark et al. (2022) and Xue et al. (2022) proposed various architectures working around this issue and operating directly on characters or UTF-8 bytes.
## 3 Intriguing Properties of Tokenization Across Languages
Subword tokenization is currently the preferred approach for state of the art language models. The subword tokenization process is usually learned in an unsupervised manner from large corpora. However, the representation of different domains, languages and topics is often biased (Joshi et al., 2020) leading to unexpected token choices. In this section, we show how artefacts from data collection might result in technical terms or rare words having dedicated tokens (_glitch tokens_), while more commonly used words and non-Latin characters end up requiring multiple tokens.
### Glitch Tokens
Using large corpora scraped from the internet results in _peculiar_ choices for tokens. For instance, it was discovered that GPT-2 contains _glitch tokens_ which can be usernames or concepts from games (Rumbelow and Watkins, 2023; Miles and Riley, 2023). As an example, the following string, likely coming from an online store backend, has a dedicated token:
40242: "BuyableInstoreAndOnline"
or the following token, similarly to many other glitch tokens possibly originating from Reddit communities (Rumbelow and Watkins, 2023a):
30906
## 4 Measuring Tokenizer Parity
To demonstrate that the above examples are not anecdotal evidence, we introduce the notion of _tokenizer parity_ to systematically assess how fairly a tokenizer treats equivalent sentences in different languages. Parity occurs when a tokenizer exhibits similar tokenized lengths for the same sentence in different languages. Take a sentence \(s_{A}\) in language \(A\) and its translation \(s_{B}\) to language \(B\). Then, a tokenizer \(t\) achieves parity for \(A\) with respect to \(B\) at \(s_{A}\) and \(s_{B}\) if \(\nicefrac{{|t(s_{A})|}}{{|t(s_{B})|}}\approx 1\), where \(t(s_{A})\) is the tokenization of the sentence \(s_{A}\) and \(|t(s_{A})|\) represents its length. We refer to the ratio \(\nicefrac{{|t(s_{A})|}}{{|t(s_{B})|}}\) as the _premium_ for language \(A\) relative to language \(B\).
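A premium of this form can be estimated for any HuggingFace tokenizer over a parallel corpus, as in the sketch below; the model name, the example sentences, and the choice to average per-sentence ratios (rather than, say, taking the ratio of total token counts) are illustrative assumptions.

```python
from transformers import AutoTokenizer


def premium(tokenizer, sentences_a, sentences_b):
    """Mean ratio |t(s_A)| / |t(s_B)| over aligned sentence pairs."""
    ratios = [
        len(tokenizer.encode(a, add_special_tokens=False))
        / len(tokenizer.encode(b, add_special_tokens=False))
        for a, b in zip(sentences_a, sentences_b)
    ]
    return sum(ratios) / len(ratios)


# Placeholder usage with aligned sentences, e.g. from a parallel corpus such as FLORES-200.
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
print(premium(tok, ["Bonjour tout le monde."], ["Hello everyone."]))
```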
## 5 Tokenization Length Differences Across Languages
Languages vary significantly in the number of tokens required to encode the same content, as demonstrated in the examples in Section 3. Hence, following Section 4, we measure the tokenization premium of different tokenizers. To facilitate a fair comparison, we compare the tokenization length for the same content in two different languages. To this end, we use the FLORES-200 parallel corpus, which comprises the same 2000 sentences taken from Wikipedia and human-translated to 200 different languages Guzman et al. (2019); Goyal et al. (2021); Costa-jussa et al. (2022). We look at subword tokenization models which target English, languages other than English, language varieties, multi-lingual tokenizers, as well as tokenizer-free (byte-level) modelling.
### Parity for English-centric Models
Since most state of the art language models target English, we report in Table 1 the tokenization parity for a subset of languages in FLORES-200 with respect to English. The parities for all 200 languages are shown in Appendix A.1 GPT-2 Radford et al. (2019), RoBERTa Liu et al. (2019), as well as the r50k_base, p50k_base and p50k_edit tokenizers OpenAI (2022) have very close2 tokenization lengths, so we report them together. Similarly, ChatGPT OpenAI (2022) and GPT-4 OpenAI (2023) share the same cl100k_base tokenizer hence are reported together. Some models, such as FlanT5 Chung et al. (2022), use a special UNK token to model unknown symbols not encountered during training. Hence, to ensure a fair comparison, we report only languages where no more than 10% of the input characters are mapped to UNK tokens (marked with --).
Footnote 1: An interactive table of all the languages and tokenizers is also available on the project website.
Footnote 2: The largest tokenizer parity difference between them is less than 0.005.
The results in Table 1 show very large variations in the tokenizer parity across languages irrespective of the tokenizer. Looking at languages closest to tokenizer parity, we see that for GPT-2 and RoBERTa, the cheapest language, Pangasinan, is already 66% more expensive to process than English. For ChatGPT and GPT-4, the results are slightly better, likely due to their larger vocabulary size. However, some of the cheapest languages, Portuguese and Italian, which also use the Latin script, still see a premium of 50% when compared to English. This is due to their words being broken into subword tokens. For example, "Have a great day!" is 5 tokens in
\begin{table}
\begin{tabular}{l r r r} \hline \hline & GPT-2 & ChatGPT & \multirow{2}{*}{FlanT5} \\ & RoBERTa & GPT-4 & \\ \hline Bulgarian & 5.51 & 2.64 & — \\ Burmese & 16.89 & 11.70 & — \\ Chinese (Simplified) & 3.21 & 1.91 & — \\ Dzongkha & 16.36 & 12.33 & — \\ English & 1.00 & 1.00 & 1.00 \\ French & 2.00 & 1.60 & 1.60 \\ German & 2.14 & 1.58 & 1.37 \\ Italian & 2.01 & 1.64 & 2.18 \\ Japanese & 3.00 & 2.30 & — \\ Jingpho & 2.65 & 2.35 & 3.41 \\ Maori & 2.45 & 2.35 & 3.28 \\ Norwegian Bokmål & 1.86 & 1.56 & 2.24 \\ Odia & 13.38 & 12.48 & — \\ Pangasinan & 1.66 & 1.57 & 2.18 \\ Portuguese & 1.94 & 1.48 & 2.21 \\ Romanian & 2.48 & 1.88 & 1.50 \\ Santali & 12.86 & 12.80 & — \\ Shan & 18.76 & 15.05 & — \\ Spanish & 1.99 & 1.55 & 2.23 \\ Standard Arabic & 4.40 & 3.04 & — \\ Tumbuka & 2.78 & 2.57 & 3.29 \\ Vietnamese & 4.54 & 2.45 & — \\ \hline \hline \end{tabular}
\end{table}
Table 1: Premiums with respect to English on FLORES-200 for several **English-centric** models. The languages in the top or bottom three for any tokenizer as well as the ones discussed in the text are shown.
English, with each word having its own token.
\begin{tabular}{l r r r r r r} \hline \hline & Arabic & RoCBert & CamemBERT & GottBERT & BERT & PhoBERT \\ & BERT & (Chinese) & (French) & (German) & Japanese & (Vietnamese) \\ \hline Belarusian & 4.74 & — & — & 5.62 & — & 3.46 \\ Bulgarian & 4.30 & — & — & 4.73 & — & 3.09 \\ Catalan & 2.36 & 2.86 & 1.59 & 1.89 & 1.95 & 1.57 \\ Chinese (Simp.) & — & 1.00 & — & 3.95 & 0.82 & — \\ Chinese (Trad.) & — & 0.94 & — & 3.82 & 0.84 & — \\ Dutch & 2.52 & 2.92 & 1.68 & 1.73 & 1.98 & 1.58 \\ Dzongkha & — & — & — & 16.12 & — & — \\ English & 1.83 & 2.60 & 1.20 & 1.35 & 1.49 & 1.20 \\ French & 2.42 & 3.10 & 1.00 & 1.99 & 2.03 & 1.66 \\ Friulian & 2.33 & 2.79 & 1.66 & 1.98 & 1.92 & 1.59 \\ German & 2.63 & 3.12 & 1.85 & 1.00 & 2.04 & 1.67 \\ Greek & 4.93 & 3.00 & — & 6.73 & — & 3.73 \\ Italian & 2.58 & 3.10 & 1.63 & 1.93 & 2.04 & 1.60 \\ Japanese & 1.85 & 1.34 & — & 4.35 & 1.00 & — \\ Jingpho & 3.12 & 3.12 & 2.13 & 2.55 & 2.47 & 1.84 \\ Luxembourish & 2.56 & 2.97 & 1.82 & 1.75 & 1.96 & 1.72 \\ N. Lev. Arabic & 1.00 & — & — & 6.52 & — & — \\ Shan & — & — & — & 16.88 & — & — \\ Standard Arabic & 1.00 & — & — & 7.03 & — & — \\ Tagalog & 2.84 & 3.28 & 2.00 & 2.20 & 2.39 & 1.74 \\ Tosk Albanian & 2.66 & 2.90 & 2.17 & 2.39 & — & 2.02 \\ Tsonga & 3.01 & 3.09 & 2.03 & 2.29 & 2.46 & 1.76 \\ Tumbuka & 3.27 & 3.49 & 2.21 & 2.61 & — & 2.00 \\ Vietnamese & 2.52 & 2.55 & — & 4.12 & — & 1.00 \\ Yue Chinese & — & 0.92 & — & 3.75 & — & — \\ \hline \hline \end{tabular}
Table 2: Tokenizer premiums on the FLORES-200 dataset for **non-English centric models**. The premium is computed with respect to the target language (Modern Standard Arabic was used for Arabic BERT and Simplified Chinese for RoCBert). The languages that are in the top or bottom two for any tokenizer as well as the ones discussed are shown.
### Parity for Non-English-centric Models

Beyond English-centric models, there are also models targeting other languages. Table 2 shows six such models based on the BERT architecture Devlin et al. (2019). These are ArabicBERT for Arabic Safaya et al. (2020), RoCBert for Chinese Su et al. (2022), CamemBERT for French Martin et al. (2020), GottBERT for German Scheible et al. (2020), BERT Japanese Tohoku NLP Group (2019) and PhoBERT for Vietnamese Nguyen (2020).
GottBERT exhibits a similar range of premium values as RoBERTa, likely because both tokenizers were trained in the same way, albeit on corpora prioritising different languages. The GottBERT premium for English (1.35) is lower than the ones for Dutch (1.73) and Luxembourgish (1.75) which are both linguistically closer to German than English. We observe a similar phenomenon with CamemBERT, where English is the language with the lowest premium (1.20). This is as opposed to languages closer to French which are with a higher premium, _e.g._, Catalan with 1.59 and Friulian with 1.66. The same can be observed with PhoBERT as well, where English has the lowest tokenizer premium (1.20). Hence, even for models trained with target languages other than English, English seems to enjoy preferential treatment.
RoCBert differs from them as the premium for Japanese is the lowest (1.34), likely because of the partially shared script, while English is significantly higher with 2.60. BERT Japanese has lower than unity premiums for Chinese (0.82 and 0.84) also possibly due to the partially shared script (and Chinese being more character-efficient, as discussed later in Section 5.5). ArabicBERT is similar in this regard. Different vernaculars of Arabic have parity relative to Standard Arabic up to 1.14, followed by Central Kanuri (1.27) and Acehnese (1.73) (both written in the Arabic script) with English at 1.82. Hence, sharing writing systems seems to improve the tokenization parity.
Across all six models in Table 2 and their tokenizers, the premium for English relative to the respective target language is significantly lower than the premium for the same target language for RoBERTa. The parity of ArabicBERT for English is 1.83, while the parity for Standard Modern Arabic of RoBERTa is 4.40. For French the difference is 1.20 vs 2.00, for Simplified Chinese 2.60 vs 3.21, for German 1.35 vs 2.14, for Japanese 1.49 vs 3.00, and for Vietnamese 1.20 vs 4.54. This asymmetry between English and all other languages likely stems from the extensive incorporation of English in documents written in other languages Zhang et al. (2022).
We also consider MuRIL, a BERT-based model trained on 16 Indian languages and English Khanuja et al. (2021). For 14 of the Indian languages it was trained on (Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Nepali, Odia, Sanskrit, Sindhi, Tamil, Telugu, Urdu), the premium with respect to English is between 1.01 and 1.26 (see Table 3). For Eastern Panjabi it is 1.35, while for Kashmiri (Arabic and Devanagari scripts) it is 1.75. The results for Kashmiri are likely due to it being significantly underrepresented in the training set. However, despite the model being designed with a focus on Indian languages, it is still most token-efficient in English.
Summary. We observed that the tokenizers targeting French, German and Vietnamese have English as the language closest to parity, rather than more linguistically close languages. On the other hand, the tokenizers for Arabic, Chinese and Japanese have lower premiums for some languages they share a script with. Notably, despite targeting Indian languages, MuRIL still has the lowest tokenization lengths for English. Finally, across all tokenizers, the premium for English is lower than the premium for the same language for the English-centric RoBERTa. Hence, we conclude that tokenizers for other languages give English preferential treatment.
### Parity for Linguistic Varieties
A language can vary according to factors such as geography, history, social class and culture. As a result, different dialects, pidgin and creole language variations emerge, each with its own distinct set of grammar, vocabulary and pronunciation rules.3 Unequal treatment of
certain dialects or languages can lead to social and economic disadvantages for those who speak them. Therefore, it is important to also study the tokenization differences between the "standard" language and its varieties.4 Unfortunately, parallel corpora for dialects, pidgin and creole language variations are few and far between. In this section, however, we show results on regional Swiss German varieties, Arabic and Japanese dialects, as well as Haitian and Mauritian creoles.
Footnote 4: We refer to the language that the datasets label as “standard”, “official” or “dominant” without necessarily endorsing this designation.
Swiss German dialects. Swiss German is a dialect continuum which significantly differs from formal High German. German-speaking Switzerland is diglossic:5 High German is used alongside regional dialects (Hogg et al., 1984). In contrast to other dialects, the use of Swiss dialects is increasing (Sieber and Sitta, 1987), especially online (Ludi, 2007). Swiss German dialects are often considered unintelligible to High German speakers, and sometimes even speakers of different dialects may find it difficult to understand each other (Russ, 1990). Therefore, ensuring that German-targeting NLP applications can process Swiss German dialects is important.
Footnote 5: Diglossia is the situation of two dialects or languages being used by a single language community (Kaye, 2001).
To this end, we compare the tokenization parity relative to High German of GottBERT (Scheible et al., 2020) on the regional dialects of Aargau, Bern, Basel, Graubünden, Luzern, St. Gallen, Wallis and Zurich. We use SwissDial, a parallel multidialectal corpus, as the basis of comparison (Dogan-Schonberger et al., 2021). It is worth noting that the dialect of each city and that of its corresponding region may differ significantly; therefore, there might be large variations within regions as well.
The results in Table 4 show a disparity between the tokenization lengths for High German and the Swiss dialects, with a premium ranging from 1.38 for the Zurich dialect, or _Züritüütsch_, to 1.59 for the Bernese _Bärndütsch_. In fact, English has a lower premium than any Swiss dialect (1.35 on FLORES-200, Table 2), and the premium for Bernese German is close to those of the linguistically more distant Swedish (1.64) and Norwegian Bokmål (1.65). An example from SwissDial shows that the sentence "Like he's waiting for something" has a tokenization almost twice as long in Bernese German as in High German.
The fact that the GottBERT tokenizer results in better parity for English, Swedish and Norwegian Bokmål than for Swiss German dialects highlights that it likely does not pick out stable linguistic constructs.
Arabic dialects. Similarly to Swiss German, Arabic is usually spoken in diglossic speech communities, where Modern Standard
\begin{table}
\begin{tabular}{l r} \hline \hline Region & GottBERT parity \\ \hline High German & 1.00 \\ Zurich & 1.38 \\ St. Gallen & 1.40 \\ Basel & 1.41 \\ Graubünden & 1.44 \\ Luzern & 1.52 \\ Aargau & 1.53 \\ Wallis & 1.58 \\ Bern & 1.59 \\ \hline \hline \end{tabular}
\end{table}
Table 4: GottBERT tokenizer premiums on the SwissDial dataset for **Swiss German dialects**. The premium is computed with respect to High German.
\begin{table}
\begin{tabular}{l r l r} \hline \hline City & ArabicBERT & City & ArabicBERT \\ \hline Jeddah & 0.91 & Sanaa & 1.01 \\ Doha & 0.92 & Beirut & 1.02 \\ Riyadh & 0.92 & Benghazi & 1.02 \\ Muscat & 0.94 & Cairo & 1.03 \\ Basra & 0.95 & Sfax & 1.03 \\ Salt & 0.95 & Tripoli & 1.05 \\ Baghdad & 0.96 & Aswan & 1.06 \\ Damascus & 0.97 & Alexandria & 1.06 \\ Aleppo & 0.97 & Tunis & 1.06 \\ Jerusalem & 0.97 & Algiers & 1.07 \\ Khartoum & 0.98 & Mosul & 1.10 \\ Amman & 0.99 & Fes & 1.11 \\ Std. Arabic & 1.00 & Rabat & 1.17 \\ \hline \hline \end{tabular}
\end{table}
Table 5: ArabicBERT tokenizer premiums on the MADAR dataset for **Arabic dialects**. The premium is computed relative to Standard Arabic.
Arabic is spoken alongside at least one prestigious vernacular particular to the country or region (Bassiouney, 2009). As both Standard Arabic and its dialects are commonly used in written communication, it is vital that tokenizers handle them equally well.
To assess the performance of Arabic tokenizers, we compare the tokenization lengths of ArabicBERT (Safaya et al., 2020) across 25 Arabic dialects. To this end, we use the MADAR parallel corpus of Arabic dialects (Bouamor et al., 2018).
Table 5 shows the premiums relative to Standard Modern Arabic. The premium varies from 0.91 for the Jeddah dialect to 1.17 for the Rabat dialect. This is significantly lower than the premium for English (1.83 on FLORES-200, Table 2). The range is also much smaller than for the Swiss German dialects, and approximately half of the considered dialects have a lower premium than Standard Modern Arabic. Therefore, one could say that the tokenizer of ArabicBERT achieves tokenization parity for these 25 Arabic vernaculars. This is likely because the corpus and vocabulary set on which ArabicBERT was trained contained dialectal Arabic. It is also possible that Arabic dialects are closer to Modern Standard Arabic and more mutually intelligible than Swiss German dialects are to High German (Ceplo et al., 2016; Trentman and Shiri, 2020). Still, this difference between the parity for Swiss and Arabic dialects indicates that including a broader set of vernaculars and dialects in the corpus results in improved tokenization parity.
Japanese dialects. Japanese also has a number of regional dialects (Hattori, 1973). We compare the tokenization parity of BERT Japanese (Tohoku NLP Group, 2019) across them. We employ the CPJD dataset by Takamichi and Saruwatari (2018), which contains transcriptions of voice recordings of 250 sentences across 20 dialects.
The results in Table 6 show that the premium compared to Standard Japanese (Tokyo dialect) ranges from 1.01 (for Saitama prefecture, neighbouring Tokyo) to 1.15 (for Morokata-ben and Okayama-ben). These are all significantly lower than the premium for English (1.49, as shown in Table 2). Therefore, similarly to ArabicBERT, this is an example of a tokenizer being relatively well-aligned with the dialects. This is likely because Japanese dialects are more closely related to Standard Japanese, and more intelligible to its speakers (Yamagiwa, 1967), than the Swiss dialects are to High German speakers.
French-based creoles. We also examine how well the CamemBERT tokenizer handles creoles that are lexically based on French. The premium for Mauritian Creole is 1.20 using the MorisienMT parallel corpus Dabre and Sukhoo (2022). The premium for Haitian Creole is 1.64 when using the QEDv2 corpus Tiedemann (2012); Abdelali et al. (2014). Haitian Creole is also represented in the FLORES-200 dataset, where the premium relative to French is 1.58. This is significantly larger than for linguistically more distant languages such as English (1.20), Pangasinan (1.49) and Nigerian Fulfulde (1.54). Therefore, CamemBERT is not well-placed to tokenize French-related creoles despite the model being trained for French.
Summary. For Swiss German and the Mauritian and Haitian Creoles, we observed large differences in tokenization lengths compared respectively to High German and French. Therefore subword tokenizers might not be able to generalize to language varieties, such as dialects, pidgins and creoles. The tokenizers of ArabicBERT and BERT Japanese, however, are close to parity across various dialects of both languages and have lower premiums for the dialects than for English. This is most likely due to the good representation of the dialects in the training dataset as well as the dialects being more linguistically close to the respective standard languages.
### Parity for Multilingual Models
There has also been a growing interest in multilingual language models, particularly for translation Dabre et al. (2020). As these models are intended to support a variety of languages, one would expect them to be close to tokenizer parity.
In this section, we compare five different multilingual models. XLM-R Conneau et al. (2020) is based on RoBERTa Liu et al. (2019) and has been designed for multilingual masked language modelling or fine-tuning. M2M100 Fan et al. (2021) is a many-to-many translation model across 100 languages. MBart50 Liu et al. (2020); Tang et al. (2020) is a multilingual encoder-decoder model based on BART Lewis et al. (2020) which can handle over 50 languages. We also study mT5 Xue et al. (2020), which is a multilingual modification of T5 Raffel et al. (2020). All of these models use the SentencePiece tokenizer with upsampling for rare languages.
The final model, BLOOM Scao et al. (2022), is trained on 46 natural and 13 programming languages. It uses byte-level BPE instead of SentencePiece and is designed to maintain similar ratios of tokens per word for each language as reference monolingual tokenizers.
Thanks to the byte-level BPE tokenization, BLOOM is the only model encoding all languages without needing too many UNK tokens (see Table 7). The other four models fail to encode at least one language. mT5, for example, fails to encode Santali even though it has the byte_fallback SentencePiece feature enabled.6
Footnote 6: The byte_fallback option enables decomposition of unknown pieces into UTF-8 byte pieces.
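The encodability criterion used here (and the 10% UNK threshold mentioned in the caption of Figure 2) can be approximated as in the following sketch; the model identifier and the threshold are assumptions for illustration.

```python
# Sketch: share of tokens that fall back to the UNK token for a given tokenizer;
# languages above a chosen threshold (e.g. 10%) are treated as not encodable.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")  # model id assumed

def unk_fraction(sentences):
    ids = [i for s in sentences for i in tokenizer(s, add_special_tokens=False)["input_ids"]]
    if tokenizer.unk_token_id is None or not ids:
        return 0.0
    return sum(i == tokenizer.unk_token_id for i in ids) / len(ids)

sentences = ["Example sentence in the language under test."]  # placeholder corpus
print(f"UNK fraction: {unk_fraction(sentences):.1%}")
```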
All five models have languages with premiums larger than 4. BLOOM does encode all languages but has high premiums for some, such as Dzongkha (7.36), Shan (12.06) and Santali (12.71). Still, all models are better than the English-centric ones in Table 1. Figure 2 shows how XLM-R is much closer to parity than RoBERTa (on which it is based), over all languages it can encode. However, none of the models uniformly reaches parity across all languages. Therefore even models which are intentionally designed to be multilingual suffer from a lack of tokenization parity.
Summary.Multilingual models can improve tokenization parity for different languages, but challenges remain in achieving tokenization parity across all languages.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline & XLM-R & M2M100 & MBart50 & mT5 & BLOOM \\ \hline Bulgarian & 1.16 & 1.23 & 1.16 & 1.28 & 2.49 \\ Chinese (Simp.) & 0.97 & 1.05 & 0.97 & 0.92 & 0.95 \\ Dzongkha & — & — & — & 4.24 & 7.36 \\ English & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ Indonesian & 0.94 & 0.98 & 0.94 & 1.08 & 0.96 \\ Italian & 1.19 & 1.25 & 1.19 & 1.34 & 1.62 \\ Japanese & 1.11 & 1.20 & 1.11 & 0.90 & 1.81 \\ Kabiyè & 2.98 & 2.71 & 2.98 & 2.83 & 3.34 \\ Santali & — & — & — & — & 12.71 \\ Shan & 4.43 & 4.63 & 4.43 & 3.28 & 12.06 \\ Std. Arabic & 1.18 & 1.29 & 1.18 & 1.35 & 1.14 \\ Std. Tibetan & — & — & — & 3.68 & 6.66 \\ Uyghur & 1.41 & 3.00 & 1.41 & 2.57 & 3.67 \\ Yue Chinese & 0.93 & 1.03 & 0.93 & 0.95 & 0.93 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Tokenizer premiums on the FLORES-200 dataset for **multilingual models**. The languages that are in the top or bottom two for any tokenizer as well as the ones discussed in the text are shown. The premium is computed with respect to English.
### Parity for Byte-level Tokenization Models
The previous section showed that BLOOM -- the only model using a tokenizer built up from byte-level representation-- is also the only multilingual model encoding all languages in the FLORES-200 dataset. Therefore, byte-level representation, where the input and output of a model are UTF-8 encodings, seems to be key for multilingual support, as any Unicode codepoint can be represented even if not observed during training.
BLOOM uses a BPE-learned vocabulary on top of the UTF-8 encoding. It is possible, though, to skip the vocabulary-building and directly use the 256 possible byte values, which allows for full end-to-end training. Choe et al. (2019) showed that this approach can have similar performance to word-level tokenized models, given sufficient model capacity. However, a concern is the increased length of the input due to the quadratic complexity of transformers.
Recently, models working around this issue have been proposed. CANINE (Clark et al., 2022) is a large model that operates at the Unicode codepoint level rather than the byte level. The CANINE tokenizer is thus equivalent to the UTF-32 encoding, resulting in an implicit tokenizer with a vocabulary of 1,114,112. ByT5 (Xue et al., 2022), on the other hand, uses the UTF-8 encoding, _i.e._, an implicit vocabulary of 256 tokens.7 This model incorporates architectural modifications that enable efficient handling of byte-level inputs.
Footnote 7: To be consistent, we will refer to the characters and bytes in the encoding of the CANINE and ByT5 tokenizers as _tokens_ as they fulfil a similar role.
Although these byte-level models can represent any Unicode codepoint without an explicit tokenization step, there are still significant tokenization disparities. For instance, CANINE exhibits tokenization premiums ranging from 0.31 for Yue Chinese to 1.42 for Shan, relative to English (see Table 8). However, simply measuring the parity relative to English conceals the fact that Shan has a premium of 4.58 relative to Yue Chinese. This can be attributed to the fact that CANINE provides a single token for each Unicode codepoint, which results in Chinese being more token-efficient (with premiums ranging from 0.31 to 0.34 relative to English for the three Chinese languages) as each character is treated as a single token. However, this encoding puts Shan at a disadvantage, as its encoding relies on diacritics represented as separate Unicode codepoints. Other languages, such as Tok Pisin and Tumbuka, which use the Latin script but require more characters than English for the same text, also face similar challenges.
Tokenization disparity is also present in the ByT5 model. The tokenization premium for ByT5 ranges from 0.87 (for Yue Chinese) to
\begin{table}
\begin{tabular}{l r r} \hline \hline & \multicolumn{1}{c}{CANINE} & \multicolumn{1}{c}{ByT5} \\ & UTF-32 bytes & UTF-8 bytes \\ \hline Bulgarian & 1.04 & 1.89 \\ Burmese & 1.24 & 3.51 \\ Chinese (Simplified) & 0.34 & 0.93 \\ Chinese (Traditional) & 0.32 & 0.89 \\ Dzongkha & 1.25 & 3.64 \\ English & 1.00 & 1.00 \\ Italian & 1.18 & 1.19 \\ Japanese & 0.44 & 1.27 \\ Shan & 1.42 & 3.94 \\ Standard Arabic & 0.88 & 1.60 \\ Standard Tibetan & 1.13 & 3.31 \\ Tok Pisin & 1.28 & 1.28 \\ Tumbuka & 1.30 & 1.32 \\ Yue Chinese & 0.31 & 0.87 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Tokenizer premiums on the FLORES-200 dataset for **byte-level models**. The languages that are in the top or bottom two for any tokenizer as well as the ones discussed in the text are shown. The premium is computed with respect to English.
Figure 2: Comparison of the tokenization premiums for XLM-R and RoBERTa for the subset of languages that XLM-R encodes with less than 10% of tokens mapped to the UNK token.
3.94 (for Shan). The issue of some languages using more characters than others persists, as illustrated by Tok Pisin and Tumbuka having premiums similar to those for CANINE. Moreover, the variable-width UTF-8 encoding of Unicode characters in ByT5 creates another source of unequal treatment. ASCII characters, which are sufficient for English, require only one byte. Other Latin script characters, as well as Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic and Syriac, require two bytes, while Chinese, Japanese and Korean characters require three bytes (see Figure 1). Therefore, the tokenization of Chinese and Japanese is about three times as long for ByT5 as it is for CANINE (Table 8). Shan's premium of 3.94 is due to the fact that all its consonants and diacritics require three bytes, resulting in words being encoded with more tokens. For example, the Shan word for "you" -- which ChatGPT and GPT-4 encode with 9 tokens (Section 3) -- is encoded by ByT5 as 12 tokens, whereas the corresponding English word ("you") requires 3 tokens. The situation is similar for other languages like Dzongkha, Tibetan and Burmese.
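These per-script costs are easy to verify directly, since in Python `len(s)` counts Unicode codepoints (roughly what CANINE's implicit tokenizer sees), while `len(s.encode("utf-8"))` counts the UTF-8 bytes that ByT5 consumes. The sample words below are illustrative and not taken from FLORES-200.

```python
# Codepoints (CANINE-style) vs. UTF-8 bytes (ByT5-style) for a few scripts.
samples = {
    "English (ASCII)": "you",   # ASCII: 1 byte per character
    "Cyrillic": "вие",          # 2 bytes per character
    "Arabic": "أنتم",           # 2 bytes per character
    "Chinese": "你们",           # 3 bytes per character
    "Burmese": "သင်",            # base letters plus combining marks, 3 bytes each
}
for name, s in samples.items():
    print(f"{name:16s} codepoints={len(s):2d}  utf8_bytes={len(s.encode('utf-8')):2d}")
```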
**Summary.** Byte-level models also fail to achieve parity among the languages from FLORES-200, exhibiting premiums of over 4 between some language pairs. There are two sources of multilingual tokenizer disparities. First, there are natural differences in the number of characters used in different languages to communicate the same content. Second, the UTF-8 standard uses different numbers of bytes to encode single codepoints of different scripts.
## 6 Fairness Implications of Tokenization Length Differences
We showed that no matter whether one uses subword, multilingual, or byte-level tokenization, none of the tokenizers gets close to parity for all languages in FLORES-200. Furthermore, there are languages, notable examples being Burmese, Dzongkha, Shan and Tibetan, which consistently have longer encoded sequences across all tokenizers. These variations in tokenization length are not merely an interesting phenomenon: they mean that some language communities face more barriers to accessing LLM services simply because of the language they use, with this lack of tokenization parity leading to unfairness in the cost to access language models, the latency of the service and the amount of data that can be processed.
### Cost
As LLMs are often too expensive to train and too large to run for most businesses, research institutions and end users, it is becoming increasingly common to access them as paid API services. One pricing approach, employed by OpenAI at the time of writing,8 is to charge per token. The price can be determined by the number of tokens provided by the user, the ones generated by the system, or the sum of both. Therefore, the tokenization premiums discussed in Section 5 directly map to cost premiums. For ChatGPT and GPT-4, the cost to process a text in German or Italian is about 50% higher than to process the same text in English (Table 1). Processing Tumbuka or Bulgarian is more than 150% more costly. Using them in Dzongkha, Odia, Santali or Shan, the most expensive languages for these services, costs more than 12 times as much as in English.
Footnote 8: [https://openai.com/pricing](https://openai.com/pricing)
Another pricing strategy is to charge per Unicode character: the approach currently taken by the Google Cloud Natural Language service.9 They charge per 1,000 Unicode characters, hence the tokenization does not affect the pricing. However, as we showed in Section 5.5 and Table 8, the same content can have very different lengths when measured in Unicode characters (as CANINE does). Burmese, Dzongkha, Shan, Tok Pisin or Tumbuka require more than 4 times more characters than Yue Chinese for the same text, resulting in a proportional cost difference. Therefore, both the per-token and the per-character approaches result in large disparities in the cost for users of different languages to use the exact same service.
Footnote 9: [https://cloud.google.com/natural-language/pricing](https://cloud.google.com/natural-language/pricing)
These differences in pricing do not take into account purchasing power disparity between users of various languages. As languages with high tokenization premiums often have low
purchasing power parity, the cost difference can be further exacerbated.
### Latency
The speed of processing is critical for many applications and vital for the end-user experience. High latency for users of certain languages can result in a suboptimal experience, which may negatively impact the effectiveness of communication. High latency can also result in communication breakdowns, particularly in real-time interactions. For customer support or emergency services, delays in response time can lead to miscommunication or delayed assistance.
As some languages have significantly longer tokenized inputs, they would also experience longer processing times. The transformer attention mechanism has a quadratic complexity in the number of input tokens (Keles et al., 2023). However, the full model architecture contains other submodules and therefore the overall complexity might be different.
To assess the effect of the tokenization length on latency, we plot the computation time needed for RoBERTa (Liu et al., 2019) to process the sentences from FLORES-200 against the tokenization lengths across all 200 languages in Figure 3. The processing time appears to be linear in the tokenization length rather than quadratic, showing a strong correlation between sequence length and execution time.
From the range of values on the vertical axis of Figure 3, it is apparent that some languages take almost twice as long to process as others. As expected, English is in the lower left corner, having the shortest tokenization and one of the fastest processing times. Shan is at the other extreme, with the longest tokenization length and execution time (almost twice that of English). We can also observe clear trends dependent on the script used. Latin script and other Greek-derived scripts show the shortest tokenization lengths and processing times, followed by the Chinese-Japanese-Korean (CJK) and Arabic languages. Other predominantly Asian and African scripts have longer tokenization lengths and processing times.
Overall, the differences in processing time closely track the differences in tokenization length. Therefore, tokenization disparities across languages also affect the latency and processing time for text in these languages.
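A simplified version of such a timing experiment is sketched below; the model identifier, CPU execution, single-sentence inputs and the number of repetitions are assumptions, and the original measurement may differ in these details.

```python
# Sketch: relate tokenized length to forward-pass time for a single sentence.
import time
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base").eval()

def length_and_time(sentence, reps=20):
    inputs = tokenizer(sentence, return_tensors="pt")
    n_tokens = inputs["input_ids"].shape[1]
    with torch.no_grad():
        model(**inputs)                       # warm-up run
        start = time.perf_counter()
        for _ in range(reps):
            model(**inputs)
    return n_tokens, (time.perf_counter() - start) / reps

print(length_and_time("Tokenization length affects latency."))
```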
The latency implications of tokenization disparity are not limited to text models. Speech recognition models often produce a sequence of tokens as their output. Similarly, speech synthesis takes tokenized text as input (Latif et al., 2023). Therefore, differences in tokenization affect speech models too.
### Long-term Modelling
Transformer models have difficulty processing long inputs. Given that the size of the input is contingent upon the tokenization process, longer tokenized inputs may be harder for language models to adequately reason over. This may result in reduced abilities or limited applicability for languages with high tokenization premiums.
For example, RoBERTa has a fixed block size of 512, GPT-2 has 768, 1024, 1280, or 1600 (Radford et al., 2019) and GPT-4 comes
Figure 3: Average processing time and length of the tokenized inputs of the RoBERTa model for the sentences from the FLORES-200 dataset. For each language, we tokenized each sentence independently, then computed the length and averaged the forward time of RoBERTa across 20 independent experiments. The script family categorization is only for illustration purposes.
in 8,000 and 16,000 context variants.10 These models cannot process inputs longer than that. Therefore, one can process less than a tenth as much content in languages like Burmese and Dzongkha as in English.
Footnote 10: [https://openai.com/pricing](https://openai.com/pricing)
Alongside inconveniencing the users of these languages, this can also result in diminished performance of automated systems, such as content moderation. Reliable content moderation is crucial for tackling hate speech, and diminished performance has already been linked to failures to prevent its spread (Stecklow, 2018; Facebook, 2021). Therefore, reduced long-term modelling capabilities for some languages could have severe real-world impacts.
## 7 Towards Multilingual Tokenization Fairness
Section 6 showed that a high tokenization premium for a language leads to increased cost and latency and decreased capacity for long-term modelling. At the same time, balanced tokenizers have been shown to result in better performance on tasks such as translation (Zhang et al., 2022). To ensure equitable treatment of all languages, it is imperative that we devise multilingually fair tokenizers. In this section, we argue that training language models from scratch with a multilingually fair subword tokenizer is the only approach that can effectively address all three aspects of tokenization unfairness: cost, latency and long-term modelling.
Subword tokenization is necessary to achieve parity.In Section 5.5, we showed that neither character-level nor byte-level input representation can achieve tokenization parity as languages require different numbers of characters to represent the same content. Additionally, the UTF-8 standard results in differences in the number of bytes required to encode codepoints from different scripts, further complicating the issue. As a result, neither character-level nor byte-level input representation can be fully fair.
Therefore, a variation of subword tokenization is necessary to achieve tokenization parity. For example, Chinese characters could be individual tokens, Latin characters might be represented as tokens with an average length of about 3 characters, while pairs of Burmese characters and their diacritics would be assigned single tokens. Such an approach would account for Chinese requiring about three times fewer characters than English (as shown in Table 8).
A separate tokenizer for determining the processing cost is not sufficient.An easy patch for existing models is to use a separate tokenizer for calculating how much a user should be charged. Using one tokenizer for computing the cost and another to process the input can easily be applied to existing systems without the need to retrain the LLM itself. However, as the tokenizer for the language model is unchanged, this approach would still suffer from the latency or long-term modelling issues. Therefore, to ensure similar processing times and long-term modelling capabilities across languages, the language model has to be trained with a multilingually fair tokenizer.
The tokenization needs to support all Unicode codepoints.Amongst all tokenizers we examine in this paper, the ones which encode all FLORES-200 languages have one thing in common: they build their tokenization on top of a Unicode representation, allowing them to represent all characters. Therefore, a multilingually fair tokenizer should also start from a Unicode (or equivalent) encoding. This could be either the variable-width UTF-8 or the fixed-width UTF-32. However, considering the above point that subword tokenization is necessary, building the vocabulary from UTF-8 would likely result in a smaller dictionary than building it on top of UTF-32. Therefore, the variable-width UTF-8 is likely the more appropriate choice.
Perfect parity might not be possible, but we could improve the status quo.Given the discrete nature of tokenization and its dependence on training corpora, it is unlikely that perfect parity can be achieved across all pairs from a large collection of languages. This may be further complicated by characters and subwords shared amongst languages. For example, imagine an _Englishli_ language that adds "li" after every English word: "Hello li
world li.". This language will never be able to achieve parity with English11. However, we hypothesize that natural languages can reach parity levels close to 1, or at least significantly less than 4.5, which is the lower bound on the worst parity we have seen across the tokenizers in this paper.
Footnote 11: Unless for every English token corresponding to a word there is an Englishli token, _e.g._, “Hello li”, “world li”. However, that would result in a severely bloated vocabulary.
Building a multilingually fair parallel corpus.When constructing the corpus, care should be taken to ensure that there is a diversity of topics included. This is because the tokenizer needs to be able to accurately process a range of subjects, including those that are more technical or specialized in nature. One must ensure that the representation of different topics is balanced, otherwise, the resulting tokenizer might end up being multilingually fair only for a subset of topics. The presence of named entities must also be balanced. For example, in FLORES-200 English-centric names and institutions abound, which might skew the results in favour of English. Additionally, the same sentence can have different translations with varying tokenization lengths. To account for this, a diversity of translations could ensure tokenization fairness across languages. These limitations also hold for the results in this paper. Hence, developing a well-curated and diverse parallel corpus is crucial for the development and evaluation of a multilingually fair tokenizer.
Summary.Consequently, to achieve multilingual tokenization fairness, one would first need to design a well-balanced and representative parallel corpus. Then, the tokenization procedure should start by encoding the input with one of the Unicode standards and then building a subword tokenizer on top of it while ensuring parity across a set of languages. The language model has to be trained using this fair tokenizer. This approach, combined with per-token pricing, will also result in cost parity. Furthermore, models trained with such a tokenizer will have similar processing times across languages, effectively alleviating the latency unfairness as well. Finally, as the same content would have similar lengths in different languages, this would also result in a similar capability to model long-term dependencies. Unfortunately, this also means that to alleviate these concerns, one needs to train a model from scratch using this fairer tokenizer. While such an approach would possibly be suboptimal for any individual language, we expect that this effect would be negligible due to the diminishing returns of enlarging a tokenizer vocabulary.
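A hedged sketch of this recipe: train a single byte-level BPE vocabulary on a corpus with line-aligned translations and the same amount of text per language, then measure the resulting premiums. The file names and vocabulary size are placeholders, and the idea that balanced data alone gets close to parity is itself an assumption; in practice the per-language sampling weights would likely need to be tuned iteratively.

```python
# Sketch: train a byte-level BPE tokenizer on a balanced parallel corpus and
# measure each language's premium relative to English.
from tokenizers import ByteLevelBPETokenizer

files = {"eng": "eng.txt", "bul": "bul.txt", "mya": "mya.txt"}  # placeholder corpus files

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=list(files.values()), vocab_size=64_000)

def total_tokens(path):
    lines = open(path, encoding="utf-8").read().splitlines()
    return sum(len(tokenizer.encode(line).ids) for line in lines)

ref = total_tokens(files["eng"])
for lang, path in files.items():
    print(f"{lang}: premium vs. English = {total_tokens(path) / ref:.2f}")
```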
## 8 Related Works
Fairness and bias in language models.The rapid increase in the size of language models has raised concerns regarding their biases and unfairness (Bender et al., 2021). For example, Bolukbasi et al. (2016), May et al. (2019) and Nadeem et al. (2021) showed that stereotypes and biases exist in language models, while Magee et al. (2021) identified the presence of intersectional biases which may be resistant to debiasing techniques. Language models were also shown to rely on social biases in question answering (Parrish et al., 2022). Interestingly, Gururangan et al. (2022) point out that datasets treat one type of English as being of higher quality depending on the location of the writer rather than on factuality or literary acclaim. Moreover, Ramesh et al. (2023) highlighted the need to consider fairness issues of languages other than English, as they may have distinct sources of bias and solutions for English may not be applicable.
Multilingual performance.One approach to ensure similar multilingual performance is to frame languages as entities as recently proposed by Choudhury and Deshpande (2021). Another method is to separately train vocabularies for different language clusters to balance cross-lingual and language-specific tokens (Chung et al., 2020). Still, multilingual models struggle to deliver on the promises of deep transfer learning for lower-resourced languages (Virtanen et al., 2019) and perform differently depending on the script and resource level of the language (Bang et al., 2023). Ahuja et al. (2023) found that generative models perform better on higher-resource languages and languages that use the Latin script. They hypothesise that this may be due to the length of context that can be provided for some languages. Finally, Zhang et al. (2022) show that a balanced tokenizer corpus results in better translation performance.
Measuring the tokens needed to encode text.Measuring the number of tokens needed to encode text is a complex problem and previous works have proposed different approaches to tackle it. For instance, Zhang et al. (2022) suggest using the ratio of the average sentence length in tokens to the sentence length in characters as a measure of closeness to the character level. However, this method may not be suitable for comparing languages due to differences in sentence length across languages. On the other hand, Acs (2019) and Scao et al. (2022) redefine the notion of fertility12 as the number of tokens created per word, but this method may not be effective for comparing languages due to differences in semantic content per word. It is also difficult to apply to languages where word delineation is less straightforward. Rust et al. (2021) show that mBERT Devlin et al. (2019) has much higher fertility for some languages compared to others, with English having the lowest. This is in line with our findings of English receiving special treatment. They also show that models trained with monolingual tokenizers outperform their models with multilingual tokenizers. However, to the best of our knowledge, we are the first to leverage a parallel corpus to compare tokenization lengths across languages.
Footnote 12: Fertility is a notion from statistical machine translation referring to the phenomenon that one word in the input language may translate into a different number of words in the output language (_e.g._, “I” does not map to any word in the Italian translation “Vado a scuola” of “I go to school”). However, this is a property of the differences between natural languages rather than a phenomenon related to tokenization.
## 9 Conclusion
This paper highlights the significant disparities in tokenization across different languages which can lead to unequal treatment and disadvantages for certain language communities. The findings reveal that even tokenizers explicitly trained for multilingual support exhibit drastic differences in tokenization lengths with variations of up to 13 times. Furthermore, character-level and byte-level models also demonstrate encoding length discrepancies of over 4 times for specific language pairs. These disparities have important real-world implications including increased costs for accessing commercial language services, longer processing times and limitations on the amount of contextual information provided to language models. To address these issues, we propose the development of multilingually fair tokenizers for future language models emphasizing the importance of ensuring comparable performance and accessibility across supported languages. By achieving tokenization parity, we can mitigate inequalities and promote fair access to language technologies across diverse linguistic communities.
## Acknowledgements
We would like to thank Puyu Wang, Francisco Eiras, Ambre Bertrand and Carmen Scheidemann for their linguistic advice. We also extend special gratitude to Shinnosuke Takamichi and Hiroshi Saruwatari for open-sourcing the CPJD corpus for this project.
AB has received funding from the Amazon Research Awards. This work is supported by a UKRI grant Turing AI Fellowship (EP/W002981/1) and the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems (EP/S024050/1). We also thank the Royal Academy of Engineering and FiveAI. |
2303.09382 | Exponential Decay Rate of Linear Port-Hamiltonian Systems. A Multiplier
Approach | In this work, the multiplier method is extended to obtain a general lower
bound of the exponential decay rate in terms of the physical parameters for
port-Hamiltonian systems in one space dimension with boundary dissipation. The
physical parameters of the system may be spatially varying. It is shown that
under assumptions of boundary or internal dissipation, the system is
exponentially stable. This is established through a Lyapunov function defined
through a general multiplier function. Furthermore, an explicit bound on the
decay rate in terms of the physical parameters is obtained. The method is
applied to a number of examples. | Luis A. Mora, Kirsten Morris | 2023-03-16T15:12:53Z | http://arxiv.org/abs/2303.09382v1 | # Exponential Decay Rate of Linear Port-Hamiltonian Systems. A Multiplier Approach
###### Abstract
In this work, the multiplier method is extended to obtain a general lower bound of the exponential decay rate in terms of the physical parameters for port-Hamiltonian systems in one space dimension with boundary dissipation. The physical parameters of the system may be spatially varying. It is shown that under assumptions of boundary or internal dissipation the system is exponentially stable. This is established through a Lyapunov function defined through a general multiplier function. Furthermore, an explicit bound on the decay rate in terms of the physical parameters is obtained. The method is applied to a number of examples.
Boundary Dissipation, Decay Rate, Distributed Parameter Systems, Exponential Stability, Partial Differential Equations, Port-Hamiltonian Systems, Infinite-dimensional system
## 1 Introduction
Exponential stability is a desirable property of most systems, including those modelled by partial differential equations. As shown in such works as [1, 2, 3] and [4], exponential decay of a partial differential equation through boundary control or dissipation is directly related to the exact observability of the system. Furthermore, determining not only exponential stability but also an expression for the exponential decay rate in terms of the system parameters is of theoretical interest, and also of practical interest, for example when analyzing control system performance. An important strategy to obtain an explicit expression for the exponential decay rate of infinite-dimensional dynamical systems is the multiplier method [5, 6, 7, 8, 9, 10]. Several approaches to the multiplier method can be found in the literature. For example, in [1] the system dynamics are multiplied by \(m(x)\) and the state variables, and integrated over space and time to derive the exponential decay rate of the state variable norm. Alternatively, the multiplier function is used to build an auxiliary Lyapunov functional whose exponential decay is related to the decay of the system energy; see the exposition in [11].
A port-Hamiltonian formulation of boundary-controlled distributed parameter systems was initially introduced in [12, 13] and extended to problems with internal dissipation in [14]. Sufficient conditions for the well-posedness of linear PHS in one-dimensional spatial domains were established in [15]. Exponential stability of one-dimensional boundary controlled port-Hamiltonian systems has been studied in a number of works, including [16, 17, 18, 19, 20]. In these works, sufficient conditions that guarantee exponential stability are obtained. An explicit lower bound for the exponential decay rate of the energy of a Timoshenko beam with boundary and internal dissipation was obtained in [21].
In this work, we extend the multiplier method to obtain a general lower bound of the exponential decay rate in terms of the physical parameters for general port-Hamiltonian systems in one space dimension with boundary dissipation. The physical parameters of the system may be spatially varying. A formal description of the port-Hamiltonian systems under study is provided in Section 2. The main results are presented in Section 3. We show that under assumptions of boundary or internal dissipation, the system is exponentially stable. This is established by considering general multiplier functions \(m(x)\). In previous work [22], a linear multiplier function, \(m(x)=x-a\), was used to obtain an explicit bound on the decay rate for port-Hamiltonian systems with constant coefficients and \(P_{0}=0\), \(G_{0}=0\). This result was illustrated by obtaining an explicit bound on the exponential decay rate of a boundary damped piezo-electric beam with magnetic effects. The approach was extended in [23] to systems with \(P_{0}\neq 0\) and/or \(G_{0}\neq 0\), such as in a Timoshenko beam. However, the result in [23] is restricted to systems whose physical parameters satisfy several conditions. Here, by considering more general multiplier functions, it is shown that a wider class of systems, including those with variable material parameters, is exponentially stable. Furthermore, an explicit bound on the decay rate in terms of the physical parameters is obtained. In Section 4, we apply the method to a number of examples. A preliminary analysis of Example A, the boundary-damped wave equation with a linear multiplier function, appeared in [22]; here we compare the use of a linear and an exponential multiplier. The use of the linear multiplier function for the simple wave equation is well known; here we not only compare different multiplier functions but also, regarding the boundary damping as a control variable, choose the dissipation to optimize the decay rate. Example B applies our result to a wave equation with variable cross-section and material parameters. In Example C, the Timoshenko beam, the result in [23] is extended to beams with general parameters. One lesson from these examples is that the decay rate obtained depends on the choice of multiplier function. Conclusions are presented in Section 5.
## 2 Port-Hamiltonian systems (PHS)
Consider a one-dimensional spatial domain \(\Omega=[a,b]\subset\mathbb{R}\). Denote by \(\mathbf{z}(x,t)\) the \(n\) state variables of a system on \(\Omega\). In this work, the following class of linear boundary controlled port-Hamiltonian systems [15] is considered:
\[\frac{\partial\mathbf{z}(x,t)}{\partial t}-\left[P_{1}\frac{\partial}{\partial x}+ [P_{0}-G_{0}]\right]Q(x)\mathbf{z}(x,t)=0 \tag{1}\]
where \(P_{1}=P_{1}^{\top}\in\mathbb{R}^{n\times n}\) is invertible, \(Q(x)=Q^{\top}(x)>0\in H^{1}([a,b],\mathbb{R}^{n\times n})\), \(P_{0}=-P_{0}^{\top}\in\mathbb{R}^{n\times n}\), \(G_{0}=G_{0}^{\top}\geq 0\in\mathbb{R}^{n\times n}\), and \(W_{1}\), \(W_{2}\) and \(\tilde{W}_{1}\) are \(\frac{n}{2}\times n\) real matrices. Defining
\[\mathbf{u}_{b}(t) =W_{1}Q(b)\mathbf{z}(b,t), \tag{2}\] \[\mathbf{y}_{b}(t) =\tilde{W}_{1}Q(b)\mathbf{z}(b,t)\] (3) \[\mathbf{u}_{a}(t) =W_{2}Q(a)\mathbf{z}(a,t), \tag{4}\]
the boundary conditions are, for some \(K=K^{\top}>0\in\mathbb{R}^{\frac{n}{2}\times\frac{n}{2}}\)
\[\mathbf{u}_{a}(t)=0,\quad\mathbf{u}_{b}(t)+K\mathbf{y}_{b}(t)=0 \tag{5}\]
or equivalently,
\[W_{2}Q(a)\mathbf{z}(a,t)=0,\]
\[W_{1}Q(b)\mathbf{z}(b,t)+K\tilde{W_{1}}Q(b)\mathbf{z}(b,t)=0.\]
The dissipative boundary condition at \(x=b\) can arise through natural boundary dissipation [24, e.g.] or as a controlled feedback with a measurement \(\mathbf{y}_{b}\) and controlled input \(\mathbf{u}_{b}\).
It will be assumed throughout that \(W_{1}\), \(W_{2}\) and \(\tilde{W}_{1}\) satisfy the following rank conditions.
\[\mathrm{rank}\left(\begin{bmatrix}0&W_{2}\\ W_{1}&0\end{bmatrix}\right)= n\quad\text{and} \tag{6}\] \[\mathrm{rank}\left(\begin{bmatrix}W_{1}\\ \tilde{W}_{1}\end{bmatrix}\right)= n. \tag{7}\]
This guarantees that system (1) defines a well-posed control system [15, Theorem 2.4].
The total energy of system (1)-(5) is
\[\mathcal{H}(t)= \int_{a}^{b}\frac{1}{2}\mathbf{z}^{\top}(x,t)Q(x)\mathbf{z}(x,t)\mathrm{d}x. \tag{8}\]
Since \(Q(x)>0\) for all \(x\), this defines a norm on \(L^{2}([a,b],\mathbb{R}^{n})\) equivalent to the standard norm. Differentiating (8) along system trajectories, assuming that \(W_{1}^{\top}\tilde{W}_{1}=-W_{2}^{\top}\tilde{W}_{2}=P_{1}\) for some \(\tilde{W}_{2}\in\mathbb{R}^{\frac{n}{2}\times n}\) and defining \(\eta_{K}=\min\mathrm{eig}(K)\),
\[\frac{\mathrm{d}\mathcal{H}(t)}{\mathrm{d}t}= -\int_{a}^{b}\mathbf{z}^{\top}(x,t)QG_{0}Q\mathbf{z}(x,t)\mathrm{d}x\] \[\quad+\frac{1}{2}(\mathbf{u}_{b}^{T}(t)\mathbf{y}_{b}(t)+\mathbf{y}_{b}^{T}(t )\mathbf{u}_{b}(t))\] \[\leq -c_{1}\mathcal{H}(t)-\eta_{K}\|\mathbf{y}_{b}(t)\|^{2} \tag{9}\]
where \(c_{1}>0\) if internal dissipation \(G_{0}>0\) and \(c_{1}=0\) otherwise. Exponential stability of the system when \(G_{0}\) is not positive definite is not obvious.
## 3 Exponential stability
In this section, the multiplier approach (see, for example, [11]) is modified and applied to the class of systems described in the previous section to obtain an explicit expression for the exponential decay rate in terms of the system parameters.
**Lemma 1**: _Let \(\mathbf{z}(x,t)\in L^{2}([a,b],\mathbb{R}^{n})\) be the state of system (1) on interval \(x\in[a,b]\subset\mathbb{R}\) and \(\mathcal{H}(t)\) be the corresponding energy functional. If there exists a scalar functional \(w(t)\) on \([a,b]\) of the state vector \(\mathbf{z}(x,t)\) such that_
\[|w(t)|\leq\frac{1}{\varepsilon_{0}}\mathcal{H}(t) \tag{10}\]
_and_
\[\frac{\mathrm{d}w(t)}{\mathrm{d}t}\leq-\frac{1}{\varepsilon_{1}}\frac{ \mathrm{d}\mathcal{H}(t)}{\mathrm{d}t}-c\mathcal{H}(t) \tag{11}\]
_for some positive \(\varepsilon_{0}\), \(\varepsilon_{1}\) and \(c\), then \(\mathcal{H}(t)\) decays exponentially. Furthermore, defining \(M=\frac{\varepsilon_{0}+\varepsilon}{\varepsilon_{0}-\varepsilon}\) and decay rate \(\alpha=\frac{c\varepsilon\varepsilon_{0}}{\varepsilon_{0}+\varepsilon}\), \(\forall\varepsilon\in[0,\min\{\varepsilon_{0},\varepsilon_{1}\}]\),_
\[\mathcal{H}(t)\leq Me^{-\alpha t}\mathcal{H}(0).\]
Define \(V_{\varepsilon}(t)=\mathcal{H}(t)+\varepsilon w(t)\), where \(\varepsilon\in\mathbb{R}\). Note
\[\mathcal{H}(t)-\varepsilon|w(t)|\leq V_{\varepsilon}(t)\leq\mathcal{H}(t)+ \varepsilon|w(t)|.\]
Also, since \(|w(t)|\leq\frac{1}{\varepsilon_{0}}\mathcal{H}(t)\),
\[\left(1-\frac{\varepsilon}{\varepsilon_{0}}\right)\mathcal{H}(t)\leq V_{ \varepsilon}(t)\leq\left(1+\frac{\varepsilon}{\varepsilon_{0}}\right)\mathcal{H}(t) \tag{12}\]
guaranteeing that \(V_{\varepsilon}(t)\) is non-negative for all \(\varepsilon\in[0,\varepsilon_{0}]\).
Furthermore, using (11),
\[\frac{\mathrm{d}V_{\varepsilon}(t)}{\mathrm{d}t}= \frac{\mathrm{d}\mathcal{H}(t)}{\mathrm{d}t}+\varepsilon\frac{ \mathrm{d}w(t)}{\mathrm{d}t}\] \[\leq \left(1-\frac{\varepsilon}{\varepsilon_{1}}\right)\frac{\mathrm{d }\mathcal{H}(t)}{\mathrm{d}t}-c\mathcal{H}(t)\]
For any \(\varepsilon\leq\varepsilon_{1}\), (12) implies
\[\frac{\mathrm{d}V_{\varepsilon}(t)}{\mathrm{d}t}\leq -\varepsilon c\mathcal{H}(t)\leq-\frac{c\varepsilon}{1+\varepsilon/ \varepsilon_{0}}V_{\varepsilon}(t)\]
obtaining that \(V_{\varepsilon}(t)\leq V_{\varepsilon}(0)e^{-\alpha t}\) where \(\alpha=\frac{c\varepsilon\varepsilon_{0}}{\varepsilon_{0}+\varepsilon}\). Using (12) again, we obtain that \(V_{\varepsilon}(0)\leq\left(1+\frac{\varepsilon}{\varepsilon_{0}}\right)\mathcal{H}(0)\) and \(\mathcal{H}(t)\leq\frac{1}{1-\varepsilon/\varepsilon_{0}}V_{\varepsilon}(t)\). As a consequence,
\[\mathcal{H}(t)\leq\frac{\varepsilon_{0}+\varepsilon}{\varepsilon_{0}-\varepsilon}e^{-\alpha t}\mathcal{H}(0) \tag{13}\]
for all \(\varepsilon\in[0,\min(\varepsilon_{0},\varepsilon_{1})]\), completing the proof.
**Theorem 1**: _Consider the port-Hamiltonian system with boundary dissipation given by (1)-(5). Define_
\[\Psi(x)=\begin{bmatrix}-K\\ I\end{bmatrix}^{\top}\begin{bmatrix}W_{1}\\ \tilde{W}_{1}\end{bmatrix}^{-\top}Q^{-1}(x)\begin{bmatrix}W_{1}\\ \tilde{W}_{1}\end{bmatrix}^{-1}\begin{bmatrix}-K\\ I\end{bmatrix}, \tag{14}\]
\[B(x)=\frac{\partial Q(x)}{\partial x}-Q(x)(P_{0}+G_{0})P_{1}^{-1}+P_{1}^{-1}(P_{0}-G_{0})Q(x), \tag{15}\]
\[A_{s}(x)=\frac{\partial m(x)}{\partial x}Q(x)-m(x)B(x). \tag{16}\]
Also for some \(m(x)\in C([a,b])\) define the auxiliary function of the state \(\mathbf{z}\)
\[w(t)=\frac{1}{2}\int_{a}^{b}m(x)\mathbf{z}^{\top}(x,t)P_{1}^{-1}\mathbf{z}(x,t)\mathrm{d}x \tag{17}\]
Defining \(\varepsilon_{0}=\frac{\eta_{Q}}{\mu_{m}\mu_{P_{1}}}\) and \(\varepsilon_{1}=\frac{2\eta_{K}}{\mu_{m}\mu_{\Psi}}\), if
\[A_{s}(x)>0 \tag{18}\]
then for all \(\varepsilon\in[0,\min\{\varepsilon_{0},\varepsilon_{1}\}]\),
\[\mathcal{H}(t)\leq Me^{-\alpha t}\mathcal{H}(0),\quad M=\frac{\varepsilon_{0}+\varepsilon}{\varepsilon_{0}-\varepsilon},\quad\alpha=\frac{c\varepsilon\varepsilon_{0}}{\varepsilon+\varepsilon_{0}}\,. \tag{19}\]
Using the Cauchy-Schwarz inequality,
\[|w(t)|= \frac{1}{2}\left|\left\langle m(x)\mathbf{z}(x,t),P_{1}^{-1}\mathbf{z}(x, t)\right\rangle\right|\] \[\leq \frac{1}{2}\|m(x)\mathbf{z}(x,t)\|_{L^{2}}\|P_{1}^{-1}\mathbf{z}(x,t)\|_{ L^{2}}\] \[\leq \frac{\mu_{m}\mu_{P_{1}}}{2}\|\mathbf{z}(x,t)\|_{L^{2}}^{2}\leq\frac {\mu_{m}\mu_{P_{1}}}{\eta_{Q}}\mathcal{H}(t)\]
Thus, \(|w(t)|\leq\frac{1}{\varepsilon_{0}}\mathcal{H}(t)\). Similarly,
\[\frac{\mathrm{d}w(t)}{\mathrm{d}t}= \int_{a}^{b}m(x)\mathbf{z}^{\top}(x,t)P_{1}^{-1}\frac{\partial\mathbf{z}( x,t)}{\partial t}\ \mathrm{d}x\] \[= \int_{a}^{b}m(x)\mathbf{z}^{\top}(x,t)P_{1}^{-1}(P_{0}-G_{0})Q(x)\mathbf{ z}(x,t)\ \mathrm{d}x\] \[+\int_{a}^{b}m(x)\mathbf{z}^{\top}(x,t)\frac{\partial Q(x)\mathbf{z}(x,t) }{\partial x}\ \mathrm{d}x \tag{20}\]
Using the identity
\[\frac{1}{2}\frac{\partial}{\partial x}\left(m(x)\mathbf{z}(x,t)^{T}Q( x)\mathbf{z}(x)\right)=m(x)\mathbf{z}^{\top}(x,t)\frac{\partial Q(x)\mathbf{z}(x,t)}{ \partial x}\] \[+\frac{1}{2}\mathbf{z}^{\top}(x,t)\left(\frac{\partial m(x)}{ \partial x}Q-m(x)\frac{\partial Q(x)}{\partial x}\right)\mathbf{z}(x,t)\]
(20) is rewritten as
\[\frac{\mathrm{d}w(t)}{\mathrm{d}t}= \left.\frac{1}{2}m(x)\mathbf{z}^{\top}(x,t)Q(x)\mathbf{z}(x,t)\right|_{a}^ {b}\] \[-\frac{1}{2}\int_{a}^{b}\mathbf{z}^{\top}(x,t)A_{s}(x)\mathbf{z}(x,t)\ \mathrm{d}x\]
where \(A_{s}\) is defined in (16). Since \(A_{s}\) is assumed positive, there exists a \(c>0\) such that
\[A_{s}(x)\geq cQ(x)>0.\]
This implies that
\[\frac{\mathrm{d}w(t)}{\mathrm{d}t}\leq \frac{1}{2}m(b)\mathbf{z}^{\top}(b,t)Q(b)\mathbf{z}(b,t)\] \[-\frac{c}{2}\int_{a}^{b}\mathbf{z}^{\top}(x,t)Q(x)\mathbf{z}(x,t)\ \mathrm{d}x.\]
By assumption (7) \(\begin{bmatrix}W_{1}\\ \tilde{W}_{1}\end{bmatrix}\) is full rank and so \(Q(b)\mathbf{z}(b,t)=\begin{bmatrix}W_{1}\\ \tilde{W}_{1}\end{bmatrix}^{-1}\begin{bmatrix}\mathbf{u}_{b}(t)\\ \mathbf{y}_{b}(t)\end{bmatrix}\). Then, including the boundary dissipation (5) leads to, recalling the definition of \(\Psi\) in (14),
\[\frac{\mathrm{d}w(t)}{\mathrm{d}t}\leq \frac{1}{2}m(b)\mathbf{y}_{b}^{\top}(t)\Psi\mathbf{y}_{b}(t)-c\mathcal{H}(t)\] \[\leq \frac{1}{2}\mu_{m}\mu_{\Psi}|\mathbf{y}_{b}|^{2}-c\mathcal{H}(t)\] \[\leq -\frac{1}{\varepsilon_{1}}\frac{\mathrm{d}\mathcal{H}(t)}{\mathrm{ d}t}-c\mathcal{H}(t).\]
where \(\varepsilon_{1}=\frac{2\eta_{K}}{\mu_{m}\mu_{\Psi}}\). Lemma 1 then implies the bound on the exponential decay of \(\mathcal{H}(t)\) in (19).
**Lemma 2**: _Consider the matrices \(A_{s}(x)\) and \(B(x)\) defined in (16) and (15), respectively. Defining \(m(x)=Ce^{\beta(x-a)}\), if \(\beta\) is sufficiently large then_
\[A_{s}>0\,\forall x\in[a,b]. \tag{21}\]
With \(m(x)=Ce^{\beta(x-a)}\), the matrix \(A_{s}\) can be rewritten as
\[A_{s}=m(x)\left(\beta Q(x)-B(x)\right).\]
Since \(m(x)>0\), condition (21) is satisfied if matrix \(\beta Q(x)-B(x)\) is positive; that is if \(\inf\limits_{x\in[a,b]}\mathrm{eig}\left(\beta Q(x)-B(x)\right)>0\). Since \(\eta_{Q}=\inf\limits_{x\in[a,b]}\mathrm{eig}\left(Q(x)\right)>0\) and recalling \(\mu_{B}=\sup\limits_{x\in[a,b]}\mathrm{eig}\left(B(x)\right),\) if \(\beta\) is chosen large enough that
\[\left(\beta\eta_{Q}-\mu_{B}\right)>0\]
then the required condition is satisfied.
Exponential stability of the class of systems described in Section 2 now follows immediately, along with a bound on the decay rate. Lemma 2 implies that there exists at least one multiplier function, \(m(x)\), such that condition (21) holds, and so the system (1)-(5) is exponentially stable. Furthermore, Theorem 1 can be used to obtain a lower bound on the exponential decay rate for all systems of the form (1)-(5) on the interval \(x\in[a,b]\).
From Lemma 2, there are definitions for \(M(\varepsilon)\) and \(\alpha(\varepsilon)\) for all \(\varepsilon\) on the interval \([0,\min\{\varepsilon_{0},\varepsilon_{1}\}]\), and Theorem 1 provides explicit expressions of \(\varepsilon_{0}\) and \(\varepsilon_{1}\) for system (1)-(5). Using the parametrization \(\varepsilon=\xi\min\{\varepsilon_{0},\varepsilon_{1}\}\) with \(0<\xi<1\) leads to
\[M= \left\{\begin{aligned} &\frac{1+\xi}{1-\xi}&& \text{if }\varepsilon_{0}\leq\varepsilon_{1}\\ &\frac{\eta_{Q}\mu_{\Psi}+2\xi\eta_{K}\mu_{P_{1}}}{\eta_{Q}\mu_{\Psi}-2 \xi\eta_{K}\mu_{P_{1}}}&&\text{otherwise}\end{aligned}\right. \tag{22}\] \[\alpha= \left\{\begin{aligned} &\frac{\xi}{\xi+1}\frac{c\eta_{Q}}{\mu_{P_{1}}\mu_{m}}&& \text{if }\varepsilon_{0}\leq\varepsilon_{1}\\ &\frac{2c\eta_{K}\eta_{Q}\xi}{\mu_{m}\left(\eta_{Q}\mu_{\Psi}+2 \xi\eta_{K}\mu_{P_{1}}\right)}&&\text{otherwise}\end{aligned}\right. \tag{23}\]
Since \(c\) and \(\mu_{m}\) are affected by the multiplier function, an appropriate choice of \(m(x)\) improves the exponential decay rate bound obtained through Theorem 1. Considering \(m(x)=Ce^{\beta(x-a)}\) with \(C>0\), as in the proof of Lemma 2, we obtain \(m(a)=C\), \(\mu_{m}=m(b)=Ce^{\beta(b-a)}\) and
\[A_{s}= m(x)\left(\beta Q(x)-B(x)\right)\] \[\geq m(a)(\beta\eta_{Q}-\mu_{B})\] \[\geq cQ\]
where \(c=\dfrac{C(\beta\eta_{Q}-\mu_{B})}{\mu_{Q}}\). Then, the exponential decay rate bound is given by
\[\alpha=\begin{cases}\dfrac{\xi}{\xi+1}\dfrac{\eta_{Q}e^{-\beta(b-a)}(\beta\eta_{Q}-\mu_{B})}{\mu_{Q}\mu_{P_{1}}}&\text{if }\varepsilon_{0}\leq\varepsilon_{1}\\ \dfrac{2\xi\eta_{K}\eta_{Q}e^{-\beta(b-a)}(\beta\eta_{Q}-\mu_{B})}{\mu_{Q}\left(\eta_{Q}\mu_{\Psi}+2\xi\eta_{K}\mu_{P_{1}}\right)}&\text{otherwise}\end{cases} \tag{24}\]
**Theorem 2**: _The system (1)-(5) is exponentially stable. Furthermore, the decay rate is at least_
\[\alpha=\begin{cases}\dfrac{\xi}{\xi+1}\dfrac{\eta_{Q}^{2}e^{-\left(1+\frac{\mu_{B}}{\eta_{Q}}(b-a)\right)}}{(b-a)\mu_{Q}\mu_{P_{1}}}&\text{if }\varepsilon_{0}\leq\varepsilon_{1}\\ \dfrac{2\xi\eta_{K}\eta_{Q}^{2}e^{-\left(1+\frac{\mu_{B}}{\eta_{Q}}(b-a)\right)}}{(b-a)\mu_{Q}\left(\eta_{Q}\mu_{\Psi}+2\xi\eta_{K}\mu_{P_{1}}\right)}&\text{otherwise}\end{cases} \tag{25}\]
Using the exponential multiplier function from Lemma 2 along with Theorem 1 yields the conclusion that the system is exponentially stable, along with bounds on \(M\) and \(\alpha\). Since \(M\) is independent of \(\beta\), the optimal decay rate is obtained by choosing \(\beta\) to maximize \(\alpha\), that is
\[\beta_{op}=\arg\max_{\beta}\alpha=\dfrac{\eta_{Q}+(b-a)\mu_{B}}{(b-a)\eta_{Q}}= \dfrac{1}{b-a}+\dfrac{\mu_{B}}{\eta_{Q}} \tag{26}\]
Then, the optimal decay rate is obtained by substituting \(\beta_{op}\) into (24).
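To make the use of these bounds concrete, the following sketch evaluates the decay-rate bound of Theorem 1 for the exponential multiplier directly from the constants \(\eta_{Q}\), \(\mu_{Q}\), \(\mu_{B}\), \(\mu_{P_{1}}\), \(\eta_{K}\) and \(\mu_{\Psi}\), and compares a few values of \(\beta\) against \(\beta_{op}\) from (26). The numerical constants plugged in are placeholders, not taken from the examples below.

```python
# Sketch: decay-rate bound from Theorem 1 with the exponential multiplier
# m(x) = C*exp(beta*(x - a)); system constants are placeholders.
import math

def alpha_bound(beta, b_minus_a, eta_Q, mu_Q, mu_B, mu_P1, eta_K, mu_Psi, C=1.0, xi=0.5):
    mu_m = C * math.exp(beta * b_minus_a)        # sup of m(x) on [a, b]
    c = C * (beta * eta_Q - mu_B) / mu_Q         # A_s >= c*Q requires beta*eta_Q > mu_B
    if c <= 0.0:
        return 0.0
    eps0 = eta_Q / (mu_m * mu_P1)
    eps1 = 2.0 * eta_K / (mu_m * mu_Psi)
    eps = xi * min(eps0, eps1)                   # parametrization eps = xi*min{eps0, eps1}
    return c * eps * eps0 / (eps0 + eps)         # decay rate from Lemma 1

params = dict(b_minus_a=1.0, eta_Q=1.0, mu_Q=1.0, mu_B=0.0, mu_P1=1.0, eta_K=1.0, mu_Psi=2.0)
beta_op = 1.0 / params["b_minus_a"] + params["mu_B"] / params["eta_Q"]   # equation (26)
for beta in (0.5 * beta_op, beta_op, 2.0 * beta_op):
    print(f"beta = {beta:.2f},  alpha = {alpha_bound(beta, **params):.4f}")
```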
As shown in Lemma 2, by choosing \(m(x)\) as an exponential function, condition (18) can always be satisfied. Depending on the system, other options for \(m(x)\) may be possible. For example, consider the case \(P_{0}=G_{0}=0\), \(Q(x)=Lx+D>0\), \(\forall x\in[a,b]\), with \(D\) and \(L\) positive definite. Choosing \(m(x)=qx+d\) where \(\dfrac{q}{d}\geq-\dfrac{1}{a}\), the matrix \(A_{s}\) becomes
\[A_{s}= \dfrac{\partial m(x)}{\partial x}Q(x)-m(x)\dfrac{\partial Q(x)}{ \partial x}\] \[= q(Lx+D)-(qx+d)L=qD-dL \tag{27}\]
Then, (18) is satisfied if \(\dfrac{q}{d}>\max\left\{\dfrac{\mu_{L}}{\eta_{D}},-\dfrac{1}{a}\right\}\). This point is illustrated in an example in the next section.
## 4 Examples
### Wave equation with boundary dissipation
Consider the wave equation in an one-dimensional spatial domain
\[\dfrac{\partial}{\partial t}\left(\rho\dfrac{\partial w(x,t)}{ \partial t}\right)= \dfrac{\partial}{\partial x}\left(\tau\dfrac{\partial w(x,t)}{ \partial x}\right)\quad\forall x\in[a,b] \tag{28}\]
with boundary conditions
\[\dfrac{\partial w(a,t)}{\partial t}= 0 \forall t\geq 0 \tag{29}\] \[\tau\dfrac{\partial w(b,t)}{\partial x}+k\dfrac{\partial w(b,t)} {\partial t}= 0 \forall t\geq 0 \tag{30}\]
and \(w(x,0)\in L^{2}([a,b],\mathbb{R})\), where the density and elasticity parameters, \(\rho\) and \(\tau\) respectively, are constant.
Defining \(z_{1}=\dfrac{\partial w(x,t)}{\partial x}\) and \(z_{2}=\rho\dfrac{\partial w(x,t)}{\partial t}\), the wave equation (28) is expressed as the port-Hamiltonian system
\[\dfrac{\partial\mathbf{z}(x,t)}{\partial t}= P_{1}\dfrac{\partial}{\partial x}\left(Q\mathbf{z}(x,t)\right), \qquad\forall x\in[a,b] \tag{31}\]
where \(\mathbf{z}(x,t)=\begin{bmatrix}z_{1}(x,t)&z_{2}(x,t)\end{bmatrix}^{\top}\), \(P_{1}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}\) and \(Q=\begin{bmatrix}\tau&0\\ 0&1/\rho\end{bmatrix}\). Similarly, choosing \(W_{1}=\begin{bmatrix}1&0\end{bmatrix}\), \(W_{2}=\begin{bmatrix}0&1\end{bmatrix}\) and \(\tilde{W}_{1}=\begin{bmatrix}0&1\end{bmatrix}\), the boundary conditions (29)-(30) can be rewritten in the form (2)-(4).
Since \(Q\) is a constant matrix and \(P_{0}=G_{0}=0\), \(A_{s}=\frac{\partial m(x)}{\partial x}Q\). Condition (18) holds if the multiplier function \(m(x)\) is monotonically increasing. Assuming unitary parameters, \(\tau=\rho=1\), and spatial domain length, \(b-a=1\),
\[\mu_{B}=0,\qquad\mu_{P_{1}}=\mu_{Q}=\eta_{Q}=1,\qquad\mu_{\Psi}=k^{2}+1,\]
\[\varepsilon_{0}=\frac{1}{\mu_{m}},\qquad\varepsilon_{1}=\frac{2k}{\mu_{m}(k^{2}+1)},\qquad c=\min_{x\in[a,b]}\frac{\partial m(x)}{\partial x}.\]
Since \(\dfrac{2k}{k^{2}+1}\leq 1\) for all \(k\geq 0\), \(\varepsilon_{1}\leq\varepsilon_{0}\) for any \(k\). Choosing \(\varepsilon=\dfrac{1}{2}\varepsilon_{1}\) we obtain \(M=\dfrac{k^{2}+k+1}{k^{2}-k+1}\) which is independent of the choice of \(m(x)\). With an exponential multiplier function, as in Lemma 2, from (25) we obtain that the decay rate \(\alpha=\dfrac{ke^{-1}}{k^{2}+k+1}\). Alternatively, considering a linear multiplier function, \(m(x)=x-a\), so \(\mu_{m}=c=1\) and the exponential decay rate is \(\alpha=\dfrac{k}{k^{2}+k+1}\). This is a better lower bound for the decay rate than the exponential multiplier function. This point is illustrated in Figure 1.
If the boundary dissipation comes from a control law, then the value of \(k\) should be chosen to optimize the exponential decay rate. For this example the exponential decay rate \(\alpha=\dfrac{k}{k^{2}+k+1}\) is maximized with \(k=1\). Note that this choice
Figure 1: Bound on the exponential decay rate of the wave equation as a function of the boundary dissipation \(k\) for different multiplier functions
of \(k\) is the same value for which no waves are reflected and the energy of the wave equation reaches zero in finite time.
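The comparison in Figure 1 can be reproduced from the closed-form bounds derived above; a minimal sketch for the unit-parameter string (the grid of \(k\) values is arbitrary):

```python
# Decay-rate bounds for the unit-parameter wave equation (b - a = 1),
# comparing the linear and exponential multiplier functions.
import math

def alpha_linear(k):        # bound obtained with m(x) = x - a
    return k / (k**2 + k + 1)

def alpha_exponential(k):   # bound obtained with the exponential multiplier
    return k * math.exp(-1.0) / (k**2 + k + 1)

for k in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"k = {k:4.2f}:  linear = {alpha_linear(k):.4f},  exponential = {alpha_exponential(k):.4f}")

# The linear-multiplier bound k/(k^2 + k + 1) is maximized at k = 1.
best_k = max((k / 100.0 for k in range(1, 1000)), key=alpha_linear)
print(f"argmax over grid: k = {best_k:.2f}")
```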
### Wave equation with variable cross-section and boundary dissipation
In this example, we consider a vibrating string with a non-uniform cross-sectional area \(A(x)\), as shown in Figure 2. The dynamics of the vertical displacements \(w(x,t)\) can be expressed as the wave equation with boundary dissipation described in (28)-(30), where the physical parameters \(\rho(x)=A(x)\rho_{0}\) and \(\tau(x)=A(x)\tau_{0}\) with \(\rho_{0}\) and \(\tau_{0}\) constant, and \(w(x,0)\in L^{2}([0,1],\mathbb{R})\). It will be assumed that \(\tau_{0}=\rho_{0}=1\), boundary dissipation gain \(k=0.5\) and that cross-sectional area
\[A(x)=\frac{10-x}{10}.\]
The port-Hamiltonian formulation of this vibrating string is similar to the previous analysis (31) except that
\[Q(x)=\begin{bmatrix}\dfrac{(10-x)}{10}&0\\ 0&\dfrac{10}{(10-x)}\end{bmatrix}.\]
As a consequence,
\[A_{s}(x)= \dfrac{\partial m(x)}{\partial x}Q(x)-m(x)\dfrac{\partial Q(x)}{ \partial x}\] \[= \begin{bmatrix}\dfrac{m(x)+\frac{\partial m(x)}{\partial x}(10-x )}{10}&0\\ 0&\dfrac{\frac{\partial m(x)}{\partial x}(10-x)-m(x)}{0.1(10-x)^{2}} \end{bmatrix}. \tag{32}\]
Choosing the multiplier function \(m(x)=x\),
\[A_{s}(x)=\begin{bmatrix}1&0\\ 0&\dfrac{10\left(10-2x\right)}{(10-x)^{2}}\end{bmatrix}>0,\quad\forall x\in \left[0,1\right].\]
The problem of finding the maximum \(c\) such that \(A_{s}(x)\geq cQ(x)\) is equivalent to finding the largest \(c\) so that the eigenvalues of \(A_{s}(x)-cQ(x)\) are non-negative; that is so the matrix
\[\begin{bmatrix}1-c\dfrac{10-x}{10}&0\\ 0&\dfrac{10}{(10-x)}\left(\dfrac{10-2x}{10-x}-c\right)\end{bmatrix}\]
is positive semi-definite. The largest such value of \(c\) is \(c=8/9\). Then,
\[\Psi(x)=\dfrac{5}{2\left(10-x\right)}+\dfrac{(10-x)}{10},\]
\[\mu_{Q}=\frac{10}{9},\qquad\eta_{Q}=\frac{9}{10},\qquad\mu_{\Psi}=\frac{5}{4},\]
\[\mu_{P_{1}}=\mu_{m}=1,\qquad\varepsilon_{0}=\frac{9}{10},\qquad\varepsilon_{1}=\frac{4}{5}\,.\]
Finally, choosing \(\varepsilon=\varepsilon_{1}\) a bound on the exponential decay rate is
\[\alpha=\dfrac{32}{85}\approx 0.3765.\]
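The constants in this example can be checked numerically; a short sketch (the grid resolution is arbitrary, and the diagonal structure of \(A_{s}(x)\) and \(Q(x)\) is used to read off the eigenvalue ratios directly):

```python
# Numerical check for the variable cross-section example: largest c with
# A_s(x) >= c*Q(x) on [0, 1], and the resulting decay-rate bound.
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)
# Ratios of the diagonal entries of A_s(x) and Q(x).
ratios = np.minimum(10.0 / (10.0 - xs),               # first diagonal entry
                    (10.0 - 2.0 * xs) / (10.0 - xs))  # second diagonal entry
c = ratios.min()
eps0, eps1 = 9.0 / 10.0, 4.0 / 5.0
eps = eps1                                            # the choice made in the text
alpha = c * eps * eps0 / (eps0 + eps)
print(f"c = {c:.4f} (8/9 = {8/9:.4f}),  alpha = {alpha:.4f} (32/85 = {32/85:.4f})")
```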
### Timoshenko Beam
Consider a Timoshenko beam with variable material parameters on a bar \(x\in[a,b]\). Let \(\rho(x)\), \(\epsilon(x)\) and \(\iota(x)\) be the mass per unit length, Young's modulus, and moment of inertia of the cross section, respectively; \(\iota_{\rho}(x)=\iota(x)\rho(x)\) is the mass moment of inertia of the cross section; and \(\gamma(x)\) and \(\delta(x)\) are the viscous damping coefficients. The shear modulus \(\kappa(x)=\xi G(x)A(x)\), where \(G(x)\) is the modulus of elasticity in shear, \(A(x)\) is the cross sectional area, and \(\xi\) is a constant depending on the shape of the cross section. The parameters \(k_{1}\) and \(k_{2}\) are boundary damping coefficients. This leads to the following partial differential equation
\[\rho(x)\dfrac{\partial^{2}w(x,t)}{\partial t^{2}}= \dfrac{\partial}{\partial x}\left(\kappa(x)\left(\dfrac{\partial w (x,t)}{\partial x}-\phi(x,t)\right)\right)\] \[-\gamma(x)\dfrac{\partial w(x,t)}{\partial t} \tag{33a}\] \[\iota_{\rho}(x)\dfrac{\partial^{2}\phi(x,t)}{\partial t^{2}}= \dfrac{\partial}{\partial x}\left(\epsilon(x)\iota(x)\dfrac{ \partial\phi(x,t)}{\partial x}\right)-\delta(x)\dfrac{\partial\phi(x,t)}{ \partial t}\] \[+\kappa(x)\left(\dfrac{\partial w(x,t)}{\partial x}-\phi(x,t)\right) \tag{33b}\]
with boundary conditions
\[\dfrac{\partial w(a,t)}{\partial t}= \dfrac{\partial\phi(a,t)}{\partial t}= 0 \tag{34a}\] \[\kappa(b)\left(\dfrac{\partial w(b,t)}{\partial x}-\phi(b,t) \right)+k_{1}\dfrac{\partial w(b,t)}{\partial t}= 0\] (34b) \[\epsilon(b)\iota(b)\dfrac{\partial\phi(b,t)}{\partial x}+k_{2} \dfrac{\partial\phi(b,t)}{\partial t}= 0\,. \tag{34c}\]
Set \(z_{1}(x,t)=\rho(x)\dfrac{\partial w(x,t)}{\partial t}\), \(z_{2}(x,t)=\iota_{\rho}(x)\dfrac{\partial\phi(x,t)}{\partial t}\), \(z_{3}(x,t)=\frac{\partial w(x,t)}{\partial x}-\phi(x,t)\), \(z_{4}(x,t)=\frac{\partial\phi(x,t)}{\partial x}\) and \(\mathbf{z}(x,t)=\begin{bmatrix}z_{1}(x,t)&z_{2}(x,t)&z_{3}(x,t)&z_{4}(x,t)\end{bmatrix}\). System (33) can be rewritten in the port-Hamiltonian formulation as in [21] to obtain
\[\dfrac{\partial\mathbf{z}(x,t)}{\partial t}-P_{1}\dfrac{\partial Q(x)\mathbf{z}(x,t)}{ \partial x}-\left[P_{0}-G_{0}\right]Q(x)\mathbf{z}(x,t)=0\]
where
\[P_{1}= \begin{bmatrix}0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\end{bmatrix},\quad P_{0}=\begin{bmatrix}0&0&0&0\\ 0&0&1&0\\ 0&-1&0&0\\ 0&0&0&0\end{bmatrix},\] \[G_{0}= \begin{bmatrix}\gamma&0&0&0\\ 0&\delta&0&0\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix},\quad\text{and}\] \[Q(x)= \begin{bmatrix}\frac{1}{\rho(x)}&0&0&0\\ 0&\frac{1}{\iota_{\rho}(x)}&0&0\\ 0&0&\kappa(x)&0\\ 0&0&0&\epsilon(x)\iota(x)\end{bmatrix}.\]
Similarly, defining
\[W_{1}= \begin{bmatrix}0&0&1&0\\ 0&0&0&1\end{bmatrix},\quad\tilde{W}_{1}=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\end{bmatrix}\quad\text{and}\] \[W_{2}= \begin{bmatrix}1&0&0&0\\ 0&1&0&0\end{bmatrix} \tag{35}\]
the boundary conditions (34) can be written in the standard form (2)-(4).
First, consider an inviscid beam with the same parameters as in [21]; that is \(\gamma(x)=\delta(x)=0\), with \(\rho=0.2\)kg/m, \(\epsilon\iota=1.2\times 10^{-2}\)Nm\({}^{2}\), \(\kappa=4\times 10^{-3}\)N, \(\iota_{\rho}=2\times 10^{-2}\)kgm, \(b-a=0.1\)m and \(k_{1}=k_{2}=k\). This leads to
\[Q =\begin{bmatrix}5&0&0&0\\ 0&50&0&0\\ 0&0&\dfrac{1}{250}&0\\ 0&0&0&\dfrac{1}{75}\end{bmatrix}\] \[B =\begin{bmatrix}0&-50&0&0\\ -50&0&0&0\\ 0&0&0&\dfrac{1}{250}\\ 0&0&\dfrac{1}{250}&0\end{bmatrix}\] \[\Psi =\begin{bmatrix}250k^{2}+\dfrac{1}{5}&0\\ 0&75k^{2}+\dfrac{1}{50}\end{bmatrix}\]
and \(\mu_{P_{1}}=1\), \(\mu_{Q}=\mu_{B}=50\), \(\eta_{Q}=1/250\) and \(\mu_{\Psi}=250k^{2}+1/5\). Choosing a linear multiplier function, \(m=x-a\) with \(\mu_{m}=0.1\), then
\[A_{s}(x)=\begin{bmatrix}5&50(x-a)&0&0\\ 50(x-a)&50&0&0\\ 0&0&\dfrac{1}{250}&\dfrac{a-x}{250}\\ 0&0&\dfrac{a-x}{250}&\dfrac{1}{75}\end{bmatrix}\]
which has eigenvalues
\[\mathrm{eig}(A_{s}(x))=\begin{cases}\dfrac{13\pm\sqrt{36(x-a)^{2}+49}}{1500}\\ \dfrac{55\pm 5\sqrt{400(x-a)^{2}+81}}{2}\end{cases}.\]
Since \(0\leq x-a\leq 0.1\), \(\min\mathrm{eig}(A_{s}(x))\geq\dfrac{65-\sqrt{1234}}{7500}>3.9\times 10^{-3}\). This implies that for sufficiently small \(c>0\), \(\mathbf{z}^{\top}(x,t)A_{s}(x)\mathbf{z}(x,t)\geq c\mathbf{z}^{\top}(x,t)Q\mathbf{z}(x,t)\). More precisely, the eigenvalues
\[\mathrm{eig}(A_{s}-cQ)=\begin{cases}\dfrac{\left(13\pm 7\sqrt{\left(\frac{6(x-a) }{7(1-c)}\right)^{2}+1}\right)(1-c)}{1500}\\ \dfrac{\left(55\pm 45\sqrt{\left(\frac{20(x-a)}{9(1-c)}\right)^{2}+1} \right)(1-c)}{2}\end{cases}\]
need to be non-negative. It is easy to check, through some simple calculations, that this condition is satisfied when \(c\leq 1-\sqrt{\dfrac{1}{10}}\).
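This bound on \(c\) can be verified numerically; a short sketch (the grid resolution is arbitrary):

```python
# Check that A_s(x) - c*Q >= 0 on [a, a + 0.1] for c = 1 - 1/sqrt(10)
# in the inviscid Timoshenko example with the linear multiplier m(x) = x - a.
import numpy as np

Q = np.diag([5.0, 50.0, 1.0 / 250.0, 1.0 / 75.0])
c = 1.0 - 1.0 / np.sqrt(10.0)

def A_s(s):  # s = x - a in [0, 0.1]
    return np.array([[5.0,      50.0 * s,  0.0,         0.0],
                     [50.0 * s, 50.0,      0.0,         0.0],
                     [0.0,      0.0,       1.0 / 250.0, -s / 250.0],
                     [0.0,      0.0,       -s / 250.0,  1.0 / 75.0]])

min_eig = min(np.linalg.eigvalsh(A_s(s) - c * Q).min() for s in np.linspace(0.0, 0.1, 101))
# Expected to be ~0 up to rounding: the constraint is active at x = b.
print(f"c = {c:.4f},  min eig(A_s - cQ) over the interval = {min_eig:.2e}")
```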
Thus, applying Theorem 1 with \(\varepsilon_{0}=\dfrac{1}{25}\), \(\varepsilon_{1}=\dfrac{100k}{1250k^{2}+1}\) and \(c=0.6837\) and choosing \(\varepsilon=\dfrac{1}{50}\) we obtain that \(M=3\) and \(\alpha=4.5\times 10^{-3}\). The bound on the decay rate in this example is not improved with an exponential multiplier function.
Now we consider normalized physical parameters as in [21]. That is, \(\rho(x)=\iota_{\rho}(x)=\epsilon(x)\iota(x)=\kappa(x)=\gamma=\delta=1\), boundary dissipation coefficients, \(k_{1}=k_{2}=1\), and beam length \(b-a=1\). We obtain that \(\eta_{K}=\eta_{Q}=\mu_{Q}=\mu_{P_{1}}=1\), \(\mu_{B}=\sqrt{2}\) and \(\mu_{\Psi}=2\). Then, considering a linear multiplier function,
\[A_{s}(x)=\begin{bmatrix}1&x-a&x-a&0\\ x-a&1&0&x-a\\ x-a&0&1&a-x\\ 0&x-a&a-x&1\end{bmatrix}\]
whose eigenvalues are \(1\pm\sqrt{2}(x-a)\). As a consequence, \(A_{s}(x)>0\) only for \(x<a+\dfrac{1}{\sqrt{2}}<b\) and not in the entire interval \([a,b]\). The linear multiplier function \(m(x)=x-a\), used with the previous set of parameters, cannot be used and it is necessary to consider another function.
Choosing an exponential multiplier function as in Lemma 2 yields
\[\varepsilon_{0}= \varepsilon_{1}=\dfrac{e^{-(1+\sqrt{2})}}{C}\]
Varying \(\xi\) in (22) and (25), we then obtain the values of \(M\) and \(\alpha\) shown in Table 2.
Figure 2: Vibrating string with non-uniform cross-sectional area
In [21] a Lyapunov approach is used for the stability analysis in the port-Hamiltonian formulation of a Timoshenko beam with viscous dissipation and unitary parameters, leading to an exponential decay rate of \(0.0285\) with \(M=2.783\). Choosing \(\xi=0.4713\), from (25) we also obtain \(M=2.783\) and \(\alpha=0.0286\).
## V Conclusions
An explicit formulation, in terms of physical parameters, of a lower bound on the exponential energy decay of a class of port-Hamiltonian systems with boundary dissipation on one-dimensional spatial domains has been presented. The choice of an exponential multiplier function, \(m(x)=Ce^{\beta(x-a)}\), leads to the conclusion that the system is exponentially stable provided that the boundary dissipation satisfies \(K>0\). Furthermore, a lower bound \(\alpha\) on the decay rate is obtained. This result applies to systems with variable physical parameters, as illustrated by several examples.
For uniform systems, \(m(x)\) is commonly chosen as a linear function; that is, \(m(x)=x-x_{0}\), where \(x_{0}\) is chosen so that \(m(a)\geq 0\); see, for example, [1, 11]. This choice of \(m(x)\) also works for uniform port-Hamiltonian systems (1)-(5) with \(P_{0}=G_{0}=0\), as was shown in [22]. However, this multiplier function does not work for all port-Hamiltonian systems of the form (1), as shown by the example of a Timoshenko beam with parameters from [21]. In the example of a wave equation with constant coefficients, both multiplier functions can be used, but the linear function leads to a better bound on the decay rate. The selection of a multiplier function to optimize the bound on the decay rate is an open research problem.
|
2306.03717 | Description Logics with Abstraction and Refinement | Ontologies often require knowledge representation on multiple levels of
abstraction, but description logics (DLs) are not well-equipped for supporting
this. We propose an extension of DLs in which abstraction levels are
first-class citizens and which provides explicit operators for the abstraction
and refinement of concepts and roles across multiple abstraction levels, based
on conjunctive queries. We prove that reasoning in the resulting family of DLs
is decidable while several seemingly harmless variations turn out to be
undecidable. We also pinpoint the precise complexity of our logics and several
relevant fragments. | Carsten Lutz, Lukas Schulze | 2023-06-06T14:27:03Z | http://arxiv.org/abs/2306.03717v3 | # Description Logics with Abstraction and Refinement
# Description Logics with Abstraction and Refinement
Carsten Lutz\({}^{1,2}\)
Lukas Schulze\({}^{1}\)
\({}^{1}\) Department of Computer Science, Leipzig University, Germany
\({}^{2}\)Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI)
{clu, lschulze}@informatik.uni-leipzig.de
###### Abstract
Ontologies often require knowledge representation on multiple levels of abstraction, but description logics (DLs) are not well-equipped for supporting this. We propose an extension of DLs in which abstraction levels are first-class citizens and which provides explicit operators for the abstraction and refinement of concepts and roles across multiple abstraction levels, based on conjunctive queries. We prove that reasoning in the resulting family of DLs is decidable while several seemingly harmless variations turn out to be undecidable. We also pinpoint the precise complexity of our logics and several relevant fragments.
## 1 Introduction
Abstraction and refinement is an important topic in many subfields of computer science such as systems verification [4]. The same is true for ontology design because ontologies often refer to different levels of abstraction (or equivalently, levels of granularity). To name only one example, the widely known medical ontology SNOMED CT contains the concepts Arm, Hand, Finger, Phalanx, Osteocyte, and Mitochondrion which intuitively all belong to different (increasingly finer) levels of abstraction [11].
We define the _abstraction DL_\(\mathcal{ALCHI}^{\text{abs}}\) as an extension of the familiar description logic \(\mathcal{ALCHI}\), which may be viewed as a modest and tame core of the OWL 2 DL ontology language. In principle, however, the same extension can be applied to any other DL, both more and less expressive than \(\mathcal{ALCHI}\). Abstraction levels are explicitly named and referred to in the ontology, and we provide explicit operators for the abstraction and refinement of concepts and roles. For example, the concept refinement
\[L_{2}\text{:}q_{A}\ \underline{\text{refines}}\ L_{1}\text{:}\mathsf{Arm},\]
where \(q_{A}\) denotes the conjunctive query (CQ)
\[q_{A}=\mathsf{UArm}(x_{1})\wedge\mathsf{LArm}(x_{2})\wedge \mathsf{Hand}(x_{3})\wedge\] \[\mathsf{joins}(x_{1},x_{2})\wedge\mathsf{joins}(x_{2},x_{3}),\]
expresses that every instance of \(\mathsf{Arm}\) on the coarser abstraction level \(L_{1}\) decomposes into an ensemble of three objects on the finer level \(L_{2}\) as described by \(q_{A}\). Concept abstractions are dual to refinements and state that every ensemble of a certain kind on a finer level gives rise to an abstracting object on a coarser level. Semantically, there is one classical DL interpretation for each abstraction level and a (partial) refinement function that associates objects on coarser levels with ensembles on finer levels. Every object may participate in at most one ensemble and we require that abstraction levels are organized in the form of a tree. We believe that the abstraction DLs defined along these lines are useful for many application domains, examples are given in the paper. Though not limited to it, our DLs are particularly well-suited for capturing the mereological (part-whole) aspect of abstraction and refinement [12].
Our main technical contribution is to show that adding abstraction and refinement to \(\mathcal{ALCHI}\) preserves decidability of the base logic, and to provide a detailed analysis of its complexity, also considering many relevant fragments. It turns out that satisfiability in \(\mathcal{ALCHI}^{\text{abs}}\) is 2ExpTime-complete. Note that this is in line with the fact that CQ evaluation in \(\mathcal{ALCHI}\) is 2ExpTime-complete [12]. For the fragments, however, such parallels to evaluation cease to hold. We use \(\mathcal{ALCHI}^{\text{abs}}[\mathrm{cr}]\) to denote \(\mathcal{ALCHI}^{\text{abs}}\) in which only concept refinement is admitted, and likewise ca denotes concept abstraction and rr, ra denote role refinement and abstraction. We recall that \(\mathcal{ALC}\) is \(\mathcal{ALCHI}\) without inverse roles and role hierarchies.
We show that satisfiability in the natural fragment \(\mathcal{ALCHI}^{\text{abs}}[\mathrm{cr}]\) is only ExpTime-complete, despite the fact that it still comprises CQs. Moreover, 2ExpTime-hardness already holds for \(\mathcal{ALC}^{\text{abs}}\), in contrast to the fact that CQ evaluation in \(\mathcal{ALC}\) is in ExpTime. There are actually three different sources of complexity, as satisfiability is 2ExpTime-hard already in each of the fragments \(\mathcal{ALC}^{\text{abs}}[\text{ca}]\), \(\mathcal{ALC}^{\text{abs}}[\text{ra}]\), and \(\mathcal{ALC}^{\text{abs}}[\text{rr}]\). In \(\mathcal{ALC}^{\text{abs}}[\text{ra}]\), role abstractions allow us to recover inverse roles. The same is true for \(\mathcal{ALC}^{\text{abs}}[\text{ca}]\) which, however, requires a more subtle reduction relying on the fact that ensembles must not overlap. Finally, \(\mathcal{ALC}^{\text{abs}}[\text{rr}]\) is 2ExpTime-hard because role refinements allow us to generate objects interlinked in a complex way. See Figure 1 for a summary.

Figure 1: The complexity of satisfiability in abstraction DLs.
We then observe that the decidability of \(\mathcal{ALCHI}^{\text{abs}}\) is more fragile than it may seem on first sight and actually depends on a number of careful design decisions. In particular, we consider three natural extensions and variations and show that each of them is undecidable. The first variation is to drop the requirement that abstraction levels are organized in a tree. The second is to add the requirement that ensembles (which are tuples rather than sets) must not contain repeated elements. And the third is to drop the requirement that CQs in abstraction and refinement statements must be full, that is, to admit quantified variables in such CQs.
Proofs are in the appendix.
**Related Work.** A classic early article on granularity in AI is Hobbs (1985). Granularity has later been studied in the area of foundational ontologies, focussing on a philosophically adequate modeling in first-order logic. Examples include granular partitions Bittner and Smith (2003), the descriptions and situations framework Gangemi and Mika (2003), and domain-specific approaches Fonseca et al. (2002); Schulz et al. (2008); Vogt (2019).
Existing approaches to representing abstraction and refinement / granularity in DLs and in OWL are rather different in spirit. Some are based on rough or fuzzy set theory Klinov et al. (2018); Lisi and Mencar (2018), some provide mainly a modeling discipline Calegari and Ciucci (2010), some aim at the spatial domain Hbeich et al. (2021) or at speeding up reasoning Glimm et al. (2017), and some take abstraction to mean the translation of queries between different data schemas Cima et al. (2022). We also mention description logics of context Klarman and Gutierrez-Basulto (2016); an abstraction level can be seen as a context, but the notion of context is more general and governed by looser principles. A categorization of different forms of granularity is in Keet (2008).
There is a close connection between DLs with abstraction as proposed in this paper and the unary negation fragment of first-order logic (UNFO). In fact, UNFO encompasses ontologies formulated in DLs such as \(\mathcal{ALC}\mathcal{I}\) and conjunctive queries. UNFO satisfiability is decidable and 2ExpTime-complete Segoufin and ten Cate (2013). This does, however, not imply any of the results in this paper due to the use of refinement functions in the semantics of our DLs and the fact that UNFO extended with functional relations is undecidable Segoufin and ten Cate (2013).
## 2 Preliminaries
**Base DLs.** Fix countably infinite sets \(\mathbf{C}\) and \(\mathbf{R}\) of _concept names_ and _role names_. A _role_ is a role name or an _inverse role_, that is, an expression \(r^{-}\) with \(r\) a role name. If \(R=r^{-}\) is an inverse role, then we set \(R^{-}=r\). \(\mathcal{ALC}\mathcal{I}\)-_concepts_ \(C,D\) are built according to the syntax rule
\[C,D::=A\mid\neg C\mid C\sqcap D\mid C\sqcup D\mid\exists R.C\mid\forall R.C\]
where \(A\) ranges over concept names and \(R\) over roles. We use \(\top\) as an abbreviation for \(A\sqcup\neg A\) with \(A\) a fixed concept name and \(\bot\) for \(\neg\top\). An \(\mathcal{ALC}\)-_concept_ is an \(\mathcal{ALCI}\)-concept that does not use inverse roles and an \(\mathcal{EL}\)-_concept_ is an \(\mathcal{ALC}\)-concept that uses none of \(\neg\), \(\sqcup\), and \(\forall\).
An \(\mathcal{ALCHI}\)-_ontology_ is a finite set of _concept inclusions (CIs)_ \(C\sqsubseteq D\) with \(C\) and \(D\) \(\mathcal{ALCI}\)-concepts and _role inclusions (RIs)_ \(R\sqsubseteq S\) with \(R,S\) roles. The letter \(\mathcal{I}\) indicates the presence of inverse roles and \(\mathcal{H}\) indicates the presence of role inclusions (also called role hierarchies), and thus it should also be clear what we mean e.g. by an \(\mathcal{ALCI}\)-ontology and an \(\mathcal{ALCH}\)-ontology. An \(\mathcal{EL}\)-_ontology_ is a finite set of CIs \(C\sqsubseteq D\) with \(C,D\) \(\mathcal{EL}\)-concepts.
An _interpretation_ is a pair \(\mathcal{I}=(\Delta^{\mathcal{I}},\cdot^{\mathcal{I}})\) with \(\Delta^{\mathcal{I}}\) a non-empty set (the _domain_) and \(\cdot^{\mathcal{I}}\) an _interpretation function_ that maps every concept name \(A\in\mathbf{C}\) to a set \(A^{\mathcal{I}}\subseteq\Delta^{\mathcal{I}}\) and every role name \(r\in\mathbf{R}\) to a binary relation \(r^{\mathcal{I}}\subseteq\Delta^{\mathcal{I}}\times\Delta^{\mathcal{I}}\). The interpretation function is extended to compound concepts as usual, c.f. Baader et al. (2017). An interpretation \(\mathcal{I}\) _satisfies_ a CI \(C\sqsubseteq D\) if \(C^{\mathcal{I}}\subseteq D^{\mathcal{I}}\) and an RI \(R\sqsubseteq S\) if \(R^{\mathcal{I}}\subseteq S^{\mathcal{I}}\). It is a _model_ of an ontology \(\mathcal{O}\) if it satisfies all CIs and RIs in it.
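To make the extension of the interpretation function to compound concepts concrete, here is a minimal Python sketch (illustrative only; the tuple-based concept encoding and the function name `ext` are our own and not from the paper) that computes \(C^{\mathcal{I}}\) over a finite interpretation:

```python
def ext(concept, I):
    """Return the extension C^I of a concept over a finite interpretation I.
    I = (domain, conc, role): domain is a set, conc maps concept names to sets,
    role maps role names to sets of pairs; an inverse role r^- is ('inv', r)."""
    domain, conc, role = I
    def succs(R, d):
        if isinstance(R, tuple) and R[0] == 'inv':       # inverse role
            return {x for (x, y) in role.get(R[1], set()) if y == d}
        return {y for (x, y) in role.get(R, set()) if x == d}
    if isinstance(concept, str):                          # concept name
        return conc.get(concept, set())
    op = concept[0]
    if op == 'not':
        return domain - ext(concept[1], I)
    if op == 'and':
        return ext(concept[1], I) & ext(concept[2], I)
    if op == 'or':
        return ext(concept[1], I) | ext(concept[2], I)
    if op == 'exists':                                    # ('exists', R, C)
        C = ext(concept[2], I)
        return {d for d in domain if succs(concept[1], d) & C}
    if op == 'forall':                                    # ('forall', R, C)
        C = ext(concept[2], I)
        return {d for d in domain if succs(concept[1], d) <= C}
    raise ValueError(op)
```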
For any syntactic object \(O\) such as an ontology or a concept, we use \(||O||\) to denote the _size_ of \(O\), that is, the number of symbols needed to write \(O\) over a suitable alphabet.
**Conjunctive Queries.** Let \(\mathbf{V}\) be a countably infinite set of variables. A _conjunctive query (CQ)_ takes the form \(q(\bar{x})=\exists\bar{y}\,\varphi(\bar{x},\bar{y})\) with \(\varphi\) a conjunction of _concept atoms_\(C(x)\) and _role atoms_\(r(x,y)\), \(C\) a (possibly compound) concept, \(r\) a role name, and \(x,y\) variables from \(\bar{x}\cup\bar{y}\). We may write \(\alpha\in q\) to indicate that \(\alpha\) is an atom in \(\varphi\) and \(r^{-}(x,y)\in q\) in place of \(r(y,x)\in q\). The variables in \(\bar{x}\) are the _answer variables_ of \(q\). We require that every answer variable \(x\) occurs in some atom of \(q\), but omit this atom in writing in case it is \(\top(x)\). With \(\mathsf{var}(q)\), we denote the set of all (answer and quantified) variables in \(q\). If \(q\) has no answer variables then it is _Boolean_. We mostly restrict our attention to CQs \(q\) that are _full_, meaning that \(q\) has no quantified variables. A CQ \(q\) is _connected_ if the undirected graph with node set \(\mathsf{var}(q)\) and edge set \(\{\{v,v^{\prime}\}\mid r(v,v^{\prime})\in q\text{ for any }r\in\mathbf{R}\}\) is. A CQ \(q\) is a _subquery_ of a CQ \(q^{\prime}\) if \(q\) can be obtained from \(q^{\prime}\) by dropping atoms.
Let \(q(\bar{x})=\exists\bar{y}\,\varphi(\bar{x},\bar{y})\) be a CQ and \(\mathcal{I}\) an interpretation. A mapping \(h:\bar{x}\cup\bar{y}\to\Delta^{\mathcal{I}}\) is a _homomorphism_ from \(q\) to \(\mathcal{I}\) if \(C(x)\in q\) implies \(h(x)\in C^{\mathcal{I}}\) and \(r(x,y)\in q\) implies \((h(x),h(y))\in r^{\mathcal{I}}\). A tuple \(\bar{d}\in(\Delta^{\mathcal{I}})^{|\bar{x}|}\) is an _answer_ to \(q\) on \(\mathcal{I}\) if there is a homomorphism \(h\) from \(q\) to \(\mathcal{I}\) with \(h(\bar{x})=\bar{d}\). We use \(q(\mathcal{I})\) to denote the set of all answers to \(q\) on \(\mathcal{I}\). If \(q\) is Boolean, we write \(\mathcal{I}\models q\) to indicate the existence of a homomorphism from \(q\) to \(\mathcal{I}\).
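For intuition, the answers \(q(\mathcal{I})\) of a full CQ on a finite interpretation can be computed by brute force; the following sketch (illustrative only, with hypothetical data structures) does exactly this and evaluates the query \(q_{A}\) from the introduction on a small hand-made interpretation:

```python
from itertools import product

def answers(q_vars, concept_atoms, role_atoms, domain, conc_ext, role_ext):
    """All answers of a full CQ on a finite interpretation.
    q_vars: tuple of (answer) variables; concept_atoms: list of (C, x);
    role_atoms: list of (r, x, y); conc_ext[C] and role_ext[r] are extensions."""
    result = []
    for tup in product(domain, repeat=len(q_vars)):
        h = dict(zip(q_vars, tup))                        # candidate homomorphism
        if all(h[x] in conc_ext[C] for (C, x) in concept_atoms) and \
           all((h[x], h[y]) in role_ext[r] for (r, x, y) in role_atoms):
            result.append(tup)
    return result

# The query q_A from the introduction on a three-element interpretation
domain = {"u", "l", "h"}
conc_ext = {"UArm": {"u"}, "LArm": {"l"}, "Hand": {"h"}}
role_ext = {"joins": {("u", "l"), ("l", "h")}}
print(answers(("x1", "x2", "x3"),
              [("UArm", "x1"), ("LArm", "x2"), ("Hand", "x3")],
              [("joins", "x1", "x2"), ("joins", "x2", "x3")],
              domain, conc_ext, role_ext))                # [('u', 'l', 'h')]
```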
## 3 DLs with Abstraction and Refinement
We extend \(\mathcal{ALCHI}\) to the DL \(\mathcal{ALCHI}^{\mathsf{abs}}\) that supports abstraction and refinement. Fix a countable set \(\mathbf{A}\) of _abstraction levels_. An \(\mathcal{ALCHI}^{\mathsf{abs}}\)-ontology is a finite set of statements of the following form:
* _labeled concept inclusions_ \(C\sqsubseteq_{L}D\)_,_
* _labeled role inclusions_ \(R\sqsubseteq_{L}S\)_,_
* _concept refinements_ \(L{:}q(\bar{x})\) _refines_ \(L^{\prime}{:}C\)_,_
* _concept abstractions_ \(L^{\prime}{:}C\) _abstracts_ \(L{:}q(\bar{x})\)_,_
* _role refinements_ \(L{:}q(\bar{x},\bar{y})\) _refines_ \(L^{\prime}{:}q_{R}(x,y)\)_,_
* _role abstractions_ \(L^{\prime}{:}R\) _abstracts_ \(L{:}q(\bar{x},\bar{y})\)
where \(L,L^{\prime}\) range over \(\mathbf{A}\), \(C,D\) over \(\mathcal{ALCHI}\)-concepts, \(R,S\) over roles, \(q\) over full conjunctive queries, and \(q_{R}\) over full conjunctive queries of the form \(C_{1}(x)\wedge R(x,y)\wedge C_{2}(y)\). In concept and role abstraction statements, we additionally require the CQ \(q\) to be connected. We may write \(C\equiv_{L}D\) as shorthand for the two CIs \(C\sqsubseteq_{L}D\) and \(D\sqsubseteq_{L}C\). We underline abstraction and refinement operators to ensure better readability throughout the paper.
Intuitively, a concept refinement \(L{:}q(\bar{x})\) refines \(L^{\prime}{:}C\) expresses that any instance of \(C\) on abstraction level \(L^{\prime}\) refines into an _ensemble_ of \(|\bar{x}|\) objects on abstraction level \(L\) which satisfies all properties expressed by CQ \(q\). Conversely, a concept abstraction \(L^{\prime}{:}C\)abstracts\(L{:}q(\bar{x})\) says that any ensemble of \(|\bar{x}|\) objects on abstraction level \(L\) that satisfies \(q\) abstracts into a single instance of \(C\) on abstraction level \(L^{\prime}\). Role refinements and abstractions can be understood in a similar way, where each of the two elements that participate in a role relationship refines into its own ensemble.
Note that in role refinements, we consider CQs \(q_{R}=C_{1}(x)\wedge R(x,y)\wedge C_{2}(y)\) rather than only the role \(R\). This is because roles are often of a general kind such as partOf or interactsWith and need proper context to be meaningfully refined. This context is provided by the concepts \(C_{1},C_{2}\).
**Example 1**.: _Granularity is important in many domains. Anatomy has already been mentioned in the introduction. The concept refinement given there may be complemented by choosing \(q_{A}\) as in the introduction and adding the concept abstraction_
\[L_{1}{:}\mathsf{Arm\ abstracts}\ L_{2}{:}q_{A}.\]
_We next consider bikes as a simple example for a technical domain. Let us first say how wheels refine into components: \(L_{2}{:}q_{W}\ \underline{\mathsf{refines}}\ L_{1}{:}\mathsf{Wheel}\) where_
\[q_{W}=\mathsf{Axle}(x_{1})\wedge\mathsf{Spokes}(x_{2})\wedge\mathsf{Rim}(x_{3})\wedge\mathsf{Tire}(x_{4})\wedge\] \[\mathsf{join}(x_{2},x_{1})\wedge\mathsf{join}(x_{2},x_{3})\wedge\mathsf{carries}(x_{3},x_{4}).\]
_We may then use the following role refinement to express how frames connect to wheels:_
\[L_{2}{:}q_{FW}\ \underline{\mathsf{refines}}\ L_{1}{:}\mathsf{Wheel}(x)\wedge \mathsf{connTo}(x,y)\wedge\mathsf{Frame}(y)\]
_where, for \(\bar{x}=x_{1}\cdots x_{4}\) and \(\bar{y}=y_{1}\cdots y_{7}\) (assuming that frames have seven components),_
\[q_{FW}(\bar{x},\bar{y})=\mathsf{Axle}(x_{1})\wedge\mathsf{connTo}(x_{1},y_{1})\wedge\mathsf{Dropout}(y_{1}).\]
_This expresses that if a wheel is connected to a frame, then the axle of the wheel is connected to the dropout of the frame._
Extensions \(\mathcal{L}^{\mathsf{abs}}\) of other DLs \(\mathcal{L}\) introduced in Section 2, such as \(\mathcal{ALC}\) and \(\mathcal{ALCHI}\), may be defined in the expected way. We also consider various fragments of \(\mathcal{ALCHI}^{\mathsf{abs}}\). With \(\mathcal{ALCHI}^{\mathsf{abs}}\)[cr,rr], for example, we mean the fragment of \(\mathcal{ALCHI}^{\mathsf{abs}}\) that admits concept refinement and role refinement, but neither concept abstraction nor role abstraction (identified by ca and ra).
We next define the semantics of \(\mathcal{ALCHI}^{\mathsf{abs}}\), based on _A-interpretations_ which include one traditional DL interpretation for each abstraction level. Formally, an A-interpretation takes the form \(\ \mathcal{I}=(\mathbf{A}_{\mathcal{I}},\prec,(\mathcal{I}_{L})_{L\in \mathbf{A}_{\mathcal{I}}},\rho)\), where
* \(\mathbf{A}_{\mathcal{I}}\subseteq\mathbf{A}\) is the set of relevant abstraction levels;
* \(\prec\ \subseteq\ \mathbf{A}_{\mathcal{I}}\times\mathbf{A}_{\mathcal{I}}\) is such that the directed graph \((\mathbf{A}_{\mathcal{I}},\{(L^{\prime},L)\mid L\prec L^{\prime}\})\) is a tree; intuitively, \(L\prec L^{\prime}\) means that \(L\) is less abstract than \(L^{\prime}\) or, in other words, that the modeling granularity of \(L\) is finer than that of \(L^{\prime}\);
* \((\mathcal{I}_{L})_{L\in\mathbf{A}_{\mathcal{I}}}\) is a collection of interpretations \(\mathcal{I}_{L}\), one for every \(L\in\mathbf{A}_{\mathcal{I}}\), with pairwise disjoint domains; we use \(L(d)\) to denote the unique \(L\in\mathbf{A}_{\mathcal{I}}\) with \(d\in\Delta^{\mathcal{I}_{L}}\);
* \(\rho\) is the _refinement function_, a partial function that associates pairs \((d,L)\in\Delta^{\mathcal{I}}\times\mathbf{A}_{\mathcal{I}}\) such that \(L\prec L(d)\) with an \(L\)_-ensemble_\(\rho(d,L)\), that is, with a non-empty tuple over \(\Delta^{\mathcal{I}_{L}}\). We want every object to participate in only one ensemble and thus require that
* (\(\ast\)) for all \(d\in\Delta^{\mathcal{I}}\), there is at most one \(e\in\Delta^{\mathcal{I}}\) such that \(d\) occurs in \(\rho(e,L(d))\).
For readability, we may write \(\rho_{L}(d)\) in place of \(\rho(d,L)\).
An A-interpretation \(\mathcal{I}=(\mathbf{A}_{\mathcal{I}},\prec,(\mathcal{I}_{L})_{L\in\mathbf{A }_{\mathcal{I}}},\rho)\)_satisfies_ a
* labeled concept or role inclusion \(\alpha\sqsubseteq_{L}\beta\) if \(L\in\mathbf{A}_{\mathcal{I}}\) and \(\alpha^{\mathcal{I}_{L}}\subseteq\beta^{\mathcal{I}_{L}}\);
* concept refinement \(L{:}q(\bar{x})\) refines \(L^{\prime}{:}C\) if \(L\prec L^{\prime}\) and for all \(d\in C^{\mathcal{I}_{L^{\prime}}}\), there is an \(\bar{e}\in q(\mathcal{I}_{L})\) such that \(\rho_{L}(d)=\bar{e}\);
* concept abstraction \(L^{\prime}{:}C\) abstracts \(L{:}q(\bar{x})\) if \(L\prec L^{\prime}\) and for all \(\bar{e}\in q(\mathcal{I}_{L})\), there is a \(d\in C^{\mathcal{I}_{L^{\prime}}}\) s.t. \(\rho_{L}(d)=\bar{e}\);
* role refinement \(L{:}q(\bar{x},\bar{y})\) refines \(L^{\prime}{:}q_{R}(x,y)\) if \(L\prec L^{\prime}\) and for all \((d_{1},d_{2})\in q_{R}(\mathcal{I}_{L^{\prime}})\), there is an \((\bar{e}_{1},\bar{e}_{2})\in q(\mathcal{I}_{L})\) such that \(\rho_{L}(d_{1})=\bar{e}_{1}\) and \(\rho_{L}(d_{2})=\bar{e}_{2}\);
* role abstraction \(L^{\prime}{:}R\) abstracts \(L{:}q(\bar{x},\bar{y})\) if \(L\prec L^{\prime}\) and for all \((\bar{e}_{1},\bar{e}_{2})\in q(\mathcal{I}_{L})\), there is a \((d_{1},d_{2})\in R^{\mathcal{I}_{L^{\prime}}}\) such that \(\rho_{L}(d_{1})=\bar{e}_{1}\) and \(\rho_{L}(d_{2})=\bar{e}_{2}\).
An A-interpretation is a _model_ of an \(\mathcal{ALCHI}^{\mathsf{abs}}\)-ontology if it satisfies all inclusions, refinements, and abstractions in it.
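As a small illustration of these conditions, the following sketch (hypothetical data structures, not from the paper) checks the satisfaction condition for a concept refinement \(L{:}q(\bar{x})\) refines \(L^{\prime}{:}C\), given the extension of \(C\) on level \(L^{\prime}\), the precomputed answers of \(q\) on level \(L\), and the refinement function \(\rho_{L}\):

```python
def satisfies_concept_refinement(C_ext_Lprime, q_answers_L, rho_L):
    """Check: every d in C^{I_{L'}} has a defined ensemble rho_L(d) that is
    an answer to q on I_L (the condition for  L:q  refines  L':C)."""
    answer_set = set(q_answers_L)                 # answers of q on I_L, as tuples
    for d in C_ext_Lprime:                        # instances of C on level L'
        ensemble = rho_L.get(d)                   # rho(d, L), a tuple over level L, or None
        if ensemble is None or tuple(ensemble) not in answer_set:
            return False
    return True

# Example: an Arm on L1 refining into the (UArm, LArm, Hand) ensemble on L2
rho_L2 = {"arm1": ("u", "l", "h")}
print(satisfies_concept_refinement({"arm1"}, [("u", "l", "h")], rho_L2))   # True
```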
**Example 2**.: _We consider the domain of (robotic) actions. Assume that there is a \(\mathsf{Fetch}\) action that refines into subactions: \(L_{2}{:}q_{F}\ \underline{\mathsf{refines}}\ L_{1}{:}\mathsf{Fetch}\) where_
\[q_{F} =\mathsf{Locate}(x_{1})\wedge\mathsf{Move}(x_{2})\wedge\mathsf{ Grasp}(x_{3})\wedge\] \[\mathsf{precedes}(x_{1},x_{2})\wedge\mathsf{precedes}(x_{2},x_{3}).\]
_We might have a safe version of the fetching action and a two-handed grasping action:_
\[\mathsf{SFetch}\sqsubseteq_{L_{1}}\mathsf{Fetch}\] \[\mathsf{TwoHandedGrasp}\sqsubseteq_{L_{1}}\mathsf{ Grasp}\]
_A safe fetch requires a two-handed grasping subaction: \(L_{2}{:}q_{S}\ \underline{\mathsf{refines}}\ L_{1}{:}\mathsf{SFetch}\), where \(q_{S}\) is obtained from \(q_{F}\) by replacing \(\mathsf{Grasp}(x_{3})\) with \(\mathsf{TwoHandedGrasp}(x_{3})\)._
_We remark that abstraction statements need to be used with care since ensembles may not overlap, c.f. Condition (\(\ast\)). For example, the reader may want to verify that the following CI and concept abstraction have no model:_
\(\top\sqsubseteq_{L_{2}}\exists r.\exists r.\top\qquad L_{1}{:}\top\) abstracts \(L_{2}{:}r(x,y)\)_._
We are interested in the problem of _(concept) satisfiability_ which means to decide, given an \(\mathcal{ALCHI}^{\mathsf{abs}}\)-ontology \(\mathcal{O}\), an \(\mathcal{ALCI}\)-concept \(C\), and an abstraction level \(L\in\mathbf{A}\), whether there is a model \(\mathcal{I}\) of \(\mathcal{O}\) such that \(C^{\mathcal{I}_{L}}\neq\emptyset\). We then say that \(C\) is \(L\)_-satisfiable w.r.t. \(\mathcal{O}\)_. As usual, the related reasoning problem of subsumption can be reduced to satisfiability in polynomial time, and vice versa [1].
## 4 Upper Bounds
We prove that satisfiability in \(\mathcal{ALCHI}^{\mathsf{abs}}\) is decidable in 2ExpTime. Before approaching this general case, however, we consider the fragment \(\mathcal{ALCHI}^{\mathsf{abs}}[\mathrm{cr}]\) and show that it is only ExpTime-complete.
### \(\mathcal{ALCHI}^{\mathsf{abs}}[\mathrm{cr}]\) in ExpTime
Our aim is to prove the following.
**Theorem 1**.: _Satisfiability in \(\mathcal{ALCHI}^{\mathsf{abs}}[\mathrm{cr}]\) is ExpTime-complete._
The lower bound is inherited from \(\mathcal{ALCHI}\) without abstraction and refinement [1]. We prove the upper bound by a mosaic-based approach, that is, we decide the existence of a model \(\mathcal{I}\) by trying to assemble \(\mathcal{I}\) from small fragments called mosaics. Essentially, a mosaic describes a single ensemble on a single level of abstraction.
Assume that we are given as input an \(\mathcal{ALCHI}^{\mathsf{abs}}[\mathrm{cr}]\)-ontology \(\mathcal{O}\), an \(\mathcal{ALCI}\)-concept \(C_{0}\), and an abstraction level \(L_{0}\). We may assume w.l.o.g. that \(C_{0}\) is a concept name as we can extend \(\mathcal{O}\) with \(A_{0}\sqsubseteq_{L_{0}}C_{0}\) and test satisfiability of the fresh concept name \(A_{0}\). We also assume w.l.o.g. that \(\mathcal{O}\) is in _normal form_, meaning that
1. every CI has one of the forms \[\top\sqsubseteq_{L}A\qquad A\sqsubseteq_{L}\exists R.B\qquad \exists R.B\sqsubseteq_{L}A\] \[A_{1}\sqcap A_{2}\sqsubseteq_{L}A\qquad A\sqsubseteq_{L}\neg B \qquad\neg B\sqsubseteq_{L}A\] where \(A,A_{1},A_{2},B\) are concept names and \(R\) is a role;
2. in every concept refinement \(L{:}q(\bar{x})\) refines \(L^{\prime}{:}C\), \(C\) is a concept name, and so is \(D\) in all concept atoms \(D(x)\in q\).
It is in fact routine to show that every \(\mathcal{ALCHI}^{\mathsf{abs}}[\mathrm{cr}]\) ontology \(\mathcal{O}\) can be converted in polynomial time into an \(\mathcal{ALCHI}^{\mathsf{abs}}[\mathrm{cr}]\) ontology \(\mathcal{O}^{\prime}\) in normal form that is a conservative extension of \(\mathcal{O}\), see e.g. [1]. We also assume that (i) \(\mathcal{O}\) contains \(R\sqsubseteq_{L}R\) for all roles \(R\) and abstraction levels \(L\) in \(\mathcal{O}\), (ii) \(R\sqsubseteq_{L}S\), \(S\sqsubseteq_{L}T\in\mathcal{O}\) implies \(R\sqsubseteq_{L}T\in\mathcal{O}\), and (iii) \(R\sqsubseteq_{L}S\in\mathcal{O}\) implies \(R^{-}\sqsubseteq_{L}S^{-}\in\mathcal{O}\). With \(\prec\) we denote the smallest relation on \(\mathbf{A}_{\mathcal{O}}\) such that \(L\prec L^{\prime}\) for all \(L{:}q(\bar{x})\) refines \(L^{\prime}{:}C\) in \(\mathcal{O}\).
Fix a domain \(\Delta\) of cardinality \(||\mathcal{O}||\). A _mosaic_ is a pair \(M=(L,\mathcal{I})\) where \(L\in\mathbf{A}_{\mathcal{O}}\) is the abstraction level of the mosaic and \(\mathcal{I}\) is an interpretation with \(\Delta^{\mathcal{I}}\subseteq\Delta\) such that \(\mathcal{I}\) satisfies all CIs \(C\sqsubseteq_{L}D\) in \(\mathcal{O}\) and all RIs \(R\sqsubseteq_{L}S\) in \(\mathcal{O}\), with the possible exception of CIs of the form \(A\sqsubseteq_{L}\exists r.B\). We may write \(L^{M}\) to denote \(L\), and likewise for \(\mathcal{I}^{M}\). Let \(\mathcal{M}\) be a set of mosaics. We say that a mosaic \(M=(L,\mathcal{I})\) is _good_ in \(\mathcal{M}\) if for all \(d\in\Delta^{\mathcal{I}}\) the following hold:
1. if \(A\sqsubseteq_{L}\exists R.B\in\mathcal{O}\), \(d\in A^{\mathcal{I}}\), and \(d\notin(\exists R.B)^{\mathcal{I}}\), then there is an \(M^{\prime}=(L,\mathcal{I}^{\prime})\in\mathcal{M}\) and a \(d^{\prime}\in\Delta^{\mathcal{I}^{\prime}}\) such that (a) \(d^{\prime}\in B^{\mathcal{I}^{\prime}}\), (b) if \(\exists S.A\sqsubseteq_{L}B\in\mathcal{O}\), \(R\sqsubseteq_{L}S\in\mathcal{O}\), and \(d^{\prime}\in A^{\mathcal{I}^{\prime}}\), then \(d\in B^{\mathcal{I}}\), and (c) if \(\exists S.A\sqsubseteq_{L}B\in\mathcal{O}\), \(R^{-}\sqsubseteq_{L}S\in\mathcal{O}\), and \(d\in A^{\mathcal{I}}\), then \(d^{\prime}\in B^{\mathcal{I}^{\prime}}\);
2. for every level \(L^{\prime}\in\mathbf{A}_{\mathcal{O}}\) such that \[Q=\{q\mid L^{\prime}{:}q(\bar{x})\text{ refines }L{:}A\in\mathcal{O}\text{ and }d\in A^{ \mathcal{I}}\}\neq\emptyset,\] there is a mosaic \(M^{\prime}\in\mathcal{M}\) with \(M^{\prime}=(L^{\prime},\mathcal{I}^{\prime})\) and a tuple \(\bar{e}\) over \(\Delta^{\mathcal{I}^{\prime}}\) such that \(\bar{e}\in q(\mathcal{I}^{\prime})\) for all \(q\in Q\).
We now formulate the actual decision procedure. If the directed graph \((\mathbf{A}_{\mathcal{O}},\{(L^{\prime},L)\mid L\prec L^{\prime}\})\) is not a tree, we directly return 'unsatisfiable'. Our algorithm first computes the set \(\mathcal{M}_{0}\) of all mosaics for \(\mathcal{O}\) and then repeatedly and exhaustively eliminates mosaics that are not good. Let \(\mathcal{M}^{\ast}\) denote the set of mosaics at which this process stabilizes.
**Lemma 1**.: \(C_{0}\) _is \(L_{0}\)-satisfiable w.r.t. \(\mathcal{O}\) iff \(\mathcal{M}^{\ast}\) contains (i) a mosaic \(M\) with \(L^{M}=L_{0}\) and \(C_{0}^{\mathcal{I}^{M}}\neq\emptyset\) and (ii) a mosaic \(M\) with \(L^{M}=L\), for every \(L\) in \(\mathbf{A}_{\mathcal{O}}\)._
The algorithm thus returns 'satisfiable' if Conditions (i) and (ii) from Lemma 1 are satisfied and 'unsatisfiable' otherwise. It is easy to see that the algorithm runs in single exponential time.
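The overall shape of this procedure is a standard type-elimination fixpoint; a generic sketch (illustrative only, with `is_good` standing in for the goodness conditions defined above) is:

```python
def eliminate(mosaics, is_good):
    """Repeatedly remove mosaics that are not good in the current set,
    until the set stabilises; returns the final set M*."""
    current = set(mosaics)
    changed = True
    while changed:
        changed = False
        for m in list(current):
            if not is_good(m, current):
                current.remove(m)
                changed = True
    return current
```

On top of the resulting set \(\mathcal{M}^{*}\), checking Conditions (i) and (ii) of Lemma 1 is then a simple scan over the surviving mosaics.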
### \(\mathcal{ALCHI}^{\mathsf{abs}}\) in 2ExpTime
Our aim is to prove the following.
**Theorem 2**.: _Satisfiability in \(\mathcal{ALCHI}^{\mathsf{abs}}\) is decidable in 2ExpTime._
A matching lower bound will be provided later on. We prove Theorem 2 by a mosaic-based approach which is, however, significantly more complex than the one used in the previous section. In particular, a mosaic now represents a 'slice' through an A-interpretation that includes multiple abstraction levels and multiple ensembles.
Assume that we are given as input an \(\mathcal{ALCHI}^{\mathsf{abs}}\)-ontology \(\mathcal{O}\), an \(\mathcal{ALCI}\)-concept \(C_{0}\), and an abstraction level \(L_{0}\). We again assume that \(C_{0}\) is a concept name and that \(\mathcal{O}\) is in normal form, defined as in the previous section, but with the obvious counterparts of Point 2 for role refinements and (concept and role) abstractions. We also define the relation \(\prec\) on \(\mathbf{A}_{\mathcal{O}}\) as in the previous section, except that we now consider concept and role refinements, as well as concept and role abstractions, in the obvious way. Again, if the directed graph \((\mathbf{A}_{\mathcal{O}},\{(L^{\prime},L)\mid L\prec L^{\prime}\})\) is not a tree, then we directly return 'unsatisfiable'.
Fix a set \(\Delta\) of cardinality \(||\mathcal{O}||^{||\mathcal{O}||}\). A _mosaic_ is a tuple
\[M=((\mathcal{I}_{L})_{L\in\mathbf{A}_{\mathcal{O}}},\rho,f_{\mathsf{in}},f_{ \mathsf{out}})\]
where
* \((\mathcal{I}_{L})_{L\in\mathbf{A}_{\mathcal{O}}}\) is a collection of interpretations and \(\rho\) is a partial function such that \((\mathbf{A}_{\mathcal{O}},\prec,(\mathcal{I}_{L})_{L\in\mathbf{A}_{\mathcal{O}}},\rho)\) is an \(\mathbf{A}\)-interpretation except that some interpretation domains \(\Delta^{\mathcal{I}_{L}}\) may be empty; the length of tuples in the range of \(\rho\) may be at most \(||\mathcal{O}||\);
* \(f_{\text{in}}\) and \(f_{\text{out}}\) are functions that associate every \(L\in\mathbf{A}_{\mathcal{O}}\) with a set of pairs \((q,h)\) where \(q\) is a CQ from an abstraction statement in \(\mathcal{O}\) or a subquery thereof, and \(h\) is a partial function from \(\mathsf{var}(q)\) to \(\Delta^{\mathcal{I}_{L}}\); we call these pairs the _forbidden incoming queries_ in the case of \(f_{\text{in}}\) and the _forbidden outgoing queries_ in the case of \(f_{\text{out}}\).
We may write \(\mathcal{I}_{L}^{M}\) to denote \(\mathcal{I}_{L}\), for any \(L\in\mathbf{A}_{\mathcal{O}}\), and likewise for \(\rho^{M}\), \(f_{\text{in}}^{M}\), and \(f_{\text{out}}^{M}\).
Every mosaic has to satisfy several additional conditions. Before we can state them, we introduce some notation. For \(V\subseteq\mathsf{var}(q)\), we use \(q|_{V}\) to denote the restriction of \(q\) to the variables in \(V\) and write \(\overline{V}\) as shorthand for \(\overline{V}=\mathsf{var}(q)\setminus V\). A _maximally connected component (MCC)_ of \(q\) is a CQ \(q|_{V}\) that is connected and such that \(V\) is maximal with this property. A CQ \(p=E\uplus p_{0}\) is a _component of \(q\) w.r.t. \(V\subseteq\mathsf{var}(q)\)_ if \(p_{0}\) is an MCC of \(q|_{\overline{V}}\) and \(E\) is the set of all atoms from \(q\) that contain one variable from \(V\) and one variable from \(\overline{V}\).
**Example 3**.: _The following CQ has two components w.r.t. \(V=\{x,y\}\), which are displayed in dashed and dotted lines:_
_For example, the dotted component is defined by \(p=E\uplus p_{0}\) with \(E=r(u,x)\wedge r(y,v)\) and \(p_{0}=r(u,v)\wedge r(v,u)\)._
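The components of a CQ w.r.t. \(V\) can be computed by a simple reachability argument; the following sketch (illustrative only, restricted to role atoms and using our own encoding of CQs as lists of atoms) reproduces the dotted component of Example 3:

```python
def components(role_atoms, variables, V):
    """Components of a CQ w.r.t. V: each is an MCC of q restricted to var(q)\\V,
    together with the atoms connecting it to V (cf. Example 3).
    role_atoms: list of (r, x, y); variables: all variables; V: subset."""
    rest = set(variables) - set(V)
    adj = {v: set() for v in rest}                 # adjacency outside V
    for (_, x, y) in role_atoms:
        if x in rest and y in rest:
            adj[x].add(y)
            adj[y].add(x)
    seen, comps = set(), []
    for v in rest:
        if v in seen:
            continue
        mcc, stack = {v}, [v]                      # collect the MCC containing v
        while stack:
            u = stack.pop()
            for w in adj[u] - mcc:
                mcc.add(w)
                stack.append(w)
        seen |= mcc
        atoms = [a for a in role_atoms             # atoms inside the MCC or crossing into V
                 if (a[1] in mcc and a[2] in mcc)
                 or (a[1] in V and a[2] in mcc)
                 or (a[1] in mcc and a[2] in V)]
        comps.append(atoms)
    return comps

# Example 3 with V = {x, y}: the dotted component has r(u,x), r(y,v), r(u,v), r(v,u)
q = [("r", "u", "x"), ("r", "y", "v"), ("r", "u", "v"), ("r", "v", "u")]
print(components(q, {"x", "y", "u", "v"}, {"x", "y"}))
```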
With these notions at hand, let us explain the intuition of the \(f_{\text{in}}\) and \(f_{\text{out}}\) components of mosaics. Our decomposition of A-interpretations into sets of mosaics is such that every ensemble falls within a single mosaic. This means that we must avoid homomorphisms from the CQs in concept abstractions that hit multiple mosaics: such homomorphisms would hit elements from multiple ensembles while also turning the set of all elements that are hit into an ensemble; they thus generate overlapping ensembles which is forbidden. Almost the same holds for role abstractions where however the CQ takes the form \(q(\bar{x},\bar{y})\) with each of \(\bar{x}\) and \(\bar{y}\) describing an ensemble, and we must only avoid homomorphisms that hit multiple mosaics from the variables in \(\bar{x}\), or from the variables in \(\bar{y}\).
Query avoidance is implemented by the \(f_{\text{in}}\) and \(f_{\text{out}}\) components. In brief and for CQs \(q(\bar{x})\) from concept abstractions, we consider any non-empty subset \(V\subsetneq\mathsf{var}(q)\) of variables and homomorphism \(h\) from \(q|_{V}\) to the current mosaic. We then have to avoid any homomorphism \(g\) from \(q\setminus q|_{V}\) that is compatible with \(h\) and hits at least one mosaic other than the current one. The choice of \(V\) decomposes \(q\) into remaining components, which are exactly the components of \(q\) w.r.t. \(V\) defined above. We choose one such component \(p\) and put \((p,h^{\prime})\) into \(f_{\text{out}}\), \(h^{\prime}\) the restriction of \(h\) to the variables in \(p\), to 'send' the information to other mosaics that this query is forbidden. The \(f_{\text{in}}\) component, in contrast, contains forbidden queries that we 'receive' from other mosaics.
We now formulate the additional conditions on mosaics. We require that \(M=((\mathcal{I}_{L})_{L\in\mathbf{A}_{\mathcal{O}}},\rho,f_{\text{in}},f_{ \text{out}})\) satisfies the following conditions, for all \(L\in\mathbf{A}_{\mathcal{O}}\):
1. the A-interpretation \((\mathbf{A}_{\mathcal{O}},\prec,(\mathcal{I}_{L})_{L\in\mathbf{A}_{\mathcal{O}}},\rho)\) satisfies all inclusions, refinements, and abstractions in \(\mathcal{O}\) with the possible exception of CIs of the form \(A\sqsubseteq_{L}\exists r.B\);
2. for all concept abstractions \(L^{\prime}\):\(A\) abstracts \(L\):\(q(\bar{x})\) in \(\mathcal{O}\), all non-empty \(V\subsetneq\mathsf{var}(q)\), and all homomorphisms \(h\) from \(q|_{V}\) to \(\mathcal{I}_{L}\): there is a component \(p\) of \(q\) w.r.t. \(V\) such that \((p,h|_{V\cap\mathsf{var}(p)})\in f_{\text{out}}(L)\);
3. for all role abstractions \(L^{\prime}\):\(R\) abstracts \(L\):\(q(\bar{x},\bar{y})\) in \(\mathcal{O}\), all non-empty \(V\subsetneq\mathsf{var}(q)\) with \(V\neq\bar{x}\) and \(V\neq\bar{y}\) (here we view \(\bar{x}\) and \(\bar{y}\) as sets), and all homomorphisms \(h\) from \(q|_{V}\) to \(\mathcal{I}_{L}\): there is a component \(p\) of \(q\) w.r.t. \(V\) such that \((p,h|_{V\cap\mathsf{var}(p)})\in f_{\text{out}}(L)\);
4. for all \((q,h)\in f_{\text{in}}(L)\), all \(V\subseteq\mathsf{var}(q)\), and all homomorphisms \(g\) from \(q|_{V}\) to \(\mathcal{I}_{L}\) that extend \(h\), there is a component \(p\) of \(q\) w.r.t. \(V\) such that \((p,g|_{V\cap\mathsf{var}(p)})\in f_{\text{out}}(L)\).
We next need a mechanism to interconnect mosaics. This is driven by concept names \(A\) and elements \(d\in A^{\mathcal{I}_{L}}\) such that \(A\sqsubseteq_{L}\exists R.B\in\mathcal{O}\) and \(d\) lacks a witness inside the mosaic. In principle, we would simply like to find a mosaic \(M^{\prime}\) that has some element \(e\) on level \(L\) such that \(e\in B^{\mathcal{I}^{M^{\prime}}}\) and an \(R\)-edge can be put between \(d\) and \(e\). The situation is complicated, however, by the presence of role refinements and role abstractions, which might enforce additional edges that link the two mosaics. We must also be careful to synchronize the \(f_{\text{in}},f_{\text{out}}\) components of the two involved mosaics across the connecting edges.
Consider mosaics \(M=((\mathcal{I}_{L})_{L\in\mathbf{A}_{\mathcal{O}}},\rho,f_{\text{in}},f_{\text{out}})\) and \(M^{\prime}=((\mathcal{I}_{L}^{\prime})_{L\in\mathbf{A}_{\mathcal{O}}},\rho^{\prime},f_{\text{in}}^{\prime},f_{\text{out}}^{\prime})\). An \(M,M^{\prime}\)_-edge_ is an expression \(R(d,d^{\prime})\) such that \(R\) is a role, \(d\in\Delta^{\mathcal{I}_{L}}\), and \(d^{\prime}\in\Delta^{\mathcal{I}_{L}^{\prime}}\) for some \(L\in\mathbf{A}_{\mathcal{O}}\). A set \(E\) of \(M,M^{\prime}\)-edges is an _edge candidate_ if the following conditions are satisfied:
1. \(R(d,e)\in E\) and \(L(d)=L\) implies \(S(d,e)\in E\), for all \(R\sqsubseteq_{L}S\in\mathcal{O}\);
2. if \(\exists R.A\sqsubseteq_{L}B\in\mathcal{O}\), \(R(d,d^{\prime})\in E\), and \(d^{\prime}\in A^{\mathcal{I}_{L}^{\prime}}\), then \(d\in B^{\mathcal{I}_{L}}\);
3. for all \(L\in\mathbf{A}_{\mathcal{O}}\), all \((q,h)\in f_{\text{out}}(L)\), where \(q=E_{q}\uplus q|_{\overline{V}}\) for \(V=\mathsf{dom}(h)\), and all functions \(g\) from \(\overline{V}\cap\mathsf{var}(E_{q})\) to \(\Delta^{\mathcal{I}_{L}^{\prime}}\) such that \(R(h(x),g(y))\in E\) for all \(R(x,y)\in E_{q}\), we have \((q|_{\overline{V}},g)\in f_{\text{in}}^{\prime}(L)\);
4. for all \(R(d,d^{\prime})\in E\) and all \(L\):\(q(\bar{x},\bar{y})\) refines \(L^{\prime}\):\(q_{R}(x,y)\in\mathcal{O}\) such that \(q=q|_{\bar{x}}\uplus E_{q}\uplus q|_{\bar{y}}\), \(q_{R}=C_{x}(x)\wedge R(x,y)\wedge C_{y}(y)\), \(d\in C_{x}^{\mathcal{I}_{L^{\prime}}}\), and \(d^{\prime}\in C_{y}^{\mathcal{I}^{\prime}_{L^{\prime}}}\): 1. \(\rho_{L}(d)\) and \(\rho_{L}^{\prime}(d^{\prime})\) are defined; 2.
5. for all role abstractions \(L^{\prime}\):\(R\) abstracts \(L\):\(q(\bar{x},\bar{y})\in\mathcal{O}\), where \(q=q|_{\bar{x}}\uplus E_{q}\uplus q|_{\bar{y}}\), all homomorphisms \(h\) from \(q|_{\bar{x}}\) to \(\mathcal{I}_{L}\), and all homomorphisms \(g\) from \(q|_{\bar{y}}\) to \(\mathcal{I}_{L}^{\prime}\) such that \(\{S(h(x),g(y))\mid S(x,y)\in E_{q}\}\subseteq E\), there are \(d\in\Delta^{\mathcal{I}_{L^{\prime}}}\) and \(d^{\prime}\in\Delta^{\mathcal{I}^{\prime}_{L^{\prime}}}\) with \(\rho_{L}(d)=h(\bar{x})\), \(\rho_{L}^{\prime}(d^{\prime})=g(\bar{y})\), and \(R(d,d^{\prime})\in E\);
6. Converses of Conditions 2-5 above that go from \(M^{\prime}\) to \(M\) instead of from \(M\) to \(M^{\prime}\); details are in the appendix.
Let \(\mathcal{M}\) be a set of mosaics. A mosaic \(M\) is _good in_\(\mathcal{M}\) if for all \(A\sqsubseteq_{L}\exists R.B\in\mathcal{O}\) and \(d\in(A\sqcap\neg\exists R.B)^{\mathcal{I}_{L}^{M}}\):
1. there is a mosaic \(M^{\prime}\in\mathcal{M}\), a \(d^{\prime}\in B^{\mathcal{I}_{L}^{M^{\prime}}}\), and an edge candidate \(E\) such that \(R(d,d^{\prime})\in E\).
The actual algorithm is now identical to that from the previous section. We first compute the set \(\mathcal{M}_{0}\) of all mosaics and then repeatedly and exhaustively eliminate mosaics that are not good. Let \(\mathcal{M}^{*}\) denote the set of mosaics at which this process stabilizes.
**Lemma 2**.: \(C_{0}\) _is \(L_{0}\)-satisfiable w.r.t. \(\mathcal{O}\) iff \(\mathcal{M}^{*}\) contains (i) a mosaic \(M\) with \(C_{0}^{\mathcal{I}_{L_{0}}^{M}}\neq\emptyset\) and (ii) a mosaic \(M\) with \(\Delta^{\mathcal{I}_{L}^{M}}\neq\emptyset\), for every \(L\) in \(\mathbf{A}_{\mathcal{O}}\)._
The algorithm thus returns 'satisfiable' if Conditions (i) and (ii) from Lemma 2 are satisfied and 'unsatisfiable' otherwise. It can be verified that the algorithm runs in double exponential time.
## 5 Lower Bounds
We have seen that the fragment \(\mathcal{ALCHI}^{\mathrm{abs}}[\mathrm{cr}]\) of \(\mathcal{ALCHI}^{\mathrm{abs}}\) which focusses on concept refinement is only ExpTime-complete. Here we show that all other fragments that contain only a single form of abstraction/refinement are 2ExpTime-hard, and consequently 2ExpTime-complete. This of course also provides a matching lower bound for Theorem 2--actually three rather different lower bounds, each one exploiting a different effect. All of our lower bounds apply already when \(\mathcal{ALCHI}\) is replaced with \(\mathcal{ALC}\) as the underlying DL.
### Role Abstraction: \(\mathcal{ALC}^{\mathrm{abs}}[\mathrm{ra}]\)
The 2ExpTime-hardness of satisfiability in \(\mathcal{ALCHI}^{\mathrm{abs}}\) is not entirely surprising given that we have built conjunctive queries into the logic and CQ evaluation on \(\mathcal{ALCI}\) knowledge bases is known to be 2ExpTime-hard [10]. In fact, this is already the case for the following _simple_ version of the latter problem: given an \(\mathcal{ALCI}\) ontology \(\mathcal{O}\), a concept name \(A_{0}\), and a Boolean CQ \(q\), decide whether \(\mathcal{I}\models q\) for all models \(\mathcal{I}\) of \(\mathcal{O}\) with \(A_{0}^{\mathcal{I}}\neq\emptyset\). We write \(\mathcal{O},A_{0}\models q\) if this is the case.
It is easy to reduce the (complement of the) simple CQ evaluation problem to satisfiability in \(\mathcal{ALCI}^{\mathsf{abs}}[\mathrm{ca}]\). Fix two abstraction levels \(L\prec L^{\prime}\), let \(\widehat{q}\) be the CQ obtained from \(q\) by dequantifying all variables, thus making all variables answer variables, and let \(\mathcal{O}^{\prime}\) be the set of all concept inclusions \(C\sqsubseteq_{L}D\) with \(C\sqsubseteq D\in\mathcal{O}\) and the concept abstraction
\[L^{\prime}{:}\bot\ \underline{\text{abstracts}}\ L{:}\widehat{q}.\]
### Role Refinement: \(\mathcal{ALC}^{\text{abs}}[\text{rr}]\)
While concept and role abstractions enable reductions from CQ evaluation, this does not seem to be the case for concept and role refinements. Indeed, we have seen in Section 4.1 that concept refinements do not induce 2ExpTime-hardness. Somewhat surprisingly, role refinements behave differently and are a source of 2ExpTime-hardness, though for rather different reasons than abstraction statements.
It is well-known that there is an exponentially space-bounded alternating Turing machine (ATM) that decides a 2ExpTime-complete problem and on any input \(w\) makes at most \(2^{2^{|w|}}\) steps [11]. We define ATMs in detail in the appendix and only note here that our ATMs have a one-side infinite tape, a dedicated accepting state \(q_{a}\) and rejecting state \(q_{r}\), no successor configuration if the state is \(q_{a}\) or \(q_{r}\), and exactly two successor configurations otherwise.
Let \(M=(Q,\Sigma,\Gamma,q_{0},\Delta)\) be a concrete such ATM with \(Q=Q_{\exists}\uplus Q_{\forall}\uplus\{q_{a},q_{r}\}\). We may assume w.l.o.g that \(M\) never attempts to move left when the head is positioned on the left-most tape cell. Let \(w=\sigma_{1}\cdots\sigma_{n}\in\Sigma^{*}\) be an input for \(M\). We want to construct an \(\mathcal{ALC}^{\text{abs}}[\text{rr}]\)-ontology \(\mathcal{O}\) and choose a concept name \(S\) and abstraction level \(L_{1}\) such that \(S\) is \(L_{1}\)-satisfiable w.r.t. \(\mathcal{O}\) iff \(w\in L(M)\). Apart from \(S\), which indicates the starting configuration, we use the following concept names:
* \(A_{\sigma}\), for each \(\sigma\in\Gamma\), to represent tape content;
* \(A_{q}\), for each \(q\in Q\), to represent state and head position;
* \(B_{q,\sigma,M}\) for \(q\in Q\), \(\sigma\in\Gamma,M\in\{L,R\}\), serving to choose a transition;
* \(H_{\leftarrow},H_{\rightarrow}\) indicating whether a tape cell is to the left or right of the head.
plus some auxiliary concept names whose purpose shall be obvious. We use the role name \(t\) for next tape cell and \(c_{1},c_{2}\) for successor configurations.
The ontology \(\mathcal{O}\) uses the abstraction levels \(\mathbf{A}=\{L_{1},\ldots,L_{n}\}\) with \(L_{i+1}\prec L_{i}\) for \(1\leq i<n\). While we are interested in \(L_{1}\)-satisfiability of \(S\), the computation of \(M\) is simulated on level \(L_{n}\). We start with generating an infinite computation tree on level \(L_{1}\):
\[S\sqsubseteq_{L_{1}}\exists c_{1}.N\sqcap\exists c_{2}.N\qquad N\sqsubseteq_{L_{1}}\exists c_{1}.N\sqcap\exists c_{2}.N.\]
In the generated tree, each configuration is represented by a single object. On levels \(L_{2},\ldots,L_{n}\), we generate similar trees where, however, configurations are represented by \(t\)-paths. The length of these paths doubles with every level and each node on a path is connected via \(c_{1}\) to the corresponding node in the path that represents the first successor configuration, and likewise for \(c_{2}\) and the second successor configuration. This is illustrated in Figure 2 where for simplicity we only show a first successor configuration and three abstraction levels. We use the following role refinements:
\[L_{i+1}\colon q(\bar{x},\bar{y})\ \underline{\mathsf{refines}}\ L_{i}\colon t(x,y)\qquad L_{i+1}\colon q_{j}(\bar{x},\bar{y})\ \underline{\mathsf{refines}}\ L_{i}\colon c_{j}(x,y)\]
for \(1\leq i<n\) and \(j\in\{1,2\}\), and where \(\bar{x}=x_{1}x_{2}\), \(\bar{y}=y_{1}y_{2}\) and
\[q(\bar{x},\bar{y}) =t(x_{1},x_{2})\wedge t(x_{2},y_{1})\wedge t(y_{1},y_{2})\] \[q_{j}(\bar{x},\bar{y}) =t(x_{1},x_{2})\wedge t(y_{1},y_{2})\wedge c_{j}(x_{1},y_{1})\wedge c _{j}(x_{2},y_{2}).\]
To make more precise what we want to achieve, let the \(m\)_-computation tree_, for \(m>0\), be the interpretation \(\mathcal{I}_{m}\) with
\[\Delta^{\mathcal{I}_{m}} =\{c_{1},c_{2}\}^{*}\cdot\{1,\ldots,m\}\] \[t^{\mathcal{I}_{m}} =\{(wi,wj)\mid w\in\{c_{1},c_{2}\}^{*},1\leq i<m,j=i+1\}\] \[c_{\ell}^{\mathcal{I}_{m}} =\{(wj,wc_{\ell}j)\mid w\in\{c_{1},c_{2}\}^{*},1\leq j\leq m\}\]
for \(\ell\in\{1,2\}\). It can be shown that for any model \(\mathcal{I}\) of the \(\mathcal{ALC}^{\text{abs}}[\text{rr}]\)-ontology \(\mathcal{O}\) constructed so far and for all \(i\in\{1,\ldots,n\}\), we must find a (homomorphic image of a) \(2^{i}\)-computation tree in the interpretation \(\mathcal{I}_{L_{i}}\). This crucially relies on the fact that ensembles cannot overlap. In Figure 2, for example, the role refinements for \(t\) and for \(c_{1}\) both apply on level \(L_{2}\), and for attaining the structure displayed on level \(L_{3}\) it is crucial that in these applications each object on level \(L_{2}\) refines into the same ensemble on level \(L_{3}\).
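For concreteness, the following sketch (illustrative only; it materialises a finite approximation of the infinite structure, cutting words off at a given depth) builds the domain and the role extensions of the \(m\)-computation tree as defined above:

```python
from itertools import product

def computation_tree(m, depth):
    """Finite approximation of the m-computation tree I_m: elements are pairs
    (w, i) with w a word over {c1, c2} of length <= depth and 1 <= i <= m."""
    words = [p for d in range(depth + 1) for p in product(("c1", "c2"), repeat=d)]
    domain = {(w, i) for w in words for i in range(1, m + 1)}
    t = {((w, i), (w, i + 1)) for w in words for i in range(1, m)}
    c = {ell: {((w, j), (w + (ell,), j))
               for w in words if len(w) < depth      # stay inside the approximation
               for j in range(1, m + 1)}
         for ell in ("c1", "c2")}
    return domain, t, c

# Each word w carries a t-path of length m; c1/c2 connect corresponding path nodes.
domain, t, c = computation_tree(m=2, depth=1)
print(len(domain), len(t), len(c["c1"]))   # 6 nodes, 3 t-edges, 2 c1-edges
```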
On level \(L_{n}\), we thus find a \(2^{n}\)-computation tree which we use to represent the computation of \(M\) on input \(w\). To start, the concept name \(S\) is copied down from the root of the \(1\)-computation tree on level \(L_{1}\) to that of the \(2^{n}\)-computation tree on level \(L_{n}\). To achieve this, we add a copy of the above role refinements for \(c_{1}\), but now using the CQ
\[q_{1}(\bar{x},\bar{y})=S(x_{1})\wedge c_{1}(x_{1},y_{1})\wedge c_{1}(x_{2},y_{2}).\]
We next describe the initial configuration:
\[S\sqsubseteq_{L_{n}}A_{q_{0}}\sqcap A_{\sigma_{1}}\sqcap\forall t.A_{\sigma_{2} }\sqcap\cdots\sqcap\forall t^{n-1}.(A_{\sigma_{n}}\sqcap B_{\rightarrow})\] \[B_{\rightarrow}\sqsubseteq_{L_{n}}\forall t.(A_{\square}\sqcap B_{ \rightarrow})\]
For existential states, we consider one of the two possible successor configurations:
\[A_{q}\sqcap A_{\sigma}\sqsubseteq_{L_{n}}(\forall c_{1}.B_{q^{\prime},\sigma^{ \prime},M^{\prime}})\sqcup(\forall c_{2}.B_{\bar{q},\bar{\sigma},\bar{M}})\]
for all \(q\in Q_{\exists}\) and \(\sigma\in\Gamma\) such that \(\Delta(q,\sigma)=\{(q^{\prime},\sigma^{\prime},M^{\prime}),(\bar{q},\bar{\sigma},\bar{M})\}\). For universal states, we use both successors:
\[A_{q}\sqcap A_{\sigma}\sqsubseteq_{L_{n}}(\forall c_{1}.B_{q^{\prime},\sigma^{ \prime},M^{\prime}})\sqcap(\forall c_{2}.B_{\bar{q},\bar{\sigma},\bar{M}})\]
for all \(q\in Q_{\forall}\) and \(\sigma\in\Gamma\) such that \(\Delta(q,\sigma)=\{(q^{\prime},\sigma^{\prime},M^{\prime}),(\bar{q},\bar{\sigma},\bar{M})\}\). We next implement the transitions:
\[B_{q,\sigma,M}\sqsubseteq_{L_{n}}A_{\sigma}\quad\exists t.B_{q,\sigma,L} \sqsubseteq_{L_{n}}A_{q}\] \[B_{q,\sigma,R}\sqsubseteq_{L_{n}}\forall t.A_{q}\]
Figure 2: Dotted lines indicate refinement.

for all \(q\in Q\), \(\sigma\in\Gamma\), and \(M\in\{L,R\}\). We mark cells that are not under the head:
\[A_{q}\sqsubseteq_{L_{n}}\forall t.H_{\rightarrow}\qquad\exists t.A_{q}\sqsubseteq_{L_{n}}H_{\leftarrow}\] \[H_{\rightarrow}\sqsubseteq_{L_{n}}\forall t.H_{\rightarrow}\qquad\exists t.H_{\leftarrow}\sqsubseteq_{L_{n}}H_{\leftarrow}\]
for all \(q\in Q\). Such cells do not change:
\[(H_{\leftarrow}\sqcup H_{\rightarrow})\sqcap A_{\sigma}\sqsubseteq_{L_{n}}\forall c_{i}.A_{\sigma}\]
for all \(\sigma\in\Gamma\) and \(i\in\{1,2\}\). State, content of tape, and head position must be unique:
\[A_{q}\sqcap A_{q^{\prime}}\sqsubseteq_{L_{n}}\bot\qquad A_{\sigma}\sqcap A_{\sigma^{\prime}}\sqsubseteq_{L_{n}}\bot\] \[(H_{\leftarrow}\sqcup H_{\rightarrow})\sqcap A_{q}\sqsubseteq_{L_{n}}\bot\]
for all \(q,q^{\prime}\in Q\) and \(\sigma,\sigma^{\prime}\in\Gamma\) with \(q\neq q^{\prime}\) and \(\sigma\neq\sigma^{\prime}\). Finally, all followed computation paths must be accepting:
\[A_{q_{r}}\sqsubseteq_{L_{n}}\bot.\]
This finishes the construction of \(\mathcal{O}\).
**Lemma 3**.: \(S\) _is \(L_{1}\)-satisfiable w.r.t. \(\mathcal{O}\) iff \(w\in L(M)\)._
We have thus obtained the announced result.
**Theorem 5**.: _Satisfiability in \(\mathcal{ALC}^{\mathsf{abs}}[\mathrm{rr}]\) is 2ExpTime-hard._
## 6 Undecidability
One might be tempted to think that the decidability of \(\mathcal{ALC}\mathcal{HI}^{\mathsf{abs}}\) is clear given that only a finite number of abstraction levels can be mentioned in an ontology. However, achieving decidability of DLs with abstraction and refinement requires some careful design choices. In this section, we consider three seemingly harmless extensions of \(\mathcal{ALC}\mathcal{HI}^{\mathsf{abs}}\) and show that each of them results in undecidability. This is in fact already the case for \(\mathcal{EL}^{\mathsf{abs}}\) where the underlying DL \(\mathcal{ALC}\mathcal{HI}\) is replaced with \(\mathcal{EL}\).
### Basic Observations
We make some basic observations regarding the DL \(\mathcal{EL}^{\mathsf{abs}}\) and its fragments. In classical \(\mathcal{EL}\), concept satisfiability is not an interesting problem because every concept is satisfiable w.r.t. every ontology. This is not the case in \(\mathcal{EL}^{\mathsf{abs}}\) where we can express concept inclusions of the form \(C\sqsubseteq_{L}\bot\) with \(C\) an \(\mathcal{EL}\)-concept, possibly at the expense of introducing additional abstraction levels. More precisely, let \(L^{\prime}\) be the unique abstraction level with \(L\prec L^{\prime}\) if it exists and a fresh abstraction level otherwise. Then \(C\sqsubseteq_{L}\bot\) can be simulated by the following CI and concept abstraction:
\[C\sqsubseteq_{L}\exists r_{C}.\exists r_{C}.\top\qquad L^{\prime}{:}\top\ \underline{\mathsf{abstracts}}\ L{:}r_{C}(x,y)\]
where \(r_{C}\) is a fresh role name. Note that this again relies on the fact that ensembles cannot overlap, and thus an \(r_{C}\)-path of length two in level \(L\) results in unsatisfiability. The same can be achieved by using a role abstraction in place of the concept abstraction.
To prove our undecidability results, it will be convenient to have available concept inclusions of the form \(C\sqsubseteq_{L}\forall r.D\) with \(C,D\)\(\mathcal{EL}\)-concepts. Let \(L^{\prime}\) be a fresh abstraction level. Then \(C\sqsubseteq_{L}\forall r.D\) can be simulated by the following role refinement and concept abstraction:
\[L^{\prime}{:}r(x_{1},y_{1})\wedge A(y_{1})\ \underline{\mathsf{refines}}\ L{:}C(x)\wedge r(x,y)\qquad L{:}D\ \underline{\mathsf{abstracts}}\ L^{\prime}{:}A(x)\]
where \(A\) is a fresh concept name. It is easy to see that the same can be achieved with a role abstraction in place of the concept abstraction.
In the following, we thus use inclusions of the forms \(C\sqsubseteq_{L}\bot\) and \(C\sqsubseteq_{L}\forall r.D\) in the context of \(\mathcal{EL}^{\mathsf{abs}}\) and properly keep track of the required types of abstraction and refinement statements.
### Repetition-Free Tuples
In the semantics of \(\mathcal{ALC}\mathcal{HI}^{\mathsf{abs}}\) as defined in Section 3, ensembles are tuples in which elements may occur multiple times. It would arguably be more natural to define ensembles to be repetition-free tuples. We refer to this version of the semantics as the _repetition-free_ semantics.
If only concept and role refinement are admitted, then there is no difference between satisfiability under the original semantics and under the repetition-free semantics. In fact, any model of \(\mathcal{O}\) under the original semantics can be converted into a model of \(\mathcal{O}\) under repetition-free semantics by duplicating elements. This gives the following.
**Proposition 1**.: _For every \(\mathcal{ALC}\)-concept \(C\), abstraction level \(L\), and \(\mathcal{ALC}\mathcal{HI}^{\mathsf{abs}}[\mathrm{cr},\mathrm{rr}]\)-ontology \(\mathcal{O}\): \(C\) is \(L\)-satisfiable w.r.t. \(\mathcal{O}\) iff \(C\) is \(L\)-satisfiable w.r.t. \(\mathcal{O}\) under the repetition-free semantics._
The situation changes once we admit abstraction.
**Theorem 6**.: _Under the repetition-free semantics, satisfiability is undecidable in \(\mathcal{ALC}^{\mathsf{abs}}[\mathrm{ca}]\), \(\mathcal{ALC}^{\mathsf{abs}}[\mathrm{ra}]\), \(\mathcal{EL}^{\mathsf{abs}}[\mathrm{rr},\mathrm{ca}]\), and \(\mathcal{EL}^{\mathsf{abs}}[\mathrm{rr},\mathrm{ra}]\)._
In the following, we prove undecidability for satisfiability in \(\mathcal{ALC}^{\mathsf{abs}}[\mathrm{ca}]\). The result for \(\mathcal{ALC}^{\mathsf{abs}}[\mathrm{ra}]\) is a minor variation and the results for \(\mathcal{EL}^{\mathsf{abs}}\) are obtained by applying the observations from Section 6.1.
We reduce the complement of the halting problem for deterministic Turing machines (DTMs) on the empty tape. Assume that we are given a DTM \(M=(Q,\Sigma,\Gamma,q_{0},\delta)\). As in Section 5.3, we assume that \(M\) has a one-side infinite tape and never attempts to move left when the head is on the left end of the tape. We also assume that there is a dedicated halting state \(q_{h}\in Q\).
We want to construct an \(\mathcal{ALC}^{\mathsf{abs}}[\mathrm{ca}]\)-ontology \(\mathcal{O}\) and choose a concept name \(S\) and abstraction level \(L\) such that \(S\) is \(L\)-satisfiable w.r.t. \(\mathcal{O}\) iff \(M\) does not halt on the empty tape. We use essentially the same concept and role names as in Section 5.3, except that only a single role name \(c\) is used for transitions to the (unique) next configuration. Computations are represented in the form of a grid as shown on the left-hand side of Figure 3, where the concept names \(X_{i}\) must be disregarded as they belong to a different reduction (and so do the queries). We use two abstraction levels \(L\) and \(L^{\prime}\) with \(L\prec L^{\prime}\). The computation of \(M\) is represented on level \(L\).
We first generate an infinite binary tree in which every node has one \(t\)-successor and one \(c\)-successor:
\[\top\sqsubseteq_{L}\exists t.\top\sqcap\exists c.\top\]
To create the desired grid structure, it remains to enforce that grid cells close, that is, the \(t\)-successor of a \(c\)-successor of
any node coincides with the \(c\)-successor of the \(t\)-successor of that node. We add the concept abstraction
\[L^{\prime}\colon\bot\mbox{\ abstracts }L\colon q(\bar{x})\mbox{ where}\] \[q(\bar{x})=c(x_{1},x_{2})\wedge t(x_{1},x_{3})\wedge c(x_{3},x_{4 })\wedge t(x_{2},x_{4}^{\prime}).\]
The idea is that any non-closing grid cell admits a repetition-free answer to \(q\) on \(\mathcal{I}_{L}\), thus resulting in unsatisfiability. If all grid cells close, there will still be answers, but all of them are repetitive. The above abstraction alone, however, does not suffice to implement this idea. It still admits, for instance, a non-closing grid cell in which the two left elements have been identified. We thus need to rule out such unintended identifications and add the concept abstraction \(L^{\prime}\colon\bot\mbox{\ abstracts }L\colon q\) for the following six CQs \(q\):
\[t(x_{1},x_{2})\wedge c(x_{1},x_{2})\qquad t(x_{1},x_{2})\wedge c (x_{2},x_{1})\] \[t(x_{1},x_{2})\wedge c(x_{1},x_{3})\wedge t(x_{3},x_{2})\qquad c (x_{1},x_{1})\] \[t(x_{1},x_{2})\wedge c(x_{1},x_{3})\wedge c(x_{2},x_{3})\qquad t (x_{1},x_{1})\]
The rest of the reduction is now very similar to that given in Section 5.3, details are in the appendix.
### DAG Semantics
Our semantics requires abstraction levels to be organized in a tree. While this is very natural, admitting a DAG structure might also be useful. This choice, which we refer to as the _DAG semantics_, leads to undecidability.
**Theorem 7**.: _Under the DAG semantics, satisfiability is undecidable in \(\mathcal{ALC}^{\mathsf{abs}}[\mathrm{ca,cr}]\) and \(\mathcal{EL}^{\mathsf{abs}}[\mathrm{ca,cr,rr}]\)._
The result is again proved by a reduction from (the complement of) the halting problem for DTMs. In fact, the reduction differs from that in Section 6.2 only in how the grid is constructed and thus we focus on that part. We present the reduction for \(\mathcal{ALC}^{\mathsf{abs}}[\mathrm{ca,cr}]\).
Assume that we are given a DTM \(M=(Q,\Sigma,\Gamma,q_{0},\delta)\). We want to construct an ontology \(\mathcal{O}\) and choose a concept name \(S\) and abstraction level \(L\) such that \(S\) is \(L\)-satisfiable w.r.t. \(\mathcal{O}\) iff \(M\) does not halt on the empty tape. We use abstraction levels \(L,L_{1},L_{2},L_{3},L_{4}\) with \(L\prec L_{i}\) for all \(i\in\{1,\ldots,4\}\). The computation of \(M\) is simulated on level \(L\). We start with generating an infinite \(t\)-path with outgoing infinite \(c\)-paths from every node:
\[S\sqsubseteq_{L}\exists t.A_{t}\qquad A_{t}\sqsubseteq_{L}\exists t.A_{t}\] \[A_{t}\sqsubseteq_{L}\exists c.A_{c}\qquad A_{c}\sqsubseteq_{L} \exists c.A_{c}.\]
In principle, we would like to add the missing \(t\)-links using the following concept abstraction and refinement:
\[L_{1}\colon U_{1}\mbox{ abstracts }L\colon q\]
\[L\colon q\wedge t(x_{3},x_{4})\mbox{ refines }L_{1}\colon U_{1}\mbox{ where}\]
\[q=c(x_{1},x_{3})\wedge t(x_{1},x_{2})\wedge c(x_{2},x_{4}).\]
This would then even show undecidability under the original semantics, but it does not work because it creates overlapping ensembles and thus simply results in unsatisfiability. We thus use the four abstraction levels \(L_{1},\ldots,L_{4}\) in place of only \(L_{1}\). This results in different kinds of ensembles on level \(L\), one for each level \(L_{i}\), and an \(L_{i}\)-ensemble can overlap with an \(L_{j}\)-ensemble if \(i\neq j\). We label the grid with concept names \(X_{1},\ldots,X_{4}\) as shown in Figure 3, using CIs
\[X_{1}\sqsubseteq_{L}\forall c.X_{3}\sqcap\forall t.X_{2}\qquad X_{2}\sqsubseteq_{L}\forall c.X_{4}\sqcap\forall t.X_{1}\]
\[X_{3}\sqsubseteq_{L}\forall c.X_{1}\qquad X_{4}\sqsubseteq_{L}\forall c.X_{2}\qquad S\sqsubseteq_{L}X_{1}.\]
We define four variations \(q_{1},\ldots,q_{4}\) of the above CQ \(q\), as shown on the right-hand side of Figure 3, and use the following concept abstraction and refinement, for \(i\in\{1,\ldots,4\}\):
\[L_{i}\colon U_{i}\mbox{ abstracts }L\colon q_{i}\]
\[L\colon q_{i}\wedge t(x_{3},x_{4})\mbox{ refines }L_{i}\colon U_{i}.\]
It can be verified that this eliminates overlapping ensembles and indeed generates a grid.
### Quantified Variables
The final variation that we consider is syntactic rather than semantic: we admit quantified variables in CQs in abstraction and refinement statements.
**Theorem 8**.: _In the extension with quantified variables, satisfiability is undecidable in \(\mathcal{ALC}^{\mathsf{abs}}[\mathrm{ca,cr}]\) and \(\mathcal{EL}^{\mathsf{abs}}[\mathrm{ca,cr,rr}]\)._
We use a DTM reduction that follows the same lines as the previous reduction and only explain how to generate a grid. We again start with an infinite \(t\)-path with outgoing infinite \(c\)-paths from every node. In the previous reduction, the main issue when adding the missing \(t\)-links was that a naive implementation creates overlapping ensembles. It is here that quantified variables help since they allow us to speak about elements without forcing them to be part of an ensemble. We use the following concept abstraction and refinement:
\[L_{1}\colon U_{1}\mbox{ abstracts }L\colon q\]
\[L\colon q\wedge t(x_{3},x_{4})\mbox{ refines }L_{1}\colon U_{1}\mbox{ where}\]
\[q=\exists x_{1}\exists x_{2}\,c(x_{1},x_{3})\wedge t(x_{1},x_{2})\wedge c(x_{2},x_{4}).\]
## 7 Conclusion
We have introduced DLs that support multiple levels of abstraction and include operators based on CQs for relating these levels. As future work, it would be interesting to analyse the complexity of abstraction DLs based on Horn DLs such as \(\mathcal{EL}\), \(\mathcal{ELI}\) and Horn-\(\mathcal{ALCI}\). It would also be interesting to design an ABox formalism suitable for abstraction DLs, and to use such DLs for ontology-mediated querying. Finally, our work leaves open some decidability questions such as for \(\mathcal{ALC}^{\mathsf{abs}}[\mathrm{cr}]\) and \(\mathcal{EL}^{\mathsf{abs}}[\mathrm{cr},\mathrm{ca}]\) under the DAG semantics and with quantified variables.
Figure 3: Grid structure and queries for the DAG Semantics.
## Acknowledgments
The research reported in this paper has been supported by the German Research Foundation DFG, as part of Collaborative Research Center (Sonderforschungsbereich) 1320 Project-ID 329551904 "EASE - Everyday Activity Science and Engineering", University of Bremen ([http://www.ease-crc.org/](http://www.ease-crc.org/)). The research was conducted in subproject "P02 - Ontologies with Abstraction".
This work is partly supported by BMBF (Federal Ministry of Education and Research) in DAAD project 57616814 (SECAI, School of Embedded Composite AI) as part of the program Konrad Zuse Schools of Excellence in Artificial Intelligence.
|
2302.00931 | Linear preservers on idempotents of Fourier algebras | In this article, we give a representation of bounded complex linear operators
which preserve idempotent elements on the Fourier algebra of a locally compact
group. When such an operator is moreover positive or contractive, we show that
the operator is induced by either a continuous group homomorphism or a
continuous group anti-homomorphism. If the groups are totally disconnected,
bounded homomorphisms on the Fourier algebra can be realised by the idempotent
preserving operators. | Ying-Fen Lin, Shiho Oi | 2023-02-02T08:12:15Z | http://arxiv.org/abs/2302.00931v1 | # Linear Preservers on Idempotents of Fourier Algebras
###### Abstract.
In this article, we give a representation of bounded complex linear operators which preserve idempotent elements on the Fourier algebra of a locally compact group. When such an operator is moreover positive or contractive, we show that the operator is induced by either a continuous group homomorphism or a continuous group anti-homomorphism. If the groups are totally disconnected, bounded homomorphisms on the Fourier algebra can be realised by the idempotent preserving operators.
Key words and phrases: Fourier algebras, idempotents, linear preservers, quotient groups. 2020 Mathematics Subject Classification: 47B48, 47B49, 43A22.
## 1. Introduction
Let \(G\) be a locally compact Hausdorff group. The Fourier-Stieltjes algebra \(B(G)\) and the Fourier algebra \(A(G)\) of \(G\) were introduced by Eymard in his celebrated paper [12]. Recall that \(B(G)\) is the linear span of all continuous positive definite functions on \(G\); as a Banach space, \(B(G)\) is naturally isometric to the predual of \(W^{*}(G)\), the von Neumann algebra generated by the universal representation \(\omega_{G}\) of \(G\). Moreover, it is a commutative Banach \(*\)-algebra with respect to pointwise multiplication and complex conjugation. The Fourier algebra \(A(G)\) is the closed ideal of \(B(G)\) generated by the functions with compact support. As a Banach space, \(A(G)\) is isometric to the predual of the group von Neumann algebra \(\operatorname{VN}(G)\), the von Neumann algebra generated by the left regular representation \(\lambda_{G}\) of \(G\). It is well known that \(A(G)\) is regular and semisimple, and that the Fourier and the Fourier-Stieltjes algebras are both subalgebras of \(C_{b}(G)\), the algebra of continuous bounded functions on \(G\).
Takesaki and Tatsuuma in [25] showed that there is a one-to-one correspondence between compact subgroups of \(G\) and non-zero right invariant closed self-adjoint subalgebras of \(A(G)\). As a refinement, Bekka, Lau and Schlichting in [3] studied non-zero, closed, invariant \(*\)-subalgebras of \(A(G)\). They showed that these spaces are the Fourier algebras \(A(G/K)\) of the quotient group \(G/K\) for some compact normal subgroup \(K\) of \(G\). On the other hand, Forrest [13] introduced the Fourier algebra \(A(G/K)\) of the left coset space \(G/K\), where \(K\) is a
of idempotent elements in Fourier-Stieltjes and Fourier algebras, for our purpose we will focus solely on operators which preserve idempotents.
In the rich literature of linear preservers, there are many works that study linear maps \(T\) on spaces \(X\) which preserve some subsets \(S\) of \(X\), i.e., \(T(S)\subset S\). Dieudonne in [9] studied semi-linear maps on \(M_{n}(\mathbb{K})\), the algebra of \(n\times n\) matrices over a field \(\mathbb{K}\), which preserve the set of all singular matrices. After that, many mathematicians considered linear maps on \(M_{n}(\mathbb{K})\) that preserve subsets of matrices with different properties (e.g. [19, 23, 10, 4] to name a few). In [5], it is shown that every complex linear map \(T\) on \(M_{n}(\mathbb{C})\) which preserves the set of all idempotents is either an inner automorphism or an inner anti-automorphism. In addition, in [6] linear maps on \(M_{n}(\mathbb{C})\) which send potent matrices (that is, matrices \(A\) satisfy \(A^{r}=A\) for some integer \(r\geq 2\)) to potent matrices were characterised. Since then, the studies of idempotent preserving maps have attracted considerable interest, see, e.g. [11, 14]. Recently, in [21] the authors proved that every additive map from the rational span of Hermitian idempotents in a von Neumann algebra into the rational span of Hermitian idempotents in a C*-algebra can be extended to a Jordan \(*\)-homomorphism.
In this paper, we study bounded linear operators from \(A(G)\) into \(B(H)\) which send idempotents to idempotents. We show that such an operator will give rise to an algebraic homomorphism on \(A_{I}(G)\). The algebra \(A_{I}(G)\) will be our main object of study, namely, we will characterise linear mappings defined on the Fourier algebra \(A(G)\) or on \(A_{I}(G)\) which preserve \(I(G)\). Moreover, we show that when the groups are totally disconnected, idempotent preserving operators will recover algebraic homomorphisms on the Fourier algebra.
## 2. Main results
Let \(G\) be a locally compact Hausdorff group and \(K\) be a closed subgroup of \(G\). We will denote by \(G/K\) the homogeneous space of left cosets of \(K\). Let
\[B(G:K):=\{u\in B(G):u(xk)=u(x)\,\text{for all }x\in G,k\in K\},\]
this is, functions in \(B(G)\) which are constant on cosets of \(K\), and
\[A(G:K):=\{u\in B(G:K):q(\text{supp}(u))\text{ is compact in }G/K\}^{-B(G)},\]
where \(\text{supp}(u)\) is the support of \(u\) in \(G\) and \(q\) is the canonical quotient map from \(G\) to \(G/K\). If furthermore \(K\) is a normal subgroup, by [13, Proposition 3.2] we have that \(B(G:K)\) and \(A(G:K)\) are isometrically isomorphic to the Fourier-Stieltjes and the Fourier algebras \(B(G/K)\) and \(A(G/K)\), respectively. Note that \(A(G:K)\cap A(G)\neq\{0\}\) if and only if \(K\) is compact.
Let \(e\) be the identity of the group \(G\). We denote the connected component of \(e\) by \(G_{e}\), which is a closed normal subgroup of \(G\); thus, \(G/G_{e}\) is a totally disconnected locally compact group. The following result about the algebra \(A_{I}(G)\) generated by idempotents of \(A(G)\) in relation with \(A(G:G_{e})\) was given in [18]; for completeness, we give a short proof here.
**Proposition 2.1**.: _[_18_, Proposition 1.1(ii)]_ _If the connected component \(G_{e}\) is compact then \(A_{I}(G)=A(G:G_{e})\), this is, \(A_{I}(G)\) consists of all functions in \(A(G)\) which are constant on cosets of \(G_{e}\). In particular, if \(G_{e}\) is not compact then \(A_{I}(G)=\{0\}\)._
Proof.: Let \(q_{G}:G\to G/G_{e}\) be the quotient map onto \(G/G_{e}\). Since \(G_{e}\) is compact, via \(u\mapsto u\circ q_{G}\) we have that \(A(G/G_{e})\) is isometrically isomorphic to \(A(G:G_{e})\), which is a closed subalgebra of \(A(G)\). Thus, \(A_{I}(G)=\overline{\operatorname{span}}\{1_{Y}:Y\in\Omega_{\text{o}}^{\text{c}}(G)\}\subseteq A(G:G_{e})\). Conversely, since \(G/G_{e}\) is totally disconnected, the span of the idempotents of \(A(G/G_{e})\) is dense in \(A(G/G_{e})\) [13, Theorem 5.3]. Moreover, \(A(G/G_{e})\) is isomorphic to \(A(G:G_{e})\); thus \(A(G:G_{e})\) is generated by idempotents of \(A(G)\), so \(A(G:G_{e})\subseteq A_{I}(G)\).
If the Fourier algebra contains non-trivial idempotents, that is, if the connected component \(G_{e}\) is compact, then by Proposition 2.1 there is an isometric isomorphism from \(A_{I}(G)\) onto \(A(G/G_{e})\). More precisely, this induces an isometric isomorphism \(\varphi_{G}:A_{I}(G)\to A(G/G_{e})\) given by
\[\varphi_{G}(f)(q_{G}(a))=f(a) \tag{1}\]
for any \(f\in A_{I}(G)\) and \(a\in G\), where \(q_{G}:G\to G/G_{e}\) is the quotient map onto \(G/G_{e}\).
### Idempotent preserving maps with \(T(I(G))\subset I_{B}(H)\)
Let \(G\) and \(H\) be two locally compact groups. We consider a bounded complex linear map \(T:A(G)\to B(H)\) which satisfies
\[T(I(G))\subset I_{B}(H). \tag{2}\]
For any \(f\in\operatorname{span}\{1_{Y}:Y\in\Omega_{\text{o}}^{\text{c}}(G)\}\), there exists \(\alpha_{i}\in\mathbb{C}\) and \(Y_{i}\in\Omega_{\text{o}}^{\text{c}}(G)\) such that \(f=\Sigma_{k=1}^{n}\alpha_{k}1_{Y_{k}}\). Thus, we have \(Tf=\Sigma_{k=1}^{n}\alpha_{k}T1_{Y_{k}}\in\operatorname{span}\{1_{Y}:Y\in \Omega_{\text{o}}(H)\}\subset B(H)\). Let us recall that \(A_{I}(G)=\overline{\operatorname{span}}\{1_{Y}:Y\in\Omega_{\text{o}}^{\text{c }}(G)\}\) and \(B_{I}(H)=\overline{\operatorname{span}}\{1_{Y}:Y\in\Omega_{\text{o}}(H)\}\). Since \(T\) is a bounded map, we obtain \(T(A_{I}(G))\subseteq B_{I}(H)\subset B(H)\).
Our aim is to obtain a representation of such a map \(T\) on \(A_{I}(G)\). If \(I(G)=\{0\}\), then \(A_{I}(G)=\{0\}\). Since \(T\) is complex linear, we have \(T=0\) on \(A_{I}(G)\). Thus without loss of generality, we can assume that the Fourier algebra \(A(G)\) has non-zero idempotent elements. Hence, the connected component \(G_{e}\) is always a compact normal subgroup of \(G\). On the other hand, we define the following map which will be used in the sequel.
**Definition 2.2**.: _Let \(G\) be a locally compact Hausdorff group. Using the axiom of choice, let \(S\) be a set of representatives of the cosets of \(G/G_{e}\), that is \(G=\bigsqcup_{a\in S}aG_{e}\). Then we define a map \([\ \cdot\ ]_{G/G_{e}}\) from \(G/G_{e}\) onto \(S\) by_
\[[aG_{e}]_{G/G_{e}}=a\]
_for any \(a\in S\)._
We first have the following observations concerning the operator satisfying (2).
**Lemma 2.3**.: _The map \(T\) preserves the disjointness of idempotents. This is, \(Tf\cdot Tg=0\) for any \(f,g\in I(G)\) with \(f\cdot g=0\)._
Proof.: Let \(f,g\in I(G)\) be such that \(f\cdot g=0\). Then \((f+g)^{2}=f^{2}+2fg+g^{2}=f+g\), so \(f+g\in I(G)\). By the assumption, \(Tf\), \(Tg\) and \(T(f+g)=Tf+Tg\) all lie in \(I_{B}(H)\). Hence \(Tf+Tg=(Tf+Tg)^{2}=Tf+2\,Tf\cdot Tg+Tg\), which gives \(Tf\cdot Tg=0\).
**Definition 2.4**.: _We define \(\Phi:A(G/G_{e})\to B(H)\) by_
\[\Phi(f)=T\circ\varphi_{G}^{-1}(f)\]
_for any \(f\in A(G/G_{e})\), where \(\varphi_{G}\) is given in (1)._
Then \(\Phi\) is a bounded complex linear operator from \(A(G/G_{e})\) into \(B(H)\). In order to achieve our main result, we consider the dual map \(\Phi^{*}:W^{*}(H)\to\operatorname{VN}(G/G_{e})\) and have the following lemmas.
**Lemma 2.5**.: _Let \(\lambda\in\operatorname{VN}(G/G_{e})\) and \(a\in G/G_{e}\). Suppose that \(a\in\operatorname{supp}\lambda\). Then for every neighbourhood \(V\) of \(a\) in \(G/G_{e}\), there exists \(h\in I(G/G_{e})\) such that \(\operatorname{supp}h\subset V\) and \(\langle\lambda,h\rangle\neq 0\)._
Proof.: Since \(G/G_{e}\) is totally disconnected, every neighbourhood of the identity contains an open compact subgroup. As \(a^{-1}V\) is a neighbourhood of the identity, there exists an open compact subgroup \(G_{a}\) in \(G/G_{e}\) such that \(G_{a}\subset a^{-1}V\). Thus \(aG_{a}\subset V\). Since \(aG_{a}\) is a compact open coset in \(G/G_{e}\), we have that \(1_{aG_{a}}\in A(G/G_{e})\) is an idempotent with norm \(1\). Since \(a\in\operatorname{supp}\lambda\), there is \(g\in A(G/G_{e})\) such that \(\operatorname{supp}g\subset aG_{a}\) and \(\langle\lambda,g\rangle\neq 0\). Put \(\delta=|\langle\lambda,g\rangle|\). As \(\varphi_{G}^{-1}(g)\in A_{I}(G)\), there are \(\alpha_{i}\in\mathbb{C}\) and \(f_{i}\in I(G)\) such that \(\|\varphi_{G}^{-1}(g)-\sum_{i=1}^{n}\alpha_{i}f_{i}\|<\delta/\|\lambda\|\) for some \(n\in\mathbb{N}\). Since \(\varphi_{G}\) is an isometric isomorphism, we have \(\|g-\sum_{i=1}^{n}\alpha_{i}\varphi_{G}(f_{i})\|<\delta/\|\lambda\|\) and \(\varphi_{G}(f_{i})\in I(G/G_{e})\). Then we obtain
\[1_{aG_{a}}(g-\sum_{i=1}^{n}\alpha_{i}\varphi_{G}(f_{i}))=g-\sum_{i=1}^{n} \alpha_{i}1_{aG_{a}}\varphi_{G}(f_{i}),\]
thus,
\[\|g-\sum_{i=1}^{n}\alpha_{i}1_{aG_{a}}\varphi_{G}(f_{i})\|\leq\|g-\sum_{i=1}^{ n}\alpha_{i}\varphi_{G}(f_{i})\|<\frac{\delta}{\|\lambda\|}.\]
Suppose for every \(1\leq i\leq n\), we have \(\langle\lambda,1_{aG_{a}}\varphi_{G}(f_{i})\rangle=0\). Then
\[|\langle\lambda,g\rangle| =|\langle\lambda,g\rangle-\sum_{i=1}^{n}\alpha_{i}\langle\lambda,1 _{aG_{a}}\varphi_{G}(f_{i})\rangle|\] \[=|\langle\lambda,(g-\sum_{i=1}^{n}\alpha_{i}1_{aG_{a}}\varphi_{G} (f_{i}))\rangle|\] \[\leq\|\lambda\|\|g-\sum_{i=1}^{n}\alpha_{i}1_{aG_{a}}\varphi_{G} (f_{i})\|\] \[<\|\lambda\|\frac{\delta}{\|\lambda\|}=\delta.\]
This implies that \(|\langle\lambda,g\rangle|<\delta\), which is a contradiction. Therefore, there is an \(i_{0}\in\{1,\cdots,n\}\) such that
\[\langle\lambda,1_{aG_{a}}\varphi_{G}(f_{i_{0}})\rangle\neq 0.\]
We also have \(\operatorname{supp}(1_{aG_{a}}\varphi_{G}(f_{i_{0}}))\subset V\) and \(1_{aG_{a}}\varphi_{G}(f_{i_{0}})\in I(G/G_{e})\), the proof is thus completed.
**Proposition 2.6**.: _For any \(a\in H\), there exist uniquely \(b\in G/G_{e}\) and \(\alpha\in\mathbb{C}\) such that \(\Phi^{*}(\omega_{H}(a))=\alpha\lambda_{G/G_{e}}(b)\)._
Proof.: Suppose there were \(b_{1},b_{2}\in G/G_{e}\) such that \(b_{1},b_{2}\) were both in \(\operatorname{supp}(\Phi^{*}(\omega_{H}(a)))\). Since \(G_{e}\) is a closed subgroup of \(G\), the quotient group \(G/G_{e}\) is Hausdorff. Thus, there are neighbourhoods \(V_{b_{1}}\) and \(V_{b_{2}}\) of \(b_{1}\) and \(b_{2}\), respectively, in \(G/G_{e}\) such that \(V_{b_{1}}\cap V_{b_{2}}=\emptyset\). By Lemma 2.5, there are \(h_{i}\in I(G/G_{e})\), for \(i=1,2\), such that \(\operatorname{supp}h_{i}\subset V_{b_{i}}\) and \(\langle\Phi^{*}(\omega_{H}(a)),h_{i}\rangle\neq 0\). As \(V_{b_{1}}\cap V_{b_{2}}=\emptyset\), we get \(h_{1}h_{2}=0\). Since \(\varphi_{G}\) is an isomorphism, we have \(\varphi_{G}^{-1}(h_{i})\in I(G)\), for \(i=1,2\), and \(\varphi_{G}^{-1}(h_{1})\cdot\varphi_{G}^{-1}(h_{2})=\varphi_{G}^{-1}(h_{1}h_{ 2})=0\). By Lemma 2.3, we have \(T(\varphi_{G}^{-1}(h_{1}))\cdot T(\varphi_{G}^{-1}(h_{2}))=0\). On the other hand, we obtain
\[0\neq\langle\Phi^{*}(\omega_{H}(a)),h_{1}\rangle=\Phi(h_{1})(a)=T\circ\varphi_ {G}^{-1}(h_{1})(a)=T(\varphi_{G}^{-1}(h_{1}))(a),\]
and
\[0\neq\langle\Phi^{*}(\omega_{H}(a)),h_{2}\rangle=\Phi(h_{2})(a)=T\circ\varphi_ {G}^{-1}(h_{2})(a)=T(\varphi_{G}^{-1}(h_{2}))(a).\]
Therefore,
\[T(\varphi_{G}^{-1}(h_{1}))\cdot T(\varphi_{G}^{-1}(h_{2}))\neq 0,\]
this is a contradiction. Since \(\operatorname{supp}(\Phi^{*}(\omega_{H}(a)))\neq\emptyset\), there is uniquely \(b\in G/G_{e}\) such that \(\operatorname{supp}(\Phi^{*}(\omega_{H}(a)))=\{b\}\). Consequently, by [20, Corollary 2.5.9], there is an \(\alpha\in\mathbb{C}\) such that \(\Phi^{*}(\omega_{H}(a))=\alpha\lambda_{G/G_{e}}(b)\).
For any \(a\in H\), by Proposition 2.6, there are unique \(b\in G/G_{e}\) and \(\alpha\in\mathbb{C}\) such that \(\Phi^{*}(\omega_{H}(a))=\alpha\lambda_{G/G_{e}}(b)\), thus we have
\[\Phi(f)(a)=\alpha f(b),\]
for any \(f\in A(G/G_{e})\). We define \(\phi:H\to\mathbb{C}\) by \(\alpha=\phi(a)\). We also define \(\psi:H\to G/G_{e}\) by \(b=\psi(a)\). Then we get
\[\Phi(f)(a)=\phi(a)f(\psi(a)), \tag{3}\]
for any \(f\in A(G/G_{e})\) and \(a\in H\).
For any \(h\in I(G)\), since we have \(\Phi(\varphi_{G}(h))=T(h)\in I_{B}(H)\), we obtain that
\[(\Phi(\varphi_{G}(h)))^{2}=T(h)T(h)=T(h)=\Phi(\varphi_{G}(h)).\]
On the other hand, since \((\varphi_{G}(h))^{2}=\varphi_{G}(h^{2})=\varphi_{G}(h)\) in \(A(G/G_{e})\), we obtain that
\[\phi(a)^{2}\varphi_{G}(h)(\psi(a))=\phi(a)\varphi_{G}(h)(\psi(a)) \tag{4}\]
for any \(h\in I(G)\) and \(a\in H\).
**Lemma 2.7**.: _For any \(a\in H\), there is an idempotent \(1_{\psi(a)G_{0}}\) of \(A(G/G_{e})\) where \(\psi(a)G_{0}\) is an open compact neighbourhood of \(\psi(a)\)._
Proof.: As \(G/G_{e}\) is totally disconnected, there is an open compact subgroup \(G_{0}\) in \(G/G_{e}\). For any \(\psi(a)\in G/G_{e}\), \(\psi(a)G_{0}\) is a compact open coset in \(G/G_{e}\), hence \(1_{\psi(a)G_{0}}\) is an idempotent of \(A(G/G_{e})\) with norm \(1\).
**Lemma 2.8**.: _The map \(\Phi:A(G/G_{e})\to B(H)\) is an algebraic homomorphism._
Proof.: Let \(a\in H\). By Lemma 2.7, there is an idempotent \(1_{\psi(a)G_{0}}\) of \(A(G/G_{e})\). Since \(\varphi_{G}:A_{I}(G)\to A(G/G_{e})\) is surjective, there is \(f\in A_{I}(G)\) such that \(\varphi_{G}(f)=1_{\psi(a)G_{0}}\). Moreover, we have that \(f^{2}=(\varphi_{G}^{-1}(1_{\psi(a)G_{0}}))^{2}=\varphi_{G}^{-1}(1_{\psi(a)G_ {0}})=f\), this implies that \(f\in I(G)\). Thus by (4), we have
\[\phi(a)^{2}=\phi(a)^{2}1_{\psi(a)G_{0}}(\psi(a))=\phi(a)^{2} \varphi_{G}(f)(\psi(a))\\ =\phi(a)\varphi_{G}(f)(\psi(a))=\phi(a)1_{\psi(a)G_{0}}(\psi(a))= \phi(a).\]
Since \(a\in H\) is arbitrary, we have
\[\phi^{2}=\phi \tag{5}\]
on \(H\), thus we get \(\phi:H\to\{0,1\}\). In addition, for any \(f,g\in A(G/G_{e})\) and \(a\in H\), we have
\[\Phi(fg)(a)=\phi(a)(fg)(\psi(a))=\phi(a)^{2}(fg)(\psi(a))\\ =\phi(a)f(\psi(a))\phi(a)g(\psi(a))=(\Phi(f)\Phi(g))(a).\]
Hence \(\Phi\) is an algebraic homomorphism from \(A(G/G_{e})\) into \(B(H)\).
**Lemma 2.9**.: _The map \(\psi:\phi^{-1}(1)\to G/G_{e}\) is continuous._
Proof.: For any \(a_{0}\in\phi^{-1}(1)\subset H\), let \(U\) be an open neighbourhood of \(\psi(a_{0})\) in \(G/G_{e}\). Then there is \(f_{0}\in A(G/G_{e})\) such that
\[f_{0}(\psi(a_{0}))=1\quad\text{and}\quad f_{0}(b)=0\ \text{ for }b\in(G/G_{e})\setminus U.\]
Let \((a_{\lambda})_{\lambda}\subseteq\phi^{-1}(1)\) be a net such that \(a_{\lambda}\to a_{0}\). As \(\Phi(f_{0})\in B(H)\), \(\Phi f_{0}(a_{\lambda})\to\Phi f_{0}(a_{0})=f_{0}(\psi(a_{0}))=1\). There is an \(\lambda_{0}\) such that if \(\lambda\geq\lambda_{0}\) then \(|\Phi f_{0}(a_{\lambda})|>\frac{1}{2}\). Since \(\Phi f_{0}(a_{\lambda})=f_{0}(\psi(a_{\lambda}))\), we have \(\psi(a_{\lambda})\in U\) provided \(\lambda\geq\lambda_{0}\). Thus \(\psi\) is continuous on \(\phi^{-1}(1)\).
**Lemma 2.10**.: _The set \(\phi^{-1}(1)\) is an open subset of \(H\)._
Proof.: Let \(a\in\phi^{-1}(1)\) be arbitrary. By Lemma 2.7, there is an idempotent \(1_{\psi(a)G_{0}}\) of \(A(G/G_{e})\) where \(\psi(a)G_{0}\) is an open compact neighbourhood of \(\psi(a)\). Since \(\Phi(1_{\psi(a)G_{0}})\in B(H)\subset C_{b}(H)\), there exists an open neighbourhood \(V\) of \(a\) in \(H\) such that if \(b\in V\) then
\[|\Phi(1_{\psi(a)G_{0}})(a)-\Phi(1_{\psi(a)G_{0}})(b)|\leq\frac{1}{2}.\]
We have
\[|1-\phi(b)1_{\psi(a)G_{0}}(\psi(b))|=|\Phi(1_{\psi(a)G_{0}})(a)-\Phi(1_{\psi(a )G_{0}})(b)|\leq\frac{1}{2}.\]
Since either \(\phi(b)1_{\psi(a)G_{0}}(\psi(b))=1\) or \(\phi(b)1_{\psi(a)G_{0}}(\psi(b))=0\), this implies that
\[\phi(b)1_{\psi(a)G_{0}}(\psi(b))=1.\]
Hence we have \(\phi(b)=1\) for any \(b\in V\). Thus \(V\subset\phi^{-1}(1)\). It follows that \(\phi^{-1}(1)\) is an open subset of \(H\).
**Theorem 2.11**.: _Let \(G\) and \(H\) be two locally compact Hausdorff groups, and \(T:A(G)\to B(H)\) be a bounded complex linear operator. Suppose \(T\) satisfies that \(T(I(G))\subset I_{B}(H)\). Then there are an open subset \(U\) of \(H\) and a continuous map \(\psi\) from \(U\) into \(G/G_{e}\) such that_
\[Tf(a)=\begin{cases}f([\psi(a)]_{G_{e}})&\quad\text{if }a\in U\text{,}\\ 0&\quad\text{if }a\in H\setminus U\text{,}\end{cases} \tag{6}\]
_for any \(f\in A_{I}(G)\) and \(a\in H\)._
Proof.: Let \(U=\phi^{-1}(1)\). By Lemma 2.10, \(U\) is an open subset of \(H\). Moreover, Lemma 2.9 shows that \(\psi:U\to G/G_{e}\) is a continuous map. Applying (3), for any \(f\in A_{I}(G)\) and \(a\in H\), we have
\[Tf(a)=\Phi(\varphi_{G}(f))(a)=\phi(a)\varphi_{G}(f)(\psi(a)).\]
Thus, we get
\[Tf(a) =\begin{cases}\varphi_{G}(f)(\psi(a))&\quad\text{if }a\in U \text{,}\\ 0&\quad\text{if }a\in H\setminus U\end{cases}\] \[=\begin{cases}f([\psi(a)]_{G_{e}})&\quad\text{if }a\in U\text{,}\\ 0&\quad\text{if }a\in H\setminus U\text{,}\end{cases}\]
for any \(f\in A_{I}(G)\) and any \(a\in H\).
The following example shows that the assumption in Theorem 2.11 does not imply \(T(I(G))\subset I(H)\). This observation is in line with the well known fact that \(f\circ\psi\) may not be in the Fourier algebra \(A(H)\) in general (see Remark 2.13).
**Example 2.12**.: _Let \(G=\{0\}\) be the trivial group. Then we define a bounded linear operator \(T:A(G)\to B(\mathbb{Z})\) by \(T(1_{G})=1_{\mathbb{Z}}\). Then it satisfies \(T(I(G))\subset I_{B}(H)\). Note that in this case, we have \(U=\mathbb{Z}\) and the continuous map \(\psi:\mathbb{Z}\to G/G_{e}\) is \(\psi(n)=0\) for any \(n\in\mathbb{Z}\). On the other hand, since \(1_{\mathbb{Z}}\notin A(\mathbb{Z})\), we have \(T(I(G))\nsubseteq I(H)\)._
**Remark 2.13**.: _In general the converse statement of the above theorem may not hold since we do not know if \(Tf\in A(H)\) for any \(f\in A_{I}(G)\), even if \(T\) has a representation of the form (6). If we only have \(\psi:U\subseteq H\to G\) being continuous, then \(f\mapsto f\circ\psi\) maps \(A(G)\) into \(\ell^{\infty}(H)\) in general. For abelian groups \(G\) and \(H\), Cohen [7] showed that \(f\mapsto f\circ\psi\) maps \(A(G)\) to \(B(H)\) if and only if \(\psi\) is a continuous piecewise affine map from a set in the open coset ring of \(H\) into \(G\). This characterisation was extended by Host [15] to the case when \(G\) has an abelian subgroup of finite index and \(H\) is arbitrary, and by [22] to general groups._
Under extra assumptions on \(T\), we obtain algebraic structures for the open set \(U\) and algebraic properties on the map \(\psi\). Let us first recall positive operators on the Fourier algebra.
A bounded linear operator \(T:A_{I}(G)\to B(H)\) is said to be positive if \(T(u)\) is positive definite whenever \(u\in A_{I}(G)\) is a positive definite function.
**Corollary 2.14**.: _Let \(G\) and \(H\) be two locally compact groups. Let \(T:A_{I}(G)\to B(H)\) be a positive bounded complex linear operator. If \(T\) satisfies that \(T(I(G))\subset I_{B}(H)\) then there exists an open subgroup \(U\) of \(H\) and a continuous group homomorphism or anti-homomorphism \(\psi\) from the open subgroup \(U\) of \(H\) into \(G/G_{e}\) such that_
\[Tf(a)=\begin{cases}f([\psi(a)]_{G_{e}})&\quad\text{if $a\in U$,}\\ 0&\quad\text{if $a\in H\setminus U$,}\end{cases}\]
_for any \(f\in A_{I}(G)\) and \(a\in H\)._
Proof.: Since isometric isomorphism \(\varphi_{G}\) preserves positivity, \(u\in A_{I}(G)\) is a positive definite function if and only if \(\varphi_{G}(u)\) is positive definite. This implies that \(T\) is positive if and only if \(\Phi\) is positive, thus, \(\Phi:A(G/G_{e})\to B(H)\) is a positive homomorphism by Lemma 2.8. It follows from [22, Theorem 4.3] that there exists an open subgroup \(U\) of \(H\) and a continuous group homomorphism or anti-homomorphism \(\psi\) from \(U\) into \(G/G_{e}\) such that for any \(f\in A(G/G_{e})\), \(\Phi f\) is either equal
to \(f\circ\psi\) in \(U\), or \(0\) otherwise. Thus, we have
\[Tf(a)=\begin{cases}f([\psi(a)]_{G_{e}})&\quad\text{if $a\in U$},\\ 0&\quad\text{if $a\in H\setminus U$},\end{cases}\]
for any \(f\in A_{I}(G)\) and \(a\in H\).
**Corollary 2.15**.: _Let \(G\) and \(H\) be two locally compact groups, and \(T:A_{I}(G)\to B(H)\) be a contractive complex linear operator. If \(T\) satisfies that \(T(I(G))\subset I_{B}(H)\) then there exists an open subgroup \(U\) of \(H\), a continuous group homomorphism or anti-homomorphism \(\psi\) from \(U\) into \(G/G_{e}\), and elements \(b\in G\) and \(c\in H\) such that_
\[Tf(a)=\begin{cases}f(b[\psi(ca)]_{G_{e}})&\quad\text{if $a\in c^{-1}U$,}\\ 0&\quad\text{if $a\in H\setminus c^{-1}U$.}\end{cases}\]
Proof.: Since \(\varphi_{G}\) is an isometric isomorphism, if \(T\) is contractive then \(\Phi\) is also a contractive operator. By Lemma 2.8, \(\Phi\) is a contractive homomorphism from \(A(G/G_{e})\) into \(B(H)\). It follows from [22, Theorem 5.1] that there exists an open subgroup \(U\) of \(H\), a continuous group homomorphism or anti-homomorphism \(\psi\) from \(U\) into \(G/G_{e}\), and elements \(bG_{e}\in G/G_{e}\) and \(c\in H\) such that for any \(f\in A(G/G_{e})\) and \(a\in H\), \(\Phi f(a)=f(bG_{e}\psi(ca))\) provided \(a\in c^{-1}U\), otherwise, \(\Phi f(a)=0\). By recalling the definition of \(\Phi\), we have the characterisation of \(T\).
### Idempotent preserving maps with \(T(I(G))\subset I(H)\)
Let us assume that the bounded linear operator \(T:A(G)\to B(H)\) satisfies \(T(I(G))\subset I(H)\). Then naturally we obtain \(T(A_{I}(G))\subseteq A_{I}(H)\).
We define \(T_{q}:A(G/G_{e})\to A(H/H_{e})\) by
\[T_{q}(f)=\varphi_{H}\circ T\circ\varphi_{G}^{-1}(f)=\varphi_{H}\circ\Phi(f),\]
for any \(f\in A(G/G_{e})\), where \(\varphi_{H}:A_{I}(H)\to A(H/H_{e})\) is an isometric isomorphism defined similarly as in (1). Note that \(T_{q}\) is an algebraic homomorphism.
**Lemma 2.16**.: _Let \(a\in\phi^{-1}(1)\subset H\) and \(b\in H\) such that \(a^{-1}b\in H_{e}\). Then \(\phi(b)=1\) and \(\psi(a)=\psi(b)\)._
Proof.: Suppose that \(\psi(a)\neq\psi(b)\). By (3), we have \(\Phi:A(G/G_{e})\to B(H)\) such that for any \(f\in A(G/G_{e})\),
\[\Phi(f)(a)=f(\psi(a))\]
and
\[\Phi(f)(b)=\phi(b)f(\psi(b)).\]
Since \(G/G_{e}\) is Hausdorff, there are disjoint open neighbourhoods \(V_{a}\) and \(V_{b}\) of \(\psi(a)\) and \(\psi(b)\), respectively, in \(G/G_{e}\). By Lemma 2.5, for \(\lambda_{G/G_{e}}(\psi(a))\in VN(G/G_{e})\), there is \(h\in I(G/G_{e})\) such that \(\operatorname{supp}h\subset V_{a}\) and \(h(\psi(a))\neq 0\). Since \(a\in\phi^{-1}(1)\), we get
\[\Phi(h)(a)=h(\psi(a))\neq 0\]
and
\[\Phi(h)(b)=\phi(b)h(\psi(b))=0.\]
By the assumption that \(T(I(G))\subset I(H)\) and \(\varphi_{G}^{-1}(h)\in I(G)\), we have \(\Phi(h)=T(\varphi_{G}^{-1}(h))\in I(H)\), an idempotent in \(A(H)\). Hence, there is \(Y\in\Omega_{\mathrm{o}}^{\mathrm{c}}(H)\) such that \(1_{Y}=\Phi(h)\). Since \(1_{Y}(a)=\Phi(h)(a)=h(\psi(a))\neq 0\), we have \(a\in Y\). In addition, \(Y\) is a clopen subset of \(H\) and \(H_{e}\) is a connected component containing \(e\), thus \(aH_{e}\subset Y\). This implies that \(b=aa^{-1}b\in Y\). It follows that
\[1=1_{Y}(b)=\Phi(h)(b)=0.\]
This is a contradiction. Thus we have \(\psi(a)=\psi(b)\). Furthermore, suppose that \(\phi(b)=0\). There is an \(h\in I(G/G_{e})\) such that \(h(\psi(b))\neq 0\). Thus there is \(Y\in\Omega_{\mathrm{o}}^{\mathrm{c}}(H)\) such that \(1_{Y}=\Phi(h)\). By a similar argument, we have \(1_{Y}(a)=\Phi(h)(a)=h(\psi(a))\neq 0\), \(a\in Y\) and \(b\in Y\). We obtain that
\[1=1_{Y}(b)=\Phi(h)(b)=\phi(b)h(\psi(b))=0.\]
This is a contradiction. Therefore, \(\phi(b)=1\) and \(\psi(a)=\psi(b)\).
For any \(a,b\in H\), the condition \(a^{-1}b\in H_{e}\) induces an equivalence relation on \(H\). Lemma 2.16 shows that \(\phi:H\to\{0,1\}\) and \(\psi:H\to G/G_{e}\) are constant functions on each equivalence class. Thus these induce maps \(\phi^{\prime}:H/H_{e}\to\{0,1\}\) and \(\psi^{\prime}:\phi^{\prime-1}(1)\to G/G_{e}\) by
\[\phi^{\prime}(aH_{e})=\phi(a)\quad\text{for any $a\in H$}\]
and
\[\psi^{\prime}(aH_{e})=\psi(a)\quad\text{for any $aH_{e}\in\phi^{\prime-1}(1)$}.\]
By Lemma 2.9, the map \(\psi:\phi^{-1}(1)\to G/G_{e}\) is continuous. As we have \(\phi^{\prime-1}(1)=q_{H}(\phi^{-1}(1))\), we obtain that \(\psi^{\prime}:\phi^{\prime-1}(1)\to G/G_{e}\) is continuous.
**Theorem 2.17**.: _Let \(G\) and \(H\) be two locally compact Hausdorff groups and \(T:A(G)\to B(H)\) be a bounded complex linear operator. Suppose that \(T\) satisfies \(T(I(G))\subset I(H)\). Then there exists an open subset \(U\) of \(H\) and a continuous map \(\psi^{\prime}\) from an open subset \(q_{H}(U)\) of \(H/H_{e}\) into \(G/G_{e}\) such that_
\[Tf(a)=\begin{cases}f([\psi^{\prime}(aH_{e})]_{G_{e}})&\text{ if $a\in U$,}\\ 0&\text{ if $a\in H\setminus U$,}\end{cases} \tag{7}\]
_for any \(f\in A_{I}(G)\)._
Proof.: Define \(U=\phi^{-1}(1)\). Recall that \(q_{H}:H\to H/H_{e}\) is the quotient map and that \(U\) is an open subset of \(H\) by Lemma 2.10. By (3), for any \(f\in A(G/G_{e})\) and \(a\in H\), we have
\[T_{q}(f)(aH_{e})=\varphi_{H}\circ\Phi(f)(aH_{e})=\Phi(f)(a)=\phi^{\prime}(aH _{e})f(\psi(a)). \tag{8}\]
We shall show that \(\phi^{\prime-1}(1)\) is an open subset of \(H/H_{e}\). Let \(a\in\phi^{\prime-1}(1)\). By Lemma 2.7, there is an idempotent \(1_{\psi^{\prime}(a)G_{0}}\) of \(A(G/G_{e})\) where \(\psi^{\prime}(a)G_{0}\) is an open compact neighbourhood of \(\psi^{\prime}(a)\). Since \(T_{q}(1_{\psi^{\prime}(a)G_{0}})\in A(H/H_{e})\subset C_{0}(H/H_{e})\), the space of all continuous functions on \(H/H_{e}\) vanishing at infinity, there exists an open neighbourhood \(V\) of \(a\) in \(H/H_{e}\) such that if \(b\in q_{H}^{-1}(V)\) then
\[|T_{q}(1_{\psi^{\prime}(a)G_{0}})(a)-T_{q}(1_{\psi^{\prime}(a)G_{0}})(bH_{e})| \leq\frac{1}{2}.\]
We have
\[|1-\phi^{\prime}(bH_{e})1_{\psi^{\prime}(a)G_{0}}(\psi(b))|=|T_{q}(1_{\psi^{ \prime}(a)G_{0}})(a)-T_{q}(1_{\psi^{\prime}(a)G_{0}})(bH_{e})|\leq\frac{1}{2}.\]
Since either \(\phi^{\prime}(bH_{e})1_{\psi^{\prime}(a)G_{0}}(\psi(b))=1\) or \(\phi^{\prime}(bH_{e})1_{\psi^{\prime}(a)G_{0}}(\psi(b))=0\), this implies that
\[\phi^{\prime}(bH_{e})1_{\psi^{\prime}(a)G_{0}}(\psi(b))=1.\]
Hence we have \(\phi^{\prime}(bH_{e})=1\) for any \(b\in q_{H}^{-1}(V)\). Thus \(V\subset\phi^{\prime-1}(1)\). It follows that \(\phi^{\prime-1}(1)\) is an open subset of \(H/H_{e}\). Let us recall that \(\psi^{\prime}:q_{H}(U)\to G/G_{e}\) is a continuous map. Applying (8), we have
\[T_{q}(f)(aH_{e}) =\phi^{\prime}(aH_{e})f(\psi(a))\] \[=\begin{cases}f(\psi^{\prime}(aH_{e}))&\text{ if }a\in U,\\ 0&\text{ if }a\in H\setminus U,\end{cases}\]
for any \(f\in A(G/G_{e})\) and \(a\in H\). As we have
\[T_{q}(\varphi_{G}(f))(aH_{e})=\varphi_{H}\circ T\circ\varphi_{G}^{-1}(\varphi _{G}(f))(aH_{e})=\varphi_{H}\circ T(f)(aH_{e})=Tf(a)\]
for any \(f\in A_{I}(G)\) and \(a\in H\), we get
\[Tf(a)=\begin{cases}\varphi_{G}(f)(\psi^{\prime}(aH_{e}))&\text{ if }a\in U,\\ 0&\text{ if }a\in H\setminus U.\end{cases}\]
## 3. Idempotent preserving bijections on \(A_{I}(G)\)
In this section we assume furthermore that the bounded linear operator \(T:A(G)\to B(H)\) satisfies that \(T(I(G))\subset I(H)\) and \(T|_{A_{I}(G)}\) is a bijection onto \(A_{I}(H)\).
**Theorem 3.1**.: _Let \(G\) and \(H\) be two locally compact groups, and \(T:A(G)\to B(H)\) be a bounded complex linear operator. Suppose the operator \(T\) satisfies that \(T(I(G))\subset I(H)\) and \(T|_{A_{I}(G)}:A_{I}(G)\to A_{I}(H)\) is bijective. Then there exists a homeomorphism \(\psi:H/H_{e}\to G/G_{e}\) such that_
\[Tf(a)=f([\psi(aH_{e})]_{G_{e}})\]
_for all \(f\in A_{I}(G)\) and \(a\in H\)._
Proof.: Since \(T|_{A_{I}(G)}\) is a bijective linear map, \(\varphi_{G}\) and \(\varphi_{H}\) are isometric isomorphisms, by the proof of Proposition 2.1 we have \(T_{q}:=\varphi_{H}\circ T|_{A_{I}(G)}\circ\varphi_{G}^{-1}\) is an isomorphism from \(A(G/G_{e})\) onto \(A(H/H_{e})\).
Applying Theorem 2.17, there is an open subset \(U\) of \(H\) and a continuous map \(\psi\) from an open subset \(q_{H}(U)\) of \(H/H_{e}\) into \(G/G_{e}\) such that
\[T_{q}(f)(a)=\begin{cases}f(\psi(a))&\text{if }a\in q_{H}(U),\\ 0&\text{if }a\in(H/H_{e})\setminus q_{H}(U),\end{cases}\]
for any \(f\in A(G/G_{e})\). Since \(T_{q}:A(G/G_{e})\to A(H/H_{e})\) is surjective and the Fourier algebra \(A(H/H_{e})\) separates the points in \(H/H_{e}\), we have \(q_{H}(U)=H/H_{e}\). Thus \(U=H\) and we have
\[T_{q}(f)(a)=f(\psi(a))\]
for every \(f\in A(G/G_{e})\) and \(a\in H/H_{e}\). For any \(h\in I(H)\), there exists \(h_{q}\in A(H/H_{e})\) with \(h_{q}^{2}=h_{q}\) such that
\[\varphi_{H}(h)=h_{q}.\]
Since \(T_{q}\) is bijective, there exists \(f_{q}\in A(G/G_{e})\) such that
\[T_{q}(f_{q})=h_{q}.\]
Moreover, since \(T_{q}\) is an algebraic homomorphism, we have \(T_{q}(f_{q}^{2})=(T_{q}(f_{q}))^{2}=h_{q}^{2}=h_{q}=T_{q}(f_{q})\). By the injectivity of \(T_{q}\), we get \(f_{q}^{2}=f_{q}\). On the other hand, as \(\varphi_{G}\) is an isometric isomorphism from \(A_{I}(G)\) onto \(A(G/G_{e})\), there exists \(f\in I(G)\) such that
\[\varphi_{G}(f)=f_{q}.\]
Hence, we have
\[T(f)=(\varphi_{H}^{-1}\circ T_{q}\circ\varphi_{G})(f)=\varphi_{H}^{-1}\circ T _{q}(f_{q})=\varphi_{H}^{-1}(h_{q})=h.\]
This implies that \(T(I(G))=I(H)\). In particular, we have \(T^{-1}(I(H))\subset I(G)\). Thus we can apply similar arguments to \(T|_{A_{I}(G)}^{-1}:A_{I}(H)\to A_{I}(G)\) and to \(T_{q}^{-1}=\varphi_{G}\circ T|_{A_{I}(G)}^{-1}\circ\varphi_{H}^{-1}\) on \(A(H/H_{e})\), and then define a continuous map \(\tilde{\psi}:G/G_{e}\to H/H_{e}\) such that
\[T_{q}^{-1}(g)(b)=g(\tilde{\psi}(b))\]
for any \(g\in A(H/H_{e})\) and \(b\in G/G_{e}\).
For any \(g\in A(H/H_{e})\) and \(a\in H/H_{e}\), we have
\[g(a)=T_{q}(T_{q}^{-1}g)(a)=g(\tilde{\psi}(\psi(a))).\]
Since the Fourier algebra \(A(H/H_{e})\) separates points in \(H/H_{e}\), we get
\[a=\tilde{\psi}(\psi(a))\quad\text{for}\quad a\in H/H_{e}\,. \tag{9}\]
Moreover, we obtain
\[f(b)=T_{q}^{-1}(T_{q}f)(b)=f(\psi(\tilde{\psi}(b))),\]
for any \(f\in A(G/G_{e})\) and \(b\in G/G_{e}\). Similarly, as \(A(G/G_{e})\) separates points in \(G/G_{e}\), we have
\[b=\psi(\tilde{\psi}(b))\quad\text{for}\quad b\in G/G_{e}\,. \tag{10}\]
By (9) and (10), we have that \(\psi:H/H_{e}\to G/G_{e}\) is a bijection and \(\tilde{\psi}=\psi^{-1}\). Let us recall that \(\psi\) and \(\tilde{\psi}\) are continuous on \(H/H_{e}\) and \(G/G_{e}\), respectively. As \(\tilde{\psi}=\psi^{-1}\), we have that \(\psi\) is a homeomorphism. In addition, we obtain
\[T_{q}(f)(a)=f(\psi(a))\quad\text{for}\,\,\,f\in A(G/G_{e}),\,a\in H/H_{e}\,.\]
Since \(T=\varphi_{H}^{-1}\circ T_{q}\circ\varphi_{G}\), we get
\[Tf(a)=f([\psi(aH_{e})]_{G_{e}})\]
for all \(f\in A_{I}(G)\) and \(a\in H\).
Note that the assumption of bijectivity in the above theorem is needed for the function \(\psi:H/H_{e}\to G/G_{e}\) to be a homeomorphism.
**Example 3.2**.: _Let \(G=\{1,2\}\) be a multiplicative group equipped with the discrete topology. Let \(H=\{0\}\) be the trivial group. We define \(T:A(G)\to A(H)\) by \(Tf(0)=f(1)\) for any \(f\in A(G)\). Then \(T\) is a bounded complex linear operator on \(A(G)\) and for any \(1_{Y}\in A(G)\), \(T(1_{Y})=1_{H}\) if \(1\in Y\), otherwise \(T(1_{Y})=0\). Thus \(T(I(G))=I(H)\). On the other hand, \(T(1_{\{1\}})=1_{H}=T(1_{G})\), this implies that \(T|_{A_{I}(G)}:A_{I}(G)\to A_{I}(H)\) is not injective. In addition, \(\psi:H/H_{e}=H\to G/G_{e}=G\) satisfying_
\[\psi(0)=1.\]
_is not a homeomorphism._
With extra assumptions on \(T\) as in Section 2.1, we obtain a characterisation of linear idempotent preserving maps between two Fourier algebras. Note that since the continuous map \(\psi\) in the following two corollaries is either a group isomorphism or an anti-isomorphism, we naturally have \(f\circ\psi\in A_{I}(H)\) for any \(f\in A_{I}(G)\) (see [26]), thus, we obtain a necessary and sufficient condition for the idempotent preserving operator \(T\) on \(A_{I}(G)\).
**Corollary 3.3**.: _Let \(G\) and \(H\) be two locally compact groups. A surjective complex linear contraction \(T:A_{I}(G)\to A_{I}(H)\) satisfies \(T(I(G))\subset I(H)\) if and only if there exists a continuous group isomorphism or anti-isomorphism \(\psi:H/H_{e}\to G/G_{e}\) and an element \(b\in G\) such that_
\[Tf(a)=f(b[\psi(aH_{e})]_{G_{e}})\]
_for all \(f\in A_{I}(G)\) and \(a\in H\)._
**Corollary 3.4**.: _Let \(G\) and \(H\) be two locally compact groups. A positive bounded complex linear bijection \(T:A_{I}(G)\to A_{I}(H)\) satisfies
\(T(I(G))\subset I(H)\) if and only if there exists a continuous group isomorphism or anti-isomorphism \(\psi:H/H_{e}\to G/G_{e}\) such that_
\[Tf(a)=f([\psi(aH_{e})]_{G_{e}})\]
_for all \(f\in A_{I}(G)\) and \(a\in H\)._
We will end our paper with the special case when the groups are totally disconnected. In this case, \(A_{I}(G)\) is isometrically and algebraically isomorphic to \(A(G)\). Thus the _idempotent preserving_ operators recover the results on algebraic homomorphisms.
**Remark 3.5**.: _Suppose that \(G\) and \(H\) are totally disconnected locally compact groups. Let \(T:A(G)\to A(H)\) be a bounded complex linear operator satisfying \(T(I(G))\subset I(H)\). Then there exists a continuous map \(\psi\) from an open subset \(U\) of \(H\) into \(G\) such that_
\[Tf(a)=\begin{cases}f(\psi(a))&\text{if $a\in U$,}\\ 0&\text{if $a\in H\setminus U$,}\end{cases}\]
_for any \(f\in A(G)\) and \(a\in H\). In addition, if \(T\) is a surjective contraction or \(T\) is a positive bijection, then it is equivalent to \(Tf=f\circ(b\psi)\) for some \(b\in G\) or \(Tf=f\circ\psi\), respectively, for all \(f\in A(G)\) where \(\psi:H\to G\) is a continuous group isomorphism or group anti-isomorphism; in particular, \(T\) is an algebraic homomorphism._
### Acknowledgments
The second author was supported by JSPS KAKENHI Grant Numbers JP21K13804.
|
2310.05480 | Collective Graph Exploration Parameterized by Vertex Cover | We initiate the study of the parameterized complexity of the {\sc Collective
Graph Exploration} ({\sc CGE}) problem. In {\sc CGE}, the input consists of an
undirected connected graph $G$ and a collection of $k$ robots, initially placed
at the same vertex $r$ of $G$, and each one of them has an energy budget of
$B$. The objective is to decide whether $G$ can be \emph{explored} by the $k$
robots in $B$ time steps, i.e., there exist $k$ closed walks in $G$, one
corresponding to each robot, such that every edge is covered by at least one
walk, every walk starts and ends at the vertex $r$, and the maximum length of
any walk is at most $B$. Unfortunately, this problem is \textsf{NP}-hard even
on trees [Fraigniaud {\em et~al.}, 2006]. Further, we prove that the problem
remains \textsf{W[1]}-hard parameterized by $k$ even for trees of treedepth
$3$. Due to the \textsf{para-NP}-hardness of the problem parameterized by
treedepth, and motivated by real-world scenarios, we study the parameterized
complexity of the problem parameterized by the vertex cover number
($\mathsf{vc}$) of the graph, and prove that the problem is fixed-parameter
tractable (\textsf{FPT}) parameterized by $\mathsf{vc}$. Additionally, we study
the optimization version of {\sc CGE}, where we want to optimize $B$, and
design an approximation algorithm with an additive approximation factor of
$O(\mathsf{vc})$. | Siddharth Gupta, Guy Sa'ar, Meirav Zehavi | 2023-10-09T07:41:09Z | http://arxiv.org/abs/2310.05480v1 | # Collective Graph Exploration Parameterized by Vertex Cover
###### Abstract
We initiate the study of the parameterized complexity of the Collective Graph Exploration (CGE) problem. In CGE, the input consists of an undirected connected graph \(G\) and a collection of \(k\) robots, initially placed at the same vertex \(r\) of \(G\), and each one of them has an energy budget of \(B\). The objective is to decide whether \(G\) can be _explored_ by the \(k\) robots in \(B\) time steps, i.e., there exist \(k\) closed walks in \(G\), one corresponding to each robot, such that every edge is covered by at least one walk, every walk starts and ends at the vertex \(r\), and the maximum length of any walk is at most \(B\). Unfortunately, this problem is NP-hard even on trees (Fraigniaud _et al._, 2006). Further, we prove that the problem remains W[1]-hard parameterized by \(k\) even for trees of treedepth 3. Due to the para-NP-hardness of the problem parameterized by treedepth, and motivated by real-world scenarios, we study the parameterized complexity of the problem parameterized by the vertex cover number (vc) of the graph, and prove that the problem is fixed-parameter tractable (FPT) parameterized byvc. Additionally, we study the optimization version of CGE, where we want to optimize \(B\), and design an approximation algorithm with an additive approximation factor of \(O(\mathsf{vc})\).
Collective Graph Exploration, Parameterized Complexity, Approximation Algorithm, Vertex Cover, Treedepth.
## 1 Introduction
Collective Graph Exploration (CGE) is a well-studied problem in computer science and robotics, with various real-world applications such as network management and fault reporting, pickup and delivery services, searching a network, and so on. The problem is formulated as follows: given a set of robots (or agents) that are initially located at a vertex of an undirected graph, the objective is to explore the graph as quickly as possible and return to the initial vertex. A graph is _explored_ if each of its edges is visited by at least one robot. In each time step, every robot may move along an edge that is incident to the vertex it is placed at. The total time taken by a robot is the number of edges it traverses. The exploration time is the maximum time taken by any robot. In many real-world scenarios, the robots have limited energy resources, which motivates the minimization of the exploration time [7].
The CGE problem can be studied in two settings: _offline_ and _online_. In the offline setting, the graph is known to the robots beforehand, while in the online setting, the graph is unknown and revealed incrementally as the robots explore it. While CGE has received considerable attention in the online setting, much less is known in the offline setting (Section 1.1). Furthermore, most of the existing results in the offline setting are restricted to trees. Therefore, in this paper, we investigate the CGE problem in the offline setting for general graphs, and present some approximation and parameterized algorithms with respect to the vertex cover number of the graph.
### 1.1 Related Works
As previously mentioned, the CGE problem is extensively studied in the online setting, where the input graph is unknown. As we study the problem in the offline setting in this paper, we only give a brief overview of the results in the online setting, followed by the results in the offline setting.
Recall that, in the online setting, the graph is unknown to the robots and the edges are revealed to a robot once the robot reaches a vertex incident to the edge. The usual approach to analyze any online algorithm is to compute its _competitive ratio_, which is the worst-case ratio between the cost of the online and the optimal offline algorithm. Therefore, the first algorithms for CGE focused on the competitive ratios of the algorithms. In [13], an algorithm for CGE for trees with competitive ratio \(O(\frac{k}{\log k})\) was given. Later in [15], it was shown that this competitive ratio is tight. Another line of work studied the competitive ratio as a function of the vertices and the depth of the input tree [15, 20, 8, 11, 4, 20]. We refer the interested readers to a recent paper by Cosson _et al._[6] and the references within for an in-depth discussion about the results in the online setting.
We now discuss the results in the offline setting. In [1], it was shown that the CGE problem for edge-weighted trees is NP-hard even for two robots. In [2, 19], a \((2-2/(k+1))\)-approximation was given for the optimization version of CGE for edge-weighted trees where we want to optimize \(B\). In [13], the NP-hardness was shown for CGE for unweighted trees as well. In [10], a 2-approximation was given for the optimization version of CGE for unweighted trees where we want to optimize \(B\). In the same paper, it was shown that the optimization version of the problem for unweighted trees is in XP parameterized by the number of robots.
### 1.2 Our Contribution and Methods
In this paper, we initiate the study of the CGE problem for general unweighted graphs in the offline setting and obtain the following three results. We first prove that CGE is
in FPT parameterized by \(\mathsf{vc}\), where \(\mathsf{vc}\) is the vertex cover number of the input graph. Specifically, we prove the following theorem.
**Theorem 1.1**.: CGE _is in FPT parameterized by \(\mathsf{vc}(G)\), where \(G\) is the input graph._
We then study the optimization version of CGE where we want to optimize \(B\) and design an approximation algorithm with an additive approximation factor of \(O(\mathsf{vc})\). Specifically, we prove the following theorem.
**Theorem 1.2**.: _There exists an approximation algorithm for CGE that runs in time \(\mathcal{O}((|V(G)|+|E(G)|)\cdot k)\), and returns a solution with an additive approximation of \(8\cdot\mathsf{vc}(G)\), where \(G\) is the input graph and \(k\) is the number of robots._
Finally, we show a border of (in-)tractability by proving that CGE is W[1]-hard parameterized by \(k\), even for trees of treedepth 3. Specifically, we prove the following theorem.
**Theorem 1.3**.: CGE _is W[1]-hard with respect to \(k\) even on trees whose treedepth is bounded by \(3\)._
We first give an equivalent formulation of CGE based on Eulerian cycles (see Lemma 3.4). We obtain the FPT result by using Integer Linear Programming (ILP). By exploiting the properties of vertex cover and the conditions given by our formulation, we show that a potential solution can be encoded by a set of variables whose size is bounded by a function of vertex cover.
To design the approximation algorithm, we give a greedy algorithm that satisfies the conditions given by our formulation. Again, by exploiting the properties of vertex cover, we show that we can satisfy the conditions of our formulation by making optimal decisions at the independent set vertices and using approximation only at the vertex cover vertices.
To prove the W-hardness, we give a reduction from a variant of Bin Packing, called Exact Bin Packing (defined in Section 2). We first prove that Exact Bin Packing is W[1]-hard even when the input is given in unary. We then give a reduction from this problem to CGE to obtain our result.
### 1.3 Choice of Parameter
As mentioned in the previous section, we proved that CGE is W[1]-hard parameterized by \(k\) even on trees of treedepth 3. This implies that we cannot get an FPT algorithm parameterized by treedepth and \(k\) even on trees, unless \(\mathsf{FPT}=\mathsf{W[1]}\). Thus, we study the problem parameterized by the vertex cover number of the input graph, a slightly weaker parameter than the treedepth.
Our choice of parameter is also inspired by several practical applications. For instance, consider a delivery network of a large company. The company has a few major distributors that receive the products from the company and can exchange them among themselves. There are also many minor distributors that obtain the products only from the major ones, as this is more cost-effective. The company employs \(k\) delivery persons who are responsible for delivering the products to all the distributors. The delivery persons have to start and end their routes at the company location. Since each delivery person has a maximum working time limit, the company wants to minimize the maximum delivery time among them. This problem can be modeled as an instance of CGE by constructing a graph \(G\) that has a vertex for the company and for each distributor and has an edge between every pair of vertices that correspond to locations that can be reached by a delivery person. The \(k\) robots represent
the \(k\) delivery persons and are placed at the vertex corresponding to the company. Clearly, \(G\) has a small vertex cover, as the number of major distributors is much smaller than the total number of distributors.
For another real-world example where the vertex cover is small, suppose we want to cover all the streets of the city as fast as possible using \(k\) agents that start and end at a specific street. The city has a few long streets and many short streets that connect to them. This situation is common in many urban areas. We can represent this problem as an instance of CGE by creating a graph \(G\) that has a vertex for each street and an edge between two vertices if the corresponding streets are adjacent. The \(k\) robots correspond to the \(k\) agents. Clearly, \(G\) has a small vertex cover, as the number of long streets is much smaller than the total number of streets.
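To make the small-vertex-cover intuition concrete, the following minimal Python sketch (ours, not part of the paper; all identifiers are hypothetical) builds a toy version of the delivery-network example and checks that the major distributors alone form a vertex cover, independently of how many minor distributors there are.

```python
# Illustrative sketch: model the delivery-network scenario as a graph and
# verify that the (few) major distributors already cover every edge.
from itertools import combinations

def build_delivery_graph(num_major, num_minor):
    majors = [f"M{i}" for i in range(num_major)]
    minors = [f"m{j}" for j in range(num_minor)]
    adj = {v: set() for v in ["r", *majors, *minors]}   # "r" is the company vertex

    def add_edge(u, v):
        adj[u].add(v)
        adj[v].add(u)

    for big in majors:                       # company delivers to every major distributor
        add_edge("r", big)
    for m1, m2 in combinations(majors, 2):   # majors can exchange products among themselves
        add_edge(m1, m2)
    for j, small in enumerate(minors):       # each minor obtains products from one major
        add_edge(small, majors[j % num_major])
    return adj

def is_vertex_cover(adj, cover):
    return all(u in cover or v in cover for u in adj for v in adj[u])

adj = build_delivery_graph(num_major=3, num_minor=50)
print(is_vertex_cover(adj, {"M0", "M1", "M2"}))   # True: vc(G) <= 3, regardless of num_minor
```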
## 2 Preliminaries
For \(k\in\mathbb{N}\), let \([k]\) denote the set \(\{1,2,\ldots,k\}\). For a multigraph \(G\), we denote the set of vertices of \(G\) and the multiset of edges of \(G\) by \(V(G)\) and \(E(G)\), respectively. For \(u\in V(G)\), the _set of neighbors_ of \(u\) in \(G\) is \(\mathsf{N}_{G}(u)=\{v\in V\ |\ \{u,v\}\in E(G)\}\). When \(G\) is clear from the context, we refer to \(\mathsf{N}_{G}(u)\) as \(\mathsf{N}(u)\). The _multiset of neighbors_ of \(u\) in \(G\) is the multiset \(\widehat{\mathsf{N}}_{G}(u)=\{v\in V\ |\ \{u,v\}\in E(G)\}\) (with repetition). When \(G\) is clear from the context, we refer to \(\widehat{\mathsf{N}}_{G}(u)\) as \(\widehat{\mathsf{N}}(u)\). The _degree_ of \(u\) in \(G\) is \(|\widehat{\mathsf{N}}_{G}(u)|\) (including repetitions). Let \(\widehat{E}\) be a multiset with elements from \(E(G)\). Let \(\mathsf{Graph}(\widehat{E})\) denote the multigraph \((V^{\prime},\widehat{E})\), where \(V^{\prime}=\{u\ |\ \{u,v\}\in\widehat{E}\}\). A multigraph \(H\) is a _submultigraph_ of a multigraph \(G\) if \(V(H)\subseteq V(G)\) and \(E(H)\subseteq E(G)\). Let \(V^{\prime}\subseteq V(G)\). We denote the submultigraph induced by \(V^{\prime}\) by \(G[V^{\prime}]\), that is, \(V(G[V^{\prime}])=V^{\prime}\) and \(E(G[V^{\prime}])=\{\{u,v\}\in E(G)\ |\ u,v\in V^{\prime}\}\). Let \(U\subseteq V(G)\). Let \(G\setminus U\) denote the subgraph \(G[V(G)\setminus U]\) of \(G\).
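As a small aside (ours, not part of the paper), the multiset difference \(A\setminus B\) defined above, in which every element \(d\) occurs exactly \(\mathsf{max}\{0,d_{A}-d_{B}\}\) times, is exactly what `Counter` subtraction computes in Python:

```python
# Illustrative sketch of the multiset difference A \ B defined above.
from collections import Counter

A = Counter({"e1": 3, "e2": 1})   # e1 occurs 3 times in A, e2 once
B = Counter({"e1": 1, "e2": 2})
print(A - B)                      # Counter({'e1': 2}): max(0, 3-1) copies of e1, max(0, 1-2) of e2
```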
An _Eulerian cycle_ in a multigraph \(\widehat{G}\) is a cycle that visits every edge in \(E(\widehat{G})\) exactly once. A _vertex cover_ of \(G\) is \(V^{\prime}\subseteq V(G)\) such that for every \(\{u,v\}\in E(G)\), at least one among \(u\) and \(v\) is in \(V^{\prime}\). The _vertex cover number_ of \(G\) is \(\mathsf{vc}(G)=\mathsf{min}\{|V^{\prime}|\ |\ V^{\prime}\) is a vertex cover of \(G\}\). When \(G\) is clear from context, we refer to \(\mathsf{vc}(G)\) as \(\mathsf{vc}\). A _path_\(P\) in \(G\) is \((v_{0},\ldots,v_{\ell})\), where (i) for every \(0\leq i\leq\ell\), \(v_{i}\in V(G)\), and (ii) for every \(0\leq i\leq\ell-1\), \(\{v_{i},v_{i+1}\}\in E(G)\) (we allow repeating vertices). The _length_ of a path \(P=(v_{0},\ldots,v_{\ell})\), denoted by \(|P|\), is the number of edges in \(P\) (including repetitions), that is, \(\ell\). The set of vertices of \(P\) is \(V(P)=\{v_{0},\ldots,v_{\ell-1}\}\). The multiset of edges of \(P\) is \(E(P)=\{\{v_{i},v_{i+1}\}\ |\ 0\leq i\leq\ell-1\}\) (including repetitions). A _cycle_\(C\) in \(G\) is a path \((v_{0},\ldots,v_{\ell})\) such that \(v_{0}=v_{\ell}\). A _simple cycle_ is a cycle \(C=(v_{0},\ldots,v_{\ell})\) such that for every \(0\leq i<j\leq\ell-1\), \(v_{i}\neq v_{j}\). An _isomorphism_ of a multigraph \(G\) into a multigraph \(G^{\prime}\) is a bijection \(\alpha:V(G)\to V(G^{\prime})\), such that \(\{u,v\}\) appears in \(E(G)\)\(\ell\) times if and only if \(\{\alpha(u),\alpha(v)\}\) appears in \(E(G^{\prime})\)\(\ell\) times, for an \(\ell\in\mathbb{N}\). For a multiset \(A\), we denote by \(2^{A}\) the _power set_ of \(A\), that is, \(2^{A}=\{B\ |\ B\subseteq A\}\). Let \(A\) and \(B\) be two multisets. Let \(A\setminus B\) be the multiset \(D\subseteq A\) such that every \(d\in A\) appears exactly \(\mathsf{max}\{0,d_{A}-d_{B}\}\) times in \(D\), where \(d_{A}\) and \(d_{B}\) are the numbers of times \(d\) appears in \(A\) and \(B\), respectively. A _permutation_ of a multiset \(A\) is a bijection \(\mathsf{Permut}_{A}:A\to[|A|]\).
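To fix intuition for the computations that follow, here is a minimal sketch (not part of the paper's notation) of how an edge multiset and the multigraph \(\mathsf{Graph}(\widehat{E})\) it induces can be represented in code; the names `graph_of` and `degree` are illustrative only.

```python
from collections import Counter

# An edge multiset: each key is an edge {u, v} stored as a sorted tuple,
# and the count is its multiplicity in the multiset.
E_hat = Counter({("a", "b"): 2, ("b", "c"): 1})   # {a,b} appears twice

def graph_of(edge_multiset):
    """Vertex set of Graph(E-hat): every endpoint of an edge of the multiset."""
    return {v for edge in edge_multiset for v in edge}

def degree(edge_multiset, u):
    """Degree of u in Graph(E-hat), counting edge repetitions."""
    return sum(mult for edge, mult in edge_multiset.items() if u in edge)

assert graph_of(E_hat) == {"a", "b", "c"}
assert degree(E_hat, "b") == 3   # two copies of {a,b} plus one copy of {b,c}
```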
[\(v_{\mathsf{init}}\)-Robot Cycle] Let \(G\) be a graph, let \(v_{\mathsf{init}}\in V(G)\). A \(v_{\mathsf{init}}\)-robot cycle is a cycle \(\mathsf{RC}=(v_{0}=v_{\mathsf{init}},v_{1},v_{2},\ldots,v_{\ell}=v_{\mathsf{ init}})\) in \(G\) for some \(\ell\).
When \(v_{\mathsf{init}}\) is clear from the context, we refer to a \(v_{\mathsf{init}}\)-robot cycle as a robot cycle.
[Solution] Let \(G\) be a graph, \(v_{\mathsf{init}}\in V(G)\) and \(k\in\mathbb{N}\). A _solution_ for \((G,v_{\mathsf{init}},k)\) is a set of \(k\) \(v_{\mathsf{init}}\)-robot cycles \(\{\mathsf{RC}_{1},\ldots,\mathsf{RC}_{k}\}\) with \(E(G)\subseteq E(\mathsf{RC}_{1})\cup E(\mathsf{RC}_{2})\cup\cdots\cup E(\mathsf{RC}_{k})\). Its _value_ is \(\mathsf{val}(\{\mathsf{RC}_{1},\ldots,\mathsf{RC}_{k}\})=\mathsf{max}\{|E(\mathsf{RC}_{1})|,|E(\mathsf{RC}_{2})|,\ldots,|E(\mathsf{RC}_{k})|\}\) (see Figure 1(a) for an illustration).
[Collective Graph Exploration with \(k\) Agents] The Collective Graph Exploration (CGE) problem with \(k\) agents is: given a connected graph \(G\), \(v_{\mathsf{init}}\in V(G)\) and \(k\in\mathbb{N}\), find the minimum \(B\) such that there exists a solution \(\{\mathsf{RC}_{1},\ldots,\mathsf{RC}_{k}\}\) where \(\mathsf{val}(\{\mathsf{RC}_{1},\ldots,\mathsf{RC}_{k}\})=B\).
[Collective Graph Exploration with \(k\) Agents and Budget \(B\)] The Collective Graph Exploration (CGE) problem with \(k\) agents and budget \(B\) is: given a connected graph \(G\), \(v_{\mathsf{init}}\in V(G)\) and \(k,B\in\mathbb{N}\), find a solution \(\{\mathsf{RC}_{1},\ldots,\mathsf{RC}_{k}\}\) where \(\mathsf{val}(\{\mathsf{RC}_{1},\ldots,\mathsf{RC}_{k}\})\leq B\), if such a solution exists; otherwise, return "no-instance".
[Bin Packing] The Bin Packing problem is: given a finite set \(I\) of items, a size \(s(i)\in\mathbb{N}\) for each \(i\in I\), a positive integer \(B\) called bin capacity and a positive integer \(k\), decide whether there is a partition of \(I\) into disjoint sets \(I_{1},\ldots,I_{k}\) such that for every \(1\leq j\leq k\), \(\sum_{i\in I_{j}}s(i)\leq B\).
[Exact Bin Packing] The Exact Bin Packing problem is: given a finite set \(I\) of items, a size \(s(i)\in\mathbb{N}\) for each \(i\in I\), a positive integer \(B\) called bin capacity and a positive integer \(k\) such that \(\sum_{i\in I}s(i)=B\cdot k\), decide whether there is a partition of \(I\) into disjoint sets \(I_{1},\ldots,I_{k}\) such that for every \(1\leq j\leq k\), \(\sum_{i\in I_{j}}s(i)=B\).
[Integer Linear Programming] In the Integer Linear Programming Feasibility (ILP) problem, the input consists of \(t\) variables \(x_{1},x_{2},\ldots,x_{t}\) and a set of \(m\) inequalities of the following form:
\[\begin{array}{c}a_{1,1}x_{1}+a_{1,2}x_{2}+\cdots+a_{1,t}x_{t}\leq b_{1}\\ a_{2,1}x_{1}+a_{2,2}x_{2}+\cdots+a_{2,t}x_{t}\leq b_{2}\\ \vdots\\ a_{m,1}x_{1}+a_{m,2}x_{2}+\cdots+a_{m,t}x_{t}\leq b_{m}\end{array}\]
where all coefficients \(a_{i,j}\) and \(b_{i}\) are required to be integers. The task is to decide whether there exist integer values for the variables \(x_{1},\ldots,x_{t}\) so that all inequalities are satisfied.
[[14, 17, 18]] An ILP instance of size \(m\) with \(t\) variables can be solved in time \(t^{\mathcal{O}(t)}\cdot m^{\mathcal{O}(1)}\).
Figure 1: (a) An illustration of a graph \(G\) (drawn in black) and a solution for \((G,v_{\mathsf{init}},k=2)\). The 2 robot cycles are shown by red and blue edges where the edge labels show the order in which the edges were covered by the respective robots. (b) The Robot Cycle-Graph for the robot cycle drawn in blue.
## 3 Reinterpretation Based on Eulerian Cycles
Our approach to CGE with \(k\) agents is as follows. Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(k\in\mathbb{N}\). Let \(\{\mathsf{RC}_{1},\ldots,\mathsf{RC}_{k}\}\) be a solution, let \(1\leq i\leq k\) and denote \(\mathsf{RC}_{i}=(v_{0}=v_{\mathsf{init}},v_{1},v_{2},\ldots,v_{\ell}=v_{\mathsf{init}})\) for some \(\ell\in\mathbb{N}\). If we define a multiset \(\widehat{E}_{\mathsf{RC}_{i}}=\{\{v_{j},v_{j+1}\}\mid 0\leq j\leq\ell-1\}\), then, clearly, \(\mathsf{RC}_{i}=(v_{0}=v_{\mathsf{init}},v_{1},v_{2},\ldots,v_{\ell}=v_{\mathsf{init}})\) is an Eulerian cycle in \(\mathsf{Graph}(\widehat{E}_{\mathsf{RC}_{i}})\). We call this graph the _\(\mathsf{RC}_{i}\)-graph_ (see Figure 1(b)):
[Robot Cycle-Graph] Let \(G\) be a graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(\mathsf{RC}=(v_{0}=v_{\mathsf{init}},v_{1},v_{2},\ldots,v_{\ell}=v_{\mathsf{ init}})\) be a robot cycle. The \(\mathsf{RC}\)-graph, denoted by \(\mathsf{Graph}(\mathsf{RC})\), is the multigraph \(\mathsf{Graph}(\widehat{E}_{\mathsf{RC}})\), where \(\widehat{E}_{\mathsf{RC}}=\{\{v_{i},v_{i+1}\}\mid 0\leq i\leq\ell-1\}\) is a multiset.
Let \(G\) be a graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(\mathsf{RC}=(v_{0}=v_{\mathsf{init}},v_{1},v_{2},\ldots,v_{\ell}=v_{\mathsf{ init}})\) be a robot cycle. Then \(\mathsf{RC}\) is an Eulerian cycle in \(\mathsf{Graph}(\mathsf{RC})\).
In the opposite direction, let \(\widehat{E}\) be a multiset with elements from \(E(G)\), and assume that \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}))\). Let \(\mathsf{RC}=(v_{0},v_{1},v_{2},\ldots,v_{\ell}=v_{0})\) be an Eulerian cycle in \(\mathsf{Graph}(\widehat{E})\) and assume, without loss of generality, that \(v_{0}=v_{\ell}=v_{\mathsf{init}}\). It is easy to see that \(\mathsf{RC}\) is a robot cycle in \(G\):
Let \(G\) be a graph, let \(v_{\mathsf{init}}\in V(G)\), let \(\widehat{E}\) be a multiset with elements from \(E(G)\) and assume that \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}))\). Let \(\mathsf{RC}=(v_{0}=v_{\mathsf{init}},v_{1},v_{2},\ldots,v_{\ell}=v_{\mathsf{ init}})\) be an Eulerian cycle in \(\mathsf{Graph}(\widehat{E})\). Then, \(\mathsf{RC}\) is a robot cycle in \(G\).
From the two observations above, we get that finding a solution is equivalent to finding \(k\) multisets \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) such that: (i) for every \(1\leq i\leq k\), \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}_{i}))\), (ii) for every \(1\leq i\leq k\), there exists an Eulerian cycle in \(\mathsf{Graph}(\widehat{E}_{i})\), and (iii) \(E(G)\subseteq\widehat{E}_{1}\cup\ldots\cup\widehat{E}_{k}\), that is, each \(e\in E(G)\) appears at least once in at least one of \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\).
Recall that, in a multigraph \(\widehat{G}\), there exists an Eulerian cycle if and only if \(\widehat{G}\) is connected and each \(v\in V(\widehat{G})\) has even degree in \(\widehat{G}\)[3]. Thus, we have the following lemma:
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(k,B\in\mathbb{N}\). Then, \((G,v_{\mathsf{init}},k,B)\) is a yes-instance of CGE if and only if there exist \(k\) multisets \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) with elements from \(E(G)\), such that the following conditions hold:
1. For every \(1\leq i\leq k\), \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}_{i}))\).
2. For every \(1\leq i\leq k\), \(\mathsf{Graph}(\widehat{E}_{i})\) is connected, and every vertex in \(\mathsf{Graph}(\widehat{E}_{i})\) has even degree.
3. \(E(G)\subseteq\widehat{E}_{1}\cup\ldots\cup\widehat{E}_{k}\).
4. \(\mathsf{max}\{|\widehat{E}_{1}|,\ldots,|\widehat{E}_{k}|\}\leq B\).
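The four conditions above are straightforward to test directly. The following is a small sketch of such a check, under the assumption that \(G\) is a simple graph given as an adjacency dictionary and each \(\widehat{E}_{i}\) is a `Counter` of sorted edge tuples; it illustrates the characterization and is not one of the paper's algorithms.

```python
from collections import Counter, deque

def satisfies_conditions(G, v_init, B, multisets):
    all_edges = {tuple(sorted((u, w))) for u in G for w in G[u]}
    covered = set()
    for E_i in multisets:
        verts = {v for e in E_i for v in e}
        if v_init not in verts:                          # Condition 1
            return False
        deg = Counter()
        for (u, w), mult in E_i.items():
            deg[u] += mult
            deg[w] += mult
        if any(d % 2 for d in deg.values()):             # Condition 2: even degrees
            return False
        adj = {v: set() for v in verts}                  # Condition 2: connectivity
        for (u, w) in E_i:
            adj[u].add(w)
            adj[w].add(u)
        seen, queue = {v_init}, deque([v_init])
        while queue:
            u = queue.popleft()
            for w in adj[u] - seen:
                seen.add(w)
                queue.append(w)
        if seen != verts:
            return False
        if sum(E_i.values()) > B:                        # Condition 4: budget
            return False
        covered |= set(E_i)
    return all_edges <= covered                          # Condition 3: coverage
```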
## 4 High-Level Overview
### 4.1 FPT Algorithm with Respect to Vertex Cover
Our algorithm is based on a reduction to the ILP problem. We aim to construct linear equations that verify the conditions in Lemma 3.
#### 4.1.1 Encoding \(\widehat{E}_{i}\) by a Valid Pair
First, we aim to satisfy the "local" conditions of Lemma 3 for each robot, that is, Conditions 1 and 2. Let us focus on the "harder" condition of the two, that is, Condition 2. We aim to encode any potential \(\widehat{E}_{i}\) by smaller subsets whose union is \(\widehat{E}_{i}\). In addition, we would like the
"reverse" direction as well: every collection of subsets that we will be able to unite must create some valid \(\widehat{E}_{i}\). Note that we have two goals to achieve when uniting the subsets together: (i) derive a connected graph, where (ii) each vertex has even degree. In the light of this, the most natural encoding for the subsets are cycles, being the simplest graphs satisfying both aforementioned goals. Indeed, every cycle is connected, and a graph composed only of cycles is a graph where every vertex has even degree. Here, the difficulty is to maintain the connectivity of the composed graph. On the positive side, observe that every cycle in the input graph \(G\) has a non-empty intersection with any vertex cover \(\mathsf{VC}\) of \(G\). So, we deal with the connectivity requirement as follows. We seek for a graph \(\overline{G}\) that is essentially (but not precisely) a subgraph of \(G\) that is (i) "small" enough, and (ii) for every valid \(\widehat{E}_{i}\), there exists \(\mathsf{CC}\subseteq E(\overline{G})\) such that \(\mathsf{Graph}(\mathsf{CC})\) is a "submultigraph" of \(\mathsf{Graph}(\widehat{E}_{i})\), \(\mathsf{Graph}(\mathsf{CC})\) is connected, and \(V(\mathsf{Graph}(\mathsf{CC}))\cap\mathsf{VC}=V(\mathsf{Graph}(\widehat{E}_{ i}))\cap\mathsf{VC}\).
**Equivalence Graph \(G^{*}\).** A first attempt to find such a graph is as follows. We define an equivalence relation on \(V(G)\setminus\mathsf{VC}\) based on the sets of neighbors of the vertices in \(V(G)\setminus\mathsf{VC}\) (see Definition 5.2, and the 4 equivalence classes of the graph \(G\) in Figure 2(a)). We denote the set of equivalence classes induced by this equivalence relation by \(\mathsf{EQ}\). Then \(G^{*}\) is the graph defined as follows (for more details, see Definition 5.3).
**Definition 4.1** (Equivalence Graph \(G^{*}\)).: _Let \(G^{*}\) be the graph that: (i) contains \(\mathsf{VC}\), and the edges having both endpoints in \(\mathsf{VC}\), and (ii) where every equivalence class \(u^{*}\in\mathsf{EQ}\) is represented by a single vertex adjacent to the neighbors of some \(u\in u^{*}\) in \(G\) (which belong to \(\mathsf{VC}\)). See Figure 2(b)._
Unfortunately, this attempt fails, as we might need to use more than one vertex from the same \(u^{*}\in\mathsf{EQ}\) in order to maintain the connectivity. E.g., see Figure 3(b). If we delete \(r_{2}\) and \(y_{5}\), which are in the same equivalence class (in \(G\)) as \(r_{1}\) and \(y_{6}\), respectively, then the graph is no longer connected.
**The Multigraph \(\overline{G}\).** So, consider the following second attempt. We use the aforementioned graph \(G^{*}\), but instead of one vertex representing each \(u^{*}\in\mathsf{EQ}\), we have \(\mathsf{min}\{|u^{*}|,2^{|\mathsf{N}_{G^{*}}(u^{*})|}\}\) vertices. Observe that given a connected subgraph \(G^{\prime}\) of \(G\), and two vertices \(u,u^{\prime}\in u^{*}\) such that \(\mathsf{N}_{G^{\prime}}(u)=\mathsf{N}_{G^{\prime}}(u^{\prime})\), it holds that \(G^{\prime}\setminus\{u^{\prime}\}\) remains connected (e.g., see Figures 3(a) and 3(b); the connectivity is still maintained even after deleting all but one vertex of each equivalence class (in \(G\)) having the same neighbourhood). Therefore, we have enough vertices for each \(u^{*}\in\mathsf{EQ}\) in the graph, and its size is a function of \(|\mathsf{VC}|\); so, we obtained the sought graph \(\overline{G}\). Now, we would like to have an additional property for \(\mathsf{CC}\), which is that every vertex in \(\mathsf{Graph}(\mathsf{CC})\) has even degree in it. To this end, we add to \(\overline{G}\) more vertices for each \(u^{*}\in\mathsf{EQ}\). See Figure 3(c). The vertex \(g_{13}\), having the same neighbours as \(g_{11}\) in \(H\) and being in the same equivalence class (in \(G\)) as \(g_{11}\), is added to make the degrees of \(1\) and \(2\) even. We
Figure 2: An illustration of a graph \(G\) (in (a)), and its corresponding graphs \(G^{*}\) (in (b)) and \(\overline{G}\) (in (c)). The vertex cover vertices and their edges are shown in orange. The 4 equivalence classes and their vertices are shown by red, yellow, green, and blue.
have the following definition for \(\overline{G}\) (for more details, see Definition 5.4 and the discussion before this definition).
**Definition 4.2** (The Multigraph \(\overline{G}\)).: _Let \(\overline{G}\) be the graph that: (i) contains \(\mathsf{VC}\), and the edges having both endpoints in \(\mathsf{VC}\), (ii) for every equivalence class \(u^{*}\in\mathsf{EQ}\), there are exactly \(\mathsf{min}\{|u^{*}|,2^{|\mathsf{N}_{G^{*}}(u^{*})|}+|\mathsf{VC}|^{2}\}\) vertices, adjacent to the neighbors of some \(u\in u^{*}\) in \(G\) (which belong to \(\mathsf{VC}\)), (iii) each edge in \(\overline{G}\) appears exactly twice in \(E(\overline{G})\) (for technical reasons). See Figure 2(c)._
**A Skeleton of \(\widehat{E}_{i}\).** We think of \(\mathsf{Graph}(\mathsf{CC})\) as a "skeleton" of a potential \(\widehat{E}_{i}\). By adding cycles with a vertex from \(V(\mathsf{Graph}(\mathsf{CC}))\cap\mathsf{VC}\), we maintain the connectivity, and since every vertex in \(\mathsf{Graph}(\mathsf{CC})\) has even degree, then by adding a cycle, this property is preserved as well. We have the following definition for a skeleton.
**Definition 4.3** (A Skeleton \(\mathsf{CC}\)).: _A skeleton of \(\widehat{E}_{i}\) is \(\mathsf{CC}\subseteq\widehat{E}_{i}\) such that: (i) \(\mathsf{Graph}(\mathsf{CC})\) is a "submultigraph" of \(\overline{G}\), (ii) \(\mathsf{Graph}(\mathsf{CC})\) is connected, \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{CC}))\) and every vertex in \(\mathsf{Graph}(\mathsf{CC})\) has even degree, and (iii) \(V(\mathsf{Graph}(\mathsf{CC}))\cap\mathsf{VC}=V(\mathsf{Graph}(\widehat{E}_{i}))\cap\mathsf{VC}\) (see Figure 3(d))._
**An \(\widehat{E}_{i}\)-Valid Pair.** In Observation 5.1, we prove that we may assume that the \(\widehat{E}_{i}\)'s are _nice multisets_, that is, multisets where every element appears at most twice. In Lemma 5.12, we prove that every \(\widehat{E}_{i}\) (assuming \(\widehat{E}_{i}\) is nice) can be encoded by a skeleton \(\mathsf{CC}\) (see Figure 3(c)) and a multiset \(\mathcal{C}\) of cycles (of length bounded by \(2|\mathsf{VC}|\)). We say that \((\mathsf{CC},\mathcal{C})\) is an \(\widehat{E}_{i}\)_-valid pair_ (for more details, see Definition 5.11 and Section 5.1).
**Definition 4.4** (A Valid Pair).: _A pair \((\mathsf{CC},\mathcal{C})\), where \(\mathsf{CC}\) is a skeleton of \(\widehat{E}_{i}\) and \(\mathcal{C}\) is a multiset of cycles in \(\mathsf{Graph}(\widehat{E}_{i})\), is an \(\widehat{E}_{i}\)-valid pair if:_
1. _The length of each cycle in_ \(\mathcal{C}\) _is bounded by_ \(2|\mathsf{VC}|\)_._
2. _At most_ \(2|\mathsf{VC}|^{2}\) _cycles in_ \(\mathcal{C}\) _have length other than_ \(4\)_._
3. \(\mathsf{CC}\cup\bigcup_{C\in\mathcal{C}}E(C)=\widehat{E}_{i}\) _(being two multisets)._
Figure 3: The graphs shown here are with respect to the graph \(G\) shown in Figure 1(a). An illustration of (a) a graph \(\mathsf{Graph}(\widehat{E})\), (b) the graph \(H\) obtained by deleting all but one vertex from the same equivalence class in \(G\) and have the same neighbours in \(\mathsf{Graph}(\widehat{E})\), (c) the graph \(\mathsf{Graph}(\mathsf{CC})\) where \(\mathsf{CC}\) is a skeleton of \(\widehat{E}\) obtained from the graph in (b) by adding four more edges from \(\mathsf{Graph}(\widehat{E})\setminus H\), and (d) the graph \(\mathsf{Graph}(\mathsf{CC}^{\prime})\) where \(\mathsf{CC}^{\prime}\) is the skeleton in \(\overline{G}\) that is derived from the skeleton \(\mathsf{CC}\).
#### 4.1.2 Robot and Cycle Types
Now, obviously, the number of different cycles in \(G\) (of length bounded by \(2|\mathsf{VC}|\)) is potentially huge. Fortunately, it suffices to look at cycles in \(G^{*}\) in order to preserve Condition 2 of Lemma 3.4: assume that we have a connected \(\mathsf{Graph}(\mathsf{CC})\) such that every vertex in \(\mathsf{Graph}(\mathsf{CC})\) has even degree in it, and a multiset of cycles in \(G^{*}\), each containing a vertex from \(V(\mathsf{Graph}(\mathsf{CC}))\cap\mathsf{VC}\). By replacing each vertex that represents \(u^{*}\in\mathsf{EQ}\) by any \(u\in u^{*}\), the connectivity is preserved, and the degree of each vertex remains even.
Thus, each robot is associated with a _robot type_ \(\mathsf{RobTyp}\), which includes a skeleton \(\mathsf{CC}\) of the multiset \(\widehat{E}_{i}\) associated with the robot (along with other information discussed later). In order to preserve Condition 1 of Lemma 3.4, we also demand that \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{CC}))\). Generally, for each type we define, we will have a variable that stands for the number of elements of that type. We are now ready to present our first equation of the ILP reduction:
**Equation 1: Robot Type for Each Robot.** In this equation, we ensure that the total sum of robots of the different robot types is exactly \(k\), that is, there is exactly one robot type for each robot:
1. \(\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}}x_{\mathsf{RobTyp}}=k\), where \(\mathsf{RobTypS}\) denotes the set of robot types.
In addition, the other "pieces" of the "puzzle", that is, the cycles, are also represented by types: each cycle \(C\) of length at most \(2|\mathsf{VC}|\) in \(G^{*}\) is represented by a _cycle type_ of the form \(\mathsf{CycTyp}=(C,\mathsf{RobTyp})\) (along with other information discussed later), where \(\mathsf{RobTyp}\) is a robot type that is "able to connect to \(C\)", that is, \(V(\mathsf{Graph}(\mathsf{CC}))\cap\mathsf{VC}\cap V(C)\neq\emptyset\), where \(\mathsf{CC}\) is the skeleton of \(\mathsf{RobTyp}\). Similarly, we will have equations for our other types.
**Satisfying the Budget Restriction.** Now, we aim to satisfy the budget condition (Condition 4 of Lemma 3.4), that is, for every \(i\in[k]\), \(|\widehat{E}_{i}|\leq B\). Let \(i\in[k]\) and let \((\mathsf{CC},\mathcal{C})\) be an \(\widehat{E}_{i}\)-valid pair. So, \(\widehat{E}_{i}=\mathsf{CC}\cup(\bigcup_{C\in\mathcal{C}}E(C))\) (being a union of two multisets). Now, in Lemma 5.12, we prove that "most" of the cycles in \(\mathcal{C}\) are of length \(4\), that is, for every \(2\leq j\leq 2|\mathsf{VC}|,j\neq 4\), the number of cycles of length \(j\) in \(\mathcal{C}\) is bounded by \(2|\mathsf{VC}|^{2}\). Therefore, we also add to the definition of a robot type the number of cycles of each such length \(j\), encoded by a vector \(\mathsf{NumOfcyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\). So, for now, a robot type is \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{NumOfcyc})\). Thus, in order to satisfy the budget condition, we verify that the total budget used by all the robots of a robot type \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{NumOfcyc})\) is as expected. First, we ensure that the number of cycles of each length \(2\leq j\leq 2|\mathsf{VC}|,j\neq 4\), is exactly as the robot type demands, times the number of robots associated with this type, that is, \(N_{j}\cdot x_{\mathsf{RobTyp}}\). So, we have the following equation:
**Equation 5: Assigning the Exact Number of Cycles of Length Other Than \(4\) to Each Robot Type.** We have the following notation: \(\mathsf{CycTypS}(\mathsf{RobTyp},j)\) is the set of cycle types for cycles of length \(j\) assigned to a robot of robot type \(\mathsf{RobTyp}\).
5. For every robot type \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{NumOfcyc})\) and for every \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\), \(\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},j)}x_{\mathsf{CycTyp}}=N_{j}\cdot x_{\mathsf{RobTyp}}\), where \(\mathsf{NumOfcyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\).
Observe that once this equation is satisfied, we are able to arbitrarily allocate \(N_{j}\) cycles of length \(j\) to each robot of type \(\mathsf{RobTyp}\). So, in order to verify the budget limitation, we only need to deal with the cycles of length \(4\). Now, notice that the budget left for a robot of type \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{NumOfcyc})\) for the cycles of length \(4\) is \(B-(|\mathsf{CC}|+\sum_{2\leq j\leq 2|\mathsf{VC}|,j\neq 4}N_{j}\cdot j)\), where \(\mathsf{NumOfcyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\). The maximum number of cycles of length \(4\) that we can add to a single robot of type \(\mathsf{RobTyp}\) therefore corresponds to the largest multiple of \(4\) that is less than or equal to this leftover budget. So, for every robot type \(\mathsf{RobTyp}\), let \(\mathsf{CycBud}(\mathsf{RobTyp})=\lfloor(B-(|\mathsf{CC}|+\sum_{2\leq j\leq 2|\mathsf{VC}|,j\neq 4}N_{j}\cdot j))\cdot\frac{1}{4}\rfloor\cdot 4\). Notice that \(\mathsf{CycBud}(\mathsf{RobTyp})\) is the budget left for the cycles of length \(4\). Thus, we have the following equation:
**Equation 6: Verifying the Budget Limitation.** This equation is defined as follows.
\(6\). For every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\),
\(4\cdot\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}x_{\mathsf{CycTyp}}\leq x_{\mathsf{RobTyp}}\cdot\mathsf{CycBud}(\mathsf{RobTyp})\).
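As a small illustration (assuming \(B\), \(|\mathsf{CC}|\) and the vector \(\mathsf{NumOfcyc}\) are given as plain integers), the quantity \(\mathsf{CycBud}\) used in this equation is just the leftover budget rounded down to a multiple of four:

```python
def cyc_bud(B, cc_size, num_of_cyc):
    """num_of_cyc maps each length j != 4 to N_j; returns the budget left
    for cycles of length 4, rounded down to a multiple of 4."""
    used = cc_size + sum(j * n_j for j, n_j in num_of_cyc.items())
    return ((B - used) // 4) * 4
```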
By now, we have that there exist \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) that satisfy Conditions 1, 2 and 4 of Lemma 3.4 if and only if Equations 1, 5 and 6 can be satisfied.
**Covering Edges with Both Endpoints in \(\mathsf{VC}\).** Now, we aim to satisfy Condition 3 of Lemma 3.4, that is, we need to verify that every edge is covered by at least one robot. First, we deal with edges with both endpoints in \(\mathsf{VC}\). Here, for every \(\{u,v\}\) such that \(u,v\in\mathsf{VC}\), we just need to verify that at least one cycle or one of the \(\mathsf{CC}\)'s contains \(\{u,v\}\). This we can easily solve by the following equation:
**Equation 4: Covering Each Edge With Both Endpoints in \(\mathsf{VC}\).** We have the following notations: For every \(\{u,v\}\in E(G)\) such that \(u,v\in\mathsf{VC}\), (i) let \(\mathsf{CycTypS}(\{u,v\})\) be the set of cycle types \(\mathsf{CycTyp}=(C,\mathsf{RobTyp})\) where \(C\) covers \(\{u,v\}\), and (ii) let \(\mathsf{RobTypS}(\{u,v\})\) be the set of robot types \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{NumOfcyc})\) where \(\mathsf{CC}\) covers \(\{u,v\}\). In this equation, we ensure that each \(\{u,v\}\in E(G)\) with both endpoints in \(\mathsf{VC}\) is covered at least once:
\(4\). For every \(\{u,v\}\in E\) such that \(u,v\in\mathsf{VC}\),
\(\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\{u,v\})}x_{\mathsf{CycTyp}}+\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\{u,v\})}x_{\mathsf{RobTyp}}\geq 1\).
Let \(\mathsf{RobTypS}\) be the set of the robot types, and let \(\mathsf{CycTypS}\) be the set of cycle types.
**Covering Edges with an Endpoint in \(V(G)\setminus\mathsf{VC}\).** Now, we aim to cover the edges from \(E(G)\) with (exactly) one endpoint in \(V(G)\setminus\mathsf{VC}\). Here, we need to work harder. Let \(x_{z}\), for every \(z\in\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy Equations 1 and 4-6. For now, we will arbitrarily allocate cycles to robots according to their types. Then, we will replace every \(u^{*}\in V(\mathsf{Graph}(\mathsf{CC}_{i}))\) and \(u^{*}\in V(C)\), for every cycle \(C\) allocated to the \(i\)-th robot, by an arbitrary \(u\in u^{*}\). Then, we will define \(\widehat{E}_{i}\) as the union of the edge sets of the cycles and \(\mathsf{CC}_{i}\) we obtained. We saw that due to Equations 1, 5 and 6, Conditions 1, 2 and 4 of Lemma 3.4 are satisfied. In addition, due to Equation 4, we ensure that each \(\{u,v\}\in E(G)\) with both endpoints in \(\mathsf{VC}\) is covered. The change we need to make in order to cover edges with an endpoint in \(V(G)\setminus\mathsf{VC}\) is to make smarter choices for the replacements of \(u^{*}\) vertices.
#### 4.1.3 Vertex Type
**Allocation of Multisets with Elements from \(\mathsf{N}_{G^{*}}(u^{*})\).** Observe that each \(u^{*}_{j}\in V(\mathsf{Graph}(\mathsf{CC}_{i}))\) that is replaced by some \(u\in u^{*}\), covers the multiset of edges \(\{\{u,v\}\mid v\in\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(u^ {*}_{j})\}\). In addition, every \(u^{*}\in V(C)\) that is replaced by \(u\in u^{*}\), covers the multiset of edges \(\{\{u,v\},\{u,v^{\prime}\}\}\), where \(v\) and \(v^{\prime}\) are the vertices right before and right after \(u\) in \(C\), respectively. Now, in order to cover every edge with an endpoint in \(V(G)\setminus\mathsf{VC}\), we need to cover the set \(\{\{u,v\}\mid v\in\mathsf{N}_{G^{*}}(u^{*})\}\) for every \(u\in u^{*}\in\mathsf{EQ}\). Therefore, we would like to ensure that the union of multisets of neighbors "allocated" for each \(u\), when we replace some \(u^{*}\) by \(u\), contains \(\{\{u,v\}\mid v\in\mathsf{N}_{G^{*}}(u^{*})\}\).
**The Set \(\mathsf{NeiSubsets}\) of Multisets Needed to Allocate to a Vertex.** Now, the reverse direction holds as well: let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be multisets satisfying the conditions of Lemma 3.4, for every \(i\in[k]\), let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair, and let \(u\in u^{*}\in\mathsf{EQ}\). Consider the following multisets (*): (i) for every \(i\in[k]\) such that \(u\in V(\mathsf{Graph}(\mathsf{CC}_{i}))\), the multiset \(\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(u)\); (ii) for every \(i\in[k]\) and \(C\in\mathcal{C}_{i}\) and every appearance of \(u\) in \(C\), the multiset \(\{v,v^{\prime}\}\), where
and \(v^{\prime}\) are the vertices in \(C\) right before and right after the appearance of \(u\). By Condition 3 of Lemma 3, every edge appears in at least one among \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\). So, as for every \(i\in[k]\), \(\widehat{E}_{i}=\mathsf{CC}_{i}\cup\bigcup_{C\in\mathcal{C}_{i}}E(C)\), the union of the multisets in (*) obviously contains \(\mathsf{N}_{G^{*}}(u^{*})\), e.g., see Figure 4. We would like to store the information of these potential multisets that ensures that we covered \(\mathsf{N}_{G^{*}}(u^{*})\). The issue is that there might be a lot of multisets, as \(u\) might appear in many \(\widehat{E}_{i}\)'s. Clearly, it is sufficient to store one copy of each such multiset, as we only care that the union of the multisets contains \(\mathsf{N}_{G^{*}}(u^{*})\). Now, as we assume that \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) are nice multisets, each element in every multiset we derived appears at most twice in that multiset. In addition, since every edge in \(E(\overline{G})\) appears at most twice, for each skeleton \(\mathsf{CC}\subseteq E(\overline{G})\), each edge appears at most twice in \(E(\mathsf{Graph}(\mathsf{CC}))\). So, for each \(u^{*}_{j}\in V(\mathsf{Graph}(\mathsf{CC}))\) we replace by some \(u\in u^{*}\), in the multiset of neighbors that are covered, every element appears at most twice. Moreover, since the degree is even, we have that the number of elements in each multiset is even.
For a set \(A\) we define the multiset \(A\times 2=\{a,a\ |\ a\in A\}\). That is, each element in \(A\) appears exactly twice in \(A\times 2\). Thus, we have the following definition for a vertex type (for more details, see Section 5.2).
[Vertex Type] Let \(G\) be a connected graph and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(u^{*}\in\mathsf{EQ}\) and let \(\mathsf{NeiSubsets}\subseteq 2^{\mathsf{N}_{G^{*}}(u^{*})\times 2}\). Then, \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\) is a _vertex type_ if for every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\), \(|\mathsf{NeiSub}|\) is even, and \(\mathsf{N}_{G^{*}}(u^{*})\subseteq\bigcup\mathsf{NeiSubsets}\).
Now, given \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) satisfying the conditions of Lemma 3, for every \(i\in[k]\), an \(\widehat{E}_{i}\)-valid pair \((\mathsf{CC}_{i},\mathcal{C}_{i})\), and \(u\in u^{*}\in\mathsf{EQ}\), we derive the vertex type of \(u\) as follows. We take the set \(\mathsf{Neibsets}\) of multisets as described in (*). Clearly, \((u^{*},\mathsf{Neib subsets})\) is a vertex type (for more details, see Definition 5.1 and Lemma 5.1).
For the reverse direction, we will use vertex types in order to cover the edges incident to each \(u\in u^{*}\in\mathsf{EQ}\). Let \(\mathsf{VerTypS}\) be the set of vertex types. We have a variable \(x_{z}\) for every \(z\in\mathsf{VerTypS}\). First, each \(u\in u^{*}\in\mathsf{EQ}\) is associated with exactly one vertex type \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\), for some \(\mathsf{NeiSubsets}\). To achieve this, we first ensure that for every \(u^{*}\in\mathsf{EQ}\), the total sum of \(x_{z}\) for \(z\in\mathsf{VerTypS}_{u^{*}}\) is exactly \(|u^{*}|\), where \(\mathsf{VerTypS}_{u^{*}}\subseteq\mathsf{VerTypS}\) is the set of vertex types of the form \((u^{*},\mathsf{NeiSubsets})\).
**Equation 2: Vertex Type for Each Vertex.** This equation is defined as follows.
2. For every \(u^{*}\in\mathsf{EQ}\), \(\sum_{\mathsf{VerTyp}\in\mathsf{VerTypS}_{u^{*}}}x_{\mathsf{VerTyp}}=|u^{*}|\)
Figure 4: An illustration of the parts of a solution around an independent set vertex \(v\). The three colors represent the parts of the multisets corresponding to three robots. The solid edges belong to the skeleton of the specific robot. The dashed edges belong to a cycle, labelled in the figure, of the multiset of the cycles corresponding to the specific robot. The vertex type of \(v\) derived from the solution shown in the figure is \((v^{*},\{\{1,6,8,8\},\{3,4\},\{7,8\},\{2,2,5,6\}\})\), where \(v\in v^{*}\in\mathsf{EQ}\).
Given values for the variables that satisfy the equation, we arbitrarily determine a vertex type \((u^{*},\mathsf{NeiSubsets})\) for each \(u\in u^{*}\), such that there are exactly \(x_{\mathsf{VerTyp}}\) vertices of type \(\mathsf{VerTyp}\).
**Allocation Functions of Multisets to Vertex Types.** Now, let \(u\in u^{*}\in\mathsf{EQ}\) be a vertex of vertex type \((u^{*},\mathsf{NeiSubsets})\). We aim to ensure that, when we replace the copies of \(u^{*}\) by vertices from \(u^{*}\), each \(u\) is allocated at least one of the multisets in \(\mathsf{NeiSubsets}\). This ensures that we cover all of the edges adjacent to \(u\). Instead of doing this for each \(u\in u^{*}\in\mathsf{EQ}\) individually, we will ensure that each \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\) is allocated to vertices of type \((u^{*},\mathsf{NeiSubsets})\) at least \(x_{\mathsf{VerTyp}}\) times. To this end, we add more information to the robot types. For a robot type with a skeleton \(\mathsf{CC}\), recall that we replace each \(u^{*}_{j}\in V(\mathsf{Graph}(\mathsf{CC}))\) by some \(u\in u^{*}\). The robot type also determines the vertex type of the vertex \(u\) that replaces \(u^{*}_{j}\). In particular, we add to the robot type an allocation for each of \(\{(u^{*}_{j},\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC})}(u^{*}_{j}))\mid u^{*}_{j}\in V(\mathsf{Graph}(\mathsf{CC}))\}\), that is, a function \(\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}\) from this set into \(\mathsf{VerTypS}\) (e.g., a robot of a robot type associated with the skeleton illustrated by Figure 3(d) needs to allocate the pair \((r^{*}_{1},\{1,1\})\), along with the other pairs shown in the figure). Observe that \(u^{*}_{j}\) is the vertex being replaced, and \(\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC})}(u^{*}_{j})\) is the multiset of neighbors that are covered. So, we demand that each \((u^{*}_{j},\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC})}(u^{*}_{j}))\) is allocated to a vertex type \((u^{*},\mathsf{NeiSubsets})\) that "wants" to get \(\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC})}(u^{*}_{j})\), that is, \(\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC})}(u^{*}_{j})\in\mathsf{NeiSubsets}\) (e.g., a robot of a robot type associated with the skeleton illustrated by Figure 3(d) might allocate \((r^{*}_{1},\{1,1\})\) to a vertex type \((r^{*},\{\{1,1\},\{3,4\}\})\)). Now, we are ready to define a robot type as follows (for more details, see Section 5.3).
[Robot Type] A _robot type_ is \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}, \mathsf{NumOfcyc})\) such that:
1. \(\mathsf{CC}\subseteq E(\overline{G})\).
2. \(\mathsf{Graph}(\mathsf{CC})\) is connected, every vertex in \(\mathsf{Graph}(\mathsf{CC})\) has even degree and \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{CC}))\).
3. \(\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}\) is an allocation of \(\{(u^{*}_{j},\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC})}(u^{*}_{j})) \mid u^{*}_{j}\in V(\mathsf{Graph}(\mathsf{CC}))\}\) to vertex types.
4. \(\mathsf{NumOfcyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\), where \(0\leq N_{i}\leq 2|\mathsf{VC}|^{2}\) for every \(2\leq i\leq 2|\mathsf{VC}|\), \(i\neq 4\).
Similarly, to a cycle type with a cycle \(C\) in \(G^{*}\) we add an allocation of the multiset \(\{\{v,v^{\prime}\}\mid u^{*}\in V(C)\), \(v\) and \(v^{\prime}\) are the vertices appearing right before and right after \(u^{*}\}\) to vertex types (given by a function \(\mathsf{PaAlloc}_{C}\)). Now, we are ready to define a cycle type as follows (for more details, see Section 5.4).
[Cycle Type] Let \(C\in\mathsf{Cyc}_{G^{*}}\), let \(\mathsf{PaAlloc}_{C}\) be an allocation of \(\{\{v,v^{\prime}\}\mid u^{*}\in V(C)\), \(v\) and \(v^{\prime}\) are the vertices appearing right before and right after \(u^{*}\}\) to vertex types, and let \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfcyc})\) be a robot type. Then, \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\) is a cycle type if \(V(\mathsf{Graph}(\mathsf{CC}))\cap V(C)\cap\mathsf{VC}\neq\emptyset\).
We have the following notations.
For every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\), every \(\mathsf{NeiSub}=\{v,v^{\prime}\}\in\mathsf{NeiSubsets}\) and \(1\leq j\leq 2|\mathsf{VC}|\), \(\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)\) is the set of cycle types that assign \(\mathsf{NeiSub}\) to \(\mathsf{VerTyp}\) exactly \(j\) times. For every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\), every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\) and \(1\leq j\leq 2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}\), \(\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)\) is the set of robot types that assign \(\mathsf{NeiSub}\) to \(\mathsf{VerTyp}\) exactly \(j\) times. Finally, we have the following equation:
**Equation 3: Assigning Enough Subsets for Each Vertex Type.** The equation is defined as follows.
3. For every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\) and every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\),
\[\sum_{j=1}^{2^{|\mathsf{VC}|}}\ \sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{CycTyp}}\ +\ \sum_{j=1}^{2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}}\ \sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{RobTyp}}\ \geq\ x_{\mathsf{VerTyp}}.\]
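To make the shape of Equations 1-6 concrete, here is a hedged sketch of how they could be assembled with the PuLP ILP modelling library, assuming the type sets \(\mathsf{RobTypS}\), \(\mathsf{CycTypS}\) and \(\mathsf{VerTypS}\) have already been enumerated and are given as lists. The attribute and helper names (`eq_class`, `nei_subsets`, `assigns`, `covers`, `num_of_cyc`, `robot_type`, `length`, `cyc_bud`) are hypothetical stand-ins for the formal objects above, and the paper itself relies on the ILP algorithm from the proposition in Section 2 rather than on an off-the-shelf solver.

```python
from pulp import LpProblem, LpVariable, lpSum

def build_reduction(k, B, vc_edges, eq_classes, RobTypS, CycTypS, VerTypS):
    prob = LpProblem("CGE_reduction")
    prob += lpSum([])                          # dummy objective: feasibility only
    x = {t: LpVariable(f"x_{i}", lowBound=0, cat="Integer")
         for i, t in enumerate(RobTypS + CycTypS + VerTypS)}

    # Equation 1: exactly one robot type per robot.
    prob += lpSum(x[r] for r in RobTypS) == k
    # Equation 2: exactly one vertex type per vertex of each equivalence class.
    for u_star in eq_classes:
        prob += lpSum(x[v] for v in VerTypS if v.eq_class == u_star) == len(u_star)
    # Equation 3: enough multisets of neighbors allocated to each vertex type.
    for v in VerTypS:
        for nei_sub in v.nei_subsets:
            prob += lpSum(t.assigns(v, nei_sub) * x[t]
                          for t in CycTypS + RobTypS) >= x[v]
    # Equation 4: every edge with both endpoints in the vertex cover is covered.
    for e in vc_edges:
        prob += (lpSum(x[c] for c in CycTypS if c.covers(e))
                 + lpSum(x[r] for r in RobTypS if r.covers(e))) >= 1
    # Equations 5 and 6: cycle counts and the budget of each robot type.
    for r in RobTypS:
        for j, N_j in r.num_of_cyc.items():    # lengths j != 4
            prob += lpSum(x[c] for c in CycTypS
                          if c.robot_type is r and c.length == j) == N_j * x[r]
        prob += 4 * lpSum(x[c] for c in CycTypS
                          if c.robot_type is r and c.length == 4) \
                <= r.cyc_bud(B) * x[r]
    return prob, x
```

Solving the resulting program (e.g., via `prob.solve()`) and checking feasibility then corresponds to deciding the instance, up to the correctness of the type enumeration.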
#### 4.1.4 The Correctness of the Reduction
We denote the ILP instance associated with Equations 1-6 by \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Now, we give a proof sketch for the correctness of the reduction:
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(k,B\in\mathbb{N}\). Then, \((G,v_{\mathsf{init}},k,B)\) is a yes-instance of \(\mathsf{CGE}\) if and only if \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) is a yes-instance of Integer Linear Programming.
Proof.: Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values satisfying Equations 1-6. For every vertex type \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\) and each \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\), let \(\mathsf{Alloc}(\mathsf{VerTyp},\mathsf{NeiSub})\) be the set of all allocations of \(\mathsf{NeiSub}\) to \(\mathsf{VerTyp}\) by cycles or robots. We arbitrarily allocate each element in \(\mathsf{Alloc}(\mathsf{VerTyp},\mathsf{NeiSub})\) to a vertex in \(u^{*}\), such that every vertex \(u\in u^{*}\) of type \(\mathsf{VerTyp}\) gets at least one allocation. Equation 3 ensures that this is possible. Then, we replace every \(u^{*}\in V(\mathsf{Graph}(\mathsf{CC}_{i}))\) and every \(u^{*}\in V(C)\) (for every \(C\in\mathcal{C}_{i}\)) by the \(u\in u^{*}\) derived by the allocation. This ensures that we cover every edge incident to a vertex in \(V(G)\setminus\mathsf{VC}\). As seen in this overview, the other conditions of Lemma 3 hold (the full proof is in Section 5.6).
For the reverse direction, let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be multisets satisfying the conditions of Lemma 3. For every \(1\leq i\leq k\), let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. Then, we first derive the vertex type of each \(u\in V(G)\setminus\mathsf{VC}\), according to its equivalence class in \(\mathsf{EQ}\) and the set of multisets derived from \(((\mathsf{CC}_{i},\mathcal{C}_{i}))_{1\leq i\leq k}\) (e.g., see Figure 4; for more details, see Definition 5.14 and Lemma 5.16). Then, we derive the robot type \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfcyc})\) for each \(i\in[k]\): (i) the skeleton \(\mathsf{CC}\) is determined by \(\mathsf{CC}_{i}\) (e.g., see Figure 3(d)), (ii) \(\mathsf{NumOfcyc}\) is determined by the number of cycles of each length in \(\mathcal{C}_{i}\), and (iii) the allocation of the multisets of \(\mathsf{Graph}(\mathsf{CC}_{i})\) is determined by the vertex types of \(u\in V(\mathsf{Graph}(\mathsf{CC}_{i}))\cap(V(G)\setminus\mathsf{VC})\) we have already computed (for more details, see Definition 5.19 and Lemma 5.21). Then, for every \(i\in[k]\) and every \(C^{\prime}\in\mathcal{C}_{i}\), we determine the cycle type \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\) of \(C^{\prime}\): (i) \(C\) is determined by \(C^{\prime}\) (we replace each \(u\in u^{*}\in\mathsf{EQ}\) in \(C^{\prime}\) by \(u^{*}\)), (ii) \(\mathsf{RobTyp}\) is the robot type of \(i\) we have already computed, and (iii) \(\mathsf{PaAlloc}_{C}\) is determined by the vertex types of \(u\in V(C^{\prime})\cap(V(G)\setminus\mathsf{VC})\) we have already computed (for more details, see Definition 5.24 and Lemma 5.26). Then, for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), we define \(x_{z}\) to be the number of elements of type \(z\). As seen in this overview, the values of the variables satisfy Equations 1-6 (the full proof is in Section 5.7).
Observe that the number of variables is bounded by a function of \(|\mathsf{VC}|\), so we will get an FPT runtime with respect to \(\mathsf{vc}\). We analyze the runtime of the algorithm in Section 5.8. Thus, we conclude the correctness of Theorem 1.1.
### 4.2 Approximation Algorithm with Additive Error of \(\mathcal{O}(\mathsf{vc})\)
Our algorithm is based on a greedy approach. Recall that our new goal (from Lemma 3) is to find \(k\) multisets \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) such that for every \(1\leq i\leq k\), \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}_{i}))\), \(\mathsf{Graph}(\widehat{E}_{i})\) is connected and each \(u\in V(\mathsf{Graph}(\widehat{E}_{i}))\) has even degree in \(\mathsf{Graph}(\widehat{E}_{i})\). Now, assume that we have a vertex cover \(\mathsf{VC}\) of \(G\) such that \(G[\mathsf{VC}]\) is connected and \(v_{\mathsf{init}}\in\mathsf{VC}\) (e.g., see the orange vertices in Figure 5(b)), and let \(I=V(G)\setminus\mathsf{VC}\). We first make the degree of every vertex in \(I\) even in \(G\), by duplicating an arbitrary incident edge for each such vertex of odd degree (e.g., see the green edges in Figure 5(c)). Observe that, after these operations, \(G\) may be a multigraph (e.g., see the graph in Figure 5(c)).
We initialize \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) with \(k\) empty sets. We partition the set of edges of \(G\) with one endpoint in \(I\) in the following manner. We choose the next multiset from \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) in a round-robin fashion and put a pair of edges, not considered so far, incident to some vertex \(v\in I\), in the multiset (e.g., see the red, blue and green edges in Figure 5(d)). This ensures that the degree of every vertex in \(I\) is even in each multiset (e.g., see Figures 5(e)-5(g)). Let \(\widehat{E}^{\prime}_{1},\ldots,\widehat{E}^{\prime}_{k}\) be multisets satisfying the conditions of Lemma 3.2. Then, due to Condition 2 of Lemma 3.2, the degree of every vertex is even in every \(\mathsf{Graph}(\widehat{E}^{\prime}_{i})\). Thus, the total number of edges (with repetition) incident to any vertex in \(\mathsf{Graph}(\widehat{E}^{\prime}_{1}\cup\ldots\cup\widehat{E}^{\prime}_{k})\) is even. Therefore, there must be at least one additional repetition for at least one edge of every vertex with odd degree in \(G\). So, adding an additional edge at each vertex with odd degree is a "must" and it does not "exceed" the optimal budget. Then, we partition the edges with both endpoints in \(\mathsf{VC}\), in a balanced fashion, as follows. We choose an edge, not considered so far, and add it to a multiset with minimum size.
Observe that, after this step, we have that: i) every edge of the input graph belongs to at least one of the multisets \(\widehat{E}_{i}\), ii) the degree of each vertex of \(I\) in each multiset is even, and iii) we have not exceeded the optimal budget (e.g., see Figures 6(a)-6(d)). We still need to ensure that i) \(\mathsf{Graph}(\widehat{E}_{i})\) is connected, for every \(i\in[k]\), ii) the degree of each vertex of \(\mathsf{VC}\) in each multiset is even, and iii) \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}_{i}))\) for every \(i\in[k]\). Next, we add a spanning tree of \(G[\mathsf{VC}]\) to each of the \(\widehat{E}_{i}\), in order to make \(\mathsf{Graph}(\widehat{E}_{i})\) connected and to ensure that \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}_{i}))\) (e.g., see Figures 6(f)-6(h)). Lastly, we add at most \(|\mathsf{VC}|\) edges, with both endpoints in \(\mathsf{VC}\), to every \(\widehat{E}_{i}\) in order to make the degree of each \(u\in\mathsf{VC}\) even in each of the multisets (e.g., see Figures 6(i)-6(k)). Observe that the multisets \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) satisfy the conditions of Lemma 3.2. Moreover, we added at most \(\mathcal{O}(|\mathsf{VC}|)\) additional edges to each \(\widehat{E}_{i}\), compared to an optimal solution.
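The following is a runnable sketch of the scheme just described (it is not the paper's Algorithm 1 verbatim, and the parity-correction step is one possible realization). It assumes a simple graph `G` given as an adjacency dictionary, a vertex cover `vc` such that `G[vc]` is connected and contains `v_init`, and returns \(k\) edge multisets as `Counter` objects.

```python
from collections import Counter, deque

def greedy_partition(G, vc, v_init, k):
    I = set(G) - set(vc)
    robots = [Counter() for _ in range(k)]
    size = lambda c: sum(c.values())

    # Pair up the edges incident to each independent-set vertex (duplicating one
    # arbitrary edge at odd-degree vertices) and hand the pairs out round-robin.
    nxt = 0
    for v in I:
        inc = [tuple(sorted((v, u))) for u in G[v]]
        if len(inc) % 2 == 1:
            inc.append(inc[0])
        for j in range(0, len(inc), 2):
            robots[nxt][inc[j]] += 1
            robots[nxt][inc[j + 1]] += 1
            nxt = (nxt + 1) % k

    # Edges with both endpoints in the cover: always give to a smallest multiset.
    vc_edges = {tuple(sorted((u, w))) for u in vc for w in G[u] if w in vc}
    for e in vc_edges:
        min(robots, key=size)[e] += 1

    # Spanning tree of G[vc] rooted at v_init (BFS), added to every multiset.
    parent, order, queue = {v_init: None}, [v_init], deque([v_init])
    while queue:
        u = queue.popleft()
        for w in G[u]:
            if w in vc and w not in parent:
                parent[w] = u
                order.append(w)
                queue.append(w)
    tree = [tuple(sorted((v, parent[v]))) for v in order[1:]]

    for r in robots:
        for e in tree:
            r[e] += 1
        # Parity fix inside the cover: walk the tree bottom-up and double the
        # parent edge of every vertex whose degree is still odd.
        for v in reversed(order[1:]):
            deg_v = sum(m for e, m in r.items() if v in e)
            if deg_v % 2 == 1:
                r[tuple(sorted((v, parent[v])))] += 1
    return robots
```

Each resulting multiset then induces a connected multigraph containing \(v_{\mathsf{init}}\) in which all degrees are even (every edge has an endpoint in the cover, which is spanned by the added tree), so, by the lemma of Section 3, it can be traversed by a single robot cycle.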
## 5 FPT Algorithm with Respect to Vertex Cover
In this section, we present an FPT algorithm with respect to the vertex cover number of the input graph \(G\):
CGE _is in FPT parameterized by \(\mathsf{vc}(G)\), where \(G\) is the input graph_.
Our algorithm is based on a reduction to the Integer Linear Programming (ILP) problem.
Recall that by Lemma 3.2, an instance \((G,v_{\mathsf{init}},k,B)\) of CGE is a yes-instance if and only if there exist \(k\) multisets \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) with elements from \(E(G)\) such that:
1. For every \(1\leq i\leq k\), \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}_{i}))\).
2. For every \(1\leq i\leq k\), \(\mathsf{Graph}(\widehat{E}_{i})\) is connected, and every vertex in it has even degree.
3. \(E(G)\subseteq\widehat{E}_{1}\cup\ldots\cup\widehat{E}_{k}\).
4. \(\mathsf{max}\{|\widehat{E}_{1}|,\ldots,|\widehat{E}_{k}|\}\leq B\).
Now, we show that it is enough to look at "simpler" multisets satisfying the conditions of Lemma 3.2. By picking a solution that minimizes \(\sum_{i=1}^{k}|\widehat{E}_{i}|\), it holds that for every \(1\leq i\leq k\) and \(\{u,v\}\in\widehat{E}_{i}\), \(\{u,v\}\) appears at most twice in \(\widehat{E}_{i}\):
**Observation 5.1**.: _Let \(G\) be a connected graph, let \(v_{\text{init}}\in V(G)\) and let \(k,B\in\mathbb{N}\). Assume that there exist \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) such that the conditions of Lemma 3 hold. Then, there exist \(\widehat{E}^{\prime}_{1},\ldots,\widehat{E}^{\prime}_{k}\) such that the conditions of Lemma 3 hold and for every \(1\leq i\leq k\), each \(\{u,v\}\in\widehat{E}^{\prime}_{i}\) appears at most twice in \(\widehat{E}^{\prime}_{i}\)._
Proof.: Let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be multisets satisfying the conditions of Lemma 3. Assume that there exist \(1\leq i\leq k\) and \(\{u,v\}\in\widehat{E}_{i}\) such that \(\{u,v\}\) appears more than twice in \(\widehat{E}_{i}\). Let \(\widehat{E}^{\prime}_{i}=\widehat{E}_{i}\setminus\{\{u,v\},\{u,v\}\}\), that is, remove two of the copies of \(\{u,v\}\). Since every vertex in \(\mathsf{Graph}(\widehat{E}_{i})\) has even degree, every vertex in \(\mathsf{Graph}(\widehat{E}^{\prime}_{i})\) has even degree as well; moreover, at least one copy of \(\{u,v\}\) remains in \(\widehat{E}^{\prime}_{i}\), so \(\mathsf{Graph}(\widehat{E}^{\prime}_{i})\) has the same vertex set and connectivity as \(\mathsf{Graph}(\widehat{E}_{i})\). Thus, \(\widehat{E}_{1},\ldots,\widehat{E}_{i-1},\widehat{E}^{\prime}_{i},\widehat{E}_{i+1},\ldots,\widehat{E}_{k}\) also satisfy the conditions of Lemma 3, and repeating this argument as long as some edge appears more than twice yields the claimed multisets.
We aim to construct linear equations that verify the conditions in Lemma 3. We begin with some definitions that we will use later, when we present our reduction.
### 5.1 Encoding \(\widehat{E}_{i}\) by a Valid Pair
Given \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) such that conditions of Observation 5.1 hold, we show how to "encode" each \(\widehat{E}_{i}\) by a different structure, which will be useful later. For this purpose, we present
Figure 5: Illustration of the execution of Lines 2–20 of Algorithm 1 on the instance shown in Figure 5(a). The orange vertices in Figures 5(b)–5(g) are the vertex cover vertices. (a) Running example for Algorithm 1: A graph \(G\), a vertex cover VC (drawn in violet) of \(G\), the vertex \(v_{\text{init}}=9\), and \(k=3\). (b) The connected vertex cover VC\({}^{\prime}\) (drawn in orange) of \(G\) obtained from VC after executing Lines 2–3 of Algorithm 1. (c) The green edges are added to make the degree of independent set vertices even (Lines 4–9 of Algorithm 1). (d) Balanced partition of edges incident to independent set vertices to the three robots, shown by red, blue and green edges (Lines 10–20 of Algorithm 1). (e-g) The graphs induced by the multisets corresponding to each of the three robots after Line 20 of Algorithm 1.
Figure 6: Illustration of the execution of the remaining lines of Algorithm 1 on the instance shown in Figure 5a. The orange vertices are the vertex cover vertices. (a) Balanced partition of all the edges, also including the ones incident only to vertex cover vertices, to the three robots, shown by red, blue and green edges (after Line 24 of Algorithm 1). (b-d) The graphs induced by the multisets corresponding to each of the three robots after Line 24 of Algorithm 1. (e) A spanning tree \(T\) of the graph induced by the vertex cover \(\mathsf{VC}^{\prime}\) (Line 25 of Algorithm 1). (f-h) The graphs induced by the multisets corresponding to each of the three robots after adding the spanning tree \(T\) (Lines 25-28 of Algorithm 1). (i-k) The graphs induced by the multisets corresponding to each of the three robots after making the degree of each vertex in \(\mathsf{VC}^{\prime}\) even (Lines 29-31 of Algorithm 1). The yellow edges correspond to the edges added by Algorithm 2.
some definitions. Let \(G\) be a connected graph and let \(\mathsf{VC}\) be a vertex cover of \(G\), given as input. We define an equivalence relation on \(V(G)\setminus\mathsf{VC}\) based on the sets of neighbors of the vertices in \(V(G)\setminus\mathsf{VC}\):
[Equivalence Relation for the Independent Set] Let \(G\) be a connected graph and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(\mathsf{IND}=V(G)\setminus\mathsf{VC}\). For every \(u,v\in\mathsf{IND}\), \(u\) is equivalent to \(v\) if \(\mathsf{N}(u)=\mathsf{N}(v)\). We denote the set of equivalence classes induced by this equivalence relation by \(\mathsf{EQ}_{G,\mathsf{VC}}\).
When \(G\) and \(\mathsf{VC}\) are clear from context, we refer to \(\mathsf{EQ}_{G,\mathsf{VC}}\) as \(\mathsf{EQ}\).
Next, we define the _equivalence graph of \(G\) and \(\mathsf{VC}\)_, denoted by \(G^{*}\). It contains every vertex from \(\mathsf{VC}\), and the edges having both endpoints in \(\mathsf{VC}\). In addition, every equivalence class \(u^{*}\in\mathsf{EQ}\) is represented by a single vertex adjacent to the neighbors of some \(u\in u^{*}\) in \(G\) (which belong to \(\mathsf{VC}\)), e.g. see Figure 1(b).
[Equivalence Graph \(G^{*}\)] Let \(G\) be a connected graph and let \(\mathsf{VC}\) be a vertex cover of \(G\). The _equivalence graph of \(G\) and \(\mathsf{VC}\)_ is \(G^{*}=\mathsf{Graph}(E^{*})\) where \(E^{*}=\{\{u^{*},v\}\ |\ u^{*}\in\mathsf{EQ},\exists u\in V(G)\text{ s.t. }\{u,v\}\in E(G)\wedge u\in u^{*}\}\cup\{\{u,v\}\in E(G)\ |\ u,v\in\mathsf{VC}\}\).
To construct \(k\) multisets \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) that satisfy the conditions of Lemma 3, we first deal with Condition 2: given \(1\leq i\leq k\), we need to ensure that \(\mathsf{Graph}(\widehat{E}_{i})\) is connected, and every vertex in it has even degree. For this purpose, we "encode" \(\widehat{E}_{i}\), for every \(1\leq i\leq k\), as follows. First, we construct a "small" subset \(\mathsf{CC}_{i}\subseteq\widehat{E}_{i}\) such that: (i) \(\mathsf{Graph}(\mathsf{CC}_{i})\) is connected; (ii) every vertex in \(\mathsf{Graph}(\mathsf{CC}_{i})\) has even degree; and (iii) \(V(\mathsf{Graph}(\mathsf{CC}_{i}))\cap\mathsf{VC}=V(\mathsf{Graph}(\widehat{E}_{i}))\cap\mathsf{VC}\). In words, Condition (iii) states that every vertex from \(\mathsf{Graph}(\widehat{E}_{i})\) that belongs to \(\mathsf{VC}\) also belongs to \(\mathsf{Graph}(\mathsf{CC}_{i})\). Observe that every vertex will have even degree in \(\mathsf{Graph}(\widehat{E}_{i}\setminus\mathsf{CC}_{i})\), and every cycle in \(\mathsf{Graph}(\widehat{E}_{i}\setminus\mathsf{CC}_{i})\) will contain at least one vertex from \(\mathsf{VC}\). We will later use these properties to "encode" the edges from \(\widehat{E}_{i}\setminus\mathsf{CC}_{i}\).
Now, we turn to show how to construct \(\mathsf{CC}_{i}\). For this purpose, we define the graph \(\overline{G}\). This graph is similar to \(G^{*}\), but has \(\mathsf{NumVer}(u^{*})=\mathsf{min}\{|u^{*}|,2^{|\mathsf{N}_{G^{*}}(u^{*})|}+|\mathsf{VC}|^{2}\}\) vertices for each \(u^{*}\in\mathsf{EQ}\). In addition, each edge in \(\overline{G}\) appears exactly twice in \(E(\overline{G})\), e.g., see Figure 2(c). We will see later that we can construct \(\mathsf{CC}_{i}\) as a submultigraph of \(\overline{G}\).
[The Multigraph \(\overline{G}\)] Let \(G\) be a connected graph and let \(\mathsf{VC}\) be a vertex cover of \(G\). For every \(u^{*}\in\mathsf{EQ}\), let \(\mathsf{NumVer}(u^{*})=\mathsf{min}\{|u^{*}|,2^{|\mathsf{N}_{G^{*}}(u^{*})|}+|\mathsf{VC}|^{2}\}\). Then, \(\overline{G}=\mathsf{Graph}(\overline{E})\) where \(\overline{E}=\{\{u_{i}^{*},v\},\{u_{i}^{*},v\}\ |\ u^{*}\in\mathsf{EQ},\exists u\in V(G)\text{ s.t. }\{u,v\}\in E(G)\wedge u\in u^{*},1\leq i\leq\mathsf{NumVer}(u^{*})\}\cup\{\{u,v\},\{u,v\}\ |\ \{u,v\}\in E(G),u,v\in\mathsf{VC}\}\).
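As a small illustration (assuming, as before, that \(G\) is a simple graph given as an adjacency dictionary and \(\mathsf{VC}\) is a vertex cover), the equivalence classes and the vertex budget \(\mathsf{NumVer}\) can be computed as follows; the function names are illustrative only.

```python
from collections import defaultdict

def equivalence_classes(G, vc):
    """Group the vertices outside the cover by their neighbourhood (which is
    contained in vc, since vc is a vertex cover)."""
    classes = defaultdict(list)
    for u in G:
        if u not in vc:
            classes[frozenset(G[u])].append(u)
    return classes          # neighbourhood -> list of vertices of that class

def num_ver(u_star, neighbourhood, vc):
    """NumVer(u*) = min{|u*|, 2^{|N_{G*}(u*)|} + |VC|^2}."""
    return min(len(u_star), 2 ** len(neighbourhood) + len(vc) ** 2)
```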
Next, for a submultigraph \(H\) of \(G\), we define the term \(\overline{G}\)-_submultigraph_ (e.g., see Figure 3(b)):
[\(\overline{G}\)-Submultigraph] Let \(G\) be a connected graph and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(\widehat{E}\) be a multiset with elements from \(E(G)\) such that every \(\{u,v\}\in\widehat{E}\) appears at most twice in \(\widehat{E}\). A submultigraph \(H\) of \(\mathsf{Graph}(\widehat{E})\) is a \(\overline{G}\)-submultigraph if for every \(u^{*}\in\mathsf{EQ}\), \(|V(H)\cap u^{*}|\leq\mathsf{NumVer}(u^{*})\).
Given a \(\overline{G}\)-submultigraph \(H\), we define the operation \(\overline{G}(H)\), which returns a submultigraph \(\overline{H}\) of \(\overline{G}\) isomorphic to \(H\) with isomorphism \(\alpha:V(H)\to V(\overline{H})\), where every vertex \(u\in u^{*}\cap V(H)\) is mapped (by \(\alpha\)) to a vertex in \(\overline{G}\) that belongs to the same equivalence class as \(u\) (e.g., see Figure 7):
[The Operation \(\overline{G}\)] Let \(G\) be a connected graph, let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(H\) be a \(\overline{G}\)-submultigraph. Then, \(\overline{G}(H)\) is a submultigraph \(\overline{H}\) of \(\overline{G}\) for which there exists an isomorphism \(\alpha:V(H)\to V(\overline{H})\) satisfying the following conditions:
1. _For every_ \(u\in\mathsf{VC}\)_,_ \(\alpha(u)=u\)_._
2. _For every_ \(u\in u^{*}\in\mathsf{EQ}\)_, there exists_ \(j\in[\mathsf{NumVer}(u^{*})]\) _such that_ \(\alpha(u)=u^{*}_{j}\)_._
Observe that since \(H\) is a \(\overline{G}\)-submultigraph, \(\mathsf{NumVer}(u^{*})\) and the number of appearances of each edge are large enough to ensure that \(\overline{G}(H)\) is well defined (although not uniquely defined).
Assuming we have computed \(\mathsf{CC}_{i}\) (which we will do soon in Lemma 5.12), we now explain the intuition for how to encode the edges in \(\widehat{E}_{i}\setminus\mathsf{CC}_{i}\). Recall that every vertex has even degree in \(\mathsf{Graph}(\widehat{E}_{i}\setminus\mathsf{CC}_{i})\). Therefore, if \(\mathsf{Graph}(\widehat{E}_{i}\setminus\mathsf{CC}_{i})\) is not empty, there exists a cycle \(C\) in \(\mathsf{Graph}(\widehat{E}_{i}\setminus\mathsf{CC}_{i})\), and, in particular, a simple one, as is implied by the following observation:
Let \(G\) be a non-empty multigraph. Assume that every vertex in \(G\) has even degree in it. Then, there exists a simple cycle in \(G\).
Now, observe that every vertex also has even degree in \(\mathsf{Graph}(\widehat{E}_{i}\setminus(\mathsf{CC}_{i}\cup E(C)))\), as implied by the following observation:
Let \(G\) be a multigraph. Assume that every vertex in \(G\) has even degree in it. Let \(C\) be a cycle in \(G\). Then, every vertex in \(\mathsf{Graph}(E(G)\setminus E(C))\) has even degree in it.
So, the same reasoning can be reapplied. Therefore, we can encode \(\widehat{E}_{i}\setminus\mathsf{CC}_{i}\) by a multiset of cycles. In particular, in the following lemma, we show that we can encode \(\widehat{E}_{i}\setminus\mathsf{CC}_{i}\) by a multiset of cycles \(\mathcal{C}\) such that all the cycles except for "few" of them are of length \(4\); in addition, the cycles from \(\mathcal{C}\) that are of length other than \(4\) are simple.
Let \(G\) be a multigraph such that each \(\{u,v\}\in E(G)\) appears at most twice in \(E(G)\), and let \(\mathsf{VC}\) be a vertex cover of \(G\). Assume that every vertex in \(G\) has even degree. Then, there exists a multiset of cycles \(\mathcal{C}\) in \(G\) such that:
1. At most \(2|\mathsf{VC}|^{2}\) cycles in \(\mathcal{C}\) have length other than \(4\).
2. The cycles in \(\mathcal{C}\) that have length other than \(4\) are simple.
3. \(\bigcup_{C\in\mathcal{C}}E(C)=E(G)\) (as an equality of multisets).
Proof.: First, assume that there are more than \(2|\mathsf{VC}|^{2}\) edges in \(E(G)\). Let \(\mathsf{IND}=V(G)\setminus\mathsf{VC}\). Every \(u\in V(G)\) has even degree, so let \(2d_{u}\) be the degree of \(u\) in \(G\), where \(d_{u}\in\mathbb{N}\). For every \(u\in\mathsf{IND}\), let \(\widehat{A}_{u}=\{\{\{u,v_{i}\},\{u,v_{i}^{\prime}\}\}_{i=1}^{d_{u}}\}\) be a partition of the edges incident to \(u\) in \(G\) into multisets of two elements, and let \(\widehat{A}=\bigcup_{u\in\mathsf{IND}}\widehat{A}_{u}\). Observe that there are more than \(2|\mathsf{VC}|^{2}\) edges in \(G\), and the number of edges in \(G\) with both endpoints in \(\mathsf{VC}\) is bounded by
\(2\binom{|\mathsf{VC}|}{2}\leq|\mathsf{VC}|^{2}\) (\(\binom{|\mathsf{VC}|}{2}\) different edges, each appears at most twice). So, there are more than \(|\mathsf{VC}|^{2}\) edges with one endpoint in \(\mathsf{IND}\). This implies that we have more than \(\frac{1}{2}|\mathsf{VC}|^{2}\) multisets in \(\widehat{A}\). Now, every multiset in \(\widehat{A}\) consists of two edges, each incident to exactly one vertex from \(\mathsf{VC}\). Notice that the number of different options to choose two vertices from \(\mathsf{VC}\) is bounded by \(\frac{1}{2}|\mathsf{VC}|^{2}\). Therefore, since there are more than \(\frac{1}{2}|\mathsf{VC}|^{2}\) multisets in \(\widehat{A}\), there exist two multisets \(\{\{u,v_{i}\},\{u,v^{\prime}_{i}\}\},\{\{\tilde{u},v_{j}\},\{\tilde{u},v^{\prime}_{j}\}\}\) such that \(v_{i}=v_{j}\) and \(v^{\prime}_{i}=v^{\prime}_{j}\). So, \(C=(u,v_{i},\tilde{u},v^{\prime}_{i},u)\) is a cycle of length \(4\) in \(G\). From Observation 5.1, the degree of every vertex in \(G\) remains even after deleting the edges of \(C\) from \(G\). Then, after deleting the edges of \(C\) from \(G\) (and inserting \(C\) into \(\mathcal{C}\)), we can use the same argument to find yet another cycle.
If \(G\) has at most \(2|\mathsf{VC}|^{2}\) edges (and it is not empty), then since the degree of every vertex is even, from Observation 5.1, we can find a cycle, and in particular a simple one, in the multigraph, delete it, and insert it into \(\mathcal{C}\).
The process ends when there are no edges in the multigraph. Clearly, by our construction, the conditions of the lemma hold. This ends the proof.
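The peeling step used in this proof can be made concrete as follows. The sketch below is ours: it only illustrates the simple-cycle observation applied exhaustively (walk in an even-degree multigraph until a vertex repeats, cut off the resulting simple cycle, and continue); it does not implement the pigeonhole argument that makes all but few of the extracted cycles have length exactly \(4\). The edge-list representation is an assumption made for the example.

```python
def peel_simple_cycles(edges):
    """edges: list of (u, v) pairs of a multigraph in which every vertex has even degree.
    Repeatedly walks until a vertex repeats, cuts off that simple cycle, deletes its edges,
    and continues; returns the extracted cycles as closed vertex sequences."""
    adj, alive, cycles = {}, [True] * len(edges), []
    for idx, (u, v) in enumerate(edges):
        adj.setdefault(u, []).append((v, idx))
        adj.setdefault(v, []).append((u, idx))

    def free_edge(u, used):
        # an alive edge at u that the current walk has not traversed yet
        return next(((v, i) for v, i in adj[u] if alive[i] and i not in used), None)

    for start in adj:
        while free_edge(start, set()) is not None:
            walk, walk_edges, pos, u = [start], [], {start: 0}, start
            while True:
                # never None: the endpoint has even alive degree and only one walk edge at it
                v, idx = free_edge(u, set(walk_edges))
                walk_edges.append(idx)
                if v in pos:                          # a vertex repeats: a simple cycle closes
                    i = pos[v]
                    cycles.append(walk[i:] + [v])
                    for e in walk_edges[i:]:          # delete only the cycle's edges
                        alive[e] = False
                    break
                pos[v] = len(walk)
                walk.append(v)
                u = v
    return cycles

# two triangles and a doubled edge, all meeting at vertex 1
print(peel_simple_cycles([(1, 2), (2, 3), (3, 1), (1, 4), (4, 1), (1, 5), (5, 6), (6, 1)]))
```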
We have the following observation, which will be useful later:
Let \(G\) be a connected multigraph, let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(C\) be a cycle in \(G\). Then, \(V(C)\cap\mathsf{VC}\neq\emptyset\).
Let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) satisfy the conditions of Lemma 3.4. We encode each \(\widehat{E}_{i}\) as a pair \((\mathsf{CC}_{i},\mathcal{C}_{i})\) where \(\mathsf{CC}_{i}\cup\bigcup_{C\in\mathcal{C}_{i}}E(C)=\widehat{E}_{i}\), and: (i) \(\mathsf{Graph}(\mathsf{CC}_{i})\) is a \(\overline{G}\)-submultigraph; (ii) \(\mathsf{Graph}(\mathsf{CC}_{i})\) is connected, \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{CC}_{i}))\) and every vertex in \(\mathsf{Graph}(\mathsf{CC}_{i})\) has even degree; and (iii) \(\mathcal{C}_{i}\) is a multiset of cycles satisfying the conditions of Lemma 5.9. We call such a pair an _\(\widehat{E}_{i}\)-valid pair_:
[Valid Pair] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(\widehat{E}\) be a multiset with elements from \(E(G)\) such that \(\mathsf{Graph}(\widehat{E})\) is connected, \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}))\), and every vertex in \(\mathsf{Graph}(\widehat{E})\) has even degree. Let \(\mathsf{CC}\subseteq\widehat{E}\), and let \(\mathcal{C}\) be a multiset of cycles in \(\mathsf{Graph}(\widehat{E})\). Then, \((\mathsf{CC},\mathcal{C})\) is an \(\widehat{E}\)-valid pair if the following conditions are satisfied:
1. \(\mathsf{Graph}(\mathsf{CC})\) is a \(\overline{G}\)-submultigraph.
2. \(\mathsf{Graph}(\mathsf{CC})\) is connected, \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{CC}))\) and every vertex in \(\mathsf{Graph}(\mathsf{CC})\) has even degree.
3. \(V(\mathsf{Graph}(\mathsf{CC}))\cap\mathsf{VC}=V(\mathsf{Graph}(\widehat{E})) \cap\mathsf{VC}\).
4. At most \(2|\mathsf{VC}|^{2}\) cycles in \(\mathcal{C}\) have length other than \(4\).
5. The cycles in \(\mathcal{C}\) of length other than \(4\) are simple.
6. \(\mathsf{CC}\cup\bigcup_{C\in\mathcal{C}}E(C)=\widehat{E}\).
Next, we invoke Lemma 5.9 in order to prove the following. Let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) satisfy the conditions of Lemma 3.4, and assume that, for every \(1\leq i\leq k\) and \(\{u,v\}\in\widehat{E}_{i}\), \(\{u,v\}\) appears at most twice in \(\widehat{E}_{i}\) (see Observation 5.1). Then, there exists \((\mathsf{CC}_{i},\mathcal{C}_{i})\) such that \((\mathsf{CC}_{i},\mathcal{C}_{i})\) is an \(\widehat{E}_{i}\)-valid pair:
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(\widehat{E}\) be a multiset with elements from \(E(G)\). Assume that \(\mathsf{Graph}(\widehat{E})\) is connected, \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}))\), every vertex in \(\mathsf{Graph}(\widehat{E})\) has even degree and every \(\{u,v\}\in\widehat{E}\) appears at most twice in \(\widehat{E}\). Then, there exist \(\mathsf{CC}\subseteq\widehat{E}\) and a multiset of cycles \(\mathcal{C}\) in \(\mathsf{Graph}(\widehat{E})\) such that \((\mathsf{CC},\mathcal{C})\) is an \(\widehat{E}\)-valid pair.
Proof.: First, we construct \(\mathsf{CC}\). For this purpose, we obtain a multigraph \(H\) from \(\mathsf{Graph}(\widehat{E})\) as follows. We start with \(H=\mathsf{Graph}(\widehat{E})\). While there exist distinct \(u,u^{\prime}\in u^{*}\cap V(H)\), for some \(u^{*}\in\mathsf{EQ}\), such that \(\mathsf{N}_{H}(u)=\mathsf{N}_{H}(u^{\prime})\), delete \(u^{\prime}\). Observe that, since \(\mathsf{Graph}(\widehat{E})\) is connected, so is \(H\). Also, observe that for every \(u\in u^{*}\in\mathsf{EQ}\), the number of options for \(\mathsf{N}_{H}(u)\) is bounded by \(2^{|\mathsf{N}_{G^{*}}(u^{*})|}\). So, for every \(u^{*}\in\mathsf{EQ}\), \(|u^{*}\cap V(H)|\leq 2^{|\mathsf{N}_{G^{*}}(u^{*})|}\). Notice that for every \(u^{*}\in\mathsf{EQ}\), we might have more than one vertex from \(u^{*}\) in \(H\), since vertices that have the same neighbors in \(G\) might have different neighbors in \(H\). In addition, each \(u\in u^{*}\in\mathsf{EQ}\) has even degree in \(H\) (being the same as its degree in \(\mathsf{Graph}(\widehat{E})\)), e.g. see Figure 2(b).
Now, let \(v\in\mathsf{VC}\) be a vertex of odd degree in \(H\) (if one exists). Notice that since the number of vertices with odd degree is even in every multigraph, there exists \(v^{\prime}\in\mathsf{VC}\cap V(H)\) such that \(v^{\prime}\neq v\) and \(v^{\prime}\) has odd degree in \(H\). We build a path \(P\) from \(v\) to such a \(v^{\prime}\) in \(G\) using edges in \(\widehat{E}\setminus E(H)\) as follows. Since \(v\) has odd degree in \(H\) and even degree in \(\mathsf{Graph}(\widehat{E})\), there exists \(\{u,v\}\in\widehat{E}\setminus E(H)\). We add \(\{u,v\}\) to \(P\). Now, if \(u\) has odd degree in \(H\), then \(u\in\mathsf{VC}\) and we finish; otherwise, \(u\) has even degree in \(H\), so it has odd degree in \(\mathsf{Graph}(E(H)\cup E(P))\), and so there exists \(\{u,u^{\prime}\}\in\widehat{E}\setminus(E(H)\cup E(P))\). This process is finite, and ends when \(P\) reaches a vertex \(v^{\prime}\in\mathsf{VC}\) with odd degree in \(H\). Now, since there exists a path \(P\) from \(v\) to \(v^{\prime}\) in \(\mathsf{Graph}(\widehat{E})\) such that \(E(P)\cap E(H)=\emptyset\), there exists a simple path \(P^{\prime}\) from \(v\) to \(v^{\prime}\) in \(\mathsf{Graph}(\widehat{E})\) such that \(E(P^{\prime})\cap E(H)=\emptyset\). Observe that \(|P^{\prime}|\leq 2|\mathsf{VC}|\), and there are at most \(|\mathsf{VC}|\) vertices in \(P^{\prime}\) from \(V(G)\setminus\mathsf{VC}\). We add the edges of \(P^{\prime}\) to \(H\), that is, \(H\leftarrow\mathsf{Graph}(E(H)\cup E(P^{\prime}))\) (e.g. see the dashed paths in Figure 2(c)), and continue with this process until every vertex in \(H\) has even degree. Observe that, by the end of this process, we have added at most \(|\mathsf{VC}|^{2}\) vertices from \(V(G)\setminus\mathsf{VC}\) to \(H\). So, in particular, for every \(u^{*}\in\mathsf{EQ}\), we have added at most \(|\mathsf{VC}|^{2}\) vertices from \((V(G)\setminus\mathsf{VC})\cap u^{*}\) to \(H\).
Overall, for every \(u^{*}\in\mathsf{EQ}\), \(|u^{*}\cap V(H)|\leq 2^{|\mathsf{N}_{G^{*}}(u^{*})|}+|\mathsf{VC}|^{2}\). In addition, notice that \(H\) is a submultigraph of \(\mathsf{Graph}(\widehat{E})\), so each \(\{u,v\}\in E(H)\) appears at most twice in \(E(H)\). Moreover, observe that \(V(H)\cap\mathsf{VC}=V(\mathsf{Graph}(\widehat{E}))\cap\mathsf{VC}\), and since we assume that \(v_{\mathsf{init}}\in\mathsf{VC}\), we have \(v_{\mathsf{init}}\in V(H)\). We define \(\mathsf{CC}=E(H)\), so Conditions 1-3 of Definition 5.11 are satisfied. Now, observe that every vertex in \(\mathsf{Graph}(\widehat{E}\setminus\mathsf{CC})\) has even degree, and \(\mathsf{VC}\cap V(\mathsf{Graph}(\widehat{E}\setminus\mathsf{CC}))\) is a vertex cover of \(\mathsf{Graph}(\widehat{E}\setminus\mathsf{CC})\). Therefore, from Lemma 5.9, there exists a multiset of cycles \(\mathcal{C}\) in \(\mathsf{Graph}(\widehat{E}\setminus\mathsf{CC})\) such that Conditions 4-6 of Definition 5.11 are satisfied. So, \((\mathsf{CC},\mathcal{C})\) is an \(\widehat{E}\)-valid pair. This completes the proof.
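As a small illustration of the first phase of this construction (a sketch of ours, not the paper's implementation), the deletion of duplicate vertices can be written as follows; the second phase, which repairs odd degrees on \(\mathsf{VC}\) using edge-disjoint paths from \(\widehat{E}\setminus E(H)\), is omitted. The dictionary- and list-based graph representations are assumptions made only for this example.

```python
def dedup_equivalent_vertices(edge_multiset, adj_G, vc):
    """First phase of the construction of CC (a sketch): repeatedly delete a vertex u'
    outside VC that lies in the same equivalence class as some other vertex u and has the
    same neighborhood as u in the current multigraph H.  edge_multiset is Graph(E_hat) as a
    list of (u, v) pairs, adj_G maps each vertex of G to its neighbor set, vc is the cover."""
    edges = list(edge_multiset)
    while True:
        nbrs = {}
        for u, v in edges:
            nbrs.setdefault(u, set()).add(v)
            nbrs.setdefault(v, set()).add(u)
        ind = [u for u in nbrs if u not in vc]
        victim = next(
            (w for i, u in enumerate(ind) for w in ind[i + 1:]
             if adj_G[u] == adj_G[w] and nbrs[u] == nbrs[w]),   # same class, same N_H
            None,
        )
        if victim is None:
            return edges                                        # no duplicates are left
        edges = [(u, v) for (u, v) in edges if victim not in (u, v)]
```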
### Vertex Type
Now, to construct the equations for the ILP instance, we present the variables. First, we have a variable for each _vertex type_. We begin by showing how we derive the vertex type of a vertex from a solution, and then we define formally the term vertex type. For this purpose, we introduce some definitions.
An intuition for a vertex type is as follows. Let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be such that the conditions of Lemma 3.4 hold, and for every \(1\leq i\leq k\) let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. For every \(u\in u^{*}\in\mathsf{EQ}\), we derive multisets with elements from \(\mathsf{N}_{G^{*}}(u^{*})\) as follows. First, for every \(1\leq i\leq k\), we derive the multiset of neighbors of \(u\) in \(\mathsf{Graph}(\mathsf{CC}_{i})\), that is, \(\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(u)\). Second, for every \(1\leq i\leq k\) and \(C\in\mathcal{C}_{i}\) such that \(u\in V(C)\), we derive the multiset \(\{v,v^{\prime}\}\), where \(v\) and \(v^{\prime}\) are the vertices appearing right before and right after \(u\) in \(C\), respectively. Now, recall that in a solution, every edge is covered by at least one robot. So, given a vertex \(u\in u^{*}\in\mathsf{EQ}\) and an edge \(\{u,v\}\in E(G)\), there exists \(1\leq i\leq k\) such that \(\{u,v\}\) is covered by the \(i\)-th robot, that is, \(\{u,v\}\in\widehat{E}_{i}\). Since \(\mathsf{CC}_{i}\cup\bigcup_{C\in\mathcal{C}_{i}}E(C)=\widehat{E}_{i}\), either \(\{u,v\}\in\mathsf{CC}_{i}\) or \(\{u,v\}\in E(C)\), for some \(C\in\mathcal{C}_{i}\). Thus, \(v\) belongs to at least one of the multisets derived for \(u\). In the vertex types, we consider all the possible options for such multisets.
In the next definition, for a given cycle \(C\), we define the _pairs of edges_ of \(C\), denoted \(\mathsf{EdgePairs}(C)\): for each \(u\in V(C)\setminus\mathsf{VC}\), we have the pair \((\{u,v\},\{u,v^{\prime}\})\), where \(v\) and \(v^{\prime}\) are the vertices appearing right before and right after \(u\) in \(C\). Later, we will derive, for each such pair, the multiset \(\{v,v^{\prime}\}\) and "allocate" it to \(u\).
Let \(G\) be a connected graph and let \(\mathsf{VC}\) be a vertex cover of \(G\). We denote the set of cycles of length at most \(\mathsf{max}\{4,2|\mathsf{VC}|\}\) in \(G\) by \(\mathsf{Cyc}_{G}\). Let \(C=(v_{0},\ldots,v_{\ell}=v_{0})\in\mathsf{Cyc}_{G}\). We denote by \(\mathsf{EQ}(C)\) the cycle in \(G^{*}\) obtained from \(C\) by replacing each \(v_{i}\in V(C)\cap(V(G)\setminus\mathsf{VC})\) by \(u^{*}\in\mathsf{EQ}\), where \(v_{i}\in u^{*}\), for every \(1\leq i\leq\ell\).
[Pairs of a Cycle] Let \(G\) be a connected graph, let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(C=(v_{0},\ldots,v_{\ell}=v_{0})\in\mathsf{Cyc}_{G}\cup\mathsf{Cyc}_{G^{*}}\). Then, the _pairs of edges_ of \(C\) is the multiset \(\mathsf{EdgePairs}(C)=\{\{\{v_{i-1},v_{i}\},\{v_{i},v_{i+1}\}\}\ |\ 1\leq i\leq\ell-1,v_{i}\in\mathsf{IND}\cup\mathsf{EQ}\}\).
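As a quick illustration (ours, not part of the formal development), \(\mathsf{EdgePairs}(C)\) can be read off directly from the vertex sequence of the cycle; the list-based cycle representation is an assumption made for the example.

```python
def edge_pairs(cycle, vc):
    """cycle: a closed vertex sequence (v_0, ..., v_l) with v_l == v_0; vc: the vertex cover.
    Returns the pair of edges around every occurrence of a vertex outside VC."""
    return [({cycle[i - 1], cycle[i]}, {cycle[i], cycle[i + 1]})
            for i in range(1, len(cycle) - 1) if cycle[i] not in vc]

# the 4-cycle (c1, a, c2, b, c1) with VC = {c1, c2}
print(edge_pairs(["c1", "a", "c2", "b", "c1"], {"c1", "c2"}))
```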
Recall that, given a multigraph \(G\) and \(u\in V(G)\), the multiset of neighbors of \(u\) in \(G\) is denoted by \(\widehat{\mathsf{N}}_{G}(u)=\{v\in V\ |\ \{u,v\}\in E(G)\}\) (with repetition).
[Deriving Vertex Types From a Solution] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\), let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be such that the conditions of Lemma 3.4 hold. For every \(1\leq i\leq k\) let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. In addition, for every \(1\leq i\leq k\) and \(u\in u^{*}\in\mathsf{EQ}\) such that \(u\in V(\mathsf{Graph}(\widehat{E}_{i}))\), let \(\mathsf{NeiPairs}_{\mathcal{C}_{i}}(u)=\{\{v,v^{\prime}\}\ |\ C\in\mathcal{C}_{i},\{\{u,v\},\{u,v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\}\) be a set. For every \(u\in u^{*}\in\mathsf{EQ}\), let \(\mathsf{NeiSubsets}(u)=\bigcup_{1\leq i\leq k}(\{\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(u)\}\cup\mathsf{NeiPairs}_{\mathcal{C}_{i}}(u))\) be a set. Then, for every \(u\in u^{*}\in\mathsf{EQ}\), \(\mathsf{DerVerTyp}(\{(\mathsf{CC}_{i},\mathcal{C}_{i})\}_{1\leq i\leq k},u)=(u^{*},\mathsf{NeiSubsets}(u))\).
Whenever \(\{(\mathsf{CC}_{i},\mathcal{C}_{i})\}_{1\leq i\leq k}\) is clear from context, we refer to \(\mathsf{DerVerTyp}(\{(\mathsf{CC}_{i},\mathcal{C}_{i})\}_{1\leq i\leq k}\), \(u)\) as \(\mathsf{DerVerTyp}(u)\). Observe that every element in \(\mathsf{NeiPairs}_{\mathcal{C}_{i}}(u)\) is a multiset of two vertices (which might be the same vertex).
Now, observe that each vertex in every multiset derived in Definition 5.14 appears at most twice: each edge appears at most twice in \(\mathsf{CC}_{i}\), and every multiset derived from a cycle has exactly two elements in it. In addition, since the degree of each vertex is even in \(\mathsf{Graph}(\mathsf{CC}_{i})\), every multiset derived in Definition 5.14 has an even number of elements. So, we will consider only multisets with these restrictions.
For a set \(A\) we define the multiset \(A\times 2=\{a,a\ |\ a\in A\}\). That is, each element in \(A\) appears exactly twice in \(A\times 2\).
Now, we define the term vertex type:
[Vertex Type] Let \(G\) be a connected graph and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(u^{*}\in\mathsf{EQ}\) and let \(\mathsf{NeiSubsets}\subseteq 2^{\mathsf{N}_{G^{*}}(u^{*})\times 2}\). Then, \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\) is a vertex type if for every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\), \(|\mathsf{NeiSub}|\) is even, and \(\mathsf{N}_{G^{*}}(u^{*})\subseteq\bigcup\mathsf{NeiSubsets}\).
We denote the set of vertex types by \(\mathsf{VerTypS}\).
In the following lemma we show the "correctness" of Definition 5.14, that is, for every \(u\in\mathsf{IND}\), \(\mathsf{DerVerTyp}(u)\) is indeed a vertex type.
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be such that the conditions of Lemma 3.4 hold and for every \(1\leq i\leq k\) let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. Then, for every \(u\in\mathsf{IND}\), \(\mathsf{DerVerTyp}(u)\) is a vertex type.
Proof.: We show that, for every \(u\in u^{*}\in\mathsf{EQ}\), \(\mathsf{DerVerTyp}(u)=(u^{*},\mathsf{NeiSubsets}(u))\) is a vertex type. Let \(u\in u^{*}\in\mathsf{EQ}\). First, we show that for every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}(u)\), \(\mathsf{NeiSub}\in 2^{\mathsf{N}_{G^{*}}(u^{*})\times 2}\) and \(|\mathsf{NeiSub}|\) is even. For every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}(u)\) there exists \(1\leq i\leq k\) such that \(\mathsf{NeiSub}\in\{\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(u)\}\cup\mathsf{NeiPairs}_{\mathcal{C}_{i}}(u)\).
* If \(\mathsf{NeiSub}=\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(u)\), then \(\mathsf{NeiSub}=\{v\in V(\mathsf{Graph}(\mathsf{CC}_{i}))\mid\{u,v\}\in\mathsf{ CC}_{i}\}\). Since \(\mathsf{CC}_{i}\) is a \(\overline{G}\)-submultigraph, each \(\{u,v\}\in\mathsf{CC}_{i}\) appears at most twice in \(\mathsf{CC}_{i}\). Then, each \(v\in\mathsf{NeiSub}\) appears at most twice in \(\mathsf{NeiSub}\), and so \(\mathsf{NeiSub}\in 2^{\mathsf{N}_{G^{*}}(u^{*})\times 2}\). In addition, the degree of \(u\) in \(\mathsf{Graph}(\mathsf{CC}_{i})\) is even, so \(|\mathsf{NeiSub}|\) is even.
* Otherwise, \(\mathsf{NeiSub}\in\mathsf{NeiPairs}_{\mathsf{C}_{i}}(u)\), and every multiset in \(\mathsf{NeiPairs}_{\mathsf{C}_{i}}(u)\) has exactly two vertices, so \(\mathsf{NeiSub}\in 2^{\mathsf{N}_{G^{*}}(u^{*})\times 2}\) and \(|\mathsf{NeiSub}|\) is even.
Now, we show that \(\mathsf{N}_{G^{*}}(u^{*})\subseteq\bigcup\mathsf{NeiSubsets}(u)\). Let \(v\in\mathsf{N}_{G^{*}}(u^{*})\). From Condition 3 of Lemma 3.4, \(E(G)\subseteq\widehat{E}_{1}\cup\ldots\cup\widehat{E}_{k}\). Then, there exists \(1\leq i\leq k\) such that \(\{u,v\}\in\widehat{E}_{i}\). Since \((\mathsf{CC}_{i},\mathcal{C}_{i})\) is an \(\widehat{E}_{i}\)-valid pair, \(\widehat{E}_{i}=\mathsf{CC}_{i}\cup\bigcup_{C\in\mathcal{C}_{i}}E(C)\). So, if \(\{u,v\}\in\mathsf{CC}_{i}\), then \(v\in\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(u)\); otherwise, there exists \(C\in\mathcal{C}_{i}\) such that \(\{u,v\}\in E(C)\). Therefore, there exists \(v^{\prime}\in V(G)\) such that \(\{\{u,v\},\{u,v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\), so \(\{v,v^{\prime}\}\in\mathsf{NeiPairs}_{\mathsf{C}_{i}}(u)\), thus \(v\in\bigcup\mathsf{NeiSubsets}(u)\). So, for every \(u\in u^{*}\in\mathsf{EQ}\), \(\mathsf{DerVerTyp}(u)=(u^{*},\mathsf{NeiSubsets}(u))\) is a vertex type.
### Robot Type
Now, we continue to introduce the variables we need for the ILP instance. We have a variable for each _robot type_. First, we show how we derive a robot type for each robot from a solution, and then we will present the definition of an abstract robot type. An intuition for a robot type is as follows. In Definition 5.14, where we derive vertex types, we saw how we derive \(\mathsf{NeiSubsets}(u)\) for each \(u\in V(G)\setminus\mathsf{VC}\). Some of the multisets in \(\mathsf{NeiSubsets}(u)\) are derived from \(\mathsf{Graph}(\mathsf{CC}_{i})\), where \((\mathsf{CC}_{i},\mathcal{C}_{i})\) is an \(\widehat{E}_{i}\)-valid pair associated with each robot \(i\). Now, we look at the "puzzle" from the perspective of the robots. For each \(u\in V(\mathsf{Graph}(\mathsf{CC}_{i}))\setminus\mathsf{VC}\), the multiset \(\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(u)\) is "allocated" to the vertex type of \(u\). For an induced submultigraph \(\overline{H}\) of \(\overline{G}\), we present two notations: _the multiset \(\mathsf{NeiOfInd}(\overline{H})\)_, and an _allocation_ for this multiset. When \(\overline{H}=\overline{G}(\mathsf{Graph}(\mathsf{CC}_{i}))\), \(\mathsf{NeiOfInd}(\overline{H})\) is the set of pairs \((u_{i}^{*},\mathsf{NeiSub})\) we will later allocate to vertex types.
[NeiOfInd(\(\overline{H}\))] Let \(G\) be a connected graph and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(\overline{H}\) be a submultigraph of \(\overline{G}\). Then, \(\mathsf{NeiOfInd}(\overline{H})=\{(u_{i}^{*},\widehat{\mathsf{N}}_{ \overline{H}}(u_{i}^{*}))\mid u_{i}^{*}\in V(\overline{H})\}\) as a multiset.
A _vertex allocation_ of \(\mathsf{NeiOfInd}(\overline{H})\) is a function that assigns a vertex type to each \((u_{i}^{*},\mathsf{NeiSub})\in\mathsf{NeiOfInd}(\overline{H})\). We allocate \((u_{i}^{*},\mathsf{NeiSub})\) to a vertex type that "expects" to get \(\mathsf{NeiSub}\), that is, a vertex type \((u^{*},\mathsf{NeiSubsets})\) where \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\).
[Vertex Allocation of \(\mathsf{NeiOfInd}(\overline{H})\)] Let \(G\) be a connected graph and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(\overline{H}\) be a submultigraph of \(\overline{G}\). A _vertex allocation_ of \(\mathsf{NeiOfInd}(\overline{H})\) is a function \(\mathsf{Alloc}_{\overline{H}}:\mathsf{NeiOfInd}(\overline{H})\to\mathsf{VerTypS}\) such that for every \((u_{i}^{*},\mathsf{NeiSub})\in\mathsf{NeiOfInd}(\overline{H})\), \(\mathsf{Alloc}_{\overline{H}}(u_{i}^{*},\mathsf{NeiSub})=(u^{*},\mathsf{NeiSubsets})\) where \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\).
In addition to the vertex allocation, the type of each robot \(i\) is associated with a vector of non-negative integers \(\mathsf{NumOfCyc}=(N_{i,2},N_{i,3},N_{i,5},N_{i,6},\ldots,N_{i,2|\mathsf{VC}|})\): for every \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\), \(N_{i,j}\) is the number of cycles of length exactly \(j\) in \(\mathcal{C}_{i}\), where \((\mathsf{CC}_{i},\mathcal{C}_{i})\) is an \(\widehat{E}_{i}\)-valid pair.
[Deriving Robot Types From a Solution] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\), let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be such that the conditions of Lemma 3.4 hold. For every \(1\leq i\leq k\), let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. For every \(1\leq i\leq k\), let \(G^{\prime}_{i}=\overline{G}(\mathsf{Graph}(\mathsf{CC}_{i}))\) with an isomorphism \(\alpha_{i}:V(G^{\prime}_{i})\to V(\mathsf{Graph}(\mathsf{CC}_{i}))\), and let \(\mathsf{CC}^{\prime}_{i}=E(G^{\prime}_{i})\). For every \(1\leq i\leq k\) and \((u^{*}_{j},\mathsf{NeiSub})\in\mathsf{NeiOfInd}(G^{\prime}_{i})\), let \(\mathsf{Alloc}_{G^{\prime}_{i}}((u^{*}_{j},\mathsf{NeiSub}))=\mathsf{DerVerTyp}(\alpha_{i}(u^{*}_{j}))\). For every \(1\leq i\leq k\) and \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\), let \(N_{i,j}\) be the number of cycles of size \(j\) in \(\mathcal{C}_{i}\), and for every \(1\leq i\leq k\) let \(\mathsf{NumOfCyc}_{i}=(N_{i,2},N_{i,3},N_{i,5},N_{i,6},\ldots,N_{i,2|\mathsf{VC}|})\). Then, for every \(1\leq i\leq k\), let \(\mathsf{DerRobTyp}(\{(\mathsf{CC}_{j},\mathcal{C}_{j})\}_{1\leq j\leq k},i)=(\mathsf{CC}^{\prime}_{i},\mathsf{Alloc}_{G^{\prime}_{i}},\mathsf{NumOfCyc}_{i})\).
Whenever \(\{(\mathsf{CC}_{i},\mathcal{C}_{i})\}_{1\leq i\leq k}\) is clear from context, we refer to \(\mathsf{DerRobTyp}(\{(\mathsf{CC}_{i},\mathcal{C}_{i})\}_{1\leq i\leq k}\), \(i)\) as \(\mathsf{DerRobTyp}(i)\).
Now, we define the term robot type. As mentioned, a robot type is first associated with \(\mathsf{CC}\subseteq E(\overline{G})\) and a vertex allocation of \(\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC}))\). We demand that \(\mathsf{Graph}(\mathsf{CC})\) is connected, every vertex in \(\mathsf{Graph}(\mathsf{CC})\) has even degree and \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{CC}))\), similarly to Conditions 2 and 3 of Definition 5.11. This way we ensure that the multiset \(\widehat{E}\) we will build for a robot makes \(\mathsf{Graph}(\widehat{E})\) connected: we will later associate with a robot only cycles \(C\) such that \(V(C)\cap V(\mathsf{Graph}(\mathsf{CC}))\neq\emptyset\). In addition, we also ensure that every vertex in \(\mathsf{Graph}(\widehat{E})\) will have even degree in \(\mathsf{Graph}(\widehat{E})\): each such vertex has even degree in \(\mathsf{Graph}(\mathsf{CC})\), and we will add only cycles to this graph, so the degree of each vertex will remain even. Therefore, we ensure that the "local" properties of each robot, given by Conditions 1 and 2 of Lemma 3.4, are preserved. The vector of non-negative integers \(\mathsf{NumOfCyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\) determines how many cycles of each length (other than \(4\)) we will associate with the robot. Observe that, due to Condition 4 of Definition 5.11, each \(N_{j}\) is bounded by \(2|\mathsf{VC}|^{2}\).
[Robot Type] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Then, \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\) is a robot type if the following conditions are satisfied:
1. \(\mathsf{CC}\subseteq E(\overline{G})\).
2. \(\mathsf{Graph}(\mathsf{CC})\) is connected, every vertex in \(\mathsf{Graph}(\mathsf{CC})\) has even degree and \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{CC}))\).
3. \(\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}\) is a vertex allocation of \(\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC}))\).
4. \(\mathsf{NumOfCyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\), where \(0\leq N_{i}\leq 2|\mathsf{VC}|^{2}\) for every \(2\leq i\leq 2|\mathsf{VC}|\), \(i\neq 4\).
We denote the set of robot types by \(\mathsf{RobTypS}\).
In the following lemma we show the "correctness" of Definition 5.19, that is, for every \(1\leq i\leq k\), \(\mathsf{DerRobTyp}(i)\) is indeed a robot type.
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be such that the conditions of Lemma 3.4 hold, and for every \(1\leq i\leq k\), let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. Then, for every \(1\leq i\leq k\), \(\mathsf{DerRobTyp}(i)\) is a robot type.
Proof.: Let \(1\leq i\leq k\). We show that \(\mathsf{DerRobTyp}(i)\) is a robot type by proving that the conditions of Definition 5.20 hold. Since \((\mathsf{CC}_{i},\mathcal{C}_{i})\) is an \(\widehat{E}_{i}\)-valid pair, from Condition 1 of Definition 5.11, \(\mathsf{Graph}(\mathsf{CC}_{i})\) is a \(\overline{G}\)-submultigraph. So, \(G^{\prime}_{i}=\overline{G}(\mathsf{Graph}(\mathsf{CC}_{i}))\) is a submultigraph of \(\overline{G}\), with an isomorphism \(\alpha_{i}:V(G^{\prime}_{i})\to V(\mathsf{Graph}(\mathsf{CC}_{i}))\) that satisfies the conditions of Definition 5.6. Thus, \(\mathsf{CC}^{\prime}_{i}=E(G^{\prime}_{i})\subseteq E(\overline{G})\), and therefore, Condition 1 of Definition 5.20 holds.
From Condition 2 of Definition 5.11, \(\mathsf{Graph}(\mathsf{CC}_{i})\) is connected, \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{CC}_{i}))\) and every vertex in \(\mathsf{Graph}(\mathsf{CC}_{i})\) has even degree. So, \(\mathsf{Graph}(\mathsf{CC}^{\prime}_{i})\) is connected, every vertex in \(\mathsf{Graph}(\mathsf{CC}^{\prime}_{i})\) has even degree and \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{CC}^{\prime}_{i}))\). Thus, Condition 2 of Definition 5.20 holds.
Now, let \((u_{j}^{*},\mathsf{NeiSub})\in\mathsf{NeiOfInd}(G_{i}^{\prime})\). Then, \(\mathsf{Alloc}_{G_{i}^{\prime}}((u_{j}^{*},\mathsf{NeiSub}))=\mathsf{DerVerTyp}(\alpha_{i}(u_{j}^{*}))\), so \(\mathsf{Alloc}_{G_{i}^{\prime}}((u_{j}^{*},\mathsf{NeiSub}))=(z^{*},\mathsf{NeiSubsets}(\alpha_{i}(u_{j}^{*})))\), where \(\mathsf{NeiSubsets}(\alpha_{i}(u_{j}^{*}))=\bigcup_{1\leq t\leq k}(\{\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{t})}(\alpha_{i}(u_{j}^{*}))\}\cup\mathsf{NeiPairs}_{\mathcal{C}_{t}}(\alpha_{i}(u_{j}^{*})))\) (see Definition 5.14). Observe that \(\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(\alpha_{i}(u_{j}^{*}))=\mathsf{NeiSub}\), thus \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}(\alpha_{i}(u_{j}^{*}))\). In addition, from Condition 2 of Definition 5.6, \(\alpha_{i}(u_{j}^{*})\in u^{*}\), so \(z^{*}=u^{*}\). Also, from Lemma 5.16, \(\mathsf{DerVerTyp}(\alpha_{i}(u_{j}^{*}))\in\mathsf{VerTypS}\). Therefore, \(\mathsf{Alloc}_{G_{i}^{\prime}}\) is a vertex allocation of \(\mathsf{NeiOfInd}(G_{i}^{\prime})=\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC}_{i}^{\prime}))\), so Condition 3 of Definition 5.20 holds.
Now, since \((\mathsf{CC}_{i},\mathcal{C}_{i})\) is an \(\widehat{E}_{i}\)-valid pair, from Condition 4 of Definition 5.11, at most \(2|\mathsf{VC}|^{2}\) cycles in \(\mathcal{C}_{i}\) have length other than \(4\). Therefore, for every \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\), \(0\leq N_{j}\leq 2|\mathsf{VC}|^{2}\). So, Condition 4 of Definition 5.20 holds.
We proved that all of the conditions of Definition 5.20 hold, so \(\mathsf{DerRobTyp}(i)\) is a robot type. This ends the proof.
### Cycle Type
Lastly, we have a variable for each _cycle type_. For every \(1\leq i\leq k\), let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. First, we show how we derive a cycle type for each \(C\in\mathcal{C}_{i}\), for every \(1\leq i\leq k\), and then we will present the definition. An intuition for a cycle type is as follows. In Definition 5.14, where we derive vertex types, we saw how we derive \(\mathsf{NeiSubsets}(u)\) for each \(u\in V(G)\setminus\mathsf{VC}\). Some of the multisets in \(\mathsf{NeiSubsets}(u)\) are derived from cycles in \(\mathcal{C}_{i}\), for some \(1\leq i\leq k\). Now, we look at the "puzzle" from the perspective of the cycles. For each \(\{\{u,v\},\{u,v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\), the multiset \(\{v,v^{\prime}\}\) is "allocated" to the vertex type of \(u\).
Similarly to the definition of a vertex allocation of \(\mathsf{NeibOfInd}(G^{\prime})\) (Definition 5.18), we have the following definition, for allocation of \(\mathsf{EdgePairs}(C)\):
[Vertex Allocation of \(\mathsf{EdgePairs}(C)\)] Let \(G\) be a connected graph, let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(C\in\mathsf{Cyc}_{G}\cup\mathsf{Cyc}_{G^{*}}\). A _vertex allocation_ of \(\mathsf{EdgePairs}(C)\) is a function \(\mathsf{PaAlloc}_{C}:\mathsf{EdgePairs}(C)\rightarrow\mathsf{VerTypS}\), such that for every \(\{\{v_{i-1},v_{i}\},\{v_{i},v_{i+1}\}\}\in\mathsf{EdgePairs}(C)\), \(\{v_{i-1},v_{i+1}\}\in\mathsf{NeiSubsets}\) and (\(v_{i}\in u^{*}\) or \(v_{i}=u^{*}\)), where \(\mathsf{PaAlloc}_{C}(\{\{v_{i-1},v_{i}\},\{v_{i},v_{i+1}\}\})=(u^{*},\mathsf{NeiSubsets})\).
Let \(C=(v_{0},\ldots,v_{\ell}=v_{0})\in\mathsf{Cyc}_{G}\). Recall that we denote by \(\mathsf{EQ}(C)\) the cycle in \(G^{*}\) obtained from \(C\) by replacing each \(v_{i}\in V(C)\cap(V(G)\setminus\mathsf{VC})\) by \(u^{*}\in\mathsf{EQ}\), where \(v_{i}\in u^{*}\), for every \(1\leq i\leq\ell\). Similarly, let \(\mathsf{PaAlloc}_{C}\) be a vertex allocation of \(\mathsf{EdgePairs}(C)\). We denote by \(\mathsf{EQ}(\mathsf{PaAlloc}_{C})\) the function \(\mathsf{EQ}(\mathsf{PaAlloc}_{C}):\mathsf{EdgePairs}(\mathsf{EQ}(C))\rightarrow\mathsf{VerTypS}\) defined as follows: for every \(\{\{v_{i-1},u^{*}\},\{u^{*},v_{i+1}\}\}\in\mathsf{EdgePairs}(\mathsf{EQ}(C))\), \(\mathsf{EQ}(\mathsf{PaAlloc}_{C})(\{\{v_{i-1},u^{*}\},\{u^{*},v_{i+1}\}\})=\mathsf{PaAlloc}_{C}(\{\{v_{i-1},v_{i}\},\{v_{i},v_{i+1}\}\})\). Observe that \(\mathsf{EQ}(\mathsf{PaAlloc}_{C})\) is a vertex allocation of \(\mathsf{EdgePairs}(\mathsf{EQ}(C))\):
Let \(C\in\mathsf{Cyc}_{G}\) and let \(\mathsf{PaAlloc}_{C}\) be a vertex allocation of \(\mathsf{EdgePairs}(C)\). Then, \(\mathsf{EQ}(\mathsf{PaAlloc}_{C})\) is a vertex allocation of \(\mathsf{EdgePairs}(\mathsf{EQ}(C))\).
[Deriving Cycle Types From a Solution] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\), let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be such that the conditions of Lemma 3.4 hold. For every \(1\leq i\leq k\), \(C\in\mathcal{C}_{i}\) and \(\{\{u,v\},\{u,v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\), let \(\mathsf{PaAlloc}_{i,C}(\{\{u,v\},\{u,v^{\prime}\}\})=\mathsf{DerVerTyp}(u)\). Then, for every \(1\leq i\leq k\) and \(C\in\mathcal{C}_{i}\), let
\(\mathsf{DerCycTyp}(\{(\mathsf{CC}_{j},\mathcal{C}_{j})\}_{1\leq j\leq k},i,C)=( \mathsf{EQ}(C),\mathsf{EQ}(\mathsf{PaAlloc}_{i,C}),\mathsf{DerRobTyp}(i))\).
Whenever \(\{(\mathsf{CC}_{i},\mathcal{C}_{i})\}_{1\leq i\leq k}\) is clear from context, we refer to \(\mathsf{DerCycTyp}(\{(\mathsf{CC}_{j},\mathcal{C}_{j})\}_{1\leq j\leq k},\)\(i,C)\) as \(\mathsf{DerCycTyp}(i,C)\).
Now, we define the term cycle type. In addition to the vertex allocation of \(\mathsf{EdgePairs}(C)\), we have the robot type \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\) associated with the cycle type. In order to maintain the connectivity of \(\mathsf{Graph}(\widehat{E})\), we demand that \(V(\mathsf{Graph}(\mathsf{CC}))\cap V(C)\cap\mathsf{VC}\neq\emptyset\) (see the discussion before Definition 5.20).
[Cycle Type] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(C\in\mathsf{Cyc}_{G^{*}}\), let \(\mathsf{PaAlloc}_{C}\) be a vertex allocation of \(\mathsf{EdgePairs}(C)\) and let \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\) be a robot type. Then, \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\) is a cycle type if \(V(\mathsf{Graph}(\mathsf{CC}))\cap V(C)\cap\mathsf{VC}\neq\emptyset\).
We denote the set of cycle types by \(\mathsf{CycTypS}\).
In the following lemma we show the "correctness" of Definition 5.24, that is, for every \(1\leq i\leq k\), \(C\in\mathcal{C}_{i}\), \(\mathsf{DerCycTyp}(i,C)\) is indeed a cycle type.
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be such that the conditions of Lemma 3.4 hold and for every \(1\leq i\leq k\) let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. Then, for every \(1\leq i\leq k\) and \(C\in\mathcal{C}_{i}\), \(\mathsf{DerCycTyp}(i,C)\) is a cycle type.
Proof.: Let \(1\leq i\leq k\) and \(C\in\mathcal{C}_{i}\). We show that \(\mathsf{DerCycTyp}(i,C)\) is a cycle type. First, from Condition 5 of Definition 5.11, every cycle in \(\mathcal{C}_{i}\) of length other than \(4\) is simple. So, the length of \(C\) is at most \(\mathsf{max}\{4,2|\mathsf{VC}|\}\), and thus, \(\mathsf{EQ}(C)\in\mathsf{Cyc}_{G^{*}}\).
Second, we show that \(\mathsf{PaAlloc}_{i,C}\), defined in Definition 5.24, is a vertex allocation of \(\mathsf{EdgePairs}(C)\). Let \(\{\{u,v\},\{u,v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\). Then, \(\mathsf{PaAlloc}_{i,C}(\{\{u,v\},\{u,v^{\prime}\}\})=\mathsf{DerVerTyp}(u)\). First, from Lemma 5.16, \(\mathsf{DerVerTyp}(u)\) is a vertex type. Now, by Definition 5.14, \(\mathsf{DerVerTyp}(u)=(u^{*},\mathsf{NeiSubsets}(u))\), where \(u\in u^{*}\), \(\mathsf{NeiSubsets}(u)=\bigcup_{1\leq j\leq k}(\{\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{j})}(u)\}\cup\mathsf{NeiPairs}_{\mathcal{C}_{j}}(u))\) and \(\mathsf{NeiPairs}_{\mathcal{C}_{j}}(u)=\{\{v,v^{\prime}\}\mid C\in\mathcal{C}_{j},\{\{u,v\},\{u,v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\}\). So, since \(\{\{u,v\},\{u,v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\), \(\{v,v^{\prime}\}\in\mathsf{NeiPairs}_{\mathcal{C}_{i}}(u)\subseteq\mathsf{NeiSubsets}(u)\). Therefore, \(\mathsf{PaAlloc}_{i,C}\) is a vertex allocation of \(\mathsf{EdgePairs}(C)\), and from Observation 5.23, \(\mathsf{EQ}(\mathsf{PaAlloc}_{i,C})\) is a vertex allocation of \(\mathsf{EdgePairs}(\mathsf{EQ}(C))\).
Now, since \((\mathsf{CC}_{i},\mathcal{C}_{i})\) is an \(\widehat{E}_{i}\)-valid pair, from Condition 3 of Definition 5.11, \(V(\mathsf{Graph}(\mathsf{CC}_{i}))\cap\mathsf{VC}=V(\mathsf{Graph}(\widehat{E}_{i}))\cap\mathsf{VC}\). From Definition 5.19, \(\mathsf{DerRobTyp}(i)=(\mathsf{CC}^{\prime}_{i},\mathsf{Alloc}_{G^{\prime}_{i}},\mathsf{NumOfCyc}_{i})\), where \(\mathsf{CC}^{\prime}_{i}=E(G^{\prime}_{i})\) and \(G^{\prime}_{i}=\overline{G}(\mathsf{Graph}(\mathsf{CC}_{i}))\). Now, Condition 1 of Definition 5.6 implies that \(V(G^{\prime}_{i})\cap\mathsf{VC}=V(\mathsf{Graph}(\widehat{E}_{i}))\cap\mathsf{VC}\), so \(V(\mathsf{Graph}(\mathsf{CC}^{\prime}_{i}))\cap\mathsf{VC}=V(\mathsf{Graph}(\widehat{E}_{i}))\cap\mathsf{VC}\). From Observation 5.10, \(V(\mathsf{Graph}(\widehat{E}_{i}))\cap\mathsf{VC}\cap V(C)\neq\emptyset\), thus \(V(\mathsf{Graph}(\mathsf{CC}^{\prime}_{i}))\cap\mathsf{VC}\cap V(C)\neq\emptyset\). In addition, from Lemma 5.21, \(\mathsf{DerRobTyp}(i)\) is a robot type. Overall, \(\mathsf{DerCycTyp}(i,C)\) is a cycle type.
### The Instance \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) of the ILP Problem
Now, we are ready to present our reduction to the ILP problem. We have the following variables:
* For every \(\mathsf{VerTyp}\in\mathsf{VerTypS}\), we have the variable \(x_{\mathsf{VerTyp}}\).
* For every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\), we have the variable \(x_{\mathsf{RobTyp}}\).
* For every \(\mathsf{CycTyp}\in\mathsf{CycTypS}\), we have the variable \(x_{\mathsf{CycTyp}}\).
Each variable stands for the number of elements of each type. We give intuition for each of the following equations:
**Equation 1: Robot Type for Each Robot.** In this equation, we make sure that the total sum of robot types is exactly \(k\), that is, there is exactly one robot type for each robot:
1. \(\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}}x_{\mathsf{RobTyp}}=k\).
**Equation 2: Vertex Type for Each Vertex.** For every \(u^{*}\in\text{EQ}\), we denote the set of \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\) by \(\mathsf{VerTypS}_{u^{*}}\). In this equation, we make sure, for every \(u^{*}\in\text{EQ}\), that the total sum of vertex types in \(\mathsf{VerTypS}_{u^{*}}\) is exactly \(|u^{*}|\). That is, there is exactly one vertex type \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\) for each \(u\in u^{*}\):
2. For every \(u^{*}\in\text{EQ}\), \(\sum_{\mathsf{VerTyp}\in\mathsf{VerTypS}_{u^{*}}}x_{\mathsf{VerTyp}}=|u^{*}|\)
**Equation 3: Assigning Enough Subsets to Each Vertex Type.** We have the following notations:
* For every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\), every \(\mathsf{NeiSub}=\{v,v^{\prime}\}\in\mathsf{NeiSubsets}\) and \(1\leq j\leq 2|\mathsf{VC}|\), we denote \(\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)=\{\mathsf{CycTyp}=(C, \mathsf{PaAlloc}_{C},\text{RobTyp})\in\mathsf{CycTypS}\mid\) \(|\{\{\{u^{*},v\},\{u^{*},v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\mid\text{ PaAlloc}_{C}(\{\{u^{*},v\},\{u^{*},v^{\prime}\}\})=\mathsf{VerTyp}\}|=j\}\). That is, \(\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)\) is the set of cycle types that assign \(\mathsf{NeiSub}\) to \(\mathsf{VerTyp}\) exactly \(j\) times.
* For every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\), every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\) and \(1\leq j\leq 2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}\), \(\text{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)=\{\text{RobTyp}=(\mathsf{ CC},\text{Alloc}_{\text{Graph(CC)}},\text{NumOfCyc})\in\text{RobTypS}\mid\) \(|\{(u^{*}_{i},\mathsf{NeiSub})\in\mathsf{NeiOfind}(\text{Graph(CC)})\mid \text{Alloc}_{\text{Graph(CC)}}((u^{*}_{i},\mathsf{NeiSub}))=\mathsf{VerTyp} \}|=j\}\). That is, \(\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)\) is the set of robot types that assign \(\mathsf{NeiSub}\) to \(\mathsf{VerTyp}\) exactly \(j\) times.
Recall that, for a vertex \(u\in u^{*}\in\mathsf{EQ}\) of vertex type \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\), \(\mathsf{NeiSubsets}\) encodes "how" the edges incident to \(u\) are covered. That is, for every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\), there exists a robot \(i\in[k]\) which covers the multiset of edges with one endpoint \(u\) and the other being a vertex in \(\mathsf{NeiSub}\). A robot is able to cover this exact multiset of edges if \((u_{i}^{*},\mathsf{NeiSub})\in\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC}_{i}))\), or if \(\mathsf{NeiSub}=\{v,v^{\prime}\}\) and \(\{\{u^{*},v\},\{u^{*},v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\) for some \(C\in\mathcal{C}_{i}\), where \((\mathsf{CC}_{i},\mathcal{C}_{i})\) is an \(\widehat{E}_{i}\)-valid pair as in Lemma 5.12. Therefore, for every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\) and every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\), we ensure that at least one \(\mathsf{NeiSub}\) is assigned to each vertex of type \(\mathsf{VerTyp}\):
3. For every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\), and every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\),
\[\sum_{j=1}^{2|\mathsf{VC}|}\;\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{CycTyp}}\;+\;\sum_{j=1}^{2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}}\;\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{RobTyp}}\;\geq\;x_{\mathsf{VerTyp}}.\]
Observe that Equations 1-3 ensure that every edge with one endpoint in \(V\setminus\mathsf{VC}\) is covered.
**Equation 4: Covering Each Edge with Both Endpoints in \(\mathsf{VC}\).** We have the following notations:
* For every \(\{u,v\}\in E\) such that \(u,v\in\mathsf{VC}\), \(\mathsf{CycTypS}(\{u,v\})=\{\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\text{RobTyp}) \in\mathsf{CycTypS}\mid\{u,v\}\in E(C)\}\). That is, \(\mathsf{CycTypS}(\{u,v\})\) is the set of cycle types \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\text{RobTyp})\) where \(C\) covers \(\{u,v\}\).
* For every \(\{u,v\}\in E\) such that \(u,v\in\mathsf{VC}\), \(\mathsf{RobTypS}(\{u,v\})=\{\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\in\mathsf{RobTypS}\mid\{u,v\}\in\mathsf{CC}\}\). That is, \(\mathsf{RobTypS}(\{u,v\})\) is the set of robot types \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\) where \(\mathsf{CC}\) covers \(\{u,v\}\).
In this equation we ensure that each \(\{u,v\}\in E\) with both endpoints in \(\mathsf{VC}\) is covered at least once:
4. For every \(\{u,v\}\in E\) such that \(u,v\in\mathsf{VC}\), \[\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\{u,v\})}x_{\mathsf{CycTyp}}+\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\{u,v\})}x_{\mathsf{RobTyp}}\geq 1.\]
Observe that Equations 1-4 ensure that every edge in \(G\) is covered.
**Equation 5: Assigning the Exact Number of Cycles with Length Other Than \(4\) to Each Robot Type.** We have the following notation:
* For every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\) and for every \(2\leq j\leq 2|\mathsf{VC}|\), \[\mathsf{CycTypS}(\mathsf{RobTyp},j)=\{\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\ |\ |C|=j\}.\] That is, \(\mathsf{CycTypS}(\mathsf{RobTyp},j)\) is the set of cycle types that stand for a cycle of length \(j\) and are assigned to a robot with robot type \(\mathsf{RobTyp}\).
Let \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\in\mathsf{RobTypS}\) be a robot type. In this equation, we verify that the number of cycles of length other than \(4\) assigned to robots with robot type \(\mathsf{RobTyp}\) is exactly as determined in \(\mathsf{NumOfCyc}\):
5. For every \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\in\mathsf{RobTypS}\) and for every \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\), \[\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},j)}x_{\mathsf{CycTyp}}=N_{j}\cdot x_{\mathsf{RobTyp}},\]
where \(\mathsf{NumOfCyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\).
**Equation 6: Verifying the Budget Limit.** Let \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\in\mathsf{RobTypS}\) be a robot type, and let \(i\in[k]\) be a robot with robot type \(\mathsf{RobTyp}\). From Lemma 5.12, there exist \(\mathsf{CC}_{i}\subseteq\widehat{E}_{i}\) and a multiset of cycles \(\mathcal{C}_{i}\) in \(\mathsf{Graph}(\widehat{E}_{i})\), such that \(\mathsf{CC}_{i}\cup\bigcup_{C\in\mathcal{C}_{i}}E(C)=\widehat{E}_{i}\). Now, as we determine the robot type of the \(i\)-th robot to be \(\mathsf{RobTyp}\), \(|\mathsf{CC}_{i}|=|\mathsf{CC}|\), and the number of cycles in \(\mathcal{C}_{i}\) of length other than \(4\) is fixed by \(\mathsf{NumOfCyc}\). We have the following notation:
* \(\mathsf{Bud}(\mathsf{RobTyp})=|\mathsf{CC}|+\sum_{2\leq j\leq 2|\mathsf{VC}|,j\neq 4}N_{j}\cdot j\), where \(\mathsf{NumOfCyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\).
That is, \(\mathsf{Bud}(\mathsf{RobTyp})\) is the number of edges in \(\mathsf{CC}_{i}\cup\bigcup_{C\in\mathcal{C}_{i}}E(C)\), excluding the edges in cycles of length \(4\) in \(\mathcal{C}_{i}\). Therefore, \(B-\mathsf{Bud}(\mathsf{RobTyp})\) is the budget left for the robot for the cycles in \(\mathcal{C}_{i}\) of length \(4\). Now, we take the largest multiple of \(4\) that is at most \(B-\mathsf{Bud}(\mathsf{RobTyp})\) to be the budget left for cycles of length \(4\). So, we have the following notation:
* For every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\), \(\mathsf{CycBud}(\mathsf{RobTyp})=\lfloor(B-\mathsf{Bud}(\mathsf{RobTyp}))\cdot\frac{1}{4}\rfloor\cdot 4\).
6. For every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\), \[\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}4\cdot x_{\mathsf{CycTyp}}\leq x_{\mathsf{RobTyp}}\cdot\mathsf{CycBud}(\mathsf{RobTyp}).\]
**Summary of Equations.** In conclusion, given an instance \((G,v_{\mathsf{init}},k,B)\) of the CGE problem, \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) is the instance of the ILP problem described by the following equations:
1. \(\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}}x_{\mathsf{RobTyp}}=k\).
2. For every \(u^{*}\in\mathsf{EQ}\), \[\sum_{\mathsf{VerTyp}\in\mathsf{VerTypS}_{u^{*}}}x_{\mathsf{VerTyp}}=|u^{*}|.\]
3. For every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\), and every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\),
\[\sum_{j=1}^{2|\mathsf{VC}|}\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{CycTyp}}+\sum_{j=1}^{2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}}\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{RobTyp}}\geq x_{\mathsf{VerTyp}}.\]
4. For every \(\{u,v\}\in E\) such that \(u,v\in\mathsf{VC}\), \[\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\{u,v\})}x_{\mathsf{CycTyp}}+\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\{u,v\})}x_{\mathsf{RobTyp}}\geq 1.\]
5. For every \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\in\mathsf{RobTypS}\) and for every \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\), \[\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},j)}x_{\mathsf{CycTyp}}=N_{j}\cdot x_{\mathsf{RobTyp}},\] where \(\mathsf{NumOfCyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\).
6. For every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\), \[\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}4\cdot x_{\mathsf{CycTyp}}\leq x_{\mathsf{RobTyp}}\cdot\mathsf{CycBud}(\mathsf{RobTyp}).\]
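To indicate how this reduction could be assembled in practice, the following schematic sketch (ours, not part of the paper) sets up Equations 1, 2 and 4 as an integer feasibility program using the PuLP modeling library. The enumeration routines `enumerate_vertex_types`, `enumerate_robot_types` and `enumerate_cycle_types`, as well as the attributes `eq_class`, `cycle_edges` and `cc_edges` on the type objects, are hypothetical placeholders for the enumerations and accessors described above; Equations 3, 5 and 6 would be added in the same way.

```python
import pulp

def build_ilp(eq_classes, vc_edges, k,
              enumerate_vertex_types, enumerate_robot_types, enumerate_cycle_types):
    # Hypothetical enumerations of VerTypS, RobTypS and CycTypS (their sizes depend only on |VC|).
    ver_types = list(enumerate_vertex_types())
    rob_types = list(enumerate_robot_types())
    cyc_types = list(enumerate_cycle_types())

    prob = pulp.LpProblem("CGE_reduction", pulp.LpMinimize)
    x_ver = [pulp.LpVariable(f"ver_{i}", lowBound=0, cat="Integer") for i in range(len(ver_types))]
    x_rob = [pulp.LpVariable(f"rob_{i}", lowBound=0, cat="Integer") for i in range(len(rob_types))]
    x_cyc = [pulp.LpVariable(f"cyc_{i}", lowBound=0, cat="Integer") for i in range(len(cyc_types))]
    prob += pulp.lpSum([])  # feasibility only: constant objective

    # Equation 1: exactly one robot type per robot.
    prob += pulp.lpSum(x_rob) == k

    # Equation 2: exactly one vertex type per vertex of every equivalence class u*.
    for u_star, size in eq_classes.items():          # size = |u*|
        prob += pulp.lpSum(x for t, x in zip(ver_types, x_ver) if t.eq_class == u_star) == size

    # Equation 4: every edge with both endpoints in VC is covered by a cycle or by some CC.
    for e in vc_edges:
        prob += (pulp.lpSum(x for t, x in zip(cyc_types, x_cyc) if e in t.cycle_edges)
                 + pulp.lpSum(x for t, x in zip(rob_types, x_rob) if e in t.cc_edges)) >= 1

    return prob
```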
### Correctness: Forward Direction
Now, we turn to prove the correctness of the reduction. In particular, we have the following lemma:
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(k,B\in\mathbb{N}\). Then, \((G,v_{\mathsf{init}},k,B)\) is a yes-instance of the CGE problem if and only if \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) is a yes-instance of the Integer Linear Programming problem.
We split the proof of the correctness of Lemma 5.27 into two lemmas. We begin with the proof of the first direction:
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(k,B\in\mathbb{N}\). If \((G,v_{\mathsf{init}},k,B)\) is a yes-instance of the \(\operatorname{CGE}\) problem, then \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) is a yes-instance of the Integer Linear Programming.
Towards the proof of Lemma 5.28, we present the function \(\mathsf{RobExpToILP}\). This function gets as input, for every \(1\leq i\leq k\), an \(\widehat{E}_{i}\)-valid pair \((\mathsf{CC}_{i},\mathcal{C}_{i})\). Then, for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), \(\mathsf{RobExpToILP}(z)\) is the number of elements of type \(z\) derived by Definitions 5.14, 5.19 and 5.24:
[RobExpToILP] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be such that the conditions of Lemma 3.4 hold. For every \(1\leq i\leq k\), let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. Then:
1. For every \(\mathsf{VerTyp}\in\mathsf{VerTypS}\), \(\mathsf{RobExpToILP}(\{(\mathsf{CC}_{j},\mathcal{C}_{j})\}_{1\leq j\leq k}, \mathsf{VerTyp})=|\{u\in V(G)\setminus\mathsf{VC}\mid\mathsf{DerVerTyp}(u)= \mathsf{VerTyp}\}|\).
2. For every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\), \(\mathsf{RobExpToILP}(\{(\mathsf{CC}_{j},\mathcal{C}_{j})\}_{1\leq j\leq k},\mathsf{RobTyp})=|\{1\leq i\leq k\mid\mathsf{DerRobTyp}(i)=\mathsf{RobTyp}\}|\).
3. For every \(\mathsf{CycTyp}\in\mathsf{CycTypS}\), \(\mathsf{RobExpToILP}(\{(\mathsf{CC}_{j},\mathcal{C}_{j})\}_{1\leq j\leq k},\mathsf{CycTyp})=|\{(i,C)\mid 1\leq i\leq k,C\in\mathcal{C}_{i},\mathsf{DerCycTyp}(i,C)=\mathsf{CycTyp}\}|\).
Whenever \(\{(\mathsf{CC}_{i},\mathcal{C}_{i})\}_{1\leq i\leq k}\) is clear from context, we refer to
\(\mathsf{RobExpToILP}(\{(\mathsf{CC}_{j},\mathcal{C}_{j})\}_{1\leq j\leq k}, \mathsf{VerTyp})\), \(\mathsf{RobExpToILP}(\{(\mathsf{CC}_{j},\mathcal{C}_{j})\}_{1\leq j\leq k}, \mathsf{RobTyp})\) and
\(\mathsf{RobExpToILP}(\{(\mathsf{CC}_{j},\mathcal{C}_{j})\}_{1\leq j\leq k},\mathsf{CycTyp})\) as \(\mathsf{RobExpToILP}(\mathsf{VerTyp})\), \(\mathsf{RobExpToILP}(\mathsf{RobTyp})\) and \(\mathsf{RobExpToILP}(\mathsf{CycTyp})\), respectively.
In the next lemma, we prove that the values given by Definition 5.29 satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\):
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be such that the conditions of Lemma 3.4 hold. For every \(1\leq i\leq k\), let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. Then, the values \(x_{z}=\mathsf{RobExpToILP}(z)\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\).
For the sake of readability, we split Lemma 5.30 and its proof into two lemmas: in Lemma 5.31 we prove that the values \(x_{z}=\mathsf{RobExpToILP}(z)\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), satisfy inequalities 1-3 of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\), and in Lemma 5.32 we prove that these values satisfy inequalities 4-6.
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be such that the conditions of Lemma 3.4 hold. For every \(1\leq i\leq k\), let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. Then, the values \(x_{z}=\mathsf{RobExpToILP}(z)\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), satisfy inequalities 1-3 of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\).
Proof.: By Lemma 5.21, for every \(1\leq i\leq k\), \(\mathsf{DerRobTyp}(i)\) is a robot type; since every robot thus contributes exactly one to the sum \(\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}}x_{\mathsf{RobTyp}}\), Equation 1 is satisfied.
Similarly, from Lemma 5.16, for every \(u\in u^{*}\in\mathsf{EQ}\), \(\mathsf{DerVerTyp}(u)\) is a vertex type, where \(\mathsf{DerVerTyp}(u)=(u^{*},\mathsf{NeiSubsets})\), for some \(\mathsf{NeiSubsets}\subseteq 2^{\mathsf{N}_{G^{*}}(u^{*})\times 2}\). Therefore, Equations 2 are satisfied.
Now, let \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\), let \(u\in u^{*}\) be such that \(\mathsf{DerVerTyp}(u)=\mathsf{VerTyp}\), and let \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\). So, by Definition 5.14, \(\mathsf{NeiSubsets}=\mathsf{NeiSubsets}(u)=\bigcup_{1\leq i\leq k}(\{\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(u)\}\cup\mathsf{NeiPairs}_{\mathcal{C}_{i}}(u))\), where \(\mathsf{NeiPairs}_{\mathcal{C}_{i}}(u)=\{\{v,v^{\prime}\}\mid C\in\mathcal{C}_{i},\{\{u,v\},\{u,v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\}\). Thus, there exists \(1\leq i\leq k\) such that at least one among the following conditions holds:
1. \(\mathsf{NeiSub}\in\mathsf{NeiPairs}_{\mathcal{C}_{i}}(u)\). Therefore, there exists \(C\in\mathcal{C}_{i}\) such that \(\{\{u,v\},\{u,v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\) and \(\mathsf{NeiSub}=\{v,v^{\prime}\}\). Thus, by Definition 5.24, \(\mathsf{DerCycTyp}(i,C)=(\mathsf{EQ}(C),\mathsf{EQ}(\mathsf{PaAlloc}_{i,C}),\mathsf{DerRobTyp}(i))\), where \(\mathsf{PaAlloc}_{i,C}\) is a vertex allocation of \(\mathsf{EdgePairs}(C)\), and \(\mathsf{PaAlloc}_{i,C}(\{\{u,v\},\{u,v^{\prime}\}\})=\mathsf{DerVerTyp}(u)=\mathsf{VerTyp}\). So, \(u\) contributes at least one to \(|\{\{\{u,v\},\{u,v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\mid\mathsf{PaAlloc}_{i,C}(\{\{u,v\},\{u,v^{\prime}\}\})=\mathsf{VerTyp}\}|\), thus, \(u\) contributes at least one to \(|\{\{\{u^{*},v\},\{u^{*},v^{\prime}\}\}\in\mathsf{EdgePairs}(\mathsf{EQ}(C))\mid\mathsf{EQ}(\mathsf{PaAlloc}_{i,C})(\{\{u^{*},v\},\{u^{*},v^{\prime}\}\})=\mathsf{VerTyp}\}|\). Now, since \(C\in\mathsf{Cyc}_{G}\), \(\mathsf{EQ}(C)\in\mathsf{Cyc}_{G^{*}}\), so the length of \(\mathsf{EQ}(C)\) is bounded by \(\mathsf{max}\{4,2|\mathsf{VC}|\}\), and thus, \(|\mathsf{EdgePairs}(\mathsf{EQ}(C))|\leq 2|\mathsf{VC}|\). In addition, by Lemma 5.26, \(\mathsf{DerCycTyp}(i,C)=\mathsf{CycTyp}\) is a cycle type. Therefore, there exists \(1\leq t\leq 2|\mathsf{VC}|\) such that \(\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},t)\). Thus, \(u\) contributes at least one to \(\sum_{r=1}^{2|\mathsf{VC}|}\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},r)}r\cdot x_{\mathsf{CycTyp}}\).
2. \(\mathsf{NeiSub}=\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC}_{i})}(u)\). By Definition 5.19, \(\mathsf{DerRobTyp}(i)=(\mathsf{CC}^{\prime}_{i},\mathsf{Alloc}_{G^{\prime}_{i}},\mathsf{NumOfCyc}_{i})\), where (i) \(\mathsf{CC}^{\prime}_{i}=E(G^{\prime}_{i})\), (ii) \(G^{\prime}_{i}=\overline{G}(\mathsf{Graph}(\mathsf{CC}_{i}))\) with an isomorphism \(\alpha_{i}:V(G^{\prime}_{i})\to V(\mathsf{Graph}(\mathsf{CC}_{i}))\) and (iii) for every \((u^{*}_{j},\mathsf{NeiSub})\in\mathsf{NeiOfInd}(G^{\prime}_{i})\), \(\mathsf{Alloc}_{G^{\prime}_{i}}((u^{*}_{j},\mathsf{NeiSub}))=\mathsf{DerVerTyp}(\alpha_{i}(u^{*}_{j}))\). Let \(j\in\mathbb{N}\) be such that \(\alpha_{i}(u^{*}_{j})=u\), so \(\mathsf{Alloc}_{G^{\prime}_{i}}((u^{*}_{j},\mathsf{NeiSub}))=\mathsf{DerVerTyp}(\alpha_{i}(u^{*}_{j}))=\mathsf{VerTyp}\). Therefore, \(u\) contributes at least one to \(|\{(u^{*}_{j},\mathsf{NeiSub})\in\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC}_{i}))\mid\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC}_{i})}((u^{*}_{j},\mathsf{NeiSub}))=\mathsf{VerTyp}\}|\). In addition, by Lemma 5.21, \(\mathsf{DerRobTyp}(i)=\mathsf{RobTyp}\) is a robot type. Now, since \(\mathsf{Graph}(\mathsf{CC}_{i})\) is a \(\overline{G}\)-submultigraph (see Definition 5.4), \(|V(\mathsf{Graph}(\mathsf{CC}_{i}))\cap u^{*}|\leq 2^{|\mathsf{N}_{G^{*}}(u^{*})|}+|\mathsf{VC}|^{2}\leq 2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}\), so \(|\{(u^{*}_{j},\mathsf{NeiSub})\in\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC}_{i}))\}|\leq 2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}\). Therefore, there exists \(1\leq t\leq 2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}\) such that \(\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},t)\). Thus, \(u\) contributes at least one to
\[\sum_{r=1}^{2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}}\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},r)}r\cdot x_{\mathsf{RobTyp}}.\]
Therefore, each \(u\in V(G)\setminus\mathsf{VC}\) such that \(\mathsf{DerVerTyp}(u)=\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\) contributes at least one to
\[\sum_{j=1}^{2|\mathsf{VC}|}\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{CycTyp}}+\sum_{j=1}^{2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}}\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{RobTyp}}.\]
Thus, since \(\mathsf{RobExpToILP}(\mathsf{VerTyp})=|\{u\in V(G)\setminus\mathsf{VC}\mid\mathsf{DerVerTyp}(u)=\mathsf{VerTyp}\}|\) and \(x_{\mathsf{VerTyp}}=\mathsf{RobExpToILP}(\mathsf{VerTyp})\), we get that Equations 3 are satisfied.
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(\mathsf{VC}\) be a vertex cover of \(G\) and let \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) be multisets such that the conditions of Lemma 3.4 hold. For every \(1\leq i\leq k\), let \((\mathsf{CC}_{i},\mathcal{C}_{i})\) be an \(\widehat{E}_{i}\)-valid pair. Then, the values \(x_{z}=\mathsf{RobExpToILP}(z)\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), satisfy inequalities 4-6 of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\).
Proof.: Let \(\{u,v\}\in E(G)\) be such that \(u,v\in\mathsf{VC}\). By Condition 3 of Lemma 3.4, \(E(G)\subseteq\widehat{E}_{1}\cup\ldots\cup\widehat{E}_{k}\). So, there exists \(1\leq i\leq k\) such that \(\{u,v\}\in\widehat{E}_{i}\). Since \((\mathsf{CC}_{i},\mathcal{C}_{i})\) is an \(\widehat{E}_{i}\)-valid pair, by Condition 6 of Definition 5.11, \(\mathsf{CC}_{i}\cup\bigcup_{C\in\mathcal{C}_{i}}E(C)=\widehat{E}_{i}\). So, at least one among the following two cases holds:
1. \(\{u,v\}\in\mathsf{CC}_{i}\). By Definition 5.19, \(\mathsf{DerRobTyp}(i)=(\mathsf{CC}^{\prime}_{i},\mathsf{Alloc}_{G^{\prime}_{i}},\mathsf{NumOfCyc}_{i})\), where \(\mathsf{CC}^{\prime}_{i}=E(G^{\prime}_{i})\) and \(G^{\prime}_{i}=\overline{G}(\mathsf{Graph}(\mathsf{CC}_{i}))\). So, \(\{u,v\}\in\mathsf{CC}^{\prime}_{i}\). Now, by Lemma 5.21, \(\mathsf{DerRobTyp}(i)=\mathsf{RobTyp}\) is a robot type. Therefore, \(\mathsf{RobTyp}\in\mathsf{RobTypS}(\{u,v\})\), thus \(\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\{u,v\})}x_{\mathsf{RobTyp}}\geq 1\).
2. There exists \(C\in\mathcal{C}_{i}\) such that \(\{u,v\}\in E(C)\). By Definition 5.24, \(\mathsf{DerCycTyp}(i,C)=(\mathsf{EQ}(C),\mathsf{EQ}(\mathsf{PaAlloc}_{i,C}),\mathsf{DerRobTyp}(i))\), and observe that \(\{u,v\}\in\mathsf{EQ}(C)\). In addition, by Lemma 5.26, \(\mathsf{DerCycTyp}(i,C)=\mathsf{CycTyp}\) is a cycle type. Therefore, \(\mathsf{CycTyp}\in\mathsf{CycTypS}(\{u,v\})\), so \(\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\{u,v\})}x_{\mathsf{CycTyp}}\geq 1\). Either way, we get that \(\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\{u,v\})}x_{\mathsf{CycTyp}}+\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\{u,v\})}x_{\mathsf{RobTyp}}\geq 1\), so Equations 4 are satisfied.
Now, let \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\in\mathsf{RobTypS}\), let \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\), and let \(1\leq i\leq k\) be such that \(\mathsf{DerRobTyp}(i)=\mathsf{RobTyp}\). By Definition 5.19, \(N_{j}\) is the number of cycles of length \(j\) in \(\mathcal{C}_{i}\). Now, for every \(C\in\mathcal{C}_{i}\), by Definition 5.24, \(\mathsf{DerCycTyp}(i,C)=(\mathsf{EQ}(C),\mathsf{EQ}(\mathsf{PaAlloc}_{i,C}),\mathsf{DerRobTyp}(i))\), and in particular, \(|C|=|\mathsf{EQ}(C)|\). In addition, by Lemma 5.26, for every \(C\in\mathcal{C}_{i}\), \(\mathsf{DerCycTyp}(i,C)\) is a cycle type. Therefore, \(|\{C\in\mathcal{C}_{i}\mid\mathsf{DerCycTyp}(i,C)\in\mathsf{CycTypS}(\mathsf{RobTyp},j)\}|=N_{j}\). So, Equations 5 are satisfied.
Now, let \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}, \mathsf{NumOfCyc})\in\mathsf{RobTypS}\), and let \(1\leq i\leq k\) such that \(\mathsf{DerRobTyp}(i)=\mathsf{RobTyp}\). Since \((\mathsf{CC}_{i},\mathcal{C}_{i})\) is an \(\widehat{E}_{i}\)-valid pair, by Condition 6 of Definition 5.11, \(\mathsf{CC}_{i}\cup\bigcup_{C\in\mathcal{C}_{i}}E(C)=\widehat{E}_{i}\). So, \(|\widehat{E}_{i}|=|\mathsf{CC}_{i}|+\sum_{C\in\mathcal{C}_{i}}|C|\). By Definition 5.19, \(\mathsf{DerRobTyp}(i)=(\mathsf{CC}^{\prime}_{i},\mathsf{Alloc}_{G^{\prime}_{i}}, \mathsf{NumOfCyc}_{i})\), where \(\mathsf{CC}^{\prime}_{i}=E(G^{\prime}_{i})\) and \(G^{\prime}_{i}=\overline{G}(\mathsf{Graph}(\mathsf{CC}_{i}))\). Thus, \(|\mathsf{CC}|=|\mathsf{CC}^{\prime}_{i}|=|\mathsf{CC}_{i}|\). In addition, by Definition 5.19, for every \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\), \(N_{i,j}\) is the number of cycles of size \(j\) in \(\mathcal{C}_{i}\), where \(\mathsf{NumOfCyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\). Moreover, by Condition 5 of Definition 5.11, the cycles in \(\mathcal{C}_{i}\) of length other than \(4\) are simple. So, for every \(C\in\mathcal{C}_{i}\), \(|C|=4\) or \(2\leq|C|\leq 2|\mathsf{VC}|\). Therefore,
\[|\widehat{E}_{i}|=|\mathsf{CC}_{i}|+\sum_{C\in\mathcal{C}_{i}}|C|=|\mathsf{CC}|+\sum_{2\leq j\leq 2|\mathsf{VC}|,j\neq 4}\sum_{C\in\mathcal{C}_{i},|C|=j}j+\sum_{C\in\mathcal{C}_{i},|C|=4}4=|\mathsf{CC}|+\sum_{2\leq j\leq 2|\mathsf{VC}|,j\neq 4}N_{j}\cdot j+\sum_{C\in\mathcal{C}_{i},|C|=4}4.\]
Recall that \(\mathsf{Bud}(\mathsf{RobTyp})=|\mathsf{CC}|+\sum_{2\leq j\leq 2|\mathsf{VC}|,j\neq 4}N_{j}\cdot j\). Thus,
\[|\widehat{E}_{i}|=\mathsf{Bud}(\mathsf{Rob}\mathsf{Typ})+\sum_{C\in\mathcal{C }_{i},|C|=4}4=\mathsf{Bud}(\mathsf{Rob}\mathsf{Typ})+|\{C\in\mathcal{C}_{i} \ |\ |C|=4\}|\cdot 4\text{.}\]
Now, by Condition 4 of Lemma 3.4, \(|\widehat{E}_{i}|\leq B\). So, \(\mathsf{Bud}(\mathsf{RobTyp})+|\{C\in\mathcal{C}_{i}\ |\ |C|=4\}|\cdot 4\leq B\), which implies that \(|\{C\in\mathcal{C}_{i}\ |\ |C|=4\}|\leq(B-\mathsf{Bud}(\mathsf{RobTyp}))\cdot\frac{1}{4}\). Observe that \(|\{C\in\mathcal{C}_{i}\ |\ |C|=4\}|\in\mathbb{N}\), thus \(|\{C\in\mathcal{C}_{i}\ |\ |C|=4\}|\leq\lfloor(B-\mathsf{Bud}(\mathsf{RobTyp}))\cdot\frac{1}{4}\rfloor\), so \(|\{C\in\mathcal{C}_{i}\ |\ |C|=4\}|\cdot 4\leq\lfloor(B-\mathsf{Bud}(\mathsf{RobTyp}))\cdot\frac{1}{4}\rfloor\cdot 4=\mathsf{CycBud}(\mathsf{RobTyp})\). Therefore, the number of cycles of length \(4\) in \(\mathcal{C}_{i}\) is bounded by \(\mathsf{CycBud}(\mathsf{RobTyp})\cdot\frac{1}{4}\).
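To make the rounding step above concrete, here is a minimal Python sketch (the names `cyc_bud` and `max_four_cycles` are illustrative stand-ins for \(\mathsf{CycBud}(\mathsf{RobTyp})\) and the number of length-\(4\) cycles a robot can still afford); it only checks the arithmetic for a range of hypothetical budgets.

```python
import math

def cyc_bud(B: int, bud: int) -> int:
    """CycBud(RobTyp) = floor((B - Bud(RobTyp)) / 4) * 4, as in the text above."""
    return math.floor((B - bud) / 4) * 4

def max_four_cycles(B: int, bud: int) -> int:
    """Largest n with bud + 4*n <= B, i.e., the number of affordable 4-cycles."""
    return (B - bud) // 4

# Sanity check: 4 * (number of 4-cycles) never exceeds CycBud, which never exceeds B - Bud.
for B in range(0, 30):
    for bud in range(0, B + 1):
        assert max_four_cycles(B, bud) * 4 <= cyc_bud(B, bud) <= B - bud
```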
Now, for every \(C\in\mathcal{C}_{i}\), by Definition 5.24, \(\mathsf{DerCycTyp}(i,C)=(\mathsf{EQ}(C),\mathsf{EQ}(\mathsf{PaAlloc}_{i,C}),\mathsf{DerRobTyp}(i))=(\mathsf{EQ}(C),\mathsf{EQ}(\mathsf{PaAlloc}_{i,C}),\mathsf{RobTyp})\). In addition, by Lemma 5.26, \(\mathsf{DerCycTyp}(i,C)\) is a cycle type. So, for every \(C\in\mathcal{C}_{i}\), \(\mathsf{DerCycTyp}(i,C)\in\mathsf{CycTypS}(\mathsf{RobTyp},4)\) if and only if \(|C|=4\). Therefore, \(|\{C\in\mathcal{C}_{i}\ |\ \mathsf{DerCycTyp}(i,C)\in\mathsf{CycTypS}(\mathsf{RobTyp},4)\}|=|\{C\in\mathcal{C}_{i}\ |\ |C|=4\}|\leq\mathsf{CycBud}(\mathsf{RobTyp})\cdot\frac{1}{4}\). Now, for every \(\mathsf{CycTyp}\in\mathsf{CycTypS}\), \(x_{\mathsf{CycTyp}}=\mathsf{RobExpToILP}(\mathsf{CycTyp})=|\{(i,C)\ |\ 1\leq i\leq k,C\in\mathcal{C}_{i},\mathsf{DerCycTyp}(i,C)=\mathsf{CycTyp}\}|\). Thus, \(i\) contributes at most \(\mathsf{CycBud}(\mathsf{RobTyp})\cdot\frac{1}{4}\) to \(\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}x_{\mathsf{CycTyp}}\). So,
\[\sum_{\mathsf{CycType}\in\mathsf{CycTypS}(\mathsf{Rob}\mathsf{Typ},4)}4\cdot x _{\mathsf{CycTyp}}\leq x_{\mathsf{Rob}\mathsf{Typ}}\cdot\mathsf{CycBud}( \mathsf{Rob}\mathsf{Typ})\text{.}\]
Therefore, Equations 6 are satisfied. This completes the proof.
Lemmas 5.31 and 5.32 together establish the correctness of Lemma 5.30.
Now, we invoke Lemma 5.30 in order to prove the correctness of Lemma 5.28:
Proof.: Assume that \((G,v_{\mathsf{init}},k,B)\) is a yes-instance of the CGE problem. By Lemma 3.4, there exist \(k\) multisets \(\widehat{E}^{\prime}_{1},\ldots,\widehat{E}^{\prime}_{k}\) such that the conditions of Lemma 3.4 hold. So, by Observation 5.1, there exist \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) such that the conditions of Lemma 3.4 hold and for every \(1\leq i\leq k\), each \(\{u,v\}\in\widehat{E}_{i}\) appears at most twice in \(\widehat{E}_{i}\). Thus, for every \(1\leq i\leq k\), by Lemma 5.12, there exists \((\mathsf{CC}_{i},\mathcal{C}_{i})\) that is an \(\widehat{E}_{i}\)-valid pair. By Lemma 5.30, the values \(x_{z}=\mathsf{Rob}\mathsf{ExpToILP}(z)\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{Rob}\mathsf{TypS}\cup\mathsf{CycTypS}\), satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Therefore, \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) is a yes-instance of the Integer Linear Programming.
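For intuition, the counting behind \(\mathsf{RobExpToILP}\) can be sketched as follows; this is only an illustrative Python snippet, and the helpers `der_ver_typ`, `der_rob_typ` and `der_cyc_typ` are assumed stand-ins for \(\mathsf{DerVerTyp}\), \(\mathsf{DerRobTyp}\) and \(\mathsf{DerCycTyp}\).

```python
from collections import Counter

def rob_exp_to_ilp(vertices, robots, cycles_per_robot,
                   der_ver_typ, der_rob_typ, der_cyc_typ):
    """Count how many vertices/robots/cycles fall into each type.

    The returned Counter maps a type z to the value x_z used in the ILP,
    mirroring RobExpToILP. The der_* arguments are assumed, caller-supplied
    functions that derive the type of each element of the solution.
    """
    x = Counter()
    for u in vertices:                      # u in V(G) \ VC
        x[der_ver_typ(u)] += 1
    for i in robots:                        # i in [k]
        x[der_rob_typ(i)] += 1
        for C in cycles_per_robot[i]:       # C in C_i
            x[der_cyc_typ(i, C)] += 1
    return x
```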
### Correctness: Reverse Direction
In the next lemma, we state the reverse direction of Lemma 5.28:
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(k,B\in\mathbb{N}\). If \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) is a yes-instance of Integer Linear Programming, then \((G,v_{\mathsf{init}},\)\(k,B)\) is a yes-instance of the CGE problem.
Towards the proof of Lemma 5.33, given values \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\), we show how to construct multisets \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) that satisfy the conditions of Lemma 3.4. Recall that, intuitively, for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), \(x_{z}\) stands for the number of elements of type \(z\) in a solution. First, we define a vertex type for each vertex in \(V(G)\setminus\mathsf{VC}\). To this end, for every
\(u\in u^{*}\in\mathsf{EQ}\), we arbitrarily pick a vertex type \((u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\) such that the total number of vertices we choose for a vertex type \(\mathsf{VerTyp}\) is exactly \(x_{\mathsf{VerTyp}}\):
[Deriving Subsets from an ILP Solution] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). For every \(u^{*}\in\mathsf{EQ}\), let \(\mathsf{VerTypS}_{u^{*}}=\{(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\}\) and let \(\mathsf{VerTypFun}_{u^{*}}:u^{*}\to\mathsf{VerTypS}_{u^{*}}\) be such that for every \(\mathsf{VerTyp}\in\mathsf{VerTypS}_{u^{*}}\), \(|\{u\in u^{*}\mid\mathsf{VerTypFun}_{u^{*}}(u)=\mathsf{VerTyp}\}|=x_{\mathsf{VerTyp}}\). Then, for every \(u\in u^{*}\in\mathsf{EQ}\), \(\mathsf{ILPDFVerTyp}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\},u)=\mathsf{VerTypFun}_{u^{*}}(u)\).
Whenever \(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\}\) is clear from context, we refer to \(\mathsf{ILPDFVerTyp}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup \mathsf{CycTypS}\},u)\) as \(\mathsf{ILPDFVerTyp}(u)\).
Observe that since \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\), then, in particular, Equations 2 are satisfied. That is, for every \(u^{*}\in\mathsf{EQ}\), \(\sum_{\mathsf{VerTyp}\in\mathsf{VerTypS}_{u^{*}}}x_{\mathsf{VerTyp}}=|u^{*}|\). Therefore, for every \(u^{*}\in\mathsf{EQ}\), there exists a function \(\mathsf{VerTypFun}_{u^{*}}\) such that for every \(\mathsf{VerTyp}\in\mathsf{VerTypS}_{u^{*}}\), \(|\{u\in u^{*}\mid\mathsf{VerTypFun}_{u^{*}}(u)=\mathsf{VerTyp}\}|=x_{\mathsf{VerTyp}}\), as defined in Definition 5.34. Thus, \(\mathsf{ILPDFVerTyp}\) is well defined.
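The following minimal sketch shows one way such a function \(\mathsf{VerTypFun}_{u^{*}}\) can be built greedily, assuming the counts already satisfy Equations 2; all names are illustrative.

```python
def build_ver_typ_fun(u_star, types_with_counts):
    """Assign a vertex type to every vertex of the class u_star.

    types_with_counts: list of (ver_typ, x_ver_typ) pairs whose counts
    sum to len(u_star), as guaranteed by Equations 2.
    Returns a dict playing the role of VerTypFun_{u*}.
    """
    assert sum(c for _, c in types_with_counts) == len(u_star)
    assignment, it = {}, iter(u_star)
    for ver_typ, count in types_with_counts:
        for _ in range(count):
            assignment[next(it)] = ver_typ
    return assignment

# Example: a class of 4 vertices split between two hypothetical types.
print(build_ver_typ_fun(["a", "b", "c", "d"], [("T1", 3), ("T2", 1)]))
```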
Next, we define a robot type for each robot. For every \(i\in[k]\), we arbitrarily determine a robot type for the \(i\)-th robot such that the total number of robots we choose for a robot type \(\mathsf{RobTyp}\) is exactly \(x_{\mathsf{RobTyp}}\):
[Deriving Subsets from an ILP Solution] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Let \(\mathsf{RobTypFun}:[k]\to\mathsf{RobTypS}\) be such that for every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\), \(|\{i\in[k]\mid\mathsf{RobTypFun}(i)=\mathsf{RobTyp}\}|=x_{\mathsf{RobTyp}}\). Then, for every \(1\leq i\leq k\), \(\mathsf{ILPDFRobTyp}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS} \cup\mathsf{CycTypS}\},i)=\mathsf{RobTypFun}(i)\).
Whenever \(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\}\) is clear from context, we refer to \(\mathsf{ILPDFRobTyp}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS} \cup\mathsf{CycTypS}\},i)\) as \(\mathsf{ILPDFRobTyp}(i)\).
Observe that since \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), satisfy Equation 1, we have \(\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}}x_{\mathsf{RobTyp}}=k\). Therefore, there exists a function \(\mathsf{RobTypFun}\) as defined in Definition 5.35, and thus \(\mathsf{ILPDFRobTyp}\) is well defined.
\(\mathsf{ILPDFSubToAlloc}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup \mathsf{CycTypS}\},\mathsf{VerTyp},\mathsf{NeSub})=\{(\mathsf{CycTyp},i,t)\)
\(\mid 1\leq j\leq 2|\mathsf{VC}|,\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp}, \mathsf{NeSub},j),1\leq i\leq x_{\mathsf{CycTyp}},1\leq t\leq j\}\cup\{(i,t)\mid 1\leq j\leq 2 ^{|\mathsf{VC}|}+|\mathsf{VC}|^{2},1\leq i\leq k,\mathsf{ILPDFRobTyp}(i)\in \mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeSub},j),1\leq t\leq j\}\).
Whenever \(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\}\) is clear from context, we refer to \(\mathsf{ILPDFSubToAlloc}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS} \cup\mathsf{CycTypS}\},\mathsf{VerTyp},\mathsf{NeSub})\) as
\(\mathsf{ILPDFSubToAlloc}(\mathsf{VerTyp},\mathsf{NeSub})\).
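As an illustration only, the multiset \(\mathsf{ILPDFSubToAlloc}(\mathsf{VerTyp},\mathsf{NeiSub})\) can be enumerated along the lines of the sketch below; `cyc_typs_for`, `rob_typs_for` and `rob_typ_of` are assumed stand-ins for \(\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)\), \(\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)\) and \(\mathsf{ILPDFRobTyp}\).

```python
def sub_to_alloc(ver_typ, nei_sub, x, k, max_cyc_len, max_rob_mult,
                 cyc_typs_for, rob_typs_for, rob_typ_of):
    """Enumerate the allocation tokens of ILPDFSubToAlloc(VerTyp, NeiSub).

    Tokens (cyc_typ, i, t) come from cycle types, tokens (i, t) from robots;
    x maps a type z to its ILP value x_z, k is the number of robots, and
    max_cyc_len / max_rob_mult bound the multiplicities j considered.
    """
    tokens = []
    for j in range(1, max_cyc_len + 1):                 # 1 <= j <= 2|VC|
        for cyc_typ in cyc_typs_for(ver_typ, nei_sub, j):
            for i in range(1, x[cyc_typ] + 1):
                tokens.extend((cyc_typ, i, t) for t in range(1, j + 1))
    for j in range(1, max_rob_mult + 1):                # 1 <= j <= 2^|VC| + |VC|^2
        for i in range(1, k + 1):
            if rob_typ_of(i) in rob_typs_for(ver_typ, nei_sub, j):
                tokens.extend((i, t) for t in range(1, j + 1))
    return tokens
```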
Now, we allocate the subsets we derived in Definition 5.36 to vertices. For every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\) and \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\), we arbitrarily allocate each \(\mathsf{Sub}\in\mathsf{ILPDFSubToAlloc}(\mathsf{VerTyp},\mathsf{NeiSub})\) to a vertex in \(u^{*}\), while ensuring that each vertex of vertex type \(\mathsf{VerTyp}\) gets at least one item allocated.
[Deriving Vertex Allocation of \(\mathsf{ILPDFSubToAlloc}\) from an ILP Solution] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). For every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeSubsets})\in\mathsf{VerTypS}\) and every \(\mathsf{NeSub}\in\mathsf{NeSubsets}\), let \(\mathsf{SubAlloc}_{\mathsf{VerTyp},\mathsf{NeSub}}:\)\(\mathsf{ILPDFSubToAlloc}(\mathsf{VerTyp},\mathsf{NeSub})\to u^{*}\) be a function such that:
* For every \(u\in u^{*}\) such that \(\mathsf{ILPDFVerTyp}(u)=\mathsf{VerTyp}\), there exists \(\mathsf{Sub}\in\mathsf{ILPDFSubToAlloc}\)\((\mathsf{VerTyp},\mathsf{NeSub})\) such that \(\mathsf{SubAlloc}_{\mathsf{VerTyp},\mathsf{NeSub}}(\mathsf{Sub})=u\).
Then, for every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeSubsets})\in\mathsf{VerTypS}\), \(\mathsf{NeSub}\in\mathsf{NeSubsets}\) and \(\mathsf{Sub}\in\mathsf{ILPDFSubToAlloc}\)\((\mathsf{VerTyp},\mathsf{NeSub}),\)
\(\mathsf{ILPDFSubAlloc}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS} \cup\mathsf{CycTypS}\},\mathsf{VerTyp},\mathsf{NeSub},\mathsf{Sub})=\)
\(\mathsf{SubAlloc}_{\mathsf{VerTyp},\mathsf{NeSub}}(\mathsf{Sub})\).
Whenever \(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\}\) is clear from context, we refer to \(\mathsf{ILPDFSubAlloc}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS} \cup\mathsf{CycTypS}\},\mathsf{VerTyp},\mathsf{NeSub},\mathsf{Sub})\) as
\(\mathsf{ILPDFSubAlloc}(\mathsf{VerTyp},\mathsf{NeSub},\mathsf{Sub})\).
Recall that, by Equation 3, for every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\) and every \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\),
\[\sum_{j=1}^{2|\mathsf{VC}|}\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{CycTyp}}+\sum_{j=1}^{2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}}\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{RobTyp}}\geq x_{\mathsf{VerTyp}}.\]
Observe that the left-hand side of this inequality equals the number of allocations of \(\mathsf{NeiSub}\) we have for vertices of vertex type \(\mathsf{VerTyp}\). That is,
\[|\mathsf{ILPDFSubToAlloc}(\mathsf{VerTyp},\mathsf{NeiSub})|=\sum_{j=1}^{2|\mathsf{VC}|}\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{CycTyp}}+\sum_{j=1}^{2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}}\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},j)}j\cdot x_{\mathsf{RobTyp}}.\]
Thus, since the values \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), satisfy Equations 3, we have enough subsets to allocate. In addition, \(u^{*}\neq\emptyset\). So, there exists a function \(\mathsf{SubAlloc}_{\mathsf{VerTyp},\mathsf{NeiSub}}\) as defined in Definition 5.37. Therefore, \(\mathsf{ILPDFSubAlloc}\) is well defined.
Now, we look at the vertex allocation of \(\mathsf{ILPDFSubToAlloc}\), defined in Definition 5.37, from the perspective of the robots. Let \(1\leq i\leq k\), and let \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\in\mathsf{RobTypS}\) be such that \(\mathsf{ILPDFRobTyp}(i)=\mathsf{RobTyp}\). In addition, let \(\mathsf{NeiSub}\in 2^{\mathsf{N}_{G^{*}}(u^{*})\times 2}\),
\(\mathsf{VerTyp}\in\mathsf{VerTypS}\) and \(1\leq r\leq 2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}\) such that \(\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},r)\). Notice that this implies \(\mathsf{SetOfSub}(u^{*},\mathsf{NeiSub},\mathsf{VerTyp})=\{(u^{*}_{j},\mathsf{ NeiSub})\in\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC})\)\(|\)\(\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}\)\(((u^{*}_{j},\mathsf{NeiSub}))=\mathsf{VerTyp}\}\) has exactly \(r\) elements. So, there exists exactly \(r\) allocations of \(\mathsf{NeSub}\) associated with the \(i\)-th robot in \(\mathsf{ILPDerSubToAlloc}(\mathsf{VerTyp},\mathsf{NeiSub})\). In the following definition, we "execute" the allocations associated with the \(i\)-th robot, that is, we replace each \(u^{*}_{j}\in V(\mathsf{Graph}(\mathsf{CC}))\) by a vertex \(u\in u^{*}\), derived by \(\mathsf{ILPDerSubAlloc}\). Observe that the allocations of the \(i\)-th robot in \(\mathsf{ILPDerSubToAlloc}(\mathsf{VerTyp},\mathsf{NeiSub})\) are labeled \(1\) to \(r\). That is, \((i,1),\ldots,(i,r)\in\mathsf{ILPDerSubToAlloc}(\mathsf{VerTyp},\mathsf{NeiSub})\). So, first we arbitrary label each element in \(\mathsf{SetOfSub}(u^{*},\mathsf{NeiSub},\mathsf{VerTyp})\) by some unique \(t\in[r]\), and then we replace the vertex by \(\mathsf{ILPDerSubAlloc}(\mathsf{VerTyp},\mathsf{NeiSub},(i,t))\).
Recall that a permutation of a multiset \(A\) is a bijection \(\mathsf{Permut}_{A}:A\to[|A|]\).
[Deriving a Transformation of \(\mathsf{CC}\) from an ILP Solution] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Let \(1\leq i\leq k\), and let \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}, \mathsf{NumOfCyc})\in\mathsf{RobTypS}\) such that \(\mathsf{ILPDerRobTyp}(i)=\mathsf{RobTyp}\). For every \(\mathsf{NeiSub}\in 2^{\mathsf{N}_{G^{\prime}}(u^{*})\times 2}\), \(\mathsf{VerTyp}\in\mathsf{VerTypS}\) and \(1\leq r\leq 2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}\) such that \(\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},r)\), let \(\mathsf{SetOfSub}(u^{*},\mathsf{NeiSub},\mathsf{VerTyp})=\{(u^{*}_{j}, \mathsf{NeiSub})\in\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC})\)\(|\)\(\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}\)\((u^{*}_{j},\mathsf{NeiSub}))=\mathsf{VerTyp}\}\) and let \(\mathsf{Permut}_{u^{*},\mathsf{NeiSub},\mathsf{VerTyp}}:\mathsf{SetOfSub}(u^{*}, \mathsf{NeiSub},\mathsf{VerTyp})\to[r]\) be a permutation. Now, let \(G^{\prime}\) be the multigraph obtained from \(\mathsf{Graph}(\mathsf{CC})\) by replacing each \(u^{*}_{j}\) by \(\mathsf{ILPDerSubAlloc}\)\((\mathsf{VerTyp},\mathsf{NeiSub},(i,\mathsf{Permut}_{u^{*},\mathsf{NeiSub}, \mathsf{VerTyp}}(u^{*}_{j},\mathsf{NeiSub}))\), where \((u^{*}_{j},\mathsf{NeiSub})\in\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC}))\) and \(\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}(u^{*}_{j},\mathsf{NeiSub})= \mathsf{VerTyp}\). Then, \(\mathsf{ILPDerCCTransf}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS} \cup\mathsf{CycTypS}\},i)=E(G^{\prime})\).
Whenever \(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\}\) is clear from context, we refer to \(\mathsf{ILPDerCCTransf}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS} \cup\mathsf{CycTypS}\},i)\) as \(\mathsf{ILPDerCCTransf}(i)\).
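The replacement step of Definition 5.38 amounts to renaming pseudo-vertices inside an edge multiset; a minimal illustrative sketch (with hypothetical names) follows.

```python
def transform_cc(cc_edges, replacement):
    """Replace pseudo-vertices in a multiset of edges by concrete vertices.

    cc_edges: list of 2-tuples, a multiset of edges of Graph(CC).
    replacement: dict mapping each pseudo-vertex u*_j to the vertex of u*
    chosen by ILPDerSubAlloc; vertices of VC map to themselves.
    Returns the edge multiset of the transformed multigraph.
    """
    return [tuple(replacement.get(v, v) for v in edge) for edge in cc_edges]

# Tiny illustrative example with hypothetical vertex names.
edges = [("vc1", "u*_1"), ("u*_1", "vc2"), ("vc1", "vc2")]
print(transform_cc(edges, {"u*_1": "u"}))
```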
Let \(1\leq i\leq k\), and let \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\in\mathsf{RobTypS}\) be such that \(\mathsf{ILPDerRobTyp}(i)=\mathsf{RobTyp}\). In the next lemma, we show that \(\mathsf{ILPDerCCTransf}(i)\) has the following properties, which will be useful later: (i) \(\mathsf{ILPDerCCTransf}(i)\) is a multiset with elements from \(E(G)\), (ii) \(|\mathsf{ILPDerCCTransf}(i)|=|\mathsf{CC}|\), (iii) vertices from \(\mathsf{VC}\) are not replaced, and (iv) the properties of \(\mathsf{CC}\) given by Condition 2 of Definition 5.20 are maintained also in \(\mathsf{ILPDerCCTransf}(i)\).
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Let \(1\leq i\leq k\), and let \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}, \mathsf{NumOfCyc})\in\mathsf{RobTypS}\) such that \(\mathsf{ILPDerRobTyp}(i)=\mathsf{RobTyp}\). Then, the following conditions hold:
1. \(\mathsf{ILPDerCCTransf}(i)\) is a multiset with elements from \(E(G)\).
2. \(|\mathsf{ILPDerCCTTransf}(i)|=|\mathsf{CC}|\).
3. \(V(\mathsf{Graph}(\mathsf{CC}))\cap\mathsf{VC}=V(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i)))\cap\mathsf{VC}\).
4. \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i)))\), \(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i))\) is connected and every vertex in \(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i))\) has even degree in it.
Proof.: We prove that \(\mathsf{ILPDerCCTransf}(i)\) is a multiset with elements from \(E(G)\). By Condition 1 of Definition 5.20, \(\mathsf{CC}\subseteq E(\overline{G})\). Therefore, it is enough to show that each \(u^{*}_{j}\in V(\mathsf{Graph}(\mathsf{CC}))\) is replaced by some \(u\in u^{*}\). Let \(u^{*}_{j}\in V(\mathsf{Graph}(\mathsf{CC}))\). By Definition 5.17, \((u^{*}_{j},\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC})}(u^{*}_{j}))\in\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC}))\). Since \(\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}\) is a vertex allocation of \(\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC}))\), by Definition 5.18, there exists \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\)
such that \(\mathsf{Alloc}_{\mathsf{Graph(CC)}}((u^{*}_{j},\mathsf{NeiSub}))=\mathsf{VerTyp}\), where \(\mathsf{NeiSub}=\widehat{\mathsf{N}}_{\mathsf{Graph(CC)}}(u^{*}_{i})\in\mathsf{ NeiSubsets}\). Thus, there exists \(1\leq r\leq 2^{\mathsf{|VC|}}+\mathsf{|VC|}^{2}\) such that \(\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},r)\).
Now, let \(\mathsf{SetOfSub}(u^{*},\mathsf{NeiSub},\mathsf{VerTyp})=\{(u^{*}_{j},\mathsf{ NeiSub})\in\mathsf{NeiOfInd}(\mathsf{Graph(CC)}\mid\)\(\mathsf{Alloc}_{\mathsf{Graph(CC)}}\)\((u^{*}_{j},\mathsf{NeiSub}))=\mathsf{VerTyp}\}\) and let \(\mathsf{Permut}_{u^{*},\mathsf{NeiSub},\mathsf{VerTyp}}:\mathsf{SetOfSub}(u^{*}, \mathsf{NeiSub},\)\(\mathsf{VerTyp})\to[r]\) be the permutation defined in Definition 5.38. Let \(t\in[r]\) be such that \(\mathsf{Permut}_{u^{*},\mathsf{NeiSub},\mathsf{VerTyp}}(u^{*}_{j},\mathsf{NeiSub })=t\). Since \(\mathsf{RobTyp}\in\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiSub},r)\), by Definition 5.36, \((i,1),\ldots,(i,r)\in\mathsf{ILPDFSubToAlloc}\)\((\mathsf{VerTyp},\mathsf{NeiSub})\). By Definition 5.38, \(u^{*}_{j}\) is replaced by \(\mathsf{ILPDFSubAlloc}(\mathsf{VerTyp},\mathsf{NeiSub},(i,t))\), and by Definition 5.37, \(\mathsf{ILPDFSubAlloc}(\mathsf{VerTyp},\mathsf{NeiSub},(i,t))\in u^{*}\). Therefore, Condition 1 holds.
Observe that \(|\mathsf{ILPDerCCTransf}(i)|=|\mathsf{CC}|\), so Condition 2 holds. In addition, it is easy to see, by Definition 5.38, that only vertices from \(V(G)\setminus\mathsf{VC}\) are replaced, so Condition 3 holds.
By Condition 2 of Definition 5.20, \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{CC}))\). Recall that we assume \(v_{\mathsf{init}}\in\mathsf{VC}\), so \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i)))\). In addition, by Condition 2 of Definition 5.20, \(\mathsf{Graph}(\mathsf{CC})\) is connected. Thus, observe that \(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i))\) is also connected. Now, we show that every vertex in \(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i))\) has even degree in it. By Condition 2 of Definition 5.20, every vertex in \(\mathsf{Graph}(\mathsf{CC})\) has even degree in it. Let \(u\in V(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i)))\). If \(u\in\mathsf{VC}\), observe that its degree in \(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i))\) is equal to its degree in \(\mathsf{Graph}(\mathsf{CC})\), so it is even. Otherwise, \(u\in u^{*}\in\mathsf{EQ}\), and its degree in \(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i))\) equals the sum of the degrees in \(\mathsf{Graph}(\mathsf{CC})\) of the vertices \(u^{*}_{j}\) that are replaced by \(u\), by Definition 5.38. The degree of each such \(u^{*}_{j}\) is even in \(\mathsf{Graph}(\mathsf{CC})\), so the degree of \(u\) in \(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i))\) is also even. Therefore, Condition 4 holds.
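Condition 4 can also be checked mechanically on any edge multiset; the following small sketch (illustrative only) verifies that \(v_{\mathsf{init}}\) appears, that the multigraph is connected and that every degree is even.

```python
from collections import defaultdict

def check_condition_4(edge_multiset, v_init):
    """Check: v_init appears, the multigraph is connected, all degrees are even."""
    deg, adj = defaultdict(int), defaultdict(set)
    for u, v in edge_multiset:
        deg[u] += 1
        deg[v] += 1
        adj[u].add(v)
        adj[v].add(u)
    if v_init not in deg:
        return False
    # Connectivity via a depth-first search from v_init.
    seen, stack = {v_init}, [v_init]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(deg) and all(d % 2 == 0 for d in deg.values())

print(check_condition_4([("s", "a"), ("a", "s")], "s"))  # True
```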
Now, we look at the vertex allocation of \(\mathsf{ILPDFSubToAlloc}\), defined in Definition 5.37, from the perspective of the cycles. In the next definition we execute a processes similar to Definition 5.38. Let \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\) and let \(1\leq i\leq x_{\mathsf{CycTyp}}\). In addition, let \(\mathsf{NeiSub}=\{v,v^{\prime}\}\in 2^{\mathsf{N}_{G^{*}}(u^{*})\times 2}\), \(\mathsf{VerTyp}\in\mathsf{VerTypS}\) and let \(1\leq r\leq 2|\mathsf{VC|}\) such that \(\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},r)\). Notice that this implies \(\mathsf{SetOfSub}(u^{*},\mathsf{NeiSub},\mathsf{VerTyp})=\{\{(u^{*},v),\{u^{* },v^{\prime}\}\}\}\in\mathsf{EdgePairs}(C)\mid\mathsf{PaAlloc}_{C}=\mathsf{ VerTyp}\}\) has exactly \(r\) elements. So, there exists exactly \(r\) allocations of \(\mathsf{NeiSub}\) associated with the \(i\)-th cycle of \(\mathsf{CycTyp}\) in \(\mathsf{ILPDFSubToAlloc}(\mathsf{VerTyp},\mathsf{NeiSub})\). In the following definition, we "execute" the allocations associated with the \(i\)-th cycle of \(\mathsf{CycTyp}\), that is, we replace each \(u^{*}\in V(C)\) by a vertex \(u\in u^{*}\), derived by \(\mathsf{ILPDFSubAlloc}\). Observe that the allocations of the \(i\)-th cycle of \(\mathsf{CycTyp}\) in \(\mathsf{ILPDFSubToAlloc}(\mathsf{VerTyp},\mathsf{NeiSub})\) are labeled \(1\) to \(r\). That is, \((\mathsf{CycTyp},i,1),\ldots,(\mathsf{CycTyp},i,r)\in\mathsf{ILPDFSubToAlloc}( \mathsf{VerTyp},\mathsf{NeiSub})\). So, first we arbitrary label each element in \(\mathsf{SetOfSub}(u^{*},\mathsf{NeiSub},\mathsf{VerTyp})\) by some unique \(t\in[r]\), and then we replace the vertex by \(\mathsf{ILPDFSubAlloc}(\mathsf{VerTyp},\mathsf{NeiSub},(\mathsf{CycTyp},i,t))\).
[Deriving a Transformation of a Cycle from an ILP Solution] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Let \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\) and let \(1\leq i\leq x_{\mathsf{CycTyp}}\). For every \(\mathsf{NeiSub}=\{v,v^{\prime}\}\in 2^{\mathsf{N}_{G^{*}}(u^{*})\times 2}\), \(\mathsf{VerTyp}\in\mathsf{VerTypS}\) and \(1\leq r\leq 2|\mathsf{VC|}\) such that \(\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},r)\), let \(\mathsf{SetOfSub}(u^{*},\mathsf{NeiSub},\mathsf{VerTyp})=\{\{(\{u^{*},v\},\{u^{* },v^{\prime}\})\}\in\mathsf{EdgePairs}(C)\mid\mathsf{PaAlloc}_{C}=\mathsf{ VerTyp}\}\) and let \(\mathsf{Permut}_{u^{*},\mathsf{NeiSub},\mathsf{VerTyp}}:\mathsf{SetOfSub}(u^{*}, \mathsf{NeiSub},\mathsf{VerTyp})\to[r]\) be a permutation. Then, \(\mathsf{ILPDFCycTransf}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS} \cup\mathsf{CycTypS},\mathsf{CycTyp},i)\) is the cycle obtained from \(C\) by replacing each \(u^{*}\in V(C)\) by \(\mathsf{ILPDFSubAlloc}(\mathsf{VerTyp},\{v,v^{\prime}\},(\mathsf{CycTyp},i, \mathsf{Permut}_{u^{*},\{v,v^{\prime}\},\mathsf{VerTyp}}(u^{*},\{\{u^{*},v\},\{u^{*},v^{ \prime}\}\}))\), where \(v\) and \(v^{\prime}\) are the two vertices that appear before and after \(u^{*}\) in \(C\), respectively.
Whenever \(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\}\) is clear from context, we refer to \(\mathsf{ILPDFcyCTransf}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup \mathsf{CycTypS}\},\mathsf{CycTyp},i)\) as \(\mathsf{ILPDFcyCTransf}(\)\(\mathsf{CycTyp},i)\).
Let \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\) and let \(1\leq i\leq x_{\mathsf{CycTyp}}\). In the next lemma, we show that \(\mathsf{ILPDFcyCTransf}(\mathsf{CycTyp},i)\) has some properties that will be useful later.
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Let \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\) and let \(1\leq i\leq x_{\mathsf{CycTyp}}\). Then, the following conditions hold:
1. \(\mathsf{ILPDFcyCTransf}(\mathsf{CycTyp},i)\) is a cycle in \(G\).
2. \(|C|=|\mathsf{ILPDFcyCTransf}(\mathsf{CycTyp},i)|\).
3. \(V(C)\cap\mathsf{VC}=V(\mathsf{ILPDFcyCTransf}(\mathsf{CycTyp},i))\cap\mathsf{ VC}\).
Proof.: We prove that \(\mathsf{ILPDFcyCTransf}(\mathsf{CycTyp},i)\) is a cycle in \(G\). By Definition 5.25, \(C\in\mathsf{Cyc}_{G^{*}}\), so it is enough to show that every vertex \(u^{*}\in V(C)\) is replaced by some \(u\in u^{*}\). Let \(u^{*}\in V(C)\). Let \(v\) and \(v^{\prime}\) be the two vertices that appear before and after \(u^{*}\) in \(C\), respectively, and let \(\mathsf{NeiSub}=\{v,v^{\prime}\}\). By Definition 5.13, \(\{\{u^{*},v\},\{u^{*},v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\). Since \(\mathsf{PaAlloc}_{C}\) is a vertex allocation of \(\mathsf{EdgePairs}(C)\), by Definition 5.22, there exists \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiSubsets})\in\mathsf{VerTypS}\) such that \(\mathsf{NeiSub}\in\mathsf{NeiSubsets}\) and \(\mathsf{PaAlloc}_{C}(\{\{u^{*},v\},\{u^{*},v^{\prime}\}\})=\mathsf{VerTyp}\). Thus, there exists \(1\leq r\leq|\mathsf{VC}|^{2}\) such that \(\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},r)\).
Now, let \(\mathsf{SetOfSub}(u^{*},\mathsf{NeiSub},\mathsf{VerTyp})=\{\{\{u^{*},v\},\{ u^{*},v^{\prime}\}\}\}\in\mathsf{EdgePairs}(C)\mid\mathsf{PaAlloc}_{C}\)\(=\mathsf{VerTyp}\}\) and let \(\mathsf{Permut}_{u^{*},\mathsf{NeiSub},\mathsf{VerTyp}}:\mathsf{SetOfSub}(u^{*}, \mathsf{NeiSub},\mathsf{VerTyp})\to[r]\) be the permutation defined in Definition 5.40. Let \(t\in[r]\) be such that \(\mathsf{Permut}_{u^{*},\mathsf{NeiSub},\mathsf{VerTyp}}(u^{*}_{j},\mathsf{NeiSub })=t\). Since \(\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiSub},r)\), by Definition 5.36, \((\mathsf{CycTyp},i,1),\ldots,(\mathsf{CycTyp},i,\)\(r)\in\mathsf{ILPDFSubToAlloc}\) (\(\mathsf{VerTyp},\mathsf{NeiSub}\)). By Definition 5.40, \(u^{*}_{j}\) is replaced by
\(\mathsf{ILPDFSubAlloc}(\mathsf{VerTyp},\mathsf{NeiSub},(\mathsf{CycTyp},i,t))\), and by Definition 5.37,
\(\mathsf{ILPDFSubAlloc}(\mathsf{VerTyp},\mathsf{NeiSub},(\mathsf{CycTyp},i,t))\in u ^{*}\). Therefore, Condition 1 holds.
Observe that \(|C|=|\mathsf{ILPDFcyCTransf}(\mathsf{CycTyp},i)|\), so Condition 2 holds. In addition, it is easy to see, by Definition 5.40, that only vertices from \(V(G)\setminus\mathsf{VC}\) are replaced, so Condition 3 holds.
Next, we allocate the cycles to the robots. We allocate cycles of type \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\) to robots of type \(\mathsf{RobTyp}\), while preserving the budget limitation of the robots: Each robot of type \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}, \mathsf{NumOfCyc})\in\mathsf{RobTypS}\) gets exactly \(N_{j}\) cycles of length \(j\), for every \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\), where \(\mathsf{NumOfCyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\). Then, the cycles of type \(\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)\) are allocated to robots of type \(\mathsf{RobTyp}\) "equally". That is, the number of cycles of length \(4\) allocated to a robot \(i\) of type \(\mathsf{RobTyp}\) is larger by at most \(1\) than the number of cycles of length \(4\) allocated to other robot \(i^{\prime}\) of type \(\mathsf{RobTyp}\).
[Deriving Robot Allocation of Cycles from an ILP Solution] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). For every \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\), let \(\mathsf{CycAlloc}_{\mathsf{CycTyp}}:[x_{\mathsf{CycTyp}}]\to[k]\) be a function such that the following conditions hold:
1. For every \(i\in[x_{\mathsf{CycTyp}}]\), \(\mathsf{ILPDFRobTyp}(\mathsf{CycAlloc}_{\mathsf{CycTyp}}(i))=\mathsf{RobTyp}\).
2. _For every_ \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph(CC)}},\mathsf{NumOfCyc})\in\mathsf{RobTypS}\)_,_ \(2\leq j\leq 2|\mathsf{VC}|,\,j\neq 4\)_, and_ \(i\in[k]\) _such that_ \(\mathsf{ILPDerRobTyp}(i)=\mathsf{RobTyp}\)_, it holds that_ \[\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},j)}|\{t\in[x_{\mathsf{CycTyp}}]\ |\ \mathsf{CycAlloc}_{\mathsf{CycTyp}}(t)=i\}|=N_{j},\] _where_ \(\mathsf{NumOfCyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\)_._
3. _For every_ \(i,i^{\prime}\in[k]\) _such that_ \(\mathsf{ILPDerRobTyp}(i)=\mathsf{ILPDerRobTyp}(i^{\prime})=\mathsf{RobTyp}\)_, it holds_ \[|\sum_{\mathsf{CycType}\in\mathsf{CycTypeS}(\mathsf{RobTyp},4)}| \{t\in[x_{\mathsf{CycType}}]\ |\ \mathsf{CycAlloc}_{\mathsf{CycType}}(t)=i\}|-\] \[\sum_{\mathsf{CycType}\in\mathsf{CycTypeS}(\mathsf{RobTyp},4)}| \{t\in[x_{\mathsf{CycType}}]\ |\ \mathsf{CycAlloc}_{\mathsf{CycType}}(t)=i^{ \prime}\}||\leq 1.\]
_Then, for every \(\mathsf{CycType}\in\mathsf{CycTypeS}\) and \(i\in[x_{\mathsf{CycType}}]\), \(\mathsf{ILPDerCycAlloc}(\{x_{z}\ |\ z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypeS}\}, \mathsf{CycType},i)=\mathsf{CycAlloc}_{\mathsf{CycType}}(i)\)._
Whenever \(\{x_{z}\ |\ z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypeS}\}\) is clear from context, we refer to \(\mathsf{ILPDerCycAlloc}(\{x_{z}\ |\ z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypeS}\}, \mathsf{CycType},i)\) as \(\mathsf{ILPDerCycAlloc}(\ \mathsf{CycType},i)\).
Observe that we assume \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), satisfy Equations 5 and 6. So, for every \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\) such that \(x_{\mathsf{CycTyp}}\geq 1\), there exists \(j\in[k]\) such that \(\mathsf{ILPDerRobTyp}(j)=\mathsf{RobTyp}\). In addition, by Equations 5, for every \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph(CC)}},\mathsf{NumOfCyc})\in\mathsf{RobTypS}\) and for every \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\), \(\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},j)}x_{\mathsf{CycTyp}}=N_{j}\cdot x_{\mathsf{RobTyp}}\), where \(\mathsf{NumOfCyc}=(N_{2},N_{3},N_{5},N_{6},\ldots,N_{2|\mathsf{VC}|})\). Thus, there exist functions \(\mathsf{CycAlloc}_{\mathsf{CycTyp}}:[x_{\mathsf{CycTyp}}]\to[k]\), for every \(\mathsf{CycTyp}\in\mathsf{CycTypS}\), as defined in Definition 5.42. Therefore, \(\mathsf{ILPDerCycAlloc}\) is well defined.
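One concrete way to realize functions \(\mathsf{CycAlloc}_{\mathsf{CycTyp}}\) with the three conditions of Definition 5.42 is sketched below (illustrative Python, for a single robot type): cycles of length \(j\neq 4\) are handed out so that each robot receives exactly \(N_{j}\) of them, and \(4\)-cycles are spread round-robin, which keeps the per-robot loads within \(1\) of each other.

```python
from itertools import cycle

def allocate_cycles(robots_of_type, four_cycles, other_cycles_by_len, num_of_cyc):
    """Allocate the cycles of one robot type to the robots of that type.

    robots_of_type: list of robot indices i with ILPDerRobTyp(i) = RobTyp.
    four_cycles: list of cycle instances of length 4 of this robot type.
    other_cycles_by_len: dict j -> list of cycle instances of length j != 4.
    num_of_cyc: dict j -> N_j, the per-robot quota for length j != 4.
    Returns a dict robot -> list of allocated cycles.
    """
    alloc = {i: [] for i in robots_of_type}
    for j, cycles in other_cycles_by_len.items():
        # Equations 5 guarantee there are exactly N_j cycles per robot.
        assert len(cycles) == num_of_cyc[j] * len(robots_of_type)
        it = iter(cycles)
        for i in robots_of_type:
            alloc[i].extend(next(it) for _ in range(num_of_cyc[j]))
    for i, c in zip(cycle(robots_of_type), four_cycles):
        alloc[i].append(c)          # round-robin keeps loads within 1 of each other
    return alloc
```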
Lastly, we present the function \(\mathsf{ILPToRobExp}\). This function gets \(i\in[k]\) as input, and returns the multiset of edges, derived by the functions defined in this section, for the \(i\)-th robot. In particular, each robot gets the "transformed" \(\mathsf{CC}\), where \(\mathsf{ILPDerRobTyp}(i)=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph(CC)}},\mathsf{NumOfCyc})\), and the "transformed" cycles allocated to the \(i\)-th robot by Definition 5.42.
[ILPToRobExp] Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Then, for every \(1\leq i\leq k\), \(\mathsf{ILPToRobExp}(\{x_{z}\mid z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\},i)=\mathsf{ILPDerCCTransf}(i)\cup\{E(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j))\mid\mathsf{CycTyp}\in\mathsf{CycTypS},j\in[x_{\mathsf{CycTyp}}]\text{ such that }\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},j)=i\}\).
Whenever \(\{x_{z}\ |\ z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypeS}\}\) is clear from context, we refer to \(\mathsf{ILPTRoRobExp}(\{x_{z}\ |\ z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypeS}\},i)\) as \(\mathsf{ILPTRoRobExp}(i)\).
Towards the proof of Lemma 5.33, we prove that the multisets defined in Definition 5.43 satisfy the conditions of Lemma 3.4:
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Then, \(\mathsf{ILPToRobExp}(1),\ldots,\mathsf{ILPToRobExp}(k)\) are multisets that satisfy the conditions of Lemma 3.4.
For the sake of readability, we split Lemma 5.44 and its proof into an observation and three lemmas. In Observation 5.45 we prove that \(\mathsf{ILPToRobExp}(i)\) is a multiset with elements from \(E(G)\). In Lemma 5.46 we prove that Conditions 1 and 2 of Lemma 3.4 hold. In Lemmas 5.47 and 5.48 we prove that Conditions 3 and 4 of Lemma 3.4 hold, respectively.
Let \(1\leq i\leq k\). Then, \(\mathsf{ILPToRobExp}(i)\) is a multiset with elements from \(E(G)\).
Proof.: By Condition 1 of Lemma 5.39, \(\mathsf{ILPDerCCTransf}(i)\) is a multiset with elements from \(E(G)\). In addition, for every \(\mathsf{CycTyp}\in\mathsf{CycTypS}\) and \(j\in[x_{\mathsf{CycTyp}}]\), by Condition 1 of Lemma 5.41, \(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j)\) is a cycle in \(G\), so, indeed, \(E(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j))\) is a multiset with elements from \(E(G)\).
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Then, \(\mathsf{ILPToRobExp}(1),\ldots,\mathsf{ILPToRobExp}(k)\) are multisets that satisfy Conditions 1 and 2 of Lemma 3.4.
Proof.: Let \(1\leq i\leq k\). By Condition 4 of Lemma 5.39, \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i)))\), so \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\mathsf{ILPToRobExp}(i))\). Therefore, Condition 1 of Lemma 3.4 is satisfied.
Let \(\mathsf{ILPDerRobTyp}(i)=\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{ Graph}(\mathsf{CC})},\mathsf{NumOfcy})\). We prove that \(\mathsf{Graph}(\mathsf{ILPToRobExp}(i))\) is connected. By Condition 4 of Lemma 5.39, \(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i))\) is connected; so, it is enough to show that, for every \(v\in V(\mathsf{Graph}(\mathsf{ILPToRobExp}(i)))\setminus V(\mathsf{Graph}( \mathsf{ILPDerCCTransf}(i)))\), there exists a path from \(v\) to some
\(u\in V(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i)))\). Since \(\mathsf{ILPToRobExp}(i)=\mathsf{ILPDerCCTransf}(i)\cup\)
\(\{E(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j))\mid\mathsf{CycTyp}\in\mathsf{ CycTypS},j\in[x_{\mathsf{CycTyp}}]\text{ such that }\)
\(\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},j)=i\}\), there exist \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\) and \(j\in[x_{\mathsf{CycTyp}}]\) such that \(\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},j)=i\) and \(v\in V(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j))\). Now, since \(\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},j)=i\), from Condition 1 of Definition 5.42, \(\mathsf{ILPDerRobTyp}(i)=\mathsf{RobTyp}\). So, from Definition 5.25, \(V(\mathsf{Graph}(\mathsf{CC}))\cap V(C)\cap\mathsf{VC}\neq\emptyset\).
Now, by Condition 3 of Lemma 5.41, \(V(C)\cap\mathsf{VC}=V(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j))\cap\mathsf{VC}\), and from Condition 3 of Lemma 5.39, \(V(\mathsf{Graph}(\mathsf{CC}))\cap\mathsf{VC}=V(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i)))\cap\mathsf{VC}\). Together with \(V(\mathsf{Graph}(\mathsf{CC}))\cap V(C)\cap\mathsf{VC}\neq\emptyset\), this implies that \(V(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j))\cap\mathsf{VC}\cap V(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i)))\neq\emptyset\), so let \(u\in V(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j))\cap\mathsf{VC}\cap V(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i)))\). In addition, from Condition 1 of Lemma 5.41, \(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j)\) is a cycle in \(G\).
Thus, there exists a path from \(v\) to some \(u\in V(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i)))\), and therefore, \(\mathsf{Graph}(\mathsf{ILPToRobExp}(i))\) is connected.
Now, we show that every \(u\in V(\mathsf{Graph}(\mathsf{ILPToRobExp}(i)))\) has even degree in \(\mathsf{Graph}(\mathsf{ILPToRobExp}(i))\). From Condition 4 of Lemma 5.39, every vertex in \(\mathsf{Graph}(\mathsf{ILPDerCCTransf}(i))\) has even degree in it, and from Condition 1 of Lemma 5.41, for every \(\mathsf{CycTyp}\in\mathsf{CycTypS}\) and \(j\in[x_{\mathsf{CycTyp}}]\), \(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j)\) is a cycle in \(G\), so it contributes an even degree to each of its vertices. Therefore, every \(u\in V(\mathsf{Graph}(\mathsf{ILPToRobExp}(i)))\) has even degree in \(\mathsf{Graph}(\mathsf{ILPToRobExp}(i))\). Thus, Condition 2 of Lemma 3.4 is satisfied.
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Then, \(\mathsf{ILPToRobExp}(1),\ldots,\mathsf{ILPToRobExp}(k)\) are multisets that satisfy Condition 3 of Lemma 3.4.
Proof.: We show that for every \(\{u,v\}\in E(G)\), there exists \(1\leq i\leq k\) such that \(\{u,v\}\in\mathsf{ILPToRobExp}(i)\). Let \(\{u,v\}\in E\). We have the following two cases:
**Case 1: \(u,v\in\mathsf{VC}\).** By Equation 4,
\[\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\{u,v\})}x_{\mathsf{CycTyp}}+\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\{u,v\})}x_{\mathsf{RobTyp}}\geq 1.\]
If \(\sum_{\begin{subarray}{c}\mathsf{CycType}\in\mathsf{CycType}\mathsf{S}(\{u,v\}) \end{subarray}}x_{\mathsf{CycType}}\geq 1\), then there exists \(\mathsf{CycTyp}\in\mathsf{CycType}\) such that \(x_{\mathsf{CycType}}\geq 1\) and \(\mathsf{CycTyp}\in\mathsf{CycType}\mathsf{S}(\{u,v\})\).
Therefore, there exists \(1\leq i\leq k\) such that \(\mathsf{ILPDF}\mathsf{CycAlloc}(\mathsf{CycTyp},1)=i\). Thus, \(\{u,v\}\in E(\mathsf{ILPDF}\mathsf{CycTransf}(\mathsf{CycTyp},1))\subseteq \mathsf{ILPToRobExp}(i)\).
Otherwise, \(\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\{u,v\})}x_{\mathsf{CycTyp}}=0\), so \(\sum_{\mathsf{RobTyp}\in\mathsf{RobTypS}(\{u,v\})}x_{\mathsf{RobTyp}}\geq 1\). Therefore, there exist \(1\leq i\leq k\) and \(\mathsf{RobTyp}=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\in\mathsf{RobTypS}(\{u,v\})\) such that \(\mathsf{ILPDFRobTyp}(i)=\mathsf{RobTyp}\). Thus, \(\{u,v\}\in\mathsf{CC}\), so \(\{u,v\}\in\mathsf{ILPDerCCTransf}(i)\subseteq\mathsf{ILPToRobExp}(i)\), and therefore \(\{u,v\}\in\mathsf{ILPToRobExp}(i)\).
**Case 2:**\(u\in u^{*}\in\mathsf{EQ},v\in\mathsf{VC}\). Therefore, there exists \(\mathsf{VerTyp}=(u^{*},\mathsf{NeibSubsets})\in\mathsf{VerTypS}_{u}\), such that \(\mathsf{ILPDF}\mathsf{VerTyp}(u)=\mathsf{VerTyp}\). By Definition 5.14, there exists \(\mathsf{NeibSub}\in\mathsf{NeibSub}\) such that \(v\in\mathsf{NeibSub}\). In addition, there exists \(\mathsf{Sub}\in\mathsf{ILPDF}\mathsf{SubToAlloc}(\mathsf{VerTyp},\)\(\mathsf{NeibSub})\) such that \(\mathsf{ILPDF}\mathsf{SubAlloc}(\mathsf{VerTyp},\mathsf{NeibSub},\mathsf{Sub})=u\). We have the following two subcases:
**Case 2.1:**\(\mathsf{Sub}=(\mathsf{CycTyp},j,t)\)**for some**\(\mathsf{CycType}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\)**, where**\(\mathsf{CycType}\in\mathsf{CycTypeS}(\mathsf{VerTyp},\mathsf{NeibSub},r)\), \(1\leq j\leq x_{\mathsf{CycType}},1\leq t\leq r\)**. Observe that, in this case \(\mathsf{NeibSub}=\{v,v^{\prime}\}\) for some \(v\in V(G)\), and \(\{\{u^{*},v\},\{u^{*},v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\). So, let \(\mathsf{SetOfSub}(u^{*},\mathsf{NeibSub},\mathsf{VerTyp})=\{\{u^{*},v\},\{u^ {*},v^{\prime}\}\}\in\mathsf{EdgePairs}(C)\mid\mathsf{PaAlloc}_{C}=\mathsf{VerTyp}\}\) and let \(\mathsf{Permut}_{u^{*},\mathsf{NeibSub},\mathsf{VerTyp}}:\mathsf{SetOfSub}(u^{* },\mathsf{Neib},\mathsf{VerTyp})\to[|\mathsf{SetOfSub}(u^{*},\mathsf{NeibSub}, \mathsf{VerTyp})|]\) be the permutation defined in Definition 5.40. Notice that \(|\mathsf{SetOfSub}(u^{*},\mathsf{NeibSub},\)\(\mathsf{VerTyp})|=r\). Thus, there exists \((\{u^{*},v\},\{u^{*},v^{\prime}\})\in\mathsf{SetOfSub}(u^{*},\mathsf{NeibSub}, \mathsf{VerTyp})\) such that \(\mathsf{Permut}_{u^{*},\mathsf{NeibSub},\mathsf{VerTyp}}\) (\((\{u^{*},v\},\{u^{*},v^{\prime}\})\)) \(=t\). Then, by Definition 5.40, there exists \(u^{*}\in V(C)\), where \(v\) and \(v^{\prime}\) are the vertices come before and after \(u^{*}\) in \(C\), respectively, that is replaced by \(\mathsf{ILPDF}\mathsf{SubAlloc}(\mathsf{VerTyp},\mathsf{NeibSub},(\mathsf{ CycTyp},j,\mathsf{Permut}_{u^{*},\mathsf{NeibSub},\mathsf{VerTyp}}(u^{*},\{\{u^ {*},v^{\prime}\})\})=\mathsf{ILPDF}\mathsf{SubAlloc}(\mathsf{VerTyp},\mathsf{NeibSub },(\mathsf{CycTyp},j,t))=u\).
So, \(\{u,v\}\in E(\mathsf{ILPDF}\mathsf{CycTransf}(\mathsf{CycTyp},j))\). Now, by Definition 5.42, there exists \(1\leq i\leq k\) such that \(\mathsf{ILPDF}\mathsf{CycAlloc}(\mathsf{CycTyp},j)=i\). Thus, by Definition 5.43,
\(E(\mathsf{ILPDF}\mathsf{CycTransf}(\mathsf{CycTyp},j))\subseteq\mathsf{ILPToRob Exp}(i)\), and therefore, \(\{u,v\}\in\mathsf{ILPToRobExp}(i)\).
**Case 2.2:**\(\mathsf{Sub}=(i,t)\)**, where**\(\mathsf{ILPDF}\mathsf{RobType}(i)\in\mathsf{RobType}\mathsf{S}(\mathsf{VerTyp},\mathsf{NeibSub},r)\), \(1\leq r\leq 2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2},1\leq i\leq k\) and \(1\leq t\leq r\). Let \(\mathsf{SetOfSub}(u^{*},\mathsf{NeibSub},\mathsf{VerTyp})=\{(u^{*}_{j}, \mathsf{NeibSub})\in\mathsf{NeibOfInd}(\mathsf{Graph}(\mathsf{CC})\mid \mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}\) (\((u^{*}_{j},\mathsf{NeibSub}))=\mathsf{VerTyp}\) and let \(\mathsf{Permut}_{u^{*},\mathsf{NeibSub},\mathsf{VerTyp}}:\mathsf{SetOfSub}(u^{*}, \mathsf{NeibSub},\mathsf{VerTyp})\to[|\mathsf{SetOfSub}(u^{*},\mathsf{NeibSub },\mathsf{VerTyp})|]\) be the permutation defined in Definition 5.38. Notice that \(|\mathsf{SetOfSub}(u^{*},\mathsf{NeibSub},\mathsf{VerTyp})|=r\). Thus, there exists \((u^{*}_{j},\mathsf{NeibSub})\in\mathsf{SetOfSub}(u^{*},\mathsf{NeibSub}, \mathsf{VerTyp})\) such that \(\mathsf{Permut}_{u^{*},\mathsf{NeibSub}^{\prime},\mathsf{VerTyp}}(u^{*}_{j}, \mathsf{NeibSub})=t\). Therefore, by Definition 5.38, \(u^{*}_{j}\) is replaced by \(\mathsf{ILPDF}\mathsf{SubAlloc}(\mathsf{VerTyp},\mathsf{NeibSub},\)\((i,\mathsf{Permut}_{u^{*},\mathsf{NeibSub},\mathsf{VerTyp}}(u^{*}_{j},\mathsf{NeibSub}))=\mathsf{ILPDF} \mathsf{SubAlloc}(\mathsf{VerTyp},\mathsf{NeibSub}(i,t))=u\). Observe that since \((u^{*}_{j},\mathsf{NeibSub})\in\mathsf{NeibOfInd}(\mathsf{Graph}(\mathsf{CC})\), by Definition 5.17, it follows that \(\widehat{\mathsf{N}}_{\mathsf{Graph}(\mathsf{CC})}(\)\(u^{*}_{j})=\mathsf{NeibSub}\). So, \(\{u,v\}\in\mathsf{ILPDF}\mathsf{CCTransf}(i)\). Therefore, by Definition 5.43,
\(\mathsf{ILPDF}\mathsf{CCTransf}(i)\subseteq\mathsf{ILPToRobExp}(i)\), and therefore, \(\{u,v\}\in\mathsf{ILPToRobExp}(i)\).
Therefore, we proved that for every \(\{u,v\}\in E(G)\), there exists \(1\leq i\leq k\) such that \(\{u,v\}\in\mathsf{ILPToRobExp}(i)\), so, Condition 3 of Lemma 3.4 is satisfied.
Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k,B\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Let \(x_{z}\), for every \(z\in\mathsf{VerTypS}\cup\mathsf{RobTypS}\cup\mathsf{CycTypeS}\), be values that satisfy the inequalities of \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). Then, \(\mathsf{ILPToRobExp}(1),\ldots,\mathsf{ILPToRobExp}(k)\) are
multisets that satisfy Condition 4 of Lemma 3.4._
Proof.: We show that for every \(i\in[k]\), \(|\mathsf{ILPToRobExp}(i)|\leq B\). Let \(i\in[k]\) and let \(\mathsf{ILPDerRobTyp}(i)=(\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\). By Definition 5.43, \(\mathsf{ILPToRobExp}(i)=\mathsf{ILPDerCCTransf}(i)\cup\{E(\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j))\mid\mathsf{CycTyp}\in\mathsf{CycTypS},j\in[x_{\mathsf{CycTyp}}]\text{ such that }\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},j)=i\}\). By Condition 2 of Lemma 5.39, \(|\mathsf{ILPDerCCTransf}(i)|=|\mathsf{CC}|\) and by Condition 2 of Lemma 5.41, for every \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\) and \(j\in[x_{\mathsf{CycTyp}}]\), \(|\mathsf{ILPDerCycTransf}(\mathsf{CycTyp},j)|=|C|\). Thus,
\[|\mathsf{ILPToRobExp}(i)|=|\mathsf{CC}|+\sum_{\begin{subarray}{c}\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS},\,j\in[x_{\mathsf{CycTyp}}]\\ \mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},j)=i\end{subarray}}|C|.\]
Recall that, for every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\) and for every \(2\leq j\leq 2|\mathsf{VC}|\), \(\mathsf{CycTypS}(\mathsf{RobTyp},j)\)\(=\{\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\mid|C|=j\}\). Now, by Definition 5.25, for every \((C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\), \(|C|\leq 2|\mathsf{VC}|\). Moreover, from Condition 1 of Definition 5.42, for every \((C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\) and \(j\in[x_{\mathsf{CycTyp}}]\) such that \(\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},j)=i\), \(\mathsf{ILPDerRobTyp}(i)=\mathsf{RobTyp}\). Therefore,
\[|\mathsf{ILPToRobExp}(i)|=|\mathsf{CC}|+\sum_{2\leq j\leq 2|\mathsf{VC}|,j\neq 4}\ \sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},j)}|\{1\leq t\leq x_{\mathsf{CycTyp}}\mid\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},t)=i\}|\cdot j+\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}|\{1\leq t\leq x_{\mathsf{CycTyp}}\mid\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},t)=i\}|\cdot 4.\]
Now, by Condition 2 of Definition 5.42, for every \(2\leq j\leq 2|\mathsf{VC}|\), \(j\neq 4\),
\[|\mathsf{ILPToRobExp}(i)|=|\mathsf{CC}|+\sum_{2\leq j\leq 2| \mathsf{VC}|,j\neq 4}N_{j}\cdot j+\] \[\sum_{\mathsf{CycType}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}|\{1 \leq t\leq x_{\mathsf{CycTyp}}\mid\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},t)=i\}|\cdot 4.\]
Recall that
\[\mathsf{Bud}(\mathsf{RobTyp})=|\mathsf{CC}|+\sum_{2\leq j\leq 2| \mathsf{VC}|,j\neq 4}N_{j}\cdot j,\text{ so}\] \[|\mathsf{ILPToRobExp}(i)|=\mathsf{Bud}(\mathsf{RobTyp})+\] \[\sum_{\mathsf{CycType}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}|\{1 \leq t\leq x_{\mathsf{CycTyp}}\mid\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},t)=i\}|\cdot 4.\]
Now, by Condition 3 of Definition 5.42, for every \(a,b\in[k]\) such that \(\mathsf{ILPDerRobTyp}(a)=\mathsf{ILPDerRobTyp}(b)=\mathsf{RobTyp}\),
\[|\sum_{\mathsf{CycTypS}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}|\{t\in[x_{ \mathsf{CycTyp}}]\mid\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},t)=a\}|-\] \[\sum_{\mathsf{CycType}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}|\{t \in[x_{\mathsf{CycTyp}}]\mid\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},t)=b\}| \leq 1.\]
This implies that
\[\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}|\{1\leq t\leq x_{\mathsf{CycTyp}}\mid\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},t)=i\}|\leq\left\lceil\frac{\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}x_{\mathsf{CycTyp}}}{|\{r\in[k]\mid\mathsf{ILPDerRobTyp}(r)=\mathsf{RobTyp}\}|}\right\rceil.\]
By Definition 5.35,
\[|\{r\in[k]\mid\mathsf{ILPDerRobTyp}(r)=\mathsf{RobTyp}\}|=x_{\mathsf{RobTyp}},\]
so
\[\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}|\{1\leq t\leq x_{\mathsf{CycTyp}}\mid\mathsf{ILPDerCycAlloc}(\mathsf{CycTyp},t)=i\}|\leq\left\lceil\frac{\sum_{\mathsf{CycTyp}\in\mathsf{CycTypS}(\mathsf{RobTyp},4)}x_{\mathsf{CycTyp}}}{x_{\mathsf{RobTyp}}}\right\rceil.\]
Second, recall that a robot type is a triple \((\mathsf{CC},\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})},\mathsf{NumOfCyc})\) that satisfies the conditions of Definition 5.20. Notice that \(\overline{G}\) (see Definition 5.4) has at most \(|\mathsf{VC}|+2^{|\mathsf{VC}|}\cdot(2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2})\) vertices and at most \(2|\mathsf{VC}|^{2}+2|\mathsf{VC}|\cdot(2^{|\mathsf{VC}|}+|\mathsf{VC}|)=2^{\mathcal{O}(|\mathsf{VC}|)}\) edges. Thus, the number of options for choosing \(\mathsf{CC}\subset E(\overline{G})\) is bounded by \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\). Now, for each such \(\mathsf{CC}\subset E(\overline{G})\), \(|\mathsf{NeiOfInd}(\mathsf{Graph}(\mathsf{CC}))|\leq 2^{|\mathsf{VC}|}\cdot(2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2})=2^{\mathcal{O}(|\mathsf{VC}|)}\) (see Definition 5.17). Therefore, since \(|\mathsf{VerTypS}|\leq 2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\), the number of options for \(\mathsf{Alloc}_{\mathsf{Graph}(\mathsf{CC})}\) is bounded by \((2^{2^{\mathcal{O}(|\mathsf{VC}|)}})^{2^{\mathcal{O}(|\mathsf{VC}|)}}=2^{2^{\mathcal{O}(|\mathsf{VC}|)}\cdot 2^{\mathcal{O}(|\mathsf{VC}|)}}=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) (see Definition 5.18). The number of options for \(\mathsf{NumOfCyc}\) (see Definition 5.20) is bounded by \((2|\mathsf{VC}|^{2})^{2|\mathsf{VC}|}=|\mathsf{VC}|^{\mathcal{O}(|\mathsf{VC}|)}\). Thus, \(|\mathsf{RobTypS}|\leq 2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\cdot 2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\cdot|\mathsf{VC}|^{\mathcal{O}(|\mathsf{VC}|)}=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\). In addition, creating \(\mathsf{RobTypS}\) takes \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) time, once \(\mathsf{VerTypS}\) is computed.
Next, recall that a cycle type is a triple \((C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\) satisfying the conditions of Definition 5.25. Observe that the number of vertices in \(G^{*}\) is bounded by \(|\mathsf{VC}|+2^{|\mathsf{VC}|}\), and the number of cycles of length at most \(2|\mathsf{VC}|\) in \(G^{*}\) is bounded by \(\sum_{2\leq i\leq 2|\mathsf{VC}|}(|\mathsf{VC}|+2^{|\mathsf{VC}|})^{i}\leq 2|\mathsf{VC}|(|\mathsf{VC}|+2^{|\mathsf{VC}|})^{2|\mathsf{VC}|}=2^{\mathcal{O}(|\mathsf{VC}|^{2})}\). Now, for each \(C\in\mathsf{Cyc}_{G^{*}}\), \(|\mathsf{EdgePairs}(C)|\leq|\mathsf{VC}|\) (see Definition 5.13). Thus, the number of options for \(\mathsf{PaAlloc}_{C}\) is bounded by \((2^{2^{\mathcal{O}(|\mathsf{VC}|)}})^{|\mathsf{VC}|}=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) (see Definition 5.22). Therefore, \(|\mathsf{CycTypS}|\leq 2^{\mathcal{O}(|\mathsf{VC}|^{2})}\cdot 2^{2^{\mathcal{O}(|\mathsf{VC}|)}}=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\). In addition, creating \(\mathsf{CycTypS}\) takes \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) time, once \(\mathsf{VerTypS}\) and \(\mathsf{RobTypS}\) are computed.
In the next lemma, we analyze the runtime and the size of the reduction.
**Lemma 5.50**.: _Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\) and let \(k,B\in\mathbb{N}\). Then, \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) runs in time \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}+\mathcal{O}(|\mathsf{VC}|\cdot|V(G)|+|E(G)|)\), and returns an instance for the ILP problem of size at most \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\cdot\log(k+B+|V(G)|)\) with \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) variables._
Proof.: First, we create \(\mathsf{VerTypS}\), \(\mathsf{RobTypS}\) and \(\mathsf{CycTypS}\) in \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}+\mathcal{O}(|\mathsf{VC}|\cdot|V(G)|+|E(G)|)\) time (see Lemma 5.49). Now, we give an upper bound for the number of equations in \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\), and the runtime of creating the equations. There is one equation in Equation 1, that takes \(\mathcal{O}(|\mathsf{RobTypS}|)\) time to create, so, by Lemma 5.49, \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) time.
There is one equation for each \(u^{*}\in\mathsf{EQ}\) in Equation 2, so there are at most \(2^{|\mathsf{VC}|}\) equations. Computing \(|u^{*}|\) takes \(\mathcal{O}(2^{|\mathsf{VC}|}+|\mathsf{VC}|\cdot|V(G)|+|E(G)|)\) time (described in the proof of Lemma 5.49). The rest of the computation of creating these equations, takes \(\mathcal{O}(|\mathsf{VerTypS}|)\) time, so, by Lemma 5.49, it takes \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) time.
There is one equation for every \(\mathsf{VerTyp}\in\mathsf{VerTypS}\) and every \(\mathsf{NeiDub}\in\mathsf{NeiDubsets}\) in Equation 3, so there are at most \(|\mathsf{VerTypS}|\cdot 2^{2^{|\mathsf{VC}|}}=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\cdot 2^{2^{\mathcal{O}(|\mathsf{VC}|)}}=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) equations. Now, we can create the sets \(\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiDub},j)\), for every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiDubsets})\in\mathsf{VerTypS}\), every \(\mathsf{NeiDub}\in\mathsf{NeiDubsets}\) and \(1\leq j\leq|\mathsf{VC}|\), as follows. For every \(\mathsf{CycTyp}=(C,\mathsf{PaAlloc}_{C},\mathsf{RobTyp})\in\mathsf{CycTypS}\) we initialize a table \(\{\mathsf{NeiDub}\in 2^{\mathsf{VC}\times 2}\mid|\mathsf{NeiDub}|=2\}\times\mathsf{VerTypS}\) with \(0\) in every cell. Then, we go over each \(\{\{v_{i-1},v_{i}\},\{v_{i},v_{i+1}\}\}\in\mathsf{EdgePairs}(C)\), and add \(1\) to \((\{v_{i-1},v_{i+1}\},\mathsf{PaAlloc}_{C}(\{v_{i-1},v_{i+1}\}))\). Then, for every cell \((\mathsf{NeiDub},\mathsf{VerTyp})\) different than \(0\), we add \(\mathsf{CycTyp}\) to \(\mathsf{CycTypS}(\mathsf{VerTyp},\mathsf{NeiDub},j)\). It is easy to see the correctness of this process. Observe that it takes at most \(|\mathsf{CycTypS}|\cdot|\mathsf{VerTypS}|\cdot 2^{2^{|\mathsf{VC}|}}\cdot|\mathsf{VC}|=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\cdot 2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\cdot 2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\cdot|\mathsf{VC}|=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) time. Similarly, we can create \(\mathsf{RobTypS}(\mathsf{VerTyp},\mathsf{NeiDub},j)\), for every \(\mathsf{VerTyp}=(u^{*},\mathsf{NeiDubsets})\in\mathsf{VerTypS}\), every \(\mathsf{NeiDub}\in\mathsf{NeiDubsets}\) and \(1\leq j\leq 2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2}\), in at most \(|\mathsf{RobTypS}|\cdot|\mathsf{VerTypS}|\cdot(2^{|\mathsf{VC}|}+|\mathsf{VC}|^{2})\cdot 2^{2^{|\mathsf{VC}|}}=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) time. Now, each equation takes at most \(|\mathsf{CycTypS}|\cdot|\mathsf{RobTypS}|=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\cdot 2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) time. So, to sum up, there are at most \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\) equations in Equation 3, and they take at most \(2^{2^{\mathcal{O}(|\mathsf{VC}|)}}+2^{2^{\mathcal{O}(|\mathsf{VC}|)}}+2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\cdot 2^{2^{\mathcal{O}(|\mathsf{VC}|)}}=2^{2^{\mathcal{O}(|\mathsf{VC}|)}}\)
time. Similarly, we can create \(\mathsf{RobTypS}(\{u,v\})\) for every \(\{u,v\}\in E\) such that \(u,v\in\mathsf{VC}\), in \(|\mathsf{RobTypS}|\cdot|\mathsf{VC}|\cdot|\mathsf{VC}|=2^{2^{\mathcal{O}(\mathsf{VC })}}\cdot|\mathsf{VC}|\cdot|\mathsf{VC}|=2^{2^{\mathcal{O}(\mathsf{VC})}}\) time. Now, observe that each equation takes at most \(\mathcal{O}(|\mathsf{CycTypS}|\cdot|\mathsf{RobTypS}|)=2^{2^{\mathcal{O}( \mathsf{VC})}}\) time. So, there are \(|\mathsf{VC}|\) equations in Equation 4, and they take \(2^{2^{\mathcal{O}(\mathsf{VC})}}+2^{2^{\mathcal{O}(\mathsf{VC})}}+2^{2^{ \mathcal{O}(\mathsf{VC})}}\cdot|\mathsf{VC}|=2^{2^{\mathcal{O}(\mathsf{VC})}}\) time.
There are at most \(|\mathsf{RobTypS}|\cdot 2|\mathsf{VC}|=2^{2^{\mathcal{O}(\mathsf{VC})}}\) equations in Equation 5. Observe that we can create \(\mathsf{CycTypS}(\mathsf{RobTyp},j)\) for every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\) and for every \(2\leq j\leq 2|\mathsf{VC}|\), in \(|\mathsf{CycTypS}|\cdot|\mathsf{Cyc}|\cdot|\mathsf{RobTypS}|=2^{2^{\mathcal{O}( \mathsf{VC})}}\) time. Now, observe that each equation takes \(\mathcal{O}|\mathsf{CycTypS}|=2^{2^{\mathcal{O}(\mathsf{VC})}}\) time. So, there are \(2^{2^{\mathcal{O}(\mathsf{VC})}}\) equations in Equation 5, and they take \(2^{2^{\mathcal{O}(\mathsf{VC})}}+2^{2^{\mathcal{O}(\mathsf{VC})}}\cdot 2^{ \mathcal{O}(\mathsf{VC})}=2^{2^{\mathcal{O}(\mathsf{VC})}}\) time.
Finally, there are \(|\mathsf{RobTypS}|=2^{2^{\mathcal{O}(\mathsf{VC})}}\) equations in Equation 6. Computing
\(\mathsf{CycBud}(\mathsf{RobTyp})\) for every \(\mathsf{RobTyp}\in\mathsf{RobTypS}\) takes at most \(\mathcal{O}(|\mathsf{RobTypS}|\cdot 2^{\mathcal{O}(\mathsf{VC})})=2^{2^{ \mathcal{O}(\mathsf{VC})}}\) time. Each equation in Equation 6 takes \(\mathcal{O}(|\mathsf{CycTypS}|)=2^{2^{\mathcal{O}(\mathsf{VC})}}\) time. So, there are \(2^{2^{\mathcal{O}(\mathsf{VC})}}\) equations in Equation 6, and they take \(2^{2^{\mathcal{O}(\mathsf{VC})}}+2^{2^{\mathcal{O}(\mathsf{VC})}}\cdot 2^{ \mathcal{O}(\mathsf{VC})}=2^{2^{\mathcal{O}(\mathsf{VC})}}\) time.
Therefore, we get that there are at most \(2^{2^{\mathcal{O}(\mathsf{VC})}}\) variables and \(2^{2^{\mathcal{O}(\mathsf{VC})}}\) equations in \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\). In addition, notice that the coefficients of the equations are bounded by \(\mathsf{max}\{B,k,|V(G)|\}\).
In summary, \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) works in time \(2^{2^{\mathcal{O}(\mathsf{VC})}}+\mathcal{O}(|\mathsf{VC}|\cdot|V(G)|+|E(G)|)\) and returns an instance for the ILP problem of size at most \(2^{2^{\mathcal{O}(\mathsf{VC})}}\cdot 2^{\mathcal{O}(\mathsf{VC})}\cdot\log(k+B+|V(G)|)=2^{2^{ \mathcal{O}(\mathsf{VC})}}\cdot\log(k+B+|V(G)|)\) and at most \(|\mathsf{VerTypS}|+|\mathsf{RobTypS}|+|\mathsf{CycTypS}|\leq 2^{2^{\mathcal{O}( \mathsf{VC})}}\) variables.
Now, we invoke Lemmas 5.27 and 5.50, in order to prove the following corollary:
**Corollary 5.51**.: _There exists an algorithm that solves \(\mathrm{CGE}\) in \(2^{2^{2^{\mathcal{O}(\mathsf{vc})}}}\cdot(\log(k+B+|V(G)|))^{\mathcal{O}(1)}+\mathcal{O}(\mathsf{vc}\cdot|V(G)|+|E(G)|)\) time._
Proof.: Let \((G,v_{\mathsf{init}},k,B)\) be an instance of \(\mathrm{CGE}\). By [21], there exists a \(2\)-approximation for computing a minimum vertex cover of an input graph in linear time. Then, we run \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) with the vertex cover \(\mathsf{VC}\) computed by this algorithm. Observe that Lemma 5.27 concludes the correctness of the algorithm. By Lemma 5.50, \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) runs in time \(2^{2^{\mathcal{O}(\mathsf{VC})}}+\mathcal{O}(|\mathsf{VC}|\cdot|V(G)|+|E(G)|)\), and returns an instance for the ILP problem of size at most \(2^{2^{\mathcal{O}(\mathsf{VC})}}\cdot\log(k+B+|V(G)|)\) and with \(2^{2^{\mathcal{O}(\mathsf{VC})}}\) variables. So, by Theorem 2.8, \(\mathsf{Reduction}(G,v_{\mathsf{init}},k,B)\) can be solved in time \((2^{2^{\mathcal{O}(\mathsf{VC})}})^{2^{\mathcal{O}(\mathsf{VC})}}\cdot 2^{2^{\mathcal{O}(\mathsf{VC})}}\cdot\log(k+B+|V(G)|)^{\mathcal{O}(1)}=2^{2^{2^{\mathcal{O}(\mathsf{VC})}}}\cdot\log(k+B+|V(G)|)^{\mathcal{O}(1)}\). Therefore, the total runtime of the algorithm is \(2^{2^{\mathcal{O}(\mathsf{VC})}}+\mathcal{O}(|\mathsf{VC}|\cdot|V(G)|+|E(G)|)+2^{2^{2^{\mathcal{O}(\mathsf{VC})}}}\cdot(\log(k+B+|V(G)|))^{\mathcal{O}(1)}=2^{2^{2^{\mathcal{O}(\mathsf{VC})}}}\cdot(\log(k+B+|V(G)|))^{\mathcal{O}(1)}+\mathcal{O}(|\mathsf{VC}|\cdot|V(G)|+|E(G)|)\). Now, since \(|\mathsf{VC}|\leq 2\cdot\mathsf{vc}\), we conclude the correctness of the corollary.
Corollary 5.51 concludes the correctness of Theorem 1.1.
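For concreteness, the linear-time \(2\)-approximation for vertex cover invoked in the proof of Corollary 5.51 can be realized, for instance, by the classical maximal-matching heuristic. The sketch below is only an illustration and is not necessarily the algorithm of [21].

```python
def two_approx_vertex_cover(edges):
    """Maximal-matching 2-approximation for vertex cover in O(|V| + |E|) time.

    Scans the edges once; whenever an edge has both endpoints uncovered,
    both endpoints are added to the cover.  The chosen edges form a matching,
    and any vertex cover must contain at least one endpoint of each of them,
    hence the factor-2 guarantee.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover
```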
## 6 Approximation Algorithm with Additive Error of \(\mathcal{O}(\mathsf{vc})\)
In this section, we prove the following result.
There exists an approximation algorithm for \(\mathrm{CGE}\) that runs in time \(\mathcal{O}((|V(G)|+|E(G)|)\cdot k)\), and returns a solution with an additive approximation of \(8\cdot\mathsf{vc}(G)\), where \(G\) is the input graph and \(k\) is the number of robots.
To prove the above theorem, we first give an approximation algorithm that takes a vertex cover \(\mathsf{VC}\) of the input graph as an additional parameter and returns a solution with an additive approximation of \(4|\mathsf{VC}|\).
```
1   function \(\mathsf{ApproxAlg}(\langle G=(V,E),k,v_{\mathsf{init}},\mathsf{VC}\rangle)\);
2   \(\mathsf{VC}^{\prime}\leftarrow\mathsf{VC}\cup\{v_{\mathsf{init}}\}\);
3   Make \(\mathsf{VC}^{\prime}\) connected;
4   \(I\leftarrow V(G)\setminus\mathsf{VC}^{\prime}\);
5   \(\widehat{E}_{\mathsf{IND}}\leftarrow\{\{u,v\}\in E(G)\ |\ u\in I\}\);
6   for every \(u\in I\) with odd degree in \(G\) do
7       Let \(\{u,v\}\in E\);
8       \(\widehat{E}_{\mathsf{IND}}\leftarrow\widehat{E}_{\mathsf{IND}}\cup\{\{u,v\}\}\);
9   end for
10  for every \(1\leq i\leq k\) do
11      \(\widehat{E}_{i}\leftarrow\emptyset\);
12  end for
13  for every \(u\in I\) do
14      while there exists \(\{u,v\}\in\widehat{E}_{\mathsf{IND}}\) do
15          Let \(\{u,v\},\{u,v^{\prime}\}\in\widehat{E}_{\mathsf{IND}}\);
16          Let \(1\leq i\leq k\) with minimum \(|\widehat{E}_{i}|\);
17          \(\widehat{E}_{i}\leftarrow\widehat{E}_{i}\cup\{\{u,v\},\{u,v^{\prime}\}\}\);
18          \(\widehat{E}_{\mathsf{IND}}\leftarrow\widehat{E}_{\mathsf{IND}}\setminus\{\{u,v\},\{u,v^{\prime}\}\}\);
19      end while
20  end for
21  for every \(\{u,v\}\in E\) such that \(u,v\in\mathsf{VC}^{\prime}\) do
22      Let \(1\leq i\leq k\) with minimum \(|\widehat{E}_{i}|\);
23      \(\widehat{E}_{i}\leftarrow\widehat{E}_{i}\cup\{\{u,v\}\}\);
24  end for
25  Let \(T\) be a spanning tree of \(G[\mathsf{VC}^{\prime}]\);
26  for every \(1\leq i\leq k\) do
27      \(\widehat{E}_{i}\leftarrow\widehat{E}_{i}\cup E(T)\);
28  end for
29  for every \(1\leq i\leq k\) do
30      \(\mathsf{MakeVCEvenDeg}(T,\widehat{E}_{i},\mathsf{VC}^{\prime})\);
31  end for
32  for every \(1\leq i\leq k\) do
33      Find an Eulerian cycle \(\mathsf{RC}_{i}\) in \(\mathsf{Graph}(\widehat{E}_{i})\);
34  end for
35  return \(\{\mathsf{RC}_{i}\}_{i=1}^{k}\);
```
**Algorithm 1**\(B+\mathcal{O}(|\mathsf{VC}|)\) Approximation
### The Algorithm
We now describe the stages of our approximation algorithm, whose pseudocode is given as Algorithm 1. Here \(\mathsf{VC}\) is the input vertex cover.
**Lines 2-3: Making the vertex cover connected and adding \(v_{\mathsf{init}}\).** We make \(G[\mathsf{VC}]\) connected by adding at most \(|\mathsf{VC}|-1\) vertices to \(\mathsf{VC}\): We begin with a partition \(V_{1},\ldots V_{t}\) of \(\mathsf{VC}\) into connected components of \(G[\mathsf{VC}]\). Then, until \(V_{1},\ldots V_{t}\) unite into one set we do the following. For every \(v\in V(G)\setminus\mathsf{VC}\), if \(v\) has neighbors in two different sets among \(V_{1},\ldots V_{t}\)
then: (i) we add \(v\) to \(\mathsf{VC}\), and (ii) we unite the sets that have at least one neighbor of \(v\). Since \(G\) is connected, it is easy to see that by the end of the process \(G[\mathsf{VC}]\) is connected. Moreover, observe that this takes \(\mathcal{O}(|V(G)|+|E(G)|)\) runtime, and we add at most \(|\mathsf{VC}|-1\) vertices to the initial vertex cover. In addition, we also add \(v_{\mathsf{init}}\). The new vertex cover we obtained is denoted by \(\mathsf{VC}^{\prime}\). For an example, see Figure 4(b).
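A possible implementation of this merging step is sketched below; the helper names are ours, and the sketch only assumes that \(G\) is given by adjacency sets and that \(\mathsf{VC}\) is a vertex cover of the connected graph \(G\).

```python
class DSU:
    """Union-find over the vertex set (used to track components of G[vc])."""
    def __init__(self, items):
        self.parent = {x: x for x in items}

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)


def make_vc_connected(adj, vc):
    """Grow the vertex cover vc until G[vc] is connected (Lines 2-3).

    adj: dict mapping every vertex to the set of its neighbours.
    Repeatedly adds an outside vertex that has neighbours in two different
    components of G[vc]; since G is connected and vc is a vertex cover,
    the process ends with G[vc] connected, adding at most |vc| - 1 vertices.
    """
    vc = set(vc)
    dsu = DSU(adj)
    for u in vc:                        # components of the initial G[vc]
        for w in adj[u]:
            if w in vc:
                dsu.union(u, w)
    changed = True
    while changed:
        changed = False
        for v in [x for x in adj if x not in vc]:
            roots = {dsu.find(u) for u in adj[v] if u in vc}
            if len(roots) >= 2:         # v bridges two components
                vc.add(v)
                for u in adj[v]:
                    if u in vc:
                        dsu.union(v, u)
                changed = True
    return vc
```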
**Lines 4-9: Making the degree of each vertex in the independent set even.** We ensure that every vertex \(u\) from the independent set \(I=V\setminus\mathsf{VC}^{\prime}\) has even degree as follows. Initially, \(\widehat{E}_{\mathsf{IND}}=\{\{u,v\}\in E\mid u\in I\}\). For every \(u\in I\) with odd degree in \(G\), we simply duplicate an arbitrary edge \(\{u,v\}\) incident to \(u\), that is, we add \(\{u,v\}\) to the multiset \(\widehat{E}_{\mathsf{IND}}\) (e.g., see green edges in Figure 4(c)). Since \(I\) is an independent set, adding \(\{u,v\}\) does not change the degree of other vertices in \(I\).
**Lines 10-24: Balanced partition of the edge set to the \(k\) robots.** First, we partition the edges incident to vertices in the independent set (the multiset \(\widehat{E}_{\mathsf{IND}}\)) into \(k\) multisets (\(\widehat{E}_{i}\) for every \(1\leq i\leq k\)), representing the \(k\) robot cycles under construction. As stated in Lemma 3.4, every such \(\widehat{E}_{i}\) should represent an edge multiset of a multigraph that has an Eulerian cycle, thus representing the edge multiset of a robot cycle-graph. Therefore, we repeatedly take two edges incident to the same vertex in \(I\), to preserve its even degree in the respective \(\widehat{E}_{i}\) (e.g., see Figures 4(d)-4(g)). Next, we partition the edges with both endpoints in \(\mathsf{VC}^{\prime}\). As we seek a partition as "balanced" as possible, we repeatedly add an edge to an \(\widehat{E}_{i}\) with the minimum number of edges. Observe that after this stage, every edge of \(G\) is covered. Intuitively, we added the edges that we "must" add, and the partition is as balanced as it can be (e.g., see Figures 5(a)-5(d)). So, we get that \(\mathsf{max}\{|\widehat{E}_{1}|,|\widehat{E}_{2}|,\ldots,|\widehat{E}_{k}|\}\leq B\), where \(B\) is the optimal budget (this is proved formally in Section 6.3). Still, we have three issues: (i) A multiset \(\widehat{E}_{i}\) might not induce a connected graph. (ii) Vertices from \(\mathsf{VC}^{\prime}\) might have odd degree in the graphs induced by some of the \(\widehat{E}_{i}\)'s. (iii) Graphs induced by some of the \(\widehat{E}_{i}\)'s might not contain \(v_{\mathsf{init}}\).
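The following sketch (with hypothetical names, not the authors' code) illustrates this balanced distribution: edges at an independent-set vertex are handed out in pairs, so that its degree stays even inside every part, and each pair (and afterwards each \(\mathsf{VC}^{\prime}\)-internal edge) goes to a currently smallest part. For brevity it selects the smallest part by scanning, rather than with the constant-time round-robin indexing used in the runtime analysis of Section 6.3.

```python
from collections import defaultdict

def balanced_partition(e_ind, vc_edges, k):
    """Distribute edges among k multisets E_1, ..., E_k (Lines 10-24).

    e_ind:    list of edges (u, v) with u in the independent set I, in which
              every independent-set vertex already has even degree.
    vc_edges: edges with both endpoints in VC'.
    """
    parts = [[] for _ in range(k)]
    incident = defaultdict(list)
    for u, v in e_ind:                  # u is the independent-set endpoint
        incident[u].append((u, v))
    for u, edges in incident.items():
        for j in range(0, len(edges), 2):
            pair = edges[j:j + 2]       # even degree => edges come in pairs
            min(parts, key=len).extend(pair)
    for e in vc_edges:
        min(parts, key=len).append(e)
    return parts
```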
**Lines 25-28: Making the robot cycle-graphs connected and adding \(v_{\mathsf{init}}\) to them by adding a spanning tree of \(G[\mathsf{VC}^{\prime}]\).** We add to each \(\widehat{E}_{i}\) the edges of a spanning tree of \(G[\mathsf{VC}^{\prime}]\). After this stage, we get that \(\mathsf{Graph}(\widehat{E}_{i})\) is connected and \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}_{i}))\), for every \(1\leq i\leq k\) (e.g., see Figures 5(e)-5(h)).
**Lines 29-31: Making the degree of each vertex in \(\mathsf{VC}^{\prime}\) even.** Here, we use Algorithm 2 to get even degree for every vertex in each robot cycle-graph (e.g., see Figures 5(i)-5(k)).
**Lines 32-34: Finding Eulerian cycle in each robot cycle-graph.** Observe that at the beginning of this stage, for every \(1\leq i\leq k\), \(\mathsf{Graph}(\widehat{E}_{i})\) is connected, and each \(u\in V(\mathsf{Graph}(\widehat{E}_{i}))\) has even degree. Therefore, we can find an Eulerian cycle \(\mathsf{RC}_{i}\) in \(\mathsf{Graph}(\widehat{E}_{i})\).
### Algorithm 2: Making the Degree of Each Vertex in \(\mathsf{VC}^{\prime}\) Even
We now describe and prove the correctness of Algorithm 2. This algorithm gets as input a spanning tree \(T\) of \(G[\mathsf{VC}^{\prime}]\), one of the multisets \(\widehat{E}_{i}\), denoted by \(\widehat{E}\), and the vertex cover \(\mathsf{VC}^{\prime}\) from Algorithm 1. In every iteration, while \(|V(T)|\geq 2\), the algorithm chooses a leaf \(u\) in \(T\). If \(u\) has even degree in \(\mathsf{Graph}(\widehat{E})\), we delete \(u\) from \(T\); otherwise, \(u\) has odd degree in \(\mathsf{Graph}(\widehat{E})\), so we add \(\{u,v\}\) to \(\widehat{E}\), where \(v\) is the neighbor of \(u\) in \(T\), and then we delete \(u\) from \(T\). Observe that the degree of \(u\) in \(\mathsf{Graph}(\widehat{E})\) does not change after we delete it from \(T\). Next, we prove that after the computation of Algorithm 2, the degree of each vertex in \(\mathsf{Graph}(\widehat{E})\) is even:
**Lemma 6.1**.: _Let \(1\leq i\leq k\), let \(\widehat{E}_{i}\) be the multiset obtained from Algorithm 1 in Line 28, let \(\mathsf{VC}^{\prime}\) be the vertex cover computed by Algorithm 1, and let \(T\) be a spanning tree of \(G[\mathsf{VC}^{\prime}]\). Then, \(\mathsf{MakeVCEvenDeg}(T,\widehat{E}_{i},\mathsf{VC}^{\prime})\) runs in time \(\mathcal{O}(|\mathsf{VC}^{\prime}|)\), adds at most \(|\mathsf{VC}^{\prime}|\) edges to \(\widehat{E}_{i}\), and by its end every vertex in \(\mathsf{Graph}(\widehat{E}_{i})\) has even degree._
Proof.: First, observe that every \(u\in V\setminus\mathsf{VC}^{\prime}\) has even degree in \(\mathsf{Graph}(\widehat{E})\), and since we do not add any edges incident to \(u\), its degree is even through all the stages of the algorithm. Now, we show by induction on the number of iterations of the while loop in Line 3 that, at the beginning of any iteration, the following conditions hold:
1. \(T^{\prime}\) is a tree.
2. Every \(u\in V(G)\setminus V(T^{\prime})\) has even degree in \(\mathsf{Graph}(\widehat{E})\).
For the base case, notice that at the beginning of the first iteration, \(T^{\prime}\) is a spanning tree of \(G[\mathsf{VC}^{\prime}]\), and \(V(T^{\prime})=\mathsf{VC}^{\prime}\). So, from the observation stated in the beginning of this proof, we get that both conditions hold.
Now, let \(i>1\) be the \(i\)-th iteration of the while loop. From the inductive hypothesis, the conditions hold in the beginning of the \((i-1)\)-th iteration. We show that they hold at the end of the \((i-1)\)-th iteration, and so they hold in the beginning of the \(i\)-th iteration. From the condition of the while loop, we get that \(T^{\prime}\) has at least two vertices, so there is a leaf in \(T^{\prime}\). We have the following two cases:
**Case 1.** If \(u\) has even degree in \(\mathsf{Graph}(\widehat{E})\), then the algorithm deletes \(u\) from \(T^{\prime}\). Now, \(u\) is added to \(V(G)\setminus V(T^{\prime})\) and its degree is even in \(\mathsf{Graph}(\widehat{E})\). The degrees of the rest of the vertices from \(V(G)\setminus V(T^{\prime})\) in \(\mathsf{Graph}(\widehat{E})\) stay unchanged. Therefore, we get that the second condition holds.
**Case 2.** In this case, \(u\) has odd degree in \(\mathsf{Graph}(\widehat{E})\). Now, the algorithm adds \(\{u,v\}\) to \(\widehat{E}\), where \(v\) is the neighbor of \(u\) in \(T^{\prime}\), so now \(u\) has even degree in \(\mathsf{Graph}(\widehat{E})\). The degrees of the rest of the vertices from \(V(G)\setminus V(T^{\prime})\) in \(\mathsf{Graph}(\widehat{E})\) stay unchanged. Thus, we get that the second condition holds.
In addition, observe that, \(u\) is a leaf. So, in both cases, \(T^{\prime}\) remains connected after deleting \(u\) from it, so \(T^{\prime}\) remains a tree. Therefore, in both cases, both the conditions hold. This ends the proof of the inductive hypothesis.
Now, the while loop ends when \(T^{\prime}\) has less than two vertices. In this case, \(T^{\prime}\) is an isolated vertex \(u\). By the inductive hypothesis we proved, every \(v\in V(G)\setminus V(T^{\prime})=V(G)\setminus\{u\}\) has even degree in \(\mathsf{Graph}(\widehat{E})\). Since the number of vertices with odd degree is even in every graph, we get that the degree of \(u\) in \(\mathsf{Graph}(\widehat{E})\) is even. Therefore, by the end of the algorithm, every \(u\in V\) has even degree in \(\mathsf{Graph}(\widehat{E})\).
Now, in every iteration of the while loop, we delete a vertex from \(T^{\prime}\), so we have at most \(|\mathsf{VC}^{\prime}|\) iterations. The rest of the calculations are done in \(\mathcal{O}(1)\) runtime, so the runtime of the algorithm is \(\mathcal{O}(|\mathsf{VC}^{\prime}|)\). At every iteration of the while loop we add at most one edge to \(\widehat{E}\), so the algorithm adds at most \(|\mathsf{VC}^{\prime}|\) edges to \(\widehat{E}\). This ends the proof.
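Since the pseudocode of Algorithm 2 is not reproduced here, the following Python sketch is our own rendering of the leaf-peeling procedure described and analyzed above. Keeping one designated root vertex until the end is a convenience choice, and the data representation (adjacency sets for \(T\), a counter of edges for \(\widehat{E}\)) is an assumption.

```python
from collections import Counter, defaultdict

def make_vc_even_deg(tree_adj, e_hat, root):
    """Algorithm 2 (MakeVCEvenDeg): make every vertex of Graph(e_hat) have even degree.

    tree_adj: adjacency sets of a spanning tree T of G[VC'] (modified in place).
    e_hat:    multiset of edges, e.g. a Counter keyed by frozenset({u, v}).
    root:     a fixed vertex of T that is peeled last.
    """
    degree = defaultdict(int)
    for edge, mult in e_hat.items():
        for x in edge:
            degree[x] += mult
    leaves = [u for u in tree_adj if u != root and len(tree_adj[u]) == 1]
    while leaves:
        u = leaves.pop()
        (parent,) = tree_adj[u]          # u is a leaf: unique neighbour in T
        if degree[u] % 2 == 1:           # fix u's parity with the tree edge
            e_hat[frozenset((u, parent))] += 1
            degree[u] += 1
            degree[parent] += 1
        tree_adj[parent].discard(u)      # delete u from T
        del tree_adj[u]
        if parent != root and len(tree_adj[parent]) == 1:
            leaves.append(parent)
    return e_hat
```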
### Correctness and Running Time of Algorithm 1
We aim to prove the correctness of the following lemma:
**Lemma 6.2**.: _Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Then, Algorithm 1 with the input \((G,k,v_{\mathsf{init}},\mathsf{VC})\) runs in time \(\mathcal{O}((|V(G)|+|E(G)|)\cdot k)\) and returns a solution \(\{\mathsf{RC}_{i}\}_{i=1}^{k}\) for \(\mathrm{CGE}\) with \(k\) agents such that \(\mathsf{Val}(\{\mathsf{RC}_{i}\}_{i=1}^{k})\leq B+4|\mathsf{VC}|\), where \(B\) is the minimum budget for the instance \((G,k,v_{\mathsf{init}})\)._
We split Lemma 6.2, into two lemmas: Lemma 6.3 where we prove the correctness of Algorithm 1, and Lemma 6.5 where we analyze its runtime.
**Lemma 6.3**.: _Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Then, Algorithm 1 with the input \((G,k,v_{\mathsf{init}},\mathsf{VC})\) returns a solution \(\{\mathsf{RC}_{i}\}_{i=1}^{k}\) for \(\mathrm{CGE}\) with \(k\) agents such that \(\mathsf{Val}(\{\mathsf{RC}_{i}\}_{i=1}^{k})\leq B+4|\mathsf{VC}|\), where \(B\) is the minimum budget for the instance \((G,k,v_{\mathsf{init}})\)._
Towards the proof of Lemma 6.3, first, we prove that Algorithm 1 returns a solution for \(\mathrm{CGE}\) with \(k\) agents:
**Lemma 6.4**.: _Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Then, Algorithm 1 with the input \((G,k,v_{\mathsf{init}},\mathsf{VC})\) returns a solution \(\{\mathsf{RC}_{i}\}_{i=1}^{k}\) for \(\mathrm{CGE}\) with \(k\) agents._
Proof.: We prove that \(\{\mathsf{RC}_{i}\}_{i=1}^{k}\) is a solution for \(\mathrm{CGE}\) with \(k\) agents for the instance \((G,k,v_{\mathsf{init}})\). We begin by showing that for every \(1\leq i\leq k\), \(\mathsf{RC}_{i}\) is a robot cycle. From Lemma 6.1, we get that after Line 30 in Algorithm 1, each \(u\in V(\mathsf{Graph}(\widehat{E}_{i}))\) has even degree. In addition, \(\widehat{E}_{i}\) contains the edge set of a spanning tree of \(G[\mathsf{VC}^{\prime}]\), therefore \(\mathsf{Graph}(\widehat{E}_{i})\) is connected. Thus, there exists an Eulerian cycle in \(\mathsf{Graph}(\widehat{E}_{i})\), and so \(\mathsf{RC}_{i}\), constructed in Line 33, is well defined. Moreover, observe that \(v_{\mathsf{init}}\in V(\mathsf{Graph}(\widehat{E}_{i}))\). So, by Observation 3, \(\mathsf{RC}_{i}\) is a robot cycle in \(G\). Now, since every edge belongs to at least one \(\widehat{E}_{i}\), it holds that \(E(G)\subseteq\widehat{E}_{1}\cup\ldots\cup\widehat{E}_{k}\), thus \(E(\mathsf{RC}_{1})\cup E(\mathsf{RC}_{2})\cup\ldots\cup E(\mathsf{RC}_{k})=E(G)\). So, \(\{\mathsf{RC}_{i}\}_{i=1}^{k}\) is a solution for \(\mathrm{CGE}\) with \(k\) agents for the instance \((G,k,v_{\mathsf{init}})\).
Now, we prove the correctness of Lemma 6.3.
Proof.: In Lemma 6.4 we proved that Algorithm 1 with the input \((G,k,v_{\mathsf{init}},\mathsf{VC})\) returns a solution \(\{\mathsf{RC}_{i}\}_{i=1}^{k}\) for \(\mathrm{CGE}\) with \(k\) agents.
Now, let \(\{\mathsf{RC}^{\prime}_{i}\}_{i=1}^{k}\) be a solution for the instance \((G,k,v_{\mathsf{init}})\) with minimum \(\mathsf{Val}(\{\mathsf{RC}^{\prime}_{i}\}_{i=1}^{k})\). Let \(\mathsf{IND}=V(G)\setminus\mathsf{VC}^{\prime}\). By Observation 3, for every \(1\leq i\leq k\), \(\mathsf{RC}^{\prime}_{i}\) is an Eulerian cycle in the robot cycle-graph of \(\mathsf{RC}^{\prime}_{i}\). Therefore, each vertex in the robot cycle-graph of \(\mathsf{RC}^{\prime}_{i}\) has even degree. Notice that \(\widehat{E}_{\mathsf{IND}}\), defined in Lines 5-9 in Algorithm 1, is a multiset of minimum size such that (i) \(\{\{u,v\}\in E(G)\mid u\in\mathsf{IND}\}\subseteq\widehat{E}_{\mathsf{IND}}\) and (ii) each \(u\in\mathsf{IND}\) has even degree in \(\mathsf{Graph}(\widehat{E}_{\mathsf{IND}})\). So, the total number of edges (with repetition) with an endpoint in \(\mathsf{IND}\) in \(\mathsf{RC}^{\prime}_{1},\ldots,\mathsf{RC}^{\prime}_{k}\) is greater or equal to \(|\widehat{E}_{\mathsf{IND}}|\). Furthermore, in Lines 21-24 in Algorithm 1, every edge with both endpoints in \(\mathsf{VC}^{\prime}\) is added to exactly one \(\widehat{E}_{i}\). In addition, observe that we
allocate, in each iteration of the loops in Lines 14 and 21, the edges to a multiset with minimum elements. Therefore, we get that for the multisets \(\widehat{E}_{i}\), for every \(1\leq i\leq k\), defined up until Line 24, it follows that \(\mathsf{max}\{|\widehat{E}_{1}|,|\widehat{E}_{2}|,\ldots,|\widehat{E}_{k}|\} \leq\mathsf{max}\{|E(\mathsf{RC}^{\prime}_{1})|,|E(\mathsf{RC}^{\prime}_{2})|, \ldots,|E(\mathsf{RC}^{\prime}_{k})|\}\). Now, in Line 27, we add the edges of a spanning tree of \(G[\mathsf{VC}^{\prime}]\) to \(\widehat{E}_{i}\), for every \(1\leq i\leq k\), so we add \(|\mathsf{VC}^{\prime}|\) edges. In Line 30, by Lemma 6.1, we add at most \(|\mathsf{VC}^{\prime}|\) additional edges to \(\widehat{E}_{i}\), for every \(1\leq i\leq k\). Recall that \(\mathsf{VC}^{\prime}\) is obtained from \(\mathsf{VC}\) by adding at most \(|\mathsf{VC}|-1\) vertices to make \(G[\mathsf{VC}^{\prime}]\) connected, and by adding \(v_{\mathsf{init}}\). So, \(|\mathsf{VC}^{\prime}|\leq 2|\mathsf{VC}|\). Overall, we get that \(\mathsf{Val}(\{\mathsf{RC}_{i}\}_{i=1}^{k})\leq\mathsf{max}\{|E(\mathsf{RC}^ {\prime}_{1})|,|E(\mathsf{RC}^{\prime}_{2})|,\ldots,|E(\mathsf{RC}^{\prime}_{ k})|\}+2|\mathsf{VC}^{\prime}|=B+2|\mathsf{VC}^{\prime}|\leq B+4|\mathsf{VC}|\). This ends the proof.
**Lemma 6.5**.: _Let \(G\) be a connected graph, let \(v_{\mathsf{init}}\in V(G)\), let \(k\in\mathbb{N}\) and let \(\mathsf{VC}\) be a vertex cover of \(G\). Then, Algorithm 1 with the input \((G,k,v_{\mathsf{init}},\mathsf{VC})\) runs in time \(\mathcal{O}((|V(G)|+|E(G)|)\cdot k)\)._
Proof.: We analyze the runtime of Algorithm 1 as follows. Making \(\mathsf{VC}^{\prime}\) connected in Line 3 takes \(\mathcal{O}(|V(G)|+|E(G)|)\) time. In Line 16, to find \(1\leq i\leq k\) such that \(|\widehat{E}_{i}|\) is minimum, we do as follows: at the \(j\)-th iteration, we take \(i=j\ (\mathrm{mod}\ k)+1\). Thus, there are at most \(\mathcal{O}(|E(G)|)\) iterations, and every iteration takes \(\mathcal{O}(1)\) time. We denote by \(t\) the last index that receives a pair of edges in Line 16. Observe that by the end of Line 20, \(|\widehat{E}_{1}|=\cdots=|\widehat{E}_{t}|=|\widehat{E}_{t+1}|+2=\cdots=|\widehat{E}_{k}|+2\). Then, in Line 22, to find \(1\leq i\leq k\) such that \(|\widehat{E}_{i}|\) is minimum, we first take \(i=t+1,t+2,\ldots,k\); then we add the next edges to the sets in reverse order, that is, from \(k\) down to \(1\), and so on. Observe that the chosen \(i\) is indeed such that \(|\widehat{E}_{i}|\) is minimum. Therefore, there are at most \(|E(G)|\) iterations in Line 20, each taking \(\mathcal{O}(1)\) time. Finding a spanning tree of \(G[\mathsf{VC}^{\prime}]\) in Line 25 takes \(\mathcal{O}(|V(G)|+|E(G)|)\) time [5]. By Lemma 6.1, each iteration in Line 30 takes \(\mathcal{O}(|\mathsf{VC}|)\) time, so in total, all the \(k\) iterations take \(\mathcal{O}(k\cdot|\mathsf{VC}|)\) time. Finding an Eulerian cycle in Line 33 in each \(\mathsf{Graph}(\widehat{E}_{i})\) takes \(\mathcal{O}(|E(G)|)\) time [12], so in total, the \(k\) iterations take \(\mathcal{O}((|V(G)|+|E(G)|)\cdot k)\) time. Therefore, Algorithm 1 runs in time \(\mathcal{O}((|V(G)|+|E(G)|)\cdot k)\). This completes the proof.
The proofs of Lemmas 6.3 and 6.5 conclude the correctness of Lemma 6.2.
Note that there exists a \(2\)-approximation for computing a minimum vertex cover of an input graph [21]. This result, together with Lemma 6.2, concludes the correctness of the theorem stated at the beginning of this section.
## 7 W[1]-Hardness for \(\mathrm{Cge}\)
In this section, we aim to prove the following theorem:
**Theorem 1.3**.: Cge _is W[1]-hard with respect to \(k\) even on trees whose treedepth is bounded by \(3\)._
We prove Theorem 1.3 by showing a reduction from Exact Bin Packing (see Definition 2.1).
First, we show that unary Exact Bin Packing is W[1]-hard with respect to \(k\). It is known that unary Bin Packing is W[1]-hard with respect to \(k\)[16]. So, we give a reduction from Bin Packing to Exact Bin Packing in order to prove the following lemma:
**Lemma 7.1**.: _Unary Exact Bin Packing is W[1]-hard with respect to \(k\)._
Proof.: Let \((I,s,B,k)\) be an instance of the Bin Packing problem. Let \(t=B\cdot k-\sum_{i\in I}s(i)\) and let \(s^{\prime}:I\cup\{i_{1},\ldots,i_{t}\}\to\mathbb{N}\) be a function defined as follows. For every \(i\in I\), \(s^{\prime}(i)=s(i)\), and for every \(i_{j}\in\{i_{1},\ldots,i_{t}\}\), \(s^{\prime}(i_{j})=1\). Observe that \((I\cup\{i_{1},\ldots,i_{t}\},s^{\prime},B,k)\) is an instance of
Exact Bin Packing. We show that \((I,s,B,k)\) is a yes-instance of Bin Packing if and only if \((I\cup\{i_{1},\ldots,i_{t}\},s^{\prime},B,k)\) is a yes-instance of Exact Bin Packing.
Assume that \((I,s,B,k)\) is a yes-instance of Bin Packing. Let \(I_{1},\ldots,I_{k}\) be a partition of \(I\) into disjoint sets such that for every \(1\leq j\leq k\), \(\sum_{i\in I_{j}}s(i)\leq B\). For every \(1\leq j\leq k\), let \(t_{j}=B-\sum_{i\in I_{j}}s(i)\). Let \(I^{\prime}_{1},\ldots,I^{\prime}_{k}\) be a partition of \(\{i_{1},\ldots,i_{t}\}\) into \(k\) disjoint sets such that for every \(1\leq j\leq k\), \(|I^{\prime}_{j}|=t_{j}\). Observe that there exists such a partition since \(\sum_{1\leq j\leq k}t_{j}=t\). Clearly, \(I_{1}\cup I^{\prime}_{1},\ldots,I_{k}\cup I^{\prime}_{k}\) is a partition of \(I\cup\{i_{1},\ldots,i_{t}\}\) into disjoint sets such that for every \(1\leq j\leq k\), \(\sum_{i\in I_{j}\cup I^{\prime}_{j}}s(i)=B\). Therefore, \((I\cup\{i_{1},\ldots,i_{t}\},s^{\prime},B,k)\) is a yes-instance of Exact Bin Packing.
Now, assume that \((I\cup\{i_{1},\ldots,i_{t}\},s^{\prime},B,k)\) is a yes-instance of Exact Bin Packing. Let \(I_{1},\ldots,I_{k}\) be a partition of \(I\cup\{i_{1},\ldots,i_{t}\}\) into disjoint sets such that for every \(1\leq j\leq k\), \(\sum_{i\in I_{j}}s^{\prime}(i)=B\). Observe that \(I_{1}\setminus\{i_{1},\ldots,i_{t}\},\ldots,I_{k}\setminus\{i_{1},\ldots,i_{t}\}\) is a partition of \(I\) into disjoint sets such that for every \(1\leq j\leq k\), \(\sum_{i\in I_{j}\setminus\{i_{1},\ldots,i_{t}\}}s(i)\leq B\). Therefore, \((I,s,B,k)\) is a yes-instance of Bin Packing.
Clearly, the reduction works in polynomial time when the input is in unary. Thus, since unary Bin Packing is W[1]-hard with respect to \(k\)[16], unary Exact Bin Packing is W[1]-hard with respect to \(k\).
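The padding step of this reduction is straightforward to state programmatically; the sketch below uses hypothetical helper names and simply adds \(t\) fresh unit-size items.

```python
def bin_packing_to_exact(items, sizes, B, k):
    """Pad (items, sizes, B, k) with unit items so the total size is exactly B*k.

    sizes: dict mapping each item to its size (unary-encoded in the hardness proof).
    If the total size already exceeds B*k, the Bin Packing instance is a
    trivial no-instance and no padding is performed.
    """
    t = B * k - sum(sizes[i] for i in items)
    if t < 0:
        raise ValueError("total size exceeds B*k: trivial no-instance")
    padded_items = list(items) + [('pad', j) for j in range(1, t + 1)]
    padded_sizes = dict(sizes)
    padded_sizes.update({('pad', j): 1 for j in range(1, t + 1)})
    return padded_items, padded_sizes
```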
### Reduction From Exact Bin Packing to Cge
Given an instance \((I,s,B,k)\) of the Exact Bin Packing problem, denote by \(\mathsf{BinToRob}(I,s,B,k)\) the instance of CGE defined as follows. First, we construct the graph \(T\) as follows. For each \(i\in I\) we create a star with \(s(i)-1\) leaves. We connect each such star with an edge to a vertex \(r\). Formally, \(V(T)=\{v^{i},v^{i}_{1},\ldots,v^{i}_{s(i)-1}\mid i\in I\}\cup\{r\}\) and \(E(T)=\{\{v^{i},v^{i}_{j}\}\mid i\in I,1\leq j\leq s(i)-1\}\cup\{\{r,v^{i}\}\mid i\in I\}\). Now, we define \(\mathsf{BinToRob}(I,s,B,k)=(T,r,k,2B)\). See Figure 8 for an example. Next, we prove the correctness of the reduction:
**Lemma 7.2**.: _Let \((I,s,B,k)\) be an instance of Exact Bin Packing. Then, \((I,s,B,k)\) is a yes-instance if and only if \(\mathsf{BinToRob}(I,s,B,k)\) is a yes-instance of CGE._
Figure 8: An illustration of an Exact Bin Packing instance, a solution (in sub-figure (a)), and the equivalent instance of CGE constructed by the \(\mathsf{BinToRob}\) function (in sub-figure (b)).

Proof.: First, assume that \((I,s,B,k)\) is a yes-instance. Let \(I_{1},\ldots,I_{k}\) be a partition of \(I\) into disjoint sets such that for every \(1\leq j\leq k\), \(\sum_{i\in I_{j}}s(i)=B\). We prove that \(\mathsf{BinToRob}(I,s,B,k)=(T,r,k,2B)\) is a yes-instance of CGE, by showing that there exist \(k\) multisets \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) such that the conditions of Lemma 3.4 are satisfied. For every \(1\leq j\leq k\), let \(\widehat{E}_{j}=\{\{v^{i},v_{t}^{i}\},\{v^{i},v_{t}^{i}\}\mid i\in I_{j},1\leq t\leq s(i)-1\}\cup\{\{v^{i},r\},\{v^{i},r\}\mid i\in I_{j}\}\). Clearly, \(r\in V(\mathsf{Graph}(\widehat{E}_{j}))\), \(\mathsf{Graph}(\widehat{E}_{j})\) is connected, and every vertex in \(\mathsf{Graph}(\widehat{E}_{j})\) has even degree. Therefore, Conditions 1 and 2 are satisfied. In addition, since \(I=I_{1}\cup\ldots\cup I_{k}\), we have that \(E\subseteq\widehat{E}_{1}\cup\ldots\cup\widehat{E}_{k}\), so Condition 3 is satisfied. Now, for every \(1\leq j\leq k\), \(|\widehat{E}_{j}|=|\{\{v^{i},v_{t}^{i}\},\{v^{i},v_{t}^{i}\}\mid i\in I_{j},1\leq t\leq s(i)-1\}\cup\{\{v^{i},r\},\{v^{i},r\}\mid i\in I_{j}\}|=\sum_{i\in I_{j}}2(s(i)-1)+\sum_{i\in I_{j}}2=2\sum_{i\in I_{j}}s(i)=2B\). Thus, Condition 4 is satisfied. Therefore, all the conditions of Lemma 3.4 are satisfied, so \(\mathsf{BinToRob}(I,s,B,k)\) is a yes-instance of CGE.
Now, we prove the reverse direction. Assume that \(\mathsf{BinToRob}(I,s,B,k)=(T,r,k,2B)\) is a yes-instance of CGE. From Lemma 3.4, there exist \(k\) multisets \(\widehat{E}_{1},\ldots,\widehat{E}_{k}\) such that the conditions of Lemma 3.4 hold. Let \(1\leq j\leq k\). We first show that every \(\{u,v\}\in\widehat{E}_{j}\) appears at least twice in \(\widehat{E}_{j}\). Let \(\{u,v\}\in\widehat{E}_{j}\). We have the following two cases:
**Case 1: \(\{u,v\}=\{v^{i},v_{t}^{i}\}\) for some \(i\in I\) and \(1\leq t\leq s(i)-1\).** From Condition 2 of Lemma 3.4, \(v_{t}^{i}\) has even degree in \(\mathsf{Graph}(\widehat{E}_{j})\). Since \(\{v^{i},v_{t}^{i}\}\) is the only edge having \(v_{t}^{i}\) as an endpoint in \(T\), \(\{v^{i},v_{t}^{i}\}\) appears an even number of times in \(\widehat{E}_{j}\), and so it appears at least twice in \(\widehat{E}_{j}\).
**Case 2: \(\{u,v\}=\{v^{i},r\}\) for some \(i\in I\).** From Condition 2 of Lemma 3.4, \(v^{i}\) has even degree in \(\mathsf{Graph}(\widehat{E}_{j})\). From Case 1, each \(\{v^{i},v_{t}^{i}\}\in\widehat{E}_{j}\) appears an even number of times in \(\widehat{E}_{j}\). Therefore, since \(r\) is the only neighbor of \(v^{i}\) other than \(v_{t}^{i}\), \(1\leq t\leq s(i)-1\), \(\{v^{i},r\}\) appears an even number of times, which is greater or equal to \(2\), in \(\widehat{E}_{j}\).
Now, observe that \(|E(T)|=\sum_{i\in I}s(i)=B\cdot k\), and from Condition 4 of Lemma 3.4, \(\sum_{1\leq j\leq k}|\widehat{E}_{j}|\leq 2B\cdot k\). In addition, from Condition 3 of Lemma 3.4, (1): \(E(T)\subseteq\widehat{E}_{1}\cup\ldots\cup\widehat{E}_{k}\). So, since we have already proved that for every \(1\leq j\leq k\), each \(\{u,v\}\in\widehat{E}_{j}\) appears at least twice in \(\widehat{E}_{j}\), we get that for every \(1\leq j<j^{\prime}\leq k\), \(\widehat{E}_{j}\cap\widehat{E}_{j^{\prime}}=\emptyset\), and \(\sum_{1\leq j\leq k}|\widehat{E}_{j}|=2B\cdot k\); in turn, for every \(1\leq j\leq k\), \(|\widehat{E}_{j}|=2B\), and each \(\{u,v\}\in\widehat{E}_{j}\) appears exactly twice in \(\widehat{E}_{j}\). Moreover, from Conditions 1 and 2 of Lemma 3.4, for every \(1\leq j\leq k\), \(r\in V(\mathsf{Graph}(\widehat{E}_{j}))\) and \(\mathsf{Graph}(\widehat{E}_{j})\) is connected. Therefore, for every \(1\leq j\leq k\) and \(i\in I\), if \(v^{i}\in V(\mathsf{Graph}(\widehat{E}_{j}))\) then \(\{\{v^{i},v_{t}^{i}\}\ |\ 1\leq t\leq s(i)-1\}\cup\{\{r,v^{i}\}\}\subseteq\widehat{E}_{j}\). Thus, for every \(1\leq j<j^{\prime}\leq k\), (2): \(V(\mathsf{Graph}(\widehat{E}_{j}))\cap V(\mathsf{Graph}(\widehat{E}_{j^{\prime}}))=\{r\}\).
Now, for every \(1\leq j\leq k\), let \(I_{j}=\{i\in I\ |\ v^{i}\in V(\mathsf{Graph}(\widehat{E}_{j}))\}\). By (1) and (2), \(I_{1},\ldots,I_{k}\) is a partition of \(I\) into disjoint sets. We show that for every \(1\leq j\leq k\), \(\sum_{i\in I_{j}}s(i)=B\). Let \(1\leq j\leq k\). Then, \(\sum_{i\in I_{j}}s(i)=\sum_{i\in I_{j}}|\{\{v^{i},v^{i}_{t}\}\ |\ 1\leq t\leq s(i)-1\}\cup\{\{r,v^{i}\}\}|=\frac{1}{2}|\widehat{E}_{j}|=\frac{1}{2}\cdot 2B=B\). Therefore \(I_{1},\ldots,I_{k}\) is a solution for \((I,s,B,k)\), so \((I,s,B,k)\) is a yes-instance of the Exact Bin Packing problem. This ends the proof.
Clearly, the reduction works in polynomial time when the input is in unary. In addition, observe that the treedepth of the tree, obtained by the reduction, is bounded by \(3\). Now, recall that, by Lemma 7.1, unary Exact Bin Packing is W[1]-hard with respect to \(k\). Thus, we conclude from Lemma 7.2 the correctness of Theorem 1.3. |
2302.05691 | Methods of generating soft topologies and soft separation axioms | The paper develops a novel analysis of mutual interactions between topology
and soft topology. It is known that each soft topology produces a system of
crisp (parameterized) topologies. The other way round is also possible. Namely,
one can generate a soft topology from a system of crisp topologies. Different
methods of producing soft topologies are discussed by implementing two
formulas. Then, the relationships between the resulting soft topologies are
obtained. With the help of an example, it is demonstrated that one formula is
more constructible than the other. Now, it is reasonable to ask which
(topological) properties of a soft topology can be transferred to the set of
crisp topologies or the opposite. To address this question, we consider the
standard separation axioms and show how well these axioms can be preserved when
moving from a system of crisp topologies to the soft topology generated by it
and contrariwise. Additionally, our findings extend and disprove some results
from the literature. | Zanyar A. Ameen, Baravan A. Asaad | 2023-02-11T13:20:03Z | http://arxiv.org/abs/2302.05691v2 | # Methods of generating soft topologies and soft separation axioms
###### Abstract
The paper develops a novel analysis of mutual interactions between topology and soft topology. It is known that each soft topology produces a system of crisp (parameterized) topologies. The other way round is also possible. Namely, one can generate a soft topology from a system of crisp topologies. Different methods of producing soft topologies are discussed by implementing two formulas. Then, the relationships between the resulting soft topologies are obtained. With the help of an example, it is demonstrated that one formula is more constructible than the other. Now, it is reasonable to ask which (topological) properties of a soft topology can be transferred to the set of crisp topologies, or the opposite. To address this question, we consider the standard separation axioms and show how well these axioms can be preserved when moving from a system of crisp topologies to the soft topology generated by it and contrariwise. Additionally, our findings extend and disprove some results from the literature.
**Key words and phrases:** soft topology, soft \(T_{0}\), soft \(T_{1}\), soft \(T_{2}\), soft regular, soft normal, soft \(T_{3}\), soft \(T_{4}\).
**2020 MSC:** 54A05, 54H99.
## 1 Introduction
In its modern version, the Weierstrass Extreme Value Theorem demonstrates that topological considerations can be useful in decision-making theory and economics, (see, [8]). Indeed, the development of topological structures helps to enhance other disciplines.
General topology is the mathematical branch of topology that concerns itself with the foundational set-theoretic notions and constructions. Motivated by the standard axioms of classical topological space, Shabir and Naz [31], and Cagman et al. [20], separately, introduced another branch of topology named "soft topology."
Soft topology is the combination of soft set theory and topology. It is focused on the construction of the system of all soft sets.
Soft sets were presented as a collection of relevant parameters to characterize a universe of possibilities. Soft set theory has been a fruitful area of study and connection with various disciplines since its establishment. Molodtsov [27], in 1999, originated soft set theory as a mathematical tool for dealing with uncertainty which is free of the challenges associated with other theories such as fuzzy set theory [34], rough set theory [30], and so on. In particular, the nature of parameter sets associated with soft sets provides a standardized foundation for modeling uncertain data. This leads to the rapid growth of soft set theory and soft topology in a short amount of time and provides various applications of soft sets in real life.
There are various studies that have made significant contributions to the development of soft topology since its foundation in [20, 31]. A soft topological approach was then used to interpret the behavior of the most fundamental concepts in (general) topology. To be specific, soft compactness [18], soft connectedness [23], soft extremal disconnectedness [17], soft submaximality [1], soft simple extendedness [11], and soft continuity [28].
Different methods of generating soft topologies on a common universal set were discussed in [6, 7, 8, 19, 28, 32, 35].
Soft continuity of mappings has been widely generalized to diverse classes, including soft semi-continuity [24], soft \(\beta\)-continuity [33], soft SD-continuity [4], soft somewhat continuity [14] and soft \(\mathcal{U}\)-continuity [10].
Soft separation axioms are another significant aspect in the late development of soft topology; see for example [3, 5, 13, 21, 15, 16, 26, 31].
Two remarkable formulas for generating soft topologies from a system of crisp topologies have been given by Terepeta [32]. One of the formulas (Formula 2) is said to generate a single set soft topology, while the other one generates a more general soft topology (Formula 1). Terepeta mainly applied Formula 2 to study the inheritance of soft separation axioms after the system of crisp topologies. Recently, Alcantud [7] proposed a slight extension of Formula 1 (we also call it Formula 1). He then employed such a formula to investigate the behavior of separability and second countability axioms between a system of crisp topologies and the soft topology generated by it. Very recently, Alcantud [8] established crucial relationships between soft and fuzzy soft topologies. The work of Terepeta and Alcantud inspired us to attempt this research. Following their direction, we first apply the formulas to the system of crisp topologies taken from a soft topology in order to determine the connections between the obtained soft topologies and the original one. In addition, we use Formula 1 to verify how well the separation axioms are transferred between a system of crisp topologies and the soft topology that it generates. The latter statement extends the work of Terepeta (see, Section 2.1 in [32]), which is the main objective of this research.
## 2 Preliminaries
Let \(X\) be an initial universe, \(\mathcal{P}(X)\) be all subsets of \(X\) and \(E\) be a set of parameters. An ordered pair \((F,E)=\{(e,F(e)):e\in E\}\) is said to be a soft set over \(X\), where \(F:E\rightarrow\mathcal{P}(X)\) is a set value mapping. The family of all soft sets on \(X\) is represented by \(S_{E}(X)\). A soft point
[31] is a soft set \((F,E)\) over \(X\) in which \(F(e)=\{x\}\) for each \(e\in E\), where \(x\in X\), and is denoted by \((\{x\},E)\). It is said that a soft point \((\{x\},E)\) is in \((F,E)\) (briefly, \(x\in(F,E)\)) if \(x\in F(e)\) for each \(e\in E\). On the other hand, \(x\notin(F,E)\) if \(x\notin F(e)\) for some \(e\in E\). This implies that if \((\{x\},E)\widetilde{\bigcap}(F,E)=\widetilde{\Phi}\), then \(x\notin(F,E)\). The soft set \((X,E)\backslash(F,E)\) (or simply \((F,E)^{c}\)) is the complement of \((F,E)\), where \(F^{c}:E\rightarrow\mathcal{P}(X)\) is given by \(F^{c}(e)=X\backslash F(e)\) for each \(e\in E\). A soft subset \((F,E)\) over \(X\) is called null, denoted by \(\widetilde{\Phi}\), if \(F(e)=\emptyset\) for each \(e\in E\) and is called absolute, denoted by \(\widetilde{X}\), if \(F(e)=X\) for each \(e\in E\). Notice that \(\widetilde{X}^{c}=\widetilde{\Phi}\) and \(\widetilde{\Phi}^{c}=\widetilde{X}\). A set \((F,E)\in S_{E}(X)\) (\(X,\Sigma,E\)) is called pseudo constant [29] if \(F(e)=X\) or \(F(e)=\emptyset\) for each \(e\in E_{0}\), where \(E_{0}\widetilde{\subseteq}E\). The family of all pseudo constant soft sets on \(X\) is symbolized by \(PC_{E}(X)\). It is said that \((A,E_{1})\) is a soft subset of \((B,E_{2})\) (written by \((A,E_{1})\widetilde{\subseteq}(B,E_{2})\), [25]) if \(E_{1}\subseteq E_{2}\) and \(A(e)\subseteq B(e)\) for each \(e\in E_{1}\), and \((A,E_{1})=(B,E_{2})\) if \((A,E_{1})\widetilde{\subseteq}(B,E_{2})\) and \((B,E_{2})\widetilde{\subseteq}(A,E_{1})\). The union of soft sets \((A,E),(B,E)\) is represented by \((F,E)=(A,E)\widetilde{\cup}(B,E)\), where \(F(e)=A(e)\cup B(e)\) for each \(e\in E\), and intersection of soft sets \((A,E),(B,E)\) is given by \((F,E)=(A,E)\widetilde{\cap}(B,E)\), where \(F(e)=A(e)\cap B(e)\) for each \(e\in E\), see [9].
**Definition 2.1**.: _[_31_]_ _A collection \(\Sigma\) of \(S_{E}(X)\) is said to be a soft topology on \(X\) if the following conditions are satisfied:_
1. \(\widetilde{\Phi},\widetilde{X}\in\Sigma\)_;_
2. _If_ \((F_{1},E),(F_{2},E)\in\Sigma\)_, then_ \((F_{1},E)\widetilde{\cap}(F_{2},E)\in\Sigma\)_; and_
3. _If each_ \(\{(F_{i},E):i\in I\}\widetilde{\subseteq}\Sigma\)_, then_ \(\widetilde{\bigcup}_{i\in I}(F_{i},E)\in\Sigma\)_._
_Terminologically, we call \((X,\Sigma,E)\) a soft topological space on \(X\). The elements of \(\Sigma\) are called soft open sets in \(\Sigma\) (or simply, soft open sets when no confusion arises), and their complements are called soft closed sets in \(\Sigma\) (or shortly, soft closed sets)._
**Definition 2.2**.: _[_18_]_ _A soft topological space \((X,\Sigma,E)\) is called enriched if \((F,E)\in\Sigma\) for all \((F,E)\in PC_{E}(X)\)._
In what follows, by \((X,\Sigma,E)\) we mean a soft topological space, and by disjoint of two soft sets \((F,E),(G,E)\) over \(X\) we mean \((F,E)\widetilde{\cap}(G,E)=\widetilde{\Phi}\).
**Definition 2.3**.: _[_20_]_ _A subcollection \(\mathcal{B}\subseteq\Sigma\) is called a soft base for the soft topology \(\Sigma\) if each element of \(\Sigma\) is a union of elements of \(\mathcal{B}\)._
**Definition 2.4**.: _[_20_]_ _Let \(\Sigma_{1},\Sigma_{2}\) be two soft topologies on \(X\). It is said that \(\Sigma_{2}\) is finer than \(\Sigma_{1}\) (or \(\Sigma_{1}\) is coarser than \(\Sigma_{2}\)) if \(\Sigma_{1}\widetilde{\subseteq}\Sigma_{2}\)_
**Lemma 2.5**.: _[_31_]_ _Let \((X,\Sigma,E)\) be a soft topology on \(X\). For each \(e\in E\), \(\Sigma_{e}=\{F(e):(F,E)\in\Sigma\}\) is a crisp topology on \(X\)._
**Definition 2.6**.: _[_12_]_ _Let \(\mathcal{F}\widetilde{\subseteq}S_{E}(X)\). The intersection of all soft topologies on \(X\) including \(\mathcal{F}\) is called a soft topology generated by \(\mathcal{F}\) and is referred to \(T(\mathcal{F})\)._
**Lemma 2.7**.: _[_2_, Lemma 3.5]_ _Let \(\Sigma_{1},\Sigma_{2}\) be two soft topologies on \(X\). The resulting soft topology \(T(\Sigma_{1}\widetilde{\cup}\Sigma_{2})\) is identical to the soft topology \(T(\mathcal{F})\) generated by \(\mathcal{F}=\{(F_{1},E)\widetilde{\cap}(F_{2},E):(F_{1},E)\in\Sigma_{1},(F_{2},E)\in\Sigma_{2}\}\)._
**Lemma 2.8**.: _[_26_, Theorem 3.18]_ _If \((X,\Sigma,E)\) is a soft regular space, then \(\Sigma_{e}=\Sigma_{e^{\prime}}\) for each \(e,e^{\prime}\in E\)._
**Definition 2.9**.: _[_31_]_ _A soft space \((X,\Sigma,E)\) is called_
* _soft_ \(T_{0}\) _if for each_ \(x,y\in X\) _with_ \(x\neq y\)_, there exist soft open sets_ \((U,E),(V,E)\) _such that_ \(x\in(U,E)\)_,_ \(y\notin(U,E)\) _or_ \(x\notin(V,E)\)_,_ \(y\in(V,E)\)_,_
* _soft_ \(T_{1}\) _if for each_ \(x,y\in X\) _with_ \(x\neq y\)_, there exist soft open sets_ \((U,E),(V,E)\) _such that_ \(x\in(U,E)\)_,_ \(y\notin(U,E)\) _and_ \(x\notin(V,E)\)_,_ \(y\in(V,E)\)_,_
* _soft_ \(T_{2}\)__(_soft Hausdorff_) _if for each_ \(x,y\in X\) _with_ \(x\neq y\)_, there exist soft open sets_ \((U,E),(V,E)\) _containing_ \(x,y\) _respectively such that_ \((U,E)\widehat{\bigcap}(V,E)=\widetilde{\Phi}\)_._
* _soft regular if for each soft closed set_ \((F,E)\) _and each soft point_ \(x\) _with_ \(x\notin(F,E)\)_, there exist soft open sets_ \((U,E),(V,E)\) _such that_ \(x\in(U,E)\)_,_ \((F,E)\widetilde{\subseteq}(V,E)\) _and_ \((U,E)\widehat{\bigcap}(V,E)=\widetilde{\Phi}\)_._
* _soft normal if for each soft closed sets_ \((F,E),(D,E)\) _with_ \((F,E)\widehat{\bigcap}(D,E)=\widetilde{\Phi}\)_, there exist soft open sets_ \((U,E),(V,E)\) _such that_ \((F,E)\widetilde{\subseteq}(U,E)\)_,_ \((D,E)\widetilde{\subseteq}(V,E)\) _and_ \((U,E)\widehat{\bigcap}(V,E)=\widetilde{\Phi}\)_._
* _soft_ \(T_{3}\) _if it is soft_ \(T_{1}\) _and soft regular._
* _soft_ \(T_{4}\) _if it is soft_ \(T_{1}\) _and soft normal._
## 3 Methods of generating soft topologies and their relationships
This section provides different methods of producing soft topologies via Formulas 1 & 2. An example is given which discusses the implementation of these formulas in detail. The relationships between the original soft topology and the soft topologies produced by Formulas 1 & 2 are then established.
**Definition 3.1**.: _[_7, 32_]_ _Let \(\mathbf{\Sigma}=\{\Sigma_{e}\}\) be a family of (crisp) topologies on a set \(X\) indexed by \(E\). Then following procedures produce different soft topologies on \(X\):_
\[(1)\hskip 28.452756pt\mathcal{T}(\mathbf{\Sigma})=\Big{\{}\{(e,F(e)):e\in E \}\in S_{E}(X):F(e)\in\Sigma_{e},\forall e\in E\Big{\}},\]
\(\mathcal{T}(\mathbf{\Sigma})\) _is called a soft topology generated by \(\mathbf{\Sigma}\). If for each \(e,e^{\prime}\in E\), \(\Sigma_{e}=\Sigma_{e^{\prime}}=\Sigma\), then \(\mathcal{T}(\mathbf{\Sigma})=\mathcal{T}(\Sigma)\)._
\[(2)\hskip 14.226378pt\hat{\mathcal{T}}(\Sigma_{e})=\Big{\{}\{(e,F(e)):e\in E \}\in S_{E}(X):F(e)=F(e^{\prime})\in\Sigma_{e},\forall e,e^{\prime}\in E\Big{\}},\]
\(\hat{\mathcal{T}}(\Sigma_{e})\) _is called a single set soft topology generated by \(\Sigma_{e}\)._
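For finite universes, both formulas can be computed directly. The sketch below is our own illustration (the data representation is an assumption): a soft set is encoded as a tuple of pairs \((e,F(e))\), and \(\mathcal{T}(\mathbf{\Sigma})\) and \(\hat{\mathcal{T}}(\Sigma_{e})\) are enumerated by brute force.

```python
from itertools import product

def soft_topology_T(crisp):
    """Formula (1): all soft sets F with F(e) in Sigma_e for every parameter e.

    crisp: dict mapping each parameter e to a crisp topology Sigma_e,
           given as a collection of frozensets over the universe X.
    """
    params = sorted(crisp)
    return {tuple(zip(params, choice))
            for choice in product(*(crisp[e] for e in params))}

def single_set_soft_topology(sigma, params):
    """Formula (2): soft sets taking one common value U in sigma at every e."""
    return {tuple((e, U) for e in params) for U in sigma}
```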
**Definition 3.2**.: _Let \((X,\Sigma,E)\) be a soft topological space. If \(\mathbf{\Sigma}=\{\Sigma_{e}:e\in E\}\) is the family of all crisp topologies from \(\Sigma\), then we call \(\mathcal{T}(\mathbf{\Sigma})\) the soft topology associated with \(\Sigma\)._
_Note that \(\mathcal{T}(\mathbf{\Sigma})\) is called extended soft topology in [28]._
**Remark 3.3**.: _In [6], Theorem 2 shows that \(\mathcal{T}(\mathbf{\Sigma})\) is equivalent to the enriched soft topology._
**Lemma 3.4**.: _Let \(\mathbf{\Sigma}=\{\Sigma_{e}:e\in E\}\) be the family of all crisp topologies from \((X,\Sigma,E)\). Then_
\[\Sigma\widetilde{\subseteq}\mathcal{T}(\mathbf{\Sigma}).\]
Proof.: It can be concluded from the definition of soft sets and the soft topology generated by \(\mathbf{\Sigma}\).
**Lemma 3.5**.: _Let \(\bar{\beta}=\{\beta_{e}:e\in E\}\) be a family of bases for the topologies \(\Sigma_{e}\) on \(X\). Then \(\mathcal{B}(\bar{\beta})=\left\{\{(e,F(e)):e\in E\}\in S_{E}(X):F(e)\in\beta_{ e}\cup\{\emptyset\},\forall e\in E\right\}\) is a base for a soft topology on \(X\) and \(\mathcal{T}(\mathbf{\Sigma})=T(\mathcal{B}(\bar{\beta}))\)._
Proof.: By using Corollary 3 in [7] and simple modifications to the proof of Theorem 3 in [7], we can conclude the proof.
The following is a straightforward generalization of Lemma 2.7, thus the proof is omitted:
**Lemma 3.6**.: _Let \(\{\Sigma_{e}:e\in E\}\) be a family of soft topologies on \(X\). The resulting soft topology \(T(\widetilde{\bigcup}_{e\in E}\Sigma_{e})\) is identical to the soft topology \(T(\mathcal{F})\) generated by \(\mathcal{F}=\{\widetilde{\bigcap}_{e_{i}=1}^{n}(F_{e_{i}},E):(F_{e_{i}},E)\in \widetilde{\bigcup}_{e_{i}\in E}\Sigma_{e_{i}}\}\)._
**Lemma 3.7**.: _Let \(\mathbf{\Sigma}=\{\Sigma_{e}:e\in E\}\) be a family of crisp topologies on \(X\). Then_
\[T(\widetilde{\bigcup}_{e\in E}\hat{\mathcal{T}}(\Sigma_{e}))=\hat{\mathcal{T}}(T(\bigcup_{e\in E}\Sigma_{e})).\]
Proof.: Lemma 3.5 reduces the task to working with basic soft open sets rather than arbitrary soft open sets. Let \((B_{0},E)\in T(\widetilde{\bigcup}_{e\in E}\hat{\mathcal{T}}(\Sigma_{e}))\). Then \((B_{0},E)=\widetilde{\bigcap}_{i=1}^{n}(B_{i},E)\) for \((B_{i},E)\in\widetilde{\bigcup}_{e\in E}\hat{\mathcal{T}}(\Sigma_{e})\); that is, each \((B_{i},E)\in\hat{\mathcal{T}}(\Sigma_{e})\) for some \(e\in E\). By Formula 2, one can detach \(E\) from \((B_{i},E)\) for \(i=0,1,\cdots,n\) and get \(B_{0}=\bigcap_{i=1}^{n}B_{i}\), where each \(B_{i}\in\Sigma_{e}\) for some \(e\in E\). This implies that \(B_{0}=\bigcap_{i=1}^{n}B_{i}\) for \(B_{i}\in\bigcup_{e\in E}\Sigma_{e}\). Therefore, by Formula 2, \((B_{0},E)\in\hat{\mathcal{T}}(T(\bigcup_{e\in E}\Sigma_{e}))\). The reverse inclusion can be proved by a similar technique.
The following example shows how the techniques in Definition 3.1 and the relations in Lemmas 3.4-3.7 can be used in practice:
**Example 3.8**.: _Let \(X=\{x_{1},x_{2},x_{3}\}\), \(E=\{e_{1},e_{2}\}\). Consider the soft topology on \(X\),_
\[\Sigma=\{\widetilde{\Phi},(F_{1},E),(F_{2},E),(F_{3},E),(F_{4},E),\widetilde{ X}\},\]
_where_
\[(F_{1},E) =\{(e_{1},\{x_{1}\}),(e_{2},\emptyset)\},\] \[(F_{2},E) =\{(e_{1},\{x_{1},x_{2}\}),(e_{2},X)\},\] \[(F_{3},E) =\{(e_{1},\emptyset),(e_{2},\{x_{3}\})\},\text{ and }\] \[(F_{4},E) =\{(e_{1},\{x_{1}\}),(e_{2},\{x_{3}\})\}.\]
_The crisp topologies from \(\Sigma\) are_
\[\Sigma_{e_{1}}=\{\emptyset,\{x_{1}\},\{x_{1},x_{2}\},X\}\text{ and }\Sigma_{e_{2}}=\{ \emptyset,\{x_{3}\},X\}.\]
_Applying the formula (2), we obtain the following two soft topologies on \(X\):_
\[\hat{\mathcal{T}}(\Sigma_{e_{1}}) =\Big{\{}\widetilde{\Phi},\{(e_{1},\{x_{1}\}),(e_{2},\{x_{1}\})\},\{(e_{1},\{x_{1},x_{2}\}),(e_{2},\{x_{1},x_{2}\})\},\widetilde{X}\Big{\}}\] \[=\Big{\{}\widetilde{\Phi},(\{x_{1}\},E),(\{x_{1},x_{2}\},E),\widetilde{X}\Big{\}}\text{ (more compactly), and}\] \[\hat{\mathcal{T}}(\Sigma_{e_{2}}) =\Big{\{}\widetilde{\Phi},\{(e_{1},\{x_{3}\}),(e_{2},\{x_{3}\})\},\widetilde{X}\Big{\}}=\Big{\{}\widetilde{\Phi},(\{x_{3}\},E),\widetilde{X}\Big{\}}.\]
_From Lemma 3.6, we can naturally generate a soft topology \(T\) on \(X\) by the union of \(\hat{\mathcal{T}}(\Sigma_{e_{1}})\) and \(\hat{\mathcal{T}}(\Sigma_{e_{2}})\). That is,_
\[T\Big{(}\overset{\sim}{\bigcup}_{i=1}^{2}\hat{\mathcal{T}}(\Sigma_{e_{i}}) \Big{)}=\Big{\{}\widetilde{\Phi},(G_{1},E),(G_{2},E),(G_{3},E),(G_{4},E), \widetilde{X}\Big{\}},\]
_where_
\[(G_{1},E) =\{(e_{1},\{x_{1}\}),(e_{2},\{x_{1}\})\},\] \[(G_{2},E) =\{(e_{1},\{x_{3}\}),(e_{2},\{x_{3}\})\},\] \[(G_{3},E) =\{(e_{1},\{x_{1},x_{2}\}),(e_{2},\{x_{1},x_{2}\})\},\text{ and }\] \[(G_{4},E) =\{(e_{1},\{x_{1},x_{3}\}),(e_{2},\{x_{1},x_{3}\})\}.\]
_The compact form of the above conclusion is_
\[T\Big{(}\overset{\sim}{\bigcup}_{i=1}^{2}\hat{\mathcal{T}}(\Sigma_{e_{i}}) \Big{)}=\Big{\{}\widetilde{\Phi},(\{x_{1}\},E),(\{x_{3}\},E),(\{x_{1},x_{2}\},E),(\{x_{1},x_{3}\},E),\widetilde{X}\Big{\}}.\]
_By applying the first formula, the next soft topology on \(X\) will be obtained._
\[\mathcal{T}(\mathbf{\Sigma})=\mathcal{T}(\{\Sigma_{e_{1}},\Sigma_{e_{2}}\})= \Big{\{}\widetilde{\Phi},(H_{1},E),(H_{2},E),\cdots,(H_{10},E),\widetilde{X} \Big{\}},\]
_where_
\[(H_{1},E) =\{(e_{1},\emptyset),(e_{2},X)\},\] \[(H_{2},E) =\{(e_{1},\emptyset),(e_{2},\{x_{3}\})\},\] \[(H_{3},E) =\{(e_{1},X),(e_{2},\emptyset)\},\] \[(H_{4},E) =\{(e_{1},X),(e_{2},\{x_{3}\})\},\] \[(H_{5},E) =\{(e_{1},\{x_{1}\}),(e_{2},\emptyset)\},\] \[(H_{6},E) =\{(e_{1},\{x_{1}\}),(e_{2},X)\},\] \[(H_{7},E) =\{(e_{1},\{x_{1}\}),(e_{2},\{x_{3}\})\},\] \[(H_{8},E) =\{(e_{1},\{x_{1},x_{2}\}),(e_{2},\emptyset)\},\] \[(H_{9},E) =\{(e_{1},\{x_{1},x_{2}\}),(e_{2},X)\},\text{ and }\] \[(H_{10},E) =\{(e_{1},\{x_{1},x_{2}\}),(e_{2},\{x_{3}\})\}.\]
_One can easily check that the above computations lead to the following observations:_
* \(\Sigma\) _is a subcollection of_ \(\mathcal{T}(\mathbf{\Sigma})\)_._
* \(\Sigma\) _is independent of_ \(T\big{(}\widetilde{\bigcup}_{i=1}^{2}\hat{\mathcal{T}}(\Sigma_{e_{i}})\big{)}\)_._
* \(T\big{(}\widetilde{\bigcup}_{i=1}^{2}\hat{\mathcal{T}}(\Sigma_{e_{i}})\big{)}\) _is independent of_ \(\mathcal{T}(\mathbf{\Sigma})\)_._
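The computations in Example 3.8 are small enough to verify mechanically. The following Python sketch (an illustration added here, not part of the original construction) models a soft set over \(X\) with \(E=\{e_{1},e_{2}\}\) as a pair of frozensets and assumes, consistently with the twelve members listed above, that Formula 1 collects exactly the soft sets whose \(e\)-components are open in the corresponding crisp topologies; under that reading it reproduces \(\mathcal{T}(\mathbf{\Sigma})\) and confirms the first observation.

```python
from itertools import product

X = frozenset({"x1", "x2", "x3"})
empty = frozenset()

# Crisp topologies extracted from the soft topology of Example 3.8.
sigma_e1 = [empty, frozenset({"x1"}), frozenset({"x1", "x2"}), X]
sigma_e2 = [empty, frozenset({"x3"}), X]

# A soft set over X with E = {e1, e2} is modelled as the pair (F(e1), F(e2)).
# Assumption: Formula 1 collects every soft set whose components are open
# in the respective crisp topologies.
T_bold = {(u, v) for u, v in product(sigma_e1, sigma_e2)}
print(len(T_bold))  # 12: H_1, ..., H_10 plus the two trivial soft sets

# The original soft topology Sigma of Example 3.8.
Sigma = {
    (empty, empty),                              # soft empty set
    (frozenset({"x1"}), empty),                  # (F_1, E)
    (frozenset({"x1", "x2"}), X),                # (F_2, E)
    (empty, frozenset({"x3"})),                  # (F_3, E)
    (frozenset({"x1"}), frozenset({"x3"})),      # (F_4, E)
    (X, X),                                      # soft universe
}
print(Sigma <= T_bold)  # True: Sigma is a subcollection of T(bold Sigma)
```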
The relationships between the soft topologies on a common universe obtained by the methods described in this section are summarized in Diagram 1.
Diagram 1: relationships between \(\Sigma\), \(T\big{(}\widetilde{\bigcup}_{e\in E}\hat{\mathcal{T}}(\Sigma_{e})\big{)}\), and \(\mathcal{T}(\mathbf{\Sigma})\) over a common universe.
_Let \(X\), \(E\), and \(\Sigma\) be as in Example 3.8, and let \(\Sigma^{\prime}=\{\widetilde{\Phi},(R_{1},E),(R_{2},E),(R_{3},E),(R_{4},E),\widetilde{X}\}\) be another soft topology on \(X\), where_
\[(R_{1},E) =\{(e_{1},\{x_{1}\}),(e_{2},\emptyset)\},\] \[(R_{2},E) =\{(e_{1},\{x_{1},x_{2}\}),(e_{2},X)\},\] \[(R_{3},E) =\{(e_{1},X),(e_{2},\{x_{3}\})\},\text{ and }\] \[(R_{4},E) =\{(e_{1},\{x_{1},x_{2}\}),(e_{2},\{x_{3}\})\}.\]
_Then \(\Sigma\) and \(\Sigma^{\prime}\) are incomparable. Set \(\hat{\Sigma}=\Sigma\widetilde{\bigcup}\Sigma^{\prime}\). Therefore, \(\hat{\Sigma}\) is finer than both \(\Sigma\) and \(\Sigma^{\prime}\). On the other hand, \(\Sigma,\Sigma^{\prime}\) and \(\hat{\Sigma}\) have the same family of crisp topologies \(\mathbf{\Sigma}=\{\Sigma_{e_{1}},\Sigma_{e_{2}}\}\), and thus they generate only one \(\mathcal{T}(\mathbf{\Sigma})\)._
## 5 Separation axioms preservation between \(\mathbf{\Sigma}\) and \(\mathcal{T}(\mathbf{\Sigma})\)
With the exception of \(\mathcal{T}(\Sigma)\), a soft topology \(\hat{\mathcal{T}}(\Sigma)\) generated by a single crisp topology \(\Sigma\) inherits the soft separation axioms from \(\Sigma\), according to Terepeta [32]. In this section, we use \(\mathcal{T}(\mathbf{\Sigma})\) to examine how well separation axioms are preserved when moving from \(\mathbf{\Sigma}\) to \(\mathcal{T}(\mathbf{\Sigma})\) and vice versa.
Here, we start defining special types of soft sets that exist when constructing a soft topology by using \(\mathcal{T}(\mathbf{\Sigma})\) for future usage.
**Definition 5.1**.: _Let \(G,H\subset X\) and let \(e\in E\), we define_
1. \((F_{e}^{G},E)\) _to be a soft set over_ \(X\) _such that_ \(F_{e}^{G}(e)=G\) _and_ \(F_{e}^{G}(e^{\prime})=X\) _for each_ \(e^{\prime}\neq e\)_._
2. \((F_{H}^{e},E)\) _to be a soft set over_ \(X\) _such that_ \(F_{H}^{e}(e)=H\) _and_ \(F_{H}^{e}(e^{\prime})=\emptyset\) _for each_ \(e^{\prime}\neq e\) _(see_ _[_7_, Definition 5]__)._
_Note that \((F_{e}^{G},E)^{c}=(F_{H}^{e},E)\) if and only if \(G^{c}=H\)._
**Theorem 5.2**.: _Let \(\mathbf{\Sigma}=\{\Sigma_{e}:e\in E\}\) be a family of crisp topologies on \(X\). Then \(\Sigma_{e}\) is a \(T_{0}\)-space for some \(e\in E\) if and only if \(\mathcal{T}(\mathbf{\Sigma})\) is a soft \(T_{0}\)-space._
Proof.: Suppose that \(\Sigma_{e}\) is a \(T_{0}\)-space for some \(e\in E\). Let \(x,y\in X\) with \(x\neq y\). Then there exist open sets \(U,V\in\Sigma_{e}\) such that \(x\in U\), \(y\notin U\) or \(x\notin V\), \(y\in V\). By Definition 5.1, there exist two corresponding soft sets \((F_{e}^{U},E),(F_{e}^{V},E)\in\mathcal{T}(\mathbf{\Sigma})\) for which \(x\in(F_{e}^{U},E)\), \(y\notin(F_{e}^{U},E)\) or \(x\notin(F_{e}^{V},E)\), \(y\in(F_{e}^{V},E)\). Thus, \(\mathcal{T}(\mathbf{\Sigma})\) is soft \(T_{0}\).
Conversely, let \(x,y\in X\) with \(x\neq y\). If \(\mathcal{T}(\mathbf{\Sigma})\) is soft \(T_{0}\), then there exists a soft open set \((G,E)\) containing one of them, say \(x\), but not the other, say \(y\). Since \(y\notin(G,E)\), we have \(y\notin G(e)\) for some \(e\in E\), say \(e_{0}\). Therefore, \(G(e_{0})\) is an open set such that \(x\in G(e_{0})\) and \(y\notin G(e_{0})\), and so \(\Sigma_{e_{0}}\) is \(T_{0}\). This completes the proof.
**Theorem 5.3**.: _Let \(\mathbf{\Sigma}=\{\Sigma_{e}:e\in E\}\) be a family of crisp topologies on \(X\). If \(\Sigma_{e}\) is a \(T_{1}\)-space for some \(e\in E\) then \(\mathcal{T}(\mathbf{\Sigma})\) is a soft \(T_{1}\)-space._
Proof.: It is entirely analogous to the first part of the proof of Theorem 5.2.
The following example shows that the converse of Theorem 5.3 is not true in general. It also refutes Theorem 3.5 in [22]:
**Example 5.4**.: _Let \(X=\{x_{1},x_{2}\}\) and let \(\Sigma_{e_{1}}=\{\emptyset,\{x_{1}\},X\}\) and \(\Sigma_{e_{2}}=\{\emptyset,\{x_{2}\},X\}\) be crisp topologies on \(X\) indexed by \(E=\{e_{1},e_{2}\}\). By using the Formula 1, the following soft topology on \(X\) will be obtained:_
\[\mathcal{T}(\mathbf{\Sigma})=\mathcal{T}(\{\Sigma_{e_{1}},\Sigma_{e_{2}}\})= \Big{\{}\widetilde{\Phi},(H_{1},E),(H_{2},E),(H_{3},E),(H_{4},E),(H_{5},E),(H_ {6},E),(H_{7},E),\widetilde{X}\Big{\}},\]
_where_
\[(H_{1},E) =\{(e_{1},\emptyset),(e_{2},X)\},\] \[(H_{2},E) =\{(e_{1},X),(e_{2},\emptyset)\},\] \[(H_{3},E) =\{(e_{1},\emptyset),(e_{2},\{x_{2}\})\},\] \[(H_{4},E) =\{(e_{1},X),(e_{2},\{x_{2}\})\},\] \[(H_{5},E) =\{(e_{1},\{x_{1}\}),(e_{2},\emptyset)\},\] \[(H_{6},E) =\{(e_{1},\{x_{1}\}),(e_{2},X)\},\text{ and }\] \[(H_{7},E) =\{(e_{1},\{x_{1}\}),(e_{2},\{x_{2}\})\}.\]
_Then \((H_{4},E)\) and \((H_{6},E)\) are soft open sets in \(\mathcal{T}(\mathbf{\Sigma})\) such that \(x_{1}\in(H_{6},E)\), \(x_{2}\notin(H_{6},E)\) and \(x_{2}\in(H_{4},E)\), \(x_{1}\notin(H_{4},E)\). Thus, \(\mathcal{T}(\mathbf{\Sigma})\) is soft \(T_{1}\). On the other hand, neither of \(\Sigma_{e_{1}}\) nor \(\Sigma_{e_{2}}\) is \(T_{1}\)._
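Under the same reading of Formula 1 as in the earlier sketch, and with the membership convention used in the proofs above (a point belongs to a soft set iff it lies in every component), this counterexample can be checked mechanically; the snippet below is only an illustrative verification.

```python
from itertools import product

X = ("x1", "x2")
empty, full = frozenset(), frozenset(X)
sigma_e1 = [empty, frozenset({"x1"}), full]
sigma_e2 = [empty, frozenset({"x2"}), full]

# Assumption: Formula 1 yields every soft set (F(e1), F(e2)) with open components.
soft_open = [(u, v) for u, v in product(sigma_e1, sigma_e2)]

def member(x, soft_set):
    """A point belongs to a soft set iff it lies in every component."""
    return all(x in component for component in soft_set)

def crisp_T1(topology):
    return all(any(x in U and y not in U for U in topology)
               for x in X for y in X if x != y)

def soft_T1(soft_topology):
    return all(any(member(x, G) and not member(y, G) for G in soft_topology)
               for x in X for y in X if x != y)

print(crisp_T1(sigma_e1), crisp_T1(sigma_e2))  # False False: neither crisp topology is T_1
print(soft_T1(soft_open))                      # True: the generated soft topology is soft T_1
```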
**Theorem 5.5**.: _Let \(\mathbf{\Sigma}=\{\Sigma_{e}:e\in E\}\) be a family of crisp topologies on \(X\). Then \(\Sigma_{e}\) is a \(T_{2}\)-space for each \(e\in E\) if and only if \(\mathcal{T}(\mathbf{\Sigma})\) is a soft \(T_{2}\)-space._
Proof.: Assume that \(\Sigma_{e}\) is \(T_{2}\) for each \(e\in E\). Let \(x,y\in X\) with \(x\neq y\). Then, for each \(e\), there exist open sets \(U(e),V(e)\in\Sigma_{e}\) such that \(x\in U(e)\), \(y\in V(e)\) and \(U(e)\cap V(e)=\emptyset\). Set \((U,E)=\{(e,U(e)):e\in E\}\) and \((V,E)=\{(e,V(e)):e\in E\}\). So \((U,E),(V,E)\in\mathcal{T}(\mathbf{\Sigma})\) such that \(x\in(U,E)\), \(y\in(V,E)\) and \((U,E)\widehat{\bigcap}(V,E)=\{(e,U(e)\cap V(e)):e\in E\}=\widetilde{\Phi}\). Hence, \(\mathcal{T}(\mathbf{\Sigma})\) is soft \(T_{2}\).
Conversely, let \(x,y\in X\) with \(x\neq y\). Suppose that \(\mathcal{T}(\mathbf{\Sigma})\) is soft \(T_{2}\); then there exist soft open sets \((G,E),(H,E)\) such that \(x\in(G,E)\), \(y\in(H,E)\) and \((G,E)\widehat{\bigcap}(H,E)=\widetilde{\Phi}\). This means that, for each \(e\in E\), \(x\in G(e)\), \(y\in H(e)\) and \(G(e)\cap H(e)=\emptyset\). Thus, \(\Sigma_{e}\) is \(T_{2}\) for each \(e\in E\).
**Corollary 5.6**.: _If \((X,\Sigma,E)\) is a soft \(T_{2}\)-space, then \(\Sigma_{e}\) is \(T_{2}\) for each \(e\in E\)._
Proof.: It is an immediate consequence of Lemma 3.4 and Theorem 5.5.
Notice that Theorem 5.5 and Corollary 5.6 generalize (part of) Theorem 4 in [32] and Proposition 17 in [31], respectively.
**Lemma 5.7**.: _[_32_, Theorem 3]_ _Let \(\Sigma\) be a (crisp) topology on a set \(X\). A soft set \((F,E)\) is soft closed in \(\mathcal{T}(\Sigma)\) if and only if \((F,E)=\{(e,F(e)):F^{c}(e)\in\Sigma\}\)._
**Theorem 5.8**.: _Let \(\mathbf{\Sigma}=\{\Sigma_{e}:e\in E\}\) be a family of crisp topologies on \(X\). If \(\mathcal{T}(\mathbf{\Sigma})\) is a soft regular space, then \(\Sigma_{e}\) is a regular space for each \(e\in E\)._
Proof.: Let \(e\in E\). Take \(x\in X\) and let \(F(e)\) be a closed set in \((X,\Sigma_{e})\) such that \(x\notin F(e)\). Definition 3.1 and Lemma 2.8 tell us that soft regularity of \(\mathcal{T}(\mathbf{\Sigma})\) guarantees the equality \(\mathcal{T}(\mathbf{\Sigma})=\mathcal{T}(\Sigma)\). Set \((F,E)=\{(e,F(e)):F^{c}(e)\in\Sigma\}\). By Lemma 5.7, \((F,E)\) is soft closed in \(\mathcal{T}(\Sigma)\) and \(x\notin(F,E)\). Since \(\mathcal{T}(\Sigma)\) is soft regular, there exist soft open sets \((U,E),(V,E)\) in \(\mathcal{T}(\Sigma)\) such that \(x\in(U,E)\), \((F,E)\widetilde{\subseteq}(V,E)\) and \(\widetilde{\Phi}=(U,E)\widehat{\bigcap}(V,E)=\{(e,U(e)\cap V(e)):e\in E\}\). This implies that \(x\in U(e)\), \(F(e)\subseteq V(e)\) and \(U(e)\cap V(e)=\emptyset\) for each \(e\in E\). Since \(U(e),V(e)\in\Sigma_{e}\), \(\Sigma_{e}\) is regular for each \(e\in E\).
**Corollary 5.9**.: _If \((X,\Sigma,E)\) is a soft regular space, then \(\Sigma_{e}\) is regular for each \(e\in E\)._
Proof.: It can be concluded from Lemma 3.4 and Theorem 5.8.
**Remark 5.10**.: _We shall mention that it is observed in Remark 3.23 (2') [26] that if \((X,\Sigma,E)\) is a soft \(T_{3}\) space, then \(\Sigma_{e}\) is \(T_{3}\) for each \(e\in E\). This conclusion is more general than Corollary 5.9, but it cannot be followed from any of our results due to Example 5.4._
The examples given below disprove the converse of Theorem 5.8:
**Example 5.11**.: _Let \(X=\{x\}\), let \(E=\{e_{1},e_{2}\}\), and let \(\mathbf{\Sigma}=\{\Sigma_{e_{1}},\Sigma_{e_{2}}\}\), where \(\Sigma_{e_{1}}=\Sigma_{e_{2}}=\{\emptyset,X\}\). One can check that each \(\Sigma_{e_{i}}\) is trivially a regular space. On the other hand, the soft topology \(\mathcal{T}(\mathbf{\Sigma})=\{\widetilde{\Phi},(F_{1},E),(F_{2},E),\widetilde {X}\}\) is not soft regular, where \((F_{1},E)=\{(e_{1},\emptyset),(e_{2},X)\}\) and \((F_{2},E)=\{(e_{1},X),(e_{2},\emptyset)\}\). Indeed, \(x\notin(F_{i},E)^{c}\) for each \(i\), but no soft open sets in \(\mathcal{T}(\mathbf{\Sigma})\) can separate them._
A less trivial example is following:
**Example 5.12**.: _Let \(X=\mathbb{R}\) be the set of reals, let \(E=\{e_{1},e_{2}\}\), and let \(\mathbf{\Sigma}=\{\Sigma_{e_{1}},\Sigma_{e_{2}}\}\), where \(\Sigma_{e_{1}}\) is the natural topology and \(\Sigma_{e_{2}}\) is the Sorgenfrey line on \(\mathbb{R}\). It is known that both \(\Sigma_{e_{1}}\) and \(\Sigma_{e_{2}}\) are regular spaces, while \(\mathcal{T}(\mathbf{\Sigma})\) is not soft regular. Take \(x\in\mathbb{R}\) and two closed sets \([a,b],[c,d)\) for which \(x\in[a,b]\) and \(x\notin[c,d)\). If \((F,E)=\{(e_{1},[a,b]),(e_{2},[c,d))\}\), then \((F,E)\) is a soft closed set in \(\mathcal{T}(\mathbf{\Sigma})\) such that \(x\notin(F,E)\). One can easily check that \(x\) and \((F,E)\) cannot be separated by soft open sets._
We shall admit that the idea of this example is due to Terepeta [32, Example 8].
**Theorem 5.13**.: _Let \(\mathbf{\Sigma}=\{\Sigma_{e}:e\in E\}\) be a family of crisp topologies on \(X\). Then \(\Sigma_{e}\) is a normal space for each \(e\in E\) if and only if \(\mathcal{T}(\mathbf{\Sigma})\) is a soft normal space._
Proof.: Let \(\Sigma_{e}\) be normal for each \(e\in E\). Suppose \((A,E),(B,E)\) are disjoint soft closed sets in \(\mathcal{T}(\mathbf{\Sigma})\). Then \((A,E)=\{(e,A(e)):A^{c}(e)\in\Sigma_{e},e\in E\}\) and \((B,E)=\{(e,B(e)):B^{c}(e)\in\Sigma_{e},e\in E\}\). Therefore \(\widetilde{\Phi}=(A,E)\widehat{\bigcap}(B,E)=\{(e,A(e)\cap B(e)):A^{c}(e),B^{c}(e)\in\Sigma_{e},e\in E\}\). We obtain that \(A(e)\cap B(e)=\emptyset\). Since \(\Sigma_{e}\) is a normal space for each \(e\in E\), there exist open sets \(G(e),H(e)\) such that \(A(e)\subseteq G(e)\), \(B(e)\subseteq H(e)\) and \(G(e)\cap H(e)=\emptyset\). Set \((G,E)=\{(e,G(e)):e\in E\}\) and \((H,E)=\{(e,H(e)):e\in E\}\). Then \((G,E),(H,E)\in\mathcal{T}(\mathbf{\Sigma})\) such that \((A,E)\widetilde{\subseteq}(G,E)\) and \((B,E)\widetilde{\subseteq}(H,E)\). Furthermore, \((G,E)\widehat{\bigcap}(H,E)=\{(e,G(e)\cap H(e)):e\in E\}=\widetilde{\Phi}\). This shows that \(\mathcal{T}(\mathbf{\Sigma})\) is soft normal.
Conversely, for each \(e\in E\), we let \(C(e),D(e)\) be disjoint closed sets in \(\Sigma_{e}\). By Lemma 5.7, \((C,E)=\{(e,C(e)):C^{c}(e)\in\Sigma_{e},e\in E\}\) and \((D,E)=\{(e,D(e)):D^{c}(e)\in\Sigma_{e},e\in E\}\) are soft closed sets in \(\mathcal{T}(\mathbf{\Sigma})\) and \((C,E)\widetilde{\bigcap}(D,E)=\{(e,C(e)\cap D(e)):e\in E\}=\widetilde{\Phi}\). Since \(\mathcal{T}(\mathbf{\Sigma})\) is soft normal, there exist disjoint soft open sets \((U,E),(V,E)\) such that \((C,E)\widetilde{\subseteq}(U,E)\) and \((D,E)\widetilde{\subseteq}(V,E)\). This implies that \(C(e)\subseteq U(e)\), \(D(e)\subseteq V(e)\) and \(U(e)\cap V(e)=\emptyset\) for each \(e\in E\). Thus, \(\Sigma_{e}\) is normal for each \(e\in E\).
**Corollary 5.14**.: _If \((X,\Sigma,E)\) is soft normal, then \(\Sigma_{e}\) is normal for each \(e\in E\)._
Proof.: It is an immediate consequence of Lemma 3.4 and Theorem 5.13.
**Remark 5.15**.: _Notice that the above corollary is not true if we replace the word "normal" by "\(T_{4}\)"; see Example 5 in [31]._
The following example demonstrates that if one \(\Sigma_{e}\) is not normal, then \(\mathcal{T}(\mathbf{\Sigma})\) needs not be soft normal:
**Example 5.16**.: _Let \(X=\{x_{1},x_{2},x_{3}\}\) and \(E=\{e_{1},e_{2}\}\). Take \(\Sigma_{e_{1}}=\{\emptyset,X\}\) and \(\Sigma_{e_{2}}=\{\emptyset,\{x_{1}\},\{x_{1},x_{2}\},\{x_{1},x_{3}\},X\}\). One can easily verify that \(\Sigma_{e_{1}}\) is normal but not \(\Sigma_{e_{2}}\). The closed sets \(\{x_{2}\},\{x_{3}\}\) in \(\Sigma_{e_{2}}\) cannot be separated. The soft topology_
\[\mathcal{T}(\mathbf{\Sigma})=\big{\{}\widetilde{\Phi},(F_{1},E),(F_{2},E),(F_ {3},E),(F_{4},E),(F_{5},E),(F_{6},E),(F_{7},E),(F_{8},E),\widetilde{X}\big{\}},\]
_where_
\[(F_{1},E) =\{(e_{1},\emptyset),(e_{2},\{x_{1}\})\},\] \[(F_{2},E) =\{(e_{1},\emptyset),(e_{2},\{x_{1},x_{2}\})\},\] \[(F_{3},E) =\{(e_{1},\emptyset),(e_{2},\{x_{1},x_{3}\})\},\] \[(F_{4},E) =\{(e_{1},\emptyset),(e_{2},X)\},\] \[(F_{5},E) =\{(e_{1},X),(e_{2},\emptyset)\},\] \[(F_{6},E) =\{(e_{1},X),(e_{2},\{x_{1}\})\},\] \[(F_{7},E) =\{(e_{1},X),(e_{2},\{x_{1},x_{2}\})\},\text{ and }\] \[(F_{8},E) =\{(e_{1},X),(e_{2},\{x_{1},x_{3}\})\},\]
_is not a soft normal space. Indeed, \(\{(e_{1},\emptyset),(e_{2},\{x_{3}\})\}\) and \(\{(e_{1},X),(e_{2},\{x_{2}\})\}\) are disjoint soft closed sets in \(\mathcal{T}(\mathbf{\Sigma})\), but no soft open sets can separate them._
**Remark 5.17**.: _Notice that a more general argument proves the above claim and gives another proof of the converse part of Theorem 5.13. If \(\Sigma_{\bar{e}}\) is not normal for some \(\bar{e}\in E\), then there are closed sets \(A,B\) in \(\Sigma_{\bar{e}}\) which cannot be separated by open sets. The soft sets \((F_{A}^{\bar{e}},E),(F_{\bar{e}}^{B},E)\) are closed and disjoint in \(\mathcal{T}(\mathbf{\Sigma})\). Suppose there exist soft open sets \((G,E),(H,E)\) in \(\mathcal{T}(\mathbf{\Sigma})\) such that \((F_{A}^{\bar{e}},E)\widetilde{\subseteq}(G,E)\), \((F_{\bar{e}}^{B},E)\widetilde{\subseteq}(H,E)\) and \((G,E)\widetilde{\bigcap}(H,E)=\widetilde{\Phi}\). Then \(A\subseteq G(\bar{e})\), \(B\subseteq H(\bar{e})\) and \(G(\bar{e})\cap H(\bar{e})=\emptyset\), a contradiction._
## Conclusion
This study develops a methodical understanding of the connections between a system of crisp topologies and the soft topology produced by it. The procedure is carried out with the help of two formulas. If we start from an original soft topology, it produces a system of crisp topologies, and with our formulas we can generate two different new soft topologies. We have discussed the relationships between these soft topologies. Moreover, we show that the soft topology resulting from Formula 1 is always finer than the original one, while the soft topology generated by Formula 2 is incomparable with it. We also see that two different original soft topologies may generate a single soft topology by either of the formulas. Furthermore, we study the preservation of separation axioms between the system of crisp topologies and the soft topology generated by it. More precisely, we show that the Hausdorff and normality properties behave well when passing to soft topologies and back, whereas the other separation axioms act differently. If one of the crisp topologies is \(T_{0}\) (resp. \(T_{1}\)), then the resulting soft topology is soft \(T_{0}\) (resp. soft \(T_{1}\)); the converse also holds for \(T_{0}\). All of the crisp topologies are regular when the soft topology generated by them is soft regular.
## Ethical approval
This article does not contain any studies with human participants or animals performed by the author.
## Funding details
This article received no external funding.
## Conflict of interest
The author declares no conflict of interest.
## Availability of data and materials
Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.
## Authorship contribution
The authors dealt with the conceptualization, formal analysis, supervision, methodology, investigation, and writing original draft preparation. They also contributed equally in the formal analysis; writing, review and editing. They read and approved the final version of the article. |
2310.04946 | Transferable Deep Clustering Model | Deep learning has shown remarkable success in the field of clustering
recently. However, how to transfer a trained clustering model on a source
domain to a target domain by leveraging the acquired knowledge to guide the
clustering process remains challenging. Existing deep clustering methods often
lack generalizability to new domains because they typically learn a group of
fixed cluster centroids, which may not be optimal for the new domain
distributions. In this paper, we propose a novel transferable deep clustering
model that can automatically adapt the cluster centroids according to the
distribution of data samples. Rather than learning a fixed set of centroids,
our approach introduces a novel attention-based module that can adapt the
centroids by measuring their relationship with samples. In addition, we
theoretically show that our model is strictly more powerful than some classical
clustering algorithms such as k-means or Gaussian Mixture Model (GMM).
Experimental results on both synthetic and real-world datasets demonstrate the
effectiveness and efficiency of our proposed transfer learning framework, which
significantly improves the performance on target domain and reduces the
computational cost. | Zheng Zhang, Liang Zhao | 2023-10-07T23:35:17Z | http://arxiv.org/abs/2310.04946v1 | # Transferable Deep Clustering Model
###### Abstract
Deep learning has shown remarkable success in the field of clustering recently. However, how to transfer a trained clustering model on a source domain to a target domain by leveraging the acquired knowledge to guide the clustering process remains challenging. Existing deep clustering methods often lack generalizability to new domains because they typically learn a group of fixed cluster centroids, which may not be optimal for the new domain distributions. In this paper, we propose a novel transferable deep clustering model that can automatically adapt the cluster centroids according to the distribution of data samples. Rather than learning a fixed set of centroids, our approach introduces a novel attention-based module that can adapt the centroids by measuring their relationship with samples. In addition, we theoretically show that our model is strictly more powerful than some classical clustering algorithms such as k-means or Gaussian Mixture Model (GMM). Experimental results on both synthetic and real-world datasets demonstrate the effectiveness and efficiency of our proposed transfer learning framework, which significantly improves the performance on target domain and reduces the computational cost.
## 1 Introduction
Clustering is one of the most fundamental tasks in the field of data mining and machine learning that aims at uncovering the inherent patterns and structures in data, providing valuable insights in diverse applications. In recent years, deep clustering models [20; 40; 25] have emerged as a major trend in clustering techniques for complex data due to their superior feature extraction capabilities compared to traditional shallow methods. Generally, a feature extracting encoder such as deep neural networks is first applied to map the input data to an embedding space, then traditional clustering techniques such as \(k\)-means are applied to the embeddings to facilitate the downstream clustering tasks [10; 28]. There are also several recent works [33; 35; 37; 36; 15; 39] that integrate the feature learning process and clustering into an end-to-end framework, which yield high performance for large-scale datasets.
While existing deep approaches have achieved notable success on clustering, they primarily focus on training a model to obtain optimal clustering performance on data from a given domain. When data from a new domain is present, an interesting question is whether we can leverage the knowledge acquired by the model on trained domains to guide the clustering process in new domains. Unfortunately, existing deep clustering models can hardly be transferred from one domain to another. This limitation arises primarily from the fixed centroid-based learning approach employed by these methods. As illustrated in Figure 1, discrepancies often exist between the distributions of the source and target domains. Consequently, the learned fixed centroids may no longer be suitable for the target domain, leading to suboptimal clustering results. However, training a new model from scratch for each domain incurs a substantial computational burden. More importantly, the acquired knowledge pertaining to the intra- and inter-cluster structures and patterns remains underutilized,
impeding its potential to guide the clustering process on new data from similar domains. These limitations significantly hinder the practicability of deep clustering methods.
To address these limitations, there is a need for transferable deep clustering models that can leverage acquired knowledge from trained domains to guide clustering in new domains. By transferring the underlying principles of clustering on trained source domains, the model could learn how to cluster better and adapt such knowledge to clustering new data in the target domains. Unfortunately, there exists no trivial way to directly generalize existing deep clustering methods due to several major challenges: (1) **Difficulty in unsupervised learning the shared knowledge among different domains.** In clustering scenarios, where labeled data is unavailable, extracting meaningful and transferable knowledge that capture the commonalities of underlying cluster structures across domains is challenging. (2) **Difficulty in ensuring the learned knowledge can be adapted and customized to target domains.** As shown in Figure 1(b), the distribution discrepancies between source and target domains can significantly harm the clustering performance of existing deep clustering models. Adapting the shared knowledge to new domains remains a challenging task in order to mitigate the negative impact of these distribution discrepancies. (3) **Difficulty in theoretically ensuring a stable learning process of clustering module.** Unlike supervised learning tasks, clustering models lack labeled data to provide guidance during training, making it even more crucial to establish theoretical guarantees for stability. Addressing this challenge requires developing theoretical frameworks that can provide insights into the stability and convergence properties of clustering algorithms.
In order to address the above-mentioned challenges, in this paper we propose a novel method named **T**ransferable **D**eep **C**lustering **M**odel (TDCM). To address the first challenge, we introduce an end-to-end learning framework that can jointly optimize the feature extraction encoder and a learnable clustering module. This framework aims to leverage the learned model parameters to capture the shared intra-cluster and inter-cluster structure derived from trained cluster patterns. Therefore, the shared knowledge can be effectively transferred to unseen data from new domains. To solve the second challenge, instead of optimizing a fixed set of centroids, a novel learnable attention-based module is proposed for the clustering process to automatically adapt centroids to the new domains, as illustrated in Figure 1(c). Therefore, the learned clustering model is not limited to the trained source domains and can be easily generalized to other domains. Specifically, this module enables the updating of centroids through a cluster-driven bi-partite attention block, allowing the model to be aware of the similarity relationships among data samples and capture the underlying structures and patterns. Furthermore, we provide theoretical evidence to demonstrate the strong expressive power of the proposed attention-based module in representing the relationships among data samples. Our theoretical analysis reveals that traditional centroid-based clustering models like \(k\)-means or GMM can be considered as special cases of our model. This theoretical proof highlights the enhanced capabilities of our approach compared to traditional clustering methods, emphasizing its potential for mining complex cluster patterns from data. Finally, we demonstrate the effectiveness of our proposed framework on both synthetic and real-world datasets. The experimental results show that our method can achieve strongly competitive clustering performance on unseen data by a single forward pass.
Figure 1: Problem of interest. (a) The cluster centroids learned from source domain can perfectly cluster source data samples; (b) The fixed centroids are not reliable to cluster target samples due to the distribution shift between source samples and target samples; (c) The cluster centroids are adapted to optimal position to better cluster the target samples. _Best viewed in color._
Related Works
**Deep clustering models.** Existing deep clustering methods can be classified into two main categories: separate and joint optimization. The separate-optimization methods typically first train a feature extractor with a self-supervised task such as a deep autoencoder, and then traditional clustering methods such as \(k\)-means [10], GMM [37] or spectral clustering [1] are applied to obtain the clustering results. There are also some works [27] using density-based clustering algorithms such as DBSCAN [26] to avoid an explicit choice of the number of centroids. However, the separate methods require a two-step optimization and lack the ability to train the model in an end-to-end manner to learn representations that are more suitable for clustering. In contrast, the joint methods are becoming more popular in the era of deep learning. One prominent approach is the Deep Embedded Clustering (DEC) model [33], which leverages an autoencoder network to map data to lower-dimensional representations and then optimizes a clustering loss, the KL-divergence between the soft assignments of data to centroids and an adjusted target distribution with concentrated cluster assignments. The deep clustering network (DCN) [35] jointly optimizes the dimensionality reduction and \(k\)-means clustering objectives by learning a deep autoencoder and a set of \(k\)-means centroids in the embedding space. JULE [36] formulates the joint learning in a recurrent framework, which incorporates an agglomerative clustering technique as a forward pass in neural networks. More recently, some works [15; 34; 24] also propose to use contrastive learning with data augmentation techniques to obtain more discriminative representations for downstream clustering tasks.
However, most existing deep clustering methods focus on optimizing a fixed set of centroids, which limits their transferability as they struggle to handle distribution drift between different source and target domains. In contrast, our proposed model takes a different approach by adapting the centroids to learned latent embeddings, allowing it to be aware of distribution drift between domains and enhance its transferability.
**Attention models.** Attention models [2; 29; 8] have gained significant attention in the field of deep learning, revolutionizing various tasks across natural language processing, computer vision, and sequence modeling. These works collectively demonstrate the versatility and effectiveness of attention models in capturing informative relationships between data samples.
**Deep metric learning.** Our method is also related to deep metric learning methods that aim to learn representations from high-dimensional data in such a way that the similarity or dissimilarity between samples can be accurately measured. One prominent approach is the Contrastive Loss [7], which encourages similar samples to have smaller distances in the embedding space. Siamese networks [5] learns embeddings by comparing pairs of samples and optimizing the contrastive loss. More recently, the Angular Loss [31] incorporates angular margins to enhance the discriminative power of the learned embeddings. Proxy-NCA [21] employs proxy vectors to approximate the intra-class variations, enabling large-scale metric learning.
**Connection with Unsupervised Domain Adaption (UDA) methods.** While both our work and existing Unsupervised Domain Adaptation (UDA) methods [6; 18; 16] involve transferring models from source domains to target domains, the primary goal of our paper differs significantly from UDA tasks. UDA methods assume the presence of labeled data in the source domains, allowing the model to be trained in a supervised manner. In contrast, our paper focuses on a scenario where no labels are available in the source domain, necessitating the use of unsupervised learning techniques. This key distinction highlights the unique challenges and approaches we address in our research.
## 3 Preliminaries
In this section, we first formally define the problem formulation of transferable clustering task and then present the key challenges involved in designing an effective transferable deep clustering model.
In our study, we focus on a collection of datasets denoted as \(\mathcal{D}=\{D_{1},D_{2},\ldots,D_{m}\}\). Each dataset \(D_{j}\) is sampled from a joint probability distribution \(p(\mathcal{D})\). Within each sampled dataset \(D_{j}\), we have a set of high-dimensional feature vectors denoted as \(D_{j}=\{\mathbf{x}_{i}^{j}\}_{i=1}^{N_{j}}\), where \(\mathbf{x}_{i}^{j}\) represents the feature vector for the \(i\)-th sample. Our objective is to learn shared knowledge in clustering from a subset of datasets, referred to as the training set \(D_{s}\) (source), and utilize this acquired knowledge to predict the clustering patterns on newly sampled unseen datasets, serving as the test set \(D_{t}\) (target).
To achieve this, we aim to learn a clustering model denoted as \(f\), trained on the source datasets \(D_{s}\). The model \(f\) partitions each source dataset \(\{\mathbf{x}_{i}^{s}\}_{i=1}^{N_{s}}\) into \(K\) clusters in an unsupervised manner, where \(K\) is the desired number of clusters. Our goal is to maximize the intra-cluster similarities and minimize the inter-cluster similarities by learning the clustering rule from the training datasets. Subsequently, we evaluate the clustering performance of the learned function \(f\) on the test target sets \(D_{t}\). By leveraging the knowledge acquired during training, we aim to accurately predict the cluster patterns in the test datasets.
## 4 Methodology
To address the aforementioned challenges, we propose a novel method named **T**ransferable **D**eep **C**lustering **M**odel (TDCM). To ensure that the shared clustering knowledge among domains can be learned in an unsupervised manner, we propose an end-to-end learning framework that jointly optimizes the feature extraction encoder and a learnable clustering module, as depicted in Figure 2(a). The framework aims to utilize the learned model parameters to capture the shared intra-cluster and inter-cluster structure derived from trained cluster patterns. Consequently, this enables effective transfer of the shared knowledge to unseen data from new domains. To adjust the learned knowledge to the target domains, instead of optimizing a fixed set of centroids, a novel learnable attention-based module is proposed to automatically adapt centroids to the new domains, as shown in Figure 2(b). Therefore, the learned clustering model is not restricted to the trained source domains and can be easily generalized to other domains. Specifically, this module integrates a cluster-driven bi-partite attention block to update centroids, considering the similarity relationships among data samples and capturing underlying structures and patterns. Furthermore, we provide theoretical evidence to demonstrate the strong expressive power of the proposed attention-based module in representing the relationships among data samples. Our theoretical analysis reveals that traditional centroid-based clustering models like \(k\)-means or GMM can be considered as special cases of our model. This theoretical proof highlights the enhanced capabilities of our approach compared to traditional clustering methods, emphasizing its potential for mining complex cluster patterns from data.
### Transferable cluster centroids learning framework
As previously discussed, existing deep clustering models typically treat centroids as fixed learnable parameters, which limits their ability to generalize effectively to unseen data. To address this limitation, we propose a novel clustering framework that can dynamically adjust the centroids based
Figure 2: (a) The overall framework of the proposed approach. An encoder is first applied to extract latent embeddings \(\mathbf{Z}\) from input samples \(\mathbf{X}\). Then the initial centroids will forward pass a series of Learnable Centroids Updating Block (LCUB) to learn the underlying similarities with centroids to reveal the cluster patterns; (b) The detailed architecture of LCUB. The current centroids \(\mathbf{C}^{(l)}\) and latent sample embeddings \(\mathbf{Z}\) are forwarded to form a bi-partite graph to calculate the assignment weights by pairwise attention scores, then the centroids are updated by the computed assignment weights.
on the extracted sample embeddings. Consequently, the centroids are dynamically adapted based on the distribution of sample embeddings, endowing the model with the capability to effectively transfer to new domains. As depicted in Figure 2(a), an encoder \(g\) is first utilized to extract latent embeddings \(\mathbf{Z}=g_{\phi}(\mathbf{X};\phi)\). Then the adaptation process involves a forward pass through a series of centroid-updating blocks: \(\{\mathbf{c}_{j}^{(0)}\}_{j=1}^{K}\rightarrow\{\mathbf{c}_{j}^{(1)}\}_{j=1}^{K}\rightarrow\ldots\rightarrow\{\mathbf{c}_{j}^{(L)}\}_{j=1}^{K}\), where each block consists of two steps: assignment and update. In the assignment step of the \(l\)-th (\(l\in[0,L]\)) block, we compute the probability \(\delta_{ij}\) that assigns the data sample \(\mathbf{z}_{i}\) to the current cluster centroid \(\mathbf{c}_{j}^{(l)}\) using a score function \(\ell(\mathbf{z}_{i},\mathbf{c}_{j}^{(l)})\), which captures the underlying similarity relationships among samples. Subsequently, we update the cluster centroids based on the assigned data points. The updating process can be mathematically formalized as:
\[\begin{split}\mathbf{c}_{j}^{(l+1)}&=\frac{1}{\sum_{i=1}^{N}\delta_{ij}^{(l+1)}}\sum_{i=1}^{N}\delta_{ij}^{(l+1)}\mathbf{z}_{i},\\ \delta_{ij}^{(l+1)}&=\frac{\exp{(\ell(\mathbf{z}_{i},\mathbf{c}_{j}^{(l)})/\tau)}}{\sum_{j^{\prime}=1}^{K}\exp{(\ell(\mathbf{z}_{i},\mathbf{c}_{j^{\prime}}^{(l)})/\tau)}},\end{split} \tag{1}\]
where \(\tau\) denotes the temperature hyper-parameter.
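For concreteness, one block of Equation 1 can be sketched as follows. This is only an illustrative implementation: it uses the per-cluster normalization \(\sum_{i}\delta_{ij}^{(l+1)}\) and replaces the learnable score function by its Euclidean special case (cf. Theorem 4.2); the toy data and the number of blocks are arbitrary choices of ours.

```python
import torch

def update_centroids(Z, C, tau=1.0):
    """One centroid-updating block following Eq. (1), with the plain negative
    squared Euclidean distance standing in for the learnable score function."""
    scores = -torch.cdist(Z, C).pow(2)             # (N, K): l(z_i, c_j)
    delta = torch.softmax(scores / tau, dim=1)     # soft assignments, rows sum to 1
    # Weighted mean of the samples softly assigned to each centroid.
    C_new = (delta.T @ Z) / delta.sum(dim=0, keepdim=True).T
    return C_new, delta

# Toy usage: N = 6 samples, K = 2 centroids, b = 2 dimensional embeddings.
Z = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
C = torch.eye(2)                                   # orthogonal initialization
for _ in range(4):                                 # L = 4 blocks
    C, delta = update_centroids(Z, C)
print(C)                                           # centroids move toward the two groups
```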
### Learnable centroids updating module
Given the overall updating procedure described earlier, a key consideration is the choice of the score function \(\ell(\mathbf{z}_{i},\mathbf{c}_{j})\) to capture the similarity relationship between samples and centroids, thereby capturing the underlying cluster structure. Traditionally, a common approach is to use handcrafted score functions like the Euclidean distance \(\ell(\mathbf{z}_{i},\mathbf{c}_{j})=\|\mathbf{z}_{i}-\mathbf{c}_{j}\|_{2}\). However, designing a specific score function requires domain knowledge and lacks generalizability across different domains.
To address this issue, we propose a learnable score function \(\ell(\mathbf{z}_{i},\mathbf{c}_{j};\mathbf{W})\) by introducing learnable weights \(\mathbf{W}\) that automatically capture the relational metrics between samples in a data-driven manner. Notably, the formulation in Equation 1 resembles a bi-partite graph structure of centroids and samples, which is illustrated in Figure 2(b). An attention-like mechanism, which selectively allocates resources based on the relevance of information, can be constructed based on the bi-partite structure. Since the goal of updating centroids is to gradually push centroids to represent a group of similar samples, ideally the score function \(\ell(\mathbf{z}_{i},\mathbf{c}_{j};\mathbf{W})\) should achieve its maximum value when \(\mathbf{z}_{i}=\mathbf{c}_{j}\). However, a common design of attention mechanism can not guarantee this property due to the arbitrary choice of learnable parameters \(\mathbf{W}\) (see our proof in Appendix Theorem A.1).
To solve this issue from a theoretical perspective, we propose a novel clustering-driven bi-partite attention module with appropriate constraints on the parameters of learnable matrices. Specifically, the score function is designed as \(\ell(\mathbf{z}_{i},\mathbf{c}_{j};\mathbf{W}_{Q},\mathbf{W}_{K})=-\sigma( \mathbf{W}_{Q}(\mathbf{z}_{i}-\mathbf{c}_{j}^{(l)})\cdot\mathbf{W}_{K}( \mathbf{z}_{i}-\mathbf{c}_{j}^{(l)}))/\tau\) with two learnable weight matrices and we rewrite the Equation 1 as:
\[\begin{split}\delta_{ij}^{(l+1)}&=\frac{\exp{(-\sigma( \mathbf{W}_{Q}(\mathbf{z}_{i}-\mathbf{c}_{j}^{(l)})\cdot\mathbf{W}_{K}( \mathbf{z}_{i}-\mathbf{c}_{j}^{(l)}))/\tau)}}{\sum_{j=1}^{K}\exp{(-\sigma( \mathbf{W}_{Q}(\mathbf{z}_{i}-\mathbf{c}_{j}^{(l)})\cdot\mathbf{W}_{K}( \mathbf{z}_{i}-\mathbf{c}_{j}^{(l)}))/\tau)}},\\ \mathbf{W}_{Q}&=\mathbf{W}_{Q}^{\intercal},\mathbf{W }_{K}=\mathbf{W}_{K}^{\intercal},\end{split} \tag{2}\]
where \(\mathbf{W}_{Q}\) and \(\mathbf{W}_{K}\) are two learnable real-symmetric matrices and \(\sigma\) is a continuous non-decreasing nonlinear activation function (e.g., ReLU [22] or LeakyReLU [19]).
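A minimal sketch of this score function is given below. It is not the authors' released code: enforcing the real-symmetry of \(\mathbf{W}_{Q}\) and \(\mathbf{W}_{K}\) by symmetrizing unconstrained parameters, as well as the near-identity initialization, are our own assumptions.

```python
import torch
import torch.nn as nn

class ClusterScore(nn.Module):
    """Sketch of l(z, c) = -sigma(W_Q (z - c) . W_K (z - c)) / tau from Eq. (2);
    the symmetry constraint is enforced by construction."""

    def __init__(self, dim, tau=1.0):
        super().__init__()
        self.A_q = nn.Parameter(torch.eye(dim) + 0.01 * torch.randn(dim, dim))
        self.A_k = nn.Parameter(torch.eye(dim) + 0.01 * torch.randn(dim, dim))
        self.act = nn.ReLU()
        self.tau = tau

    def forward(self, Z, C):
        W_q = 0.5 * (self.A_q + self.A_q.T)        # real-symmetric by construction
        W_k = 0.5 * (self.A_k + self.A_k.T)
        diff = Z.unsqueeze(1) - C.unsqueeze(0)     # (N, K, dim): pairwise z_i - c_j
        q = diff @ W_q                             # (N, K, dim)
        k = diff @ W_k
        return -self.act((q * k).sum(-1)) / self.tau   # (N, K) scores

score_fn = ClusterScore(dim=2)
delta = torch.softmax(score_fn(torch.randn(6, 2), torch.eye(2)), dim=1)
```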
**Theorem 4.1**.: _The score function \(\ell(\mathbf{z}_{i},\mathbf{c}_{j};\mathbf{W}_{Q},\mathbf{W}_{K})=-\sigma( \mathbf{W}_{Q}(\mathbf{z}_{i}-\mathbf{c}_{j}^{(l)})\cdot\mathbf{W}_{K}( \mathbf{z}_{i}-\mathbf{c}_{j}^{(l)}))/\tau\) defined in Equation 2 can guarantee that \(\forall\mathbf{z}_{i}\in\mathbb{R}^{b}\), we have \(\ell(\mathbf{z}_{i},\mathbf{c}_{j})\leq\ell(\mathbf{c}_{j},\mathbf{c}_{j})\)._
Proof.: We first define \(\mathbf{p}=\mathbf{z}_{i}-\mathbf{c}_{j}^{(l)}\) and rewrite the score function as \(\ell=-\sigma(\mathbf{W}_{Q}\mathbf{p}\cdot\mathbf{W}_{K}\mathbf{p})/\tau\). We rewrite the inner product part as
\[\mathbf{W}_{Q}\mathbf{p}\cdot\mathbf{W}_{K}\mathbf{p}=(\mathbf{W}_{Q}\mathbf{p})^{\intercal}\mathbf{W}_{K}\mathbf{p}=\mathbf{p}^{\intercal}(\mathbf{W}_{Q}^{\intercal}\mathbf{W}_{K})\mathbf{p}.\]
Since \(\mathbf{W}_{Q}\) and \(\mathbf{W}_{K}\) are two real-symmetric matrices, \(\mathbf{W}_{Q}^{\intercal}\mathbf{W}_{K}\) is a positive-definite matrix. For any nonzero real vector \(\mathbf{p}\), we have \(\mathbf{p}^{\intercal}(\mathbf{W}_{Q}^{\intercal}\mathbf{W}_{K})\mathbf{p}>0\). In addition, due to the property of continuous
and non-decreasing, the nonlinear activation function would not change the ordering of values. Therefore, for all \(\mathbf{z}_{i}\in\mathbb{R}^{b}\), we have \(\ell(\mathbf{z}_{i},\mathbf{c}_{j})\leq\ell(\mathbf{c}_{j},\mathbf{c}_{j})\).
In addition to the theoretical property that our centroid-updating module can group similar samples within the same clusters, we further prove, via the following theorem, that the score function defined in Equation 2 has stronger expressive power in representing the similarity relationships between data samples than traditional clustering techniques such as \(k\)-means or GMM:
**Theorem 4.2**.: _The score function of \(k\)-means and GMM models are special cases of our defined score function \(\ell(\mathbf{z}_{i},\mathbf{c}_{j};\mathbf{W}_{Q},\mathbf{W}_{K})\) in Equation 2._
The proof for the \(k\)-means algorithm is straightforward and given here; the proof for GMM models can be found in the Appendix.
Proof.: By setting the nonlinear function \(\sigma\) as the identity function and both \(\mathbf{W}_{\mathbf{Q}}\) and \(\mathbf{W}_{\mathbf{K}}\) as the identity matrix \(\mathbf{I}\), we can rewrite the score function as \(\ell(\mathbf{z}_{i},\mathbf{c}_{j})=-\|\mathbf{z}_{i}-\mathbf{c}_{j}\|_{2}^{2}/\tau\), which is the negative squared Euclidean distance. Then the model is equivalent to a soft \(k\)-means centroid-updating step. By setting \(\tau\to 0^{+}\), the process converges to the traditional \(k\)-means algorithm.
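The reduction described in the proof can also be checked numerically; the snippet below merely confirms that, with \(\sigma\) the identity and \(\mathbf{W}_{Q}=\mathbf{W}_{K}=\mathbf{I}\), the score of Equation 2 coincides with the scaled negative squared Euclidean distance.

```python
import torch

Z, C, tau = torch.randn(5, 3), torch.randn(2, 3), 1.0
diff = Z.unsqueeze(1) - C.unsqueeze(0)
score = -(diff * diff).sum(-1) / tau              # Eq. (2) with sigma = id, W_Q = W_K = I
assert torch.allclose(score, -torch.cdist(Z, C).pow(2) / tau, atol=1e-4)
```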
### Unsupervised learning objective function
In order to optimize the parameters of the proposed model, the overall objective function of our framework can be written as:
\[\min_{g_{\phi},\mathbf{W}_{Q},\mathbf{W}_{K}}\mathcal{L}_{\mathrm{clustering} }+\beta\mathcal{L}_{\mathrm{entropy}}. \tag{3}\]
Here the first term \(\mathcal{L}_{\mathrm{clustering}}\) is aimed at maximizing the similarity scores within clusters:
\[\begin{split}\mathcal{L}_{\mathrm{clustering}}&=- \sum_{l}^{L}\alpha^{(l)}\sum_{i}^{N}\sum_{j}^{K}\delta_{ij}^{(l)}\ell(g_{\phi }(\mathbf{x}_{i}),\mathbf{c}_{j}^{(l)};\mathbf{W}_{Q},\mathbf{W}_{K}),\\ s.t.\mathbf{W}_{Q}\mathbf{W}_{Q}^{\intercal}&= \mathbf{I},\mathbf{W}_{K}\mathbf{W}_{K}^{\intercal}=\mathbf{I}\end{split} \tag{4}\]
where \(\alpha^{(l)}\) are hyperparameters that tune the balance between blocks, and orthogonality constraints are incorporated to prevent the trivial solution of rescaling the embeddings. We can treat the constraints via Lagrange multipliers and solve an equivalent problem by turning the constraints into a regularization term.
Besides the clustering loss term, the entropy loss term is aimed at avoiding the trivial solution of assigning all samples to one single cluster:
\[\begin{split}\mathcal{L}_{\mathrm{entropy}}&=- \sum_{l}^{L}\alpha^{(l)}\sum_{j}^{K}\pi_{j}^{(l)}\log\pi_{j}^{(l)},\\ \pi_{j}^{(l)}&=\sum_{i}\delta_{ij}^{(l)}=\sum_{i} \frac{\exp{(-\sigma(\mathbf{W}_{Q}(\mathbf{z}_{i}-\mathbf{c}_{j}^{(l-1)}) \cdot\mathbf{W}_{K}(\mathbf{z}_{i}-\mathbf{c}_{j}^{(l-1)}))/\tau)}}{\sum_{j=1} ^{K}\exp{(-\sigma(\mathbf{W}_{Q}(\mathbf{z}_{i}-\mathbf{c}_{j}^{(l-1)})\cdot \mathbf{W}_{K}(\mathbf{z}_{i}-\mathbf{c}_{j}^{(l-1)}))/\tau)}},\end{split} \tag{5}\]
where \(\pi_{j}^{(l)}\) reflects the size of each cluster.
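A sketch of how the objective in Equations 3-5 might be assembled is shown below. It is a simplified illustration: the orthogonality constraints are handled as a quadratic penalty, the cluster-size term is written with the sign that spreads mass across clusters (which is how we read its stated purpose of avoiding collapsed assignments), and the hyperparameter names are ours.

```python
import torch

def tdcm_objective(scores_per_block, W_q, W_k, alpha, beta=1.0, lam=1.0):
    """scores_per_block[l] is the (N, K) matrix of scores l(z_i, c_j^{(l)})."""
    total = 0.0
    for a, s in zip(alpha, scores_per_block):
        delta = torch.softmax(s, dim=1)                  # soft assignments
        total = total - a * (delta * s).sum()            # clustering term of Eq. (4)
        pi = delta.sum(dim=0) / delta.sum()              # normalized soft cluster sizes
        total = total + beta * a * (pi * torch.log(pi + 1e-12)).sum()  # anti-collapse term
    eye = torch.eye(W_q.shape[0])
    penalty = ((W_q @ W_q.T - eye) ** 2).sum() + ((W_k @ W_k.T - eye) ** 2).sum()
    return total + lam * penalty                         # orthogonality handled as a penalty
```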
**Initialization of centroids.** Many previous studies use the centroids provided by traditional clustering methods such as \(k\)-means on the latent embeddings as the initialization of centroids. However, these methods usually require loading all data samples into memory and can hardly be generalized to a mini-batch version due to the permutation invariance of cluster centroids. To solve this issue, we propose to initialize the centroids \(\{\mathbf{c}_{j}^{(0)}\}_{j=1}^{K}\) before the first block as a set of orthogonal vectors in the embedding space, e.g., the rows of the identity matrix \(\mathbf{I}\).
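A minimal sketch of this initialization, assuming \(K\leq b\) so that \(K\) orthogonal rows exist in the embedding space:

```python
import torch

K, b = 10, 64
C0 = torch.eye(K, b)   # K orthogonal, unit-norm rows used as the initial centroids
```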
### Complexity analysis
Here we present the complexity analysis of our proposed dynamic centroids update module. In each block, we need to compute the pair-wise scores between centroids and data samples in Equation 2.
Assuming the embedding space dimension is denoted as \(b\), the time complexity of calculating the score functions in one block is \(O(NKb^{2})\). Consequently, performing \(L\) blocks entails a time complexity of \(O(LNKb^{2})\), where \(N\) represents the number of samples and \(K\) denotes the number of centroids. It is important to note that our framework naturally supports a mini-batch version, which significantly enhances the scalability of the model and improves its efficiency.
## 5 Experiments
In this section, the experimental settings are introduced first in Section 5.1, and then the performance of the proposed method on synthetic datasets is presented in Section 5.2. We further test the effectiveness of our method against distributional shift between domains on real-world datasets in Section 5.3. In addition, we verify the effectiveness of the framework components through ablation studies in Section 5.4 and measure the parameter sensitivity in Appendix B.2 due to the space limit.
### Experimental settings
**Synthetic datasets.** In order to assess the generalization capability of our proposed method towards unseen domain data, we conduct an evaluation using synthetic datasets. A source domain is first generated by sampling \(K\) equal-sized data clusters. The data features are sampled from multi-Gaussian distributions with randomized centers and covariance matrices, which is similar to previous works [11; 4]. Subsequently, a corresponding target domain is created by randomly perturbing the centers of the source domain clusters. This ensures the presence of distributional drift between the train and test set data. To provide comprehensive results, we vary the value of \(K\) and generate \(10\) distinct datasets for each value of \(K\). We train the clustering model on the source domain and test it on the target domain. Our experimental results are reported as an average of 5 runs on each dataset, with different random seeds employed to ensure robustness.
**Real-world datasets.** To further evaluate the generalization capability of our proposed method under real-world scenarios, commonly used real-world benchmark datasets are included. (1) **Digits**, which includes MNIST and USPS, is a standard digit recognition benchmark commonly used by previous studies [33; 35; 17; 16]. Following previous works [17; 16], we train the model on the source domain training set and test the model on the target domain test set. All input images are resized to \(32\times 32\). (2) **CIFAR-10**[13] is a commonly used image benchmark dataset for evaluating deep clustering models. We treat the training set as the source domain and the test set as the target domain. We introduce CenterCrop on the test set to create distribution drift.
**Comparison methods.** We evaluate the proposed method on both synthetic and real-world benchmark datasets and compare it with both traditional clustering and state-of-the-art deep clustering techniques such as \(k\)-means, GMM, DAE [30], DAEGMM [32], DEC [33], DCN [35], JULE [36], CC [15] and IDFD [34].
**Evaluation metrics.** In our evaluation of clustering performance, we employ widely recognized metrics, namely normalized mutual information (NMI) [3], adjusted Rand index (ARI) [38], and clustering accuracy (ACC) [3]. By combining NMI, ARI, and ACC, we can comprehensively demonstrate the quality and efficacy of our clustering results.
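For reference, the sketch below shows one standard way to compute these metrics (it is not taken from the paper's code): NMI and ARI via scikit-learn, and ACC via the Hungarian matching between predicted clusters and ground-truth classes.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one relabelling of predicted clusters to classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    k = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((k, k), dtype=np.int64)      # contingency matrix
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    row, col = linear_sum_assignment(cost.max() - cost)
    return cost[row, col].sum() / y_true.size

y_true, y_pred = [0, 0, 1, 1, 2, 2], [1, 1, 0, 0, 2, 2]
print(clustering_accuracy(y_true, y_pred))            # 1.0 after relabelling
print(normalized_mutual_info_score(y_true, y_pred))   # NMI
print(adjusted_rand_score(y_true, y_pred))            # ARI
```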
**Implementation details.** Our proposed model serves as a general framework, allowing for the integration of various commonly used deep representation learning techniques as the encoder. To ensure a fair comparison with previous works, we enforce the use of the same encoder for feature extraction across all models. Specifically, for synthetic data, we utilize a three-layer multilayer perceptron (MLP) as the encoder. For the Digits dataset, we employ the classical LeNet-5 network [14] as the encoder. Furthermore, for the CIFAR-10 datasets, we utilize the ResNet-18 network [9] as the encoder. We use \(L=4\) layers of blocks for the synthetic datasets and \(L=5\) for the real-world datasets. The temperature \(\tau\) is set as \(1.0\) throughout all experiments. We use a linearly increasing series of values for the weights \(\alpha\) penalizing each block in the loss function, where the final layer has the largest weight. We train the whole network through back-propagation and utilize Adam [12] as the optimizer. The initial learning rate is set as \(5e^{-3}\) for the synthetic datasets and \(5e^{-4}\) for the real-world datasets, and the weight decay rate is set as \(5e^{-4}\). The total number of training
epochs is \(500\) for the synthetic datasets and \(2,000\) for the real-world datasets. The batch size is set as 256 for synthetic and DIGITS datasets, and 128 for CIFAR-10 dataset. Data augmentation techniques are added like previous papers [15; 34] for the purpose of training discriminative representations for all the image datasets. The experiments are carried out on NVIDIA A6000 GPUs, which takes around 30 gpu-hours to train the model on CIFAR-10 dataset.
### Synthetic data results
Table 1 presents the clustering performance on both the trained source domain and the test target domain of the synthetic datasets. The results show the remarkable effectiveness of our proposed TDCM framework in achieving superior generalization performance when transferring the trained model from source to target sets across all synthetic scenarios. Specifically, TDCM consistently outperforms all the comparison methods, exhibiting an average improvement of 0.215, 0.243, and 0.157 on the NMI, ARI, and ACC metrics, respectively. Notably, the performance of the TDCM model on the test set exhibits only a marginal average decrease of 0.033, 0.044, and 0.024 on the NMI, ARI, and ACC metrics, respectively, compared to the training set. These results provide strong evidence that our proposed method significantly enhances the transferability of the clustering model, demonstrating its superior performance and robustness. On the other hand, although the comparison methods can achieve competitive performance on the training set, their performance drops significantly when transferred from the source to the target domains, which proves that their fixed set of optimized centroids cannot handle the distribution drift between domains.
### Real-world data results
We report the clustering results of the real-world datasets in Table 2. The results demonstrate the strength of our proposed TDCM framework, which consistently achieves the best performance on the test sets across all datasets. Specifically, TDCM consistently outperforms all the comparison methods, exhibiting an average improvement of 0.206, 0.342, and 0.439 on the MNIST, USPS, and CIFAR-10 test sets, respectively. Our results strongly demonstrate the enhanced transferability of our proposed clustering model, highlighting its superior performance. It is worth noting that the improvement of our model on the CIFAR-10 dataset is more significant than on the two digit datasets. A possible reason is that CIFAR-10 is more complex than the other two datasets, which may indicate that our model can handle complex data with high-dimensional features.
\begin{table}
\begin{tabular}{c|c|c c c|c c c|c c c|c c c} \hline \multirow{2}{*}{model} & \multicolumn{4}{c|}{\(K\)=2} & \multicolumn{4}{c|}{\(K\)=3} & \multicolumn{4}{c|}{\(K\)=5} & \multicolumn{4}{c}{\(K\)=10} \\ \cline{2-13} & & NMI & ARI & ACC & NMI & ARI & ACC & NMI & ARI & ACC & NMI & ARI & ACC \\ \hline \multirow{3}{*}{\(k\)-means} & source & 0.995 & 0.998 & 0.999 & 0.951 & 0.930 & 0.940 & 0.898 & 0.845 & 0.855 & 0.924 & 0.850 & 0.845 \\ \cline{2-13} & target & 0.622 & 0.596 & 0.828 & 0.883 & 0.846 & 0.916 & 0.750 & 0.653 & 0.745 & 0.782 & 0.643 & 0.731 \\ \cline{2-13} & diff & 0.373 & 0.402 & 0.171 & 0.068 & 0.084 & 0.024 & 0.148 & 0.192 & 0.110 & 0.142 & 0.207 & 0.114 \\ \hline \multirow{3}{*}{GMM} & source & 0.995 & 0.998 & 0.999 & 0.991 & 0.995 & 0.998 & 0.934 & 0.919 & 0.936 & 0.953 & 0.919 & 0.924 \\ & target & 0.622 & 0.597 & 0.824 & 0.902 & 0.877 & 0.948 & 0.756 & 0.674 & 0.775 & 0.788 & 0.661 & 0.755 \\ \cline{2-13} & diff & 0.373 & 0.401 & 0.175 & 0.089 & 0.118 & 0.050 & 0.178 & 0.245 & 0.161 & 0.165 & 0.258 & 0.169 \\ \hline \multirow{3}{*}{AE} & source & 0.995 & 0.998 & 0.999 & 0.949 & 0.928 & 0.939 & 0.899 & 0.852 & 0.861 & 0.919 & 0.855 & 0.854 \\ \cline{2-13} & target & 0.532 & 0.516 & 0.778 & 0.754 & 0.746 & 0.824 & 0.642 & 0.635 & 0.701 & 0.702 & 0.613 & 0.712 \\ \cline{2-13} & diff & 0.463 & 0.482 & 0.221 & 0.195 & 0.182 & 0.115 & 0.257 & 0.217 & 0.160 & 0.217 & 0.242 & 0.142 \\ \hline \multirow{3}{*}{DAEGMM} & source & 0.996 & 0.998 & 0.998 & 0.990 & 0.993 & 0.989 & 0.933 & 0.922 & 0.933 & 0.945 & 0.908 & 0.916 \\ \cline{2-13} & target & 0.522 & 0.507 & 0.713 & 0.842 & 0.827 & 0.878 & 0.696 & 0.704 & 0.734 & 0.690 & 0.610 & 0.687 \\ \cline{2-13} & diff & 0.474 & 0.491 & 0.285 & 0.148 & 0.166 & 0.111 & 0.237 & 0.218 & 0.199 & 0.255 & 0.298 & 0.229 \\ \hline \multirow{3}{*}{DEC} & source & 0.995 & 0.998 & 0.989 & 0.991 & 0.997 & 0.932 & 0.942 & 0.978 & 0.955 & 0.942 & 0.973 \\ \cline{2-13} & target & 0.692 & 0.636 & 0.828 & 0.889 & 0.851 & 0.905 & 0.766 & 0.683 & 0.785 & 0.701 & 0.605 & 0.713 \\ \cline{2-13} & diff & 0.303 & 0.362 & 0.171 & 0.100 & 0.140 & 0.092 & 0.166 & 0.259 & 0.193 & 0.254 & 0.337 & 0.260 \\ \hline \multirow{3}{*}{DCN} & source & 0.994 & 0.997 & 0.999 & 0.991 & 0.991 & 0.997 & 0.937 & 0.950 & 0.982 & 0.963 & 0.954 & 0.978 \\ \cline{2-13} & target & 0.654 & 0.498 & 0.719 & 0.703 & 0.608 & 0.795 & 0.643 & 0.698 & 0.742 & 0.689 & 0.599 & 0.753 \\ \cline{1-1} \cline{2-13} & diff & 0.340 & 0.499 & 0.280 & 0.288 & 0.383 & 0.202 & 0.294 & 0.252 & 0.240 & 0.274 & 0.355 & 0.275 \\ \hline \multirow{3}{*}{CC} & source & 0.990 & 0.992 & 0.998 & 0.980 & 0.981 & 0.998 & 0.938 & 0.950 & 0.981 & 0.962 & 0.963 & 0.982 \\ \cline{1-1} \cline{2-13} & target & 0.578 & 0.555 & 0.694 & 0.821 & 0.802 & 0.854 & 0.623 & 0.634 & 0.701 & 0.694 & 0.645 & 0.721 \\ \cline{1-1} \cline{2-13} & diff & 0.412 & 0.437 & 0.304 & 0.159 & 0.179 & 0.144 & 0.315 & 0.316 & 0.280 & 0.268 & 0.318 & 0.261 \\ \hline \multirow{3}{*}{TDCM} & source & 0.990 & 0.991 & 0.998 & 0.975 & 0.982 & 0.994 & 0.935 & 0.949 & 0.979 & 0.961 & 0.965 & 0.984 \\ \cline{1-1} \cline{2-13} & target & 0.989 & 0.995 & 0.999 & 0.953 & 0.957 & 0.984 & 0.901 & 0.896 & 0.951 & 0.885 & 0.863 & 0.925 \\ \cline{1-1} \cline{2-13} & diff & **0.001** & **-0.004** & **-0.001** & **0.022** & **0.025** & **0.010** & **0.034** & **0.053** & **0.028** & **0.076** & **0.102** & **0.059** \\ \hline \end{tabular}
\end{table}
Table 1: Clustering performance on the synthetic datasets with varying numbers of clusters \(K\). The models are trained on the source domain and evaluated on both the source and target domains; "diff" denotes the drop from source to target.
### Ablation studies
Here we investigate the impact of the proposed components of TDCM. We first consider variants that remove the real-symmetric constraints and the orthogonality constraints in our model, named _variant-R_ and _variant-O_, respectively. In addition, we also remove the entropy loss from our overall loss function, named _variant-E_. We present the results on two synthetic datasets (\(K=2,5\)) and the CIFAR-10 real-world dataset in Table 3, where we can observe a significant performance drop consistently for all variants. In particular, we observe that the standard deviations of all variants are larger than that of the full model, especially for _variant-R_, which removes the real-symmetric constraint. Such behavior may demonstrate the importance of these proposed constraints in guaranteeing a stable training process, which is highly consistent with our theoretical analysis.
### Visualization of centroids updating process
To illustrate the centroid-updating behavior learned by our designed module, we visualize the layer-wise updated centroids on the test set of the \(K=2\) synthetic dataset in Figure 3. From the visualization we can observe that, by forward passing through the updating blocks, the centroids are adapted to a clearer cluster structure by the learned similarity metric. It is worth noting that the final adapted centroids are not necessarily at the 'center' of the clusters, which demonstrates that the designed module can automatically find the underlying similarity metric between samples that is suitable for the given data.
## 6 Conclusions and Limitations
In this study, we introduce a novel framework called **T**ransferable **D**eep **C**lustering **M**odel (TDCM) to tackle the challenge of the limited generalization ability of previous end-to-end deep clustering techniques when faced with unseen domain data. Instead of optimizing a fixed set of centroids specific to the training source domain, our proposed TDCM employs an adaptive centroid updating module, enabling automatic adaptation of the centroids to the input domain data. As a result, our framework exhibits enhanced generalization capabilities to handle unseen domain data. To capture the intrinsic structure and patterns of clusters, we propose an attention-based learnable module, which learns
a data-driven score function for measuring the underlying similarity among samples. Theoretical analysis guarantees the effectiveness of our proposed module in extracting underlying similarity relationships, surpassing conventional clustering techniques such as \(k\)-means or Gaussian Mixture Models (GMM) in terms of expressiveness. Extensive experiments conducted on synthetic and real-world datasets validate the effectiveness of our proposed model in addressing distributional drift during the transfer of clustering knowledge from trained source domains to unseen target domains.
We acknowledge certain limitations associated with our proposed framework. One limitation is that TDCM is a centroid-based method, similar to previous centroid-based methods such as \(k\)-means, GMM, DEN, DEC, DCN, CC, and IDFD, and thus requires a predefined number of clusters. An inappropriate choice of the number of clusters can adversely affect the performance of the model. Additionally, our centroid updating module requires the updating matrix to be real-symmetric, implying that the hidden dimension of the clusters should remain fixed throughout the adaptation process. However, as the primary objective of this work is to pioneer deep clustering models for transfer learning across different domains and to propose a practical solution to this problem, the aforementioned challenges lie beyond the scope of this study. Nevertheless, we consider them promising avenues for future research.
|
2306.08642 | Nonlinear equilibria and transport processes in burning plasmas | In this work, we put forward a general phase-space transport theory in
axisymmetric tokamak plasmas based upon the concept of zonal state (ZS). Within
this theoretical framework, the ZS corresponds to a renormalized plasma
nonlinear equilibrium consisting of phase-space zonal structures (PSZS) and
zonal electromagnetic fields (ZFs) which evolve self-consistently with symmetry
breaking fluctuations and sources/collisions. More specifically, our approach
involves deriving governing equations for the evolution of particle
distribution functions (i.e, PSZS), which can be used to compute the
corresponding macro-/meso-scale evolving magnetized plasma equilibrium adopting
the Chew Goldberger Low (CGL) description, separating the spatiotemporal
microscale structures. The nonlinear physics of ZFs and of geodesic acoustic
modes/energetic particle driven geodesic acoustic modes is then analyzed to
illustrate the implications of our theory. | Matteo Valerio Falessi, Liu Chen, Zhiyong Qiu, Fulvio Zonca | 2023-06-14T17:16:24Z | http://arxiv.org/abs/2306.08642v3 | # Nonlinear equilibria and transport processes in burning plasmas
###### Abstract
In this work, we further develop a general phase-space transport theory in axisymmetric tokamak plasmas based upon the concept of zonal state (ZS). Within this theoretical framework, the ZS corresponds to a renormalized nonlinear equilibrium consisting of phase space zonal structures (PSZS) and zonal electromagnetic fields (ZFs) which evolve self-consistently with symmetry breaking fluctuations and sources/collisions. More specifically, our theoretical approach involves deriving governing equations for the distribution function as well as applying the Chew Goldberger Low (CGL) description to separate the spatiotemporal microscale structures and the macro-/meso-scale components of this equilibrium. The nonlinear physics of zonal flows and of geodesic acoustic modes/energetic particle driven geodesic acoustic modes is then analyzed in detail to illustrate the implications of this theoretical framework.
## 1 Introduction
Understanding the behavior of energetic particles (EPs) in fusion plasmas is crucial: due to their characteristic orbit sizes and predominant contribution to the reactor power balance, EPs play a critical role in mediating couplings between microscopic and macroscopic plasma dynamics and significantly influence the behavior of the plasma as a complex system [1, 2, 3]. A thorough description of EP transport processes is, therefore, essential in burning plasmas. These processes involve fluctuations resonantly excited by EPs, which have different time scales and structures with respect to thermal plasma instabilities and may induce nonlocal behaviors, including those generated by EP-driven shear Alfven waves. For these reasons, a global electromagnetic first-principle based approach is mandatory. Consequently, current research on EP-driven instabilities and
transport in fusion devices is generally recognized to be limited by the costly and time-consuming numerical frameworks used, such as global MHD hybrid-kinetic or fully gyrokinetic codes. While these simulations provide valuable insight into fundamental physics processes, they typically cover only a relatively limited time range of the dynamics or rely on the separation of various time scales and, thus, have limited predictive capability for transport processes.
To overcome this challenge, we have developed the phase space zonal structures (PSZS) transport theory [4, 5, 6]. PSZS represent slowly evolving (on the transport time scale) structures in the phase space that are not affected by collisionless dissipation and provide a proper definition of plasma nonlinear equilibrium distribution function. The evolution of the PSZS component of the distribution function extends the concept of plasma transport processes to the phase space [5] and, therefore, is particularly relevant for weakly collisional plasmas that exhibit significant deviations from local thermodynamic equilibrium; e.g., described by Maxwellian distribution functions. Notably, using the PSZS theoretical framework, the usual plasma transport equations can be obtained as a particular limiting case where the deviation from the local Maxwellian is small. Our previous work [5] demonstrated this point by focusing specifically on the calculation of energy and density transport equations. The consistency of the PSZS theory with "gyrokinetic transport theory [7]" and global gyrokinetic codes stems from its foundation in the well-established \(\delta f\) gyrokinetic theory, as emphasized in [5]. The novelty, however, stands in explicitly identifying the part of the toroidally symmetric \(\delta f\) that must be incorporated in the reference state, which, due to transport processes, evolves generally into a non-Maxwellian state. Over time, in fact, this contribution may increasingly deviate from the reference thermodynamic equilibrium due to nonlinear processes eventually invalidating the usual transport analyses that rely, e.g., on local Maxwellian equilibria. The role of PSZS in EP transport due to energetic particle modes (EPM) was recently discussed by gyrokinetic particle-in-cell simulations in Refs [8, 9]. Reference [9], in particular, numerically demonstrates linear scaling of the chirping rate with mode amplitude of nonlinear coherent EPM fluctuations, consistent with theoretical predictions [1, 2, 10, 11, 12]. By solving PSZS transport equations through a hierarchy of transport models [6] ranging from global gyrokinetics [13] to quasilinear theories, we can develop and validate advanced reduced EP transport models capable of capturing the long-time scale evolution of burning plasmas, and provide insights into the non-locality of the underlying transport processes [6].
In this work, we apply the PSZS transport theory to derive the governing equations for the zonal state (ZS) [5, 6], which defines the renormalized nonlinear equilibrium evolving on the transport time scale due to self-consistent symmetry breaking fluctuation, sources and collisions. The ZS, thus, consists of the PSZS and its counterpart, i.e., the zonal electromagnetic fields (ZFs), which represent the long-lived component of toroidally symmetric electromagnetic fields. In fact, the ZS does not evolve in the absence of symmetry breaking fluctuations and/or sources and collisions,
which is consistent with its definition as a proper nonlinear equilibrium. A more rigorous definition of the ZS is given below in Section 2.1. Here, we first derive the PSZS evolution equation in conservative form using the equilibrium constants of motion as phase space coordinates. Subsequently, we represent the same equation in the magnetic-drift/banana center frame using standard flux coordinates and the corresponding shift operator accounting for the gyrocenter magnetic drift motion in the slowly evolving equilibrium. After rigorously defining the renormalization of the equilibrium distribution function, we apply the Chew Goldberger Low (CGL) description [14] and describe the self-consistent modifications to the reference magnetic equilibrium by means of the macro-/meso-scopic component of the PSZS moments [15]. The evolution of the ZS discussed in this work expands upon the results in Ref. [5] and is predominantly due to toroidal symmetry breaking fluctuations. Here, as a further step, we derive an expression for the plasma polarizability that generalizes recently derived expressions to arbitrary geometry and equilibrium distribution functions, i.e., PSZS [16, 17, 18, 19]. We also show that the transport equation can be cast in the form of a flux surface averaged continuity equation including neoclassical transport in the banana regime as well as sources/sinks. An in-depth discussion of phase space transport processes due to toroidal symmetry breaking perturbations as well as sources/sinks will be reported in a separate work, where we will also address the possibility of constructing reduced transport models within a unified theoretical framework [20]. Meanwhile, since the role of finite frequency toroidally symmetric perturbations is peculiar and goes beyond the standard analysis of transport phenomena, as a further demonstration of the present theoretical framework, we have written a detailed Appendix for interested readers focusing on GAM/EGAM physics in general geometry, with particular application to the nonlinear dynamics of EGAM.
The article is structured as follows. In Section 2, we introduce the concept of ZS based on the notion of PSZS and ZFs, which is explored in more detail in Section 2.1. Next, in Section 2.2, we demonstrate how PSZS can be interpreted as a renormalization of the reference distribution function in the presence of a finite level of fluctuations. In Section 2.3, meanwhile, we explore the self-consistent modification of the reference magnetic equilibrium due to the PSZS. Section 3 focuses on the self-consistent evolution of the ZS, showing how a gyrokinetic transport theory on long time scales can be consistently developed within the present theoretical framework, adopting the conservative form of the nonlinear gyrokinetic equations and reconnecting to the previous work discussed in Ref. [5]. Finally, we summarize our findings and discuss future directions in Section 4. As a further illustrative example of the applications of the present theoretical framework, Appendix A presents a detailed discussion of the physics of GAM/EGAM in general geometry for interested readers.
## 2 Phase space zonal structures and the zonal state
As already mentioned in the Introduction, PSZS are characterized by being "slowly evolving" which means that they are not affected by collisionless dissipation, e.g.,
Landau damping [1, 2, 3, 5, 6]. To satisfy this criterion, PSZS must be calculated by a two-step averaging procedure: the axisymmetric particle response is first averaged along guiding center equilibrium orbits and then filtered to remove the fast spatiotemporal variations on the characteristic particle orbit length-scale and/or the hydrodynamic time-scale. Consequently, PSZS depend solely on the equilibrium invariants of motion, such as the particle energy (per unit mass) \(\mathcal{E}\)1, the magnetic moment \(\mu=v_{\perp}^{2}/2B_{0}+\ldots\) and the toroidal angular momentum \(P_{\phi}\)[21]. It is worth noting that any other combination of three invariants of motion can be used, for example involving the 'second invariant' \(J=m\oint v_{\parallel}dl=J(P_{\phi},\mathcal{E},\mu)\)[1]. In this section, we apply this approach to derive the governing equation for PSZS in conservative form. Furthermore, we derive the equations describing the deviation of the axisymmetric particle response from the PSZS and its dynamic evolution. Then, the notion of PSZS is used to introduce the concept of ZS, which, together with the ZFs defined below in Sec. 2.1, provides a proper definition of the plasma nonlinear equilibrium [5, 6, 22, 23] that evolves consistently with the (toroidal) symmetry breaking fluctuation spectrum as well as with sources and collisions.
Footnote 1: For simplicity of the present analysis, we assume that the equilibrium radial electric field, if it exists, corresponds to a sufficiently slow \(\mathbf{E}\times\mathbf{B}\) flow that is consistent with the gyrokinetic ordering and, thus, can be incorporated within the perturbed radial electric field. If needed, this assumption could be readily dropped.
### Orbit averaged particle response: PSZS and zonal state
Since PSZS depend only on the equilibrium constants of motion, their evolution equation can be readily cast using these as coordinates in the phase space. Proceeding along these lines, we write the phase space velocity appearing in the gyrokinetic equation [21, 24, 25] as the sum of two contributions, i.e. \(\dot{\mathbf{Z}}=\dot{\mathbf{Z}}_{0}+\delta\dot{\mathbf{Z}}\), representing, respectively, the integrable particle motion in the reference magnetic field and the remaining particle response that we generically attribute to the effect of fluctuations. This decomposition is general and could be applied to any nearly integrable (Hamiltonian) system. It assumes that the reference equilibrium, defined by the reference magnetic field and by the plasma profiles that are consistent with it and with the PSZS, varies on a sufficiently slow time scale. A more rigorous definition of reference or "equilibrium" magnetic field, is given in Sec. 2.3. The self-consistency of this description and approach can be rigorously checked a-posteriori. Consequently, the gyrokinetic equation in conservative form reads:
\[\frac{\partial}{\partial t}(DF)+\frac{\partial}{\partial\mathbf{Z}}\cdot \left(D\dot{\mathbf{Z}}_{0}F\right)+\frac{\partial}{\partial\mathbf{Z}}\cdot \left(D\delta\dot{\mathbf{Z}}F\right)=0\,, \tag{1}\]
where \(D\) is the velocity space Jacobian and \(F\) the gyro-center distribution function [24, 25]. Here, for the sake of simplicity, we have temporarily suppressed collisions and source terms. We now introduce \((\theta,\zeta,P_{\phi},\mathcal{E},\mu)\) as phase space coordinates2 where \(\theta\)
and \(\zeta\) are, respectively, poloidal and toroidal angle coordinates, and focus only on the zonal component of the distribution function, which is characterized by the toroidal mode number \(n=0\) and, for symmetry reasons, is the obvious starting point for the definition of an equilibrium distribution function in axisymmetric Tokamak plasmas. Without loss of generality, we assume an axisymmetric equilibrium magnetic field, i.e., \(\mathbf{B}_{0}=\hat{F}\mathbf{\nabla}\phi+\mathbf{\nabla}\phi\times\nabla\psi\) where \(\hat{F}=RB_{\phi}\) and \(\phi\) is the toroidal angle. In the equation above, we can re-write the term describing the equilibrium motion as:
\[\frac{\partial}{\partial\mathbf{Z}}\cdot\left(D\dot{\mathbf{Z}}_{0}F\right)_{z}=\nabla \cdot\left(D\dot{\mathbf{X}}_{0}F\right)_{z}=\frac{1}{\mathcal{J}_{P_{\phi}}}\frac {\partial}{\partial\theta}(D\mathcal{J}_{P_{\phi}}F\dot{\mathbf{X}}_{0}\cdot\mathbf{ \nabla}\theta)_{z}\,, \tag{2}\]
where the \(z\) subscript denotes the zonal (\(n=0\)) component, \(\mathcal{J}_{P_{\phi}}=\mathcal{J}(\partial P_{\phi}/\partial\psi)^{-1}\), \(\mathcal{J}=(\mathbf{\nabla}\zeta\cdot\mathbf{\nabla}\psi\times\mathbf{\nabla}\theta)^{-1}\) is the Jacobian in flux coordinates, and the toroidal symmetry of the reference state has been used along with \(\dot{\mathbf{X}}_{0}\cdot\mathbf{\nabla}P_{\phi}=0\) and \(\dot{\mathcal{E}}_{0}=0\). We can now apply orbit averaging in the reference equilibrium (slowly evolving in time); that is, an averaging operator along \(\theta\) on the gyrokinetic equation while using \(\mathcal{J}_{P_{\phi}}\) as weight. Assuming that the reference magnetic equilibrium is slowly evolving; e.g., on the resistive current diffusion time, we obtain:
\[\partial_{t}\oint d\theta\mathcal{J}_{P_{\phi}}DF_{z}+\oint d\theta\mathcal{J} _{P_{\phi}}\frac{\partial}{\partial\mathbf{Z}}\cdot(D\delta\dot{\mathbf{Z}}F)_{z}=0\,. \tag{3}\]
Recalling the governing equation in the absence of fluctuations, i.e., \(\dot{\psi}=-v_{\parallel}\partial_{\theta}G/(\mathcal{J}B_{\parallel}^{*})\), where \(G=\psi-RB_{\phi}v_{\parallel}/\Omega\simeq-(c/e)P_{\phi}\), and \(D=B_{\parallel}^{*}/|v_{\parallel}|\), with \(B_{\parallel}^{*}\equiv\mathbf{B}^{*}\cdot\mathbf{b}\), \(\mathbf{B}^{*}\equiv\nabla\times\mathbf{A}^{*}\), \((e/c)\mathbf{A}^{*}\equiv(e/c)\mathbf{A}_{0}+m\left(v_{\parallel}\mathbf{b}\right)\), we can recognize that the averaging is indeed a time averaging along the integrable particle orbit, denoted as
\[\overline{\left(\dots\right)}^{\rm(O)}=\tau_{b}^{-1}\oint d\theta(...)/\dot{ \theta}\;, \tag{4}\]
with \(\tau_{b}=\oint d\theta/\dot{\theta}\). Thus, we obtain the following (equilibrium) orbit averaged kinetic equation:
\[\frac{\partial}{\partial t}\overline{F_{z}}^{\rm(O)}+\frac{1}{\tau_{b}}\left[ \frac{\partial}{\partial P_{\phi}}\overline{\left(\tau_{b}\delta\dot{P}_{ \phi}F\right)_{z}}^{\rm(O)}+\frac{\partial}{\partial\mathcal{E}}\overline{ \left(\tau_{b}\delta\dot{\mathcal{E}}F\right)_{z}}^{\rm(O)}\right]=\overline{ \left(\sum_{s}C_{s}^{g}\left[F,\!F_{s}\right]+\mathcal{S}\right)}_{z}^{\rm(O)}\,, \tag{5}\]
where \(\delta\dot{P}_{\phi}=\delta\dot{\mathbf{X}}\cdot\mathbf{\nabla}P_{\phi}\), \(\delta\dot{\mathcal{E}}\) is defined in Eq.(14), and we have restored collisions and source terms on the RHS. The expressions of gyrokinetic collision operators of the considered test particles with the field particle species \(s\), as denoted by the subscript in \(C_{s}^{g}\), are given in Refs. [24, 25]. Meanwhile, for the sake of notation simplicity, the summation over different field particle species in the collisions term will be omitted from now on. Denoting the spatial-temporal slowly evolving component of \(\overline{F_{z}}^{\rm(O)}\), i.e.,
PSZS as \(\overline{F_{0}}^{\rm(O)}\equiv[\overline{F_{z}}^{\rm(O)}]_{S}\), the relevant evolution equation is obtained by additionally extracting the macro-/meso- scopic component of Eq. (5); i.e.:
\[\frac{\partial}{\partial t}\overline{F_{0}}^{\rm(O)}+\frac{1}{\tau_{b}}\left[ \frac{\partial}{\partial P_{\phi}}\overline{\left(\tau_{b}\delta\dot{P}_{\phi} \delta F\right)}^{\rm(O)}+\frac{\partial}{\partial\mathcal{E}}\overline{\left( \tau_{b}\delta\dot{\mathcal{E}}\delta F\right)}^{\rm(O)}_{z}\right]_{S}= \overline{C^{g}}^{\rm(O)}_{S}+\overline{S}^{\rm(O)}_{S}\,, \tag{6}\]
where \([\ldots]_{S}\) denotes an appropriate (ad hoc) spatio-temporal averaging procedure to be illustrated; and where we have postulated a bi-linear collision term such that:
\[\overline{C^{g}_{S}}^{\rm(O)}=\overline{C^{g}\left[\overline{F_{0}}^{\rm(O)},\overline{F_{0}}^{\rm(O)}\right]}^{\rm(O)}+\overline{\left[\overline{C^{g} \left[\overline{F_{0}}^{\rm(O)},\delta F\right]}^{\rm(O)}+\overline{C^{g} \left[\delta F,\overline{F_{0}}^{\rm(O)}\right]}^{\rm(O)}+\overline{C^{g} \left[\delta F,\delta F\right]}^{\rm(O)}\right]_{S}}\,. \tag{7}\]
In the previous expression we have introduced the following decomposition at each instant of time:
\[F=\overline{F_{0}}^{\rm(O)}+\delta F\,. \tag{8}\]
The aforementioned spatio-temporal averaging over the micro-scales is what allows us to separate \(\overline{F_{0}}^{\rm(O)}\) from \(\overline{F_{z}}^{\rm(O)}\), given by Eq. (5). It is arbitrary, to some extent, and can be specified for convenience according to the problem of interest; this point will be further investigated below. In the following subsection, we will show that the \(\sim[\ldots]_{S}\) term on the LHS of Eq. (6), with \(\delta F\rightarrow\overline{F_{0}}^{\rm(O)}\), can be self-consistently considered to vanish. In this regard, the approach proposed in the present analysis could be considered a "full-F" description [26, 27] of the nonlinearly evolving equilibrium, and a "delta-F" approach [28] for the (toroidal) symmetry breaking perturbations (cf. [29] for a general review of gyrokinetic simulations of turbulent transport).
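As a minimal numerical sketch of the orbit-average operator of Eq. (4), the snippet below assumes that the poloidal angle grid and \(\dot{\theta}\) along a closed (transit) orbit are available from an equilibrium orbit integrator; both the orbit data and the test integrand are hypothetical placeholders.

```python
import numpy as np

# Hypothetical closed-orbit data: poloidal angle grid and d(theta)/dt along the orbit.
theta = np.linspace(0.0, 2.0 * np.pi, 257)
theta_dot = 1.0 + 0.3 * np.cos(theta)  # placeholder for the actual orbit solution

def orbit_average(f_vals, theta, theta_dot):
    """Orbit average per Eq. (4): (1/tau_b) * closed integral of f dtheta/theta_dot."""
    tau_b = np.trapz(1.0 / theta_dot, theta)           # tau_b = closed integral of dtheta/theta_dot
    f_bar = np.trapz(f_vals / theta_dot, theta) / tau_b
    return f_bar, tau_b

# Example: orbit average of a theta-dependent quantity.
f_bar, tau_b = orbit_average(np.sin(theta) ** 2, theta, theta_dot)
print(f"tau_b = {tau_b:.3f}, orbit average = {f_bar:.3f}")
```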
Here, it is also worthwhile to briefly remark that the ratio between the third and the second terms of LHS in Eq. (5) can be estimated as \(\delta\,\mathcal{E}/\Delta\mathcal{E}\) with \(\delta P_{\phi}/\Delta P_{\phi}\sim\mathcal{O}(1)\), where \(\Delta\mathcal{E}\) and \(\Delta P_{\phi}\) are, respectively, PSZS characteristic scales in energy and toroidal angular momentum space; and \(\delta\mathcal{E}\) and \(\delta P_{\phi}\) are the corresponding nonlinear distortions due to the considered fluctuation spectrum. Using the typical nonlinear gyrokinetic ordering, this is consistent with the relatively small effect of the so-called parallel nonlinearity on fluctuation induced phase space transport. Meanwhile, this is not the case any longer for Eq. (6), where the two terms are generally of the same order. However, once the effect of the third term is integrated over in energy space, the corresponding overall effect can, again, be shown to be negligible at the leading order. Consistently, in Ref. [5] a PSZS transport theory has been formulated omitting the parallel non-linearity term and adopting the classical Frieman-Chen formulation of the nonlinear gyrokinetic equation, which is appropriate up to the leading order in the multi-spatiotemporal scale asymptotic expansion [21]. In the present work, the parallel nonlinearity is maintained; consistent with the conservative formulation of nonlinear gyrokinetics [24, 25].
Having introduced the concept of PSZS, we can decompose the whole gyrocenter particle response and, consequently, the zonal component of the gyrokinetic distribution
\(F_{z}\), into the sum of different terms accounting for the various relevant physics processes, i.e.:
\[F_{z}=\overline{F}_{z}^{\rm(O)}+\delta\tilde{F}_{z}=\overline{F_{0}}^{\rm(O)}+ \overline{\delta F_{z}}^{\rm(O)}+\delta\tilde{F}_{z}. \tag{9}\]
In particular, as already stated, the PSZS, \(\overline{F_{0}}^{\rm(O)}\), describes the evolution of the macro-/meso-scopic equilibrium. The phase space transport theory, derived in this work is primarily motivated by the fact that this contribution may increasingly deviate in time from the reference thermodynamic equilibrium due to nonlinear processes; eventually invalidating the usual transport analyses that rely, e.g., on local Maxwellian equilibria. Notable applications are analyses of EP transport in fusion plasmas [1, 2, 3], but deviation of (bulk) particle equilibria from local Maxwellian is also known to be crucial, e.g., for explaining the nonlinear up-shift (the so-called "Dimits shift" [30]) of the threshold for ion temperature gradient driven turbulence [23]. In the following, we will show that a multipole expansion can be applied to the PSZS fluid moments [25, 15] yielding an anisotropic CGL pressure tensor [14, 31] and a self-consistently evolving nonlinear magnetic equilibrium. Further to \(\overline{F_{0}}^{\rm(O)}\), the residual components of \(F_{z}\) either describe the micro-scale spatio-temporal variation of the orbit averaged distribution function or have zero average along equilibrium orbits. More precisely, the residual orbit averaged particle response, \(\overline{\delta F_{z}}^{\rm(O)}\), characterizes the transition between neighboring nonlinear equilibria, which are all undamped by collisionless dissipation [5, 23] and slightly deviate from the reference \(\overline{F_{0}}^{\rm(O)}\) as schematically described in Fig. 1. These neighboring nonlinear equilibria [23] should be understood as the ensemble of different realizations of the system, thereby representing the connection between time average,
Figure 1: Schematic picture describing the equilibrium orbit averaged distribution function \(\overline{F_{z}}^{\rm(O)}\). The solid line represents the slowly varying component of the orbit averaged distribution function, while the oscillation around it corresponds to \(\overline{\delta F_{z}}^{\rm(O)}\).
introduced above in the definition of \(\overline{F_{0}}^{\rm(O)}\) by means of Eq. (6), and "ensemble average" in a statistical sense [5]. The distribution function \(\overline{F}_{z}^{\rm(O)}=\overline{F_{0}}^{\rm(O)}+\overline{\delta F_{z}}^{\rm(O)}\), together with the undamped components of the electromagnetic potentials (ZFs), constitutes the ZS, i.e., "the collisionless undamped (long-lived) nonlinear deviation of the plasma from the reference state as a consequence of fluctuation-induced transport processes, due to emission and reabsorption of (toroidal equilibrium) symmetry-breaking perturbations" [5]. We note that the symmetry breaking fluctuations (with \(n\neq 0\)) are not explicitly mentioned as elements of the ZS, but they are self-consistently accounted for in its definition, as they have determined \(\overline{F}_{z}^{\rm(O)}\) during the dynamic evolution of the system. We also note that, although PSZS can be obtained by separating out the macro-/meso-scopic component of \(\overline{F_{z}}^{\rm(O)}\) consistently with the usual definition of equilibrium and transport, this separation is somewhat arbitrary and depends on the specific problem of interest. For example, when describing EPM [32], \(\overline{F_{z}}^{\rm(O)}=\overline{F_{0}}^{\rm(O)}+\overline{\delta F_{z}}^{\rm(O)}\) is best considered as a whole, since phase space transport occurs on the same time scale as the nonlinear dynamic evolution of the fluctuation spectrum [1, 2]. Following the previous derivation, we obtain the governing equation for the spatio-temporal micro-scale component of the equilibrium orbit averaged distribution function:
\[\frac{\partial}{\partial t}\overline{\delta F_{z}}^{\rm(O)}+\frac {1}{\tau_{b}}\left[\frac{\partial}{\partial P_{\phi}}\overline{\Big{(}\tau_{b }\delta\dot{P}_{\phi}\overline{F_{0}}^{\rm(O)}\Big{)}}_{z}^{\rm(O)}+\frac{ \partial}{\partial\mathcal{E}}\overline{\Big{(}\tau_{b}\delta\dot{\mathcal{E }}\overline{F_{0}}^{\rm(O)}\Big{)}}_{z}^{\rm(O)}\right]\\ +\frac{1}{\tau_{b}}\left[\frac{\partial}{\partial P_{\phi}} \overline{\Big{(}\tau_{b}\delta\dot{P}_{\phi}\delta F\Big{)}}_{z}^{\rm(O)}+ \frac{\partial}{\partial\mathcal{E}}\overline{\Big{(}\tau_{b}\delta\dot{ \mathcal{E}}\delta F\Big{)}}_{z}^{\rm(O)}\right]_{F}=[\overline{C^{g}}_{z}^{ \rm(O)}+\overline{S}^{\rm(O)}]_{F}. \tag{10}\]
Note that, here, \([\ldots]_{F}\) denotes the spatio-temporal micro-scale component of the argument such that \([\ldots]\equiv[\ldots]_{S}+[\ldots]_{F}\); that is, the spatial variation on Larmor radius and finite magnetic drift orbit width length scale, and the temporal variation on the hydrodynamic time scale. It should be pointed out that, differently from the governing equation for PSZS, there is a formally linear term in the orbit averaged response on the LHS. This term may have fast as well as slow spatio-temporal variations and, thus, the subscript \(F\) is omitted there. Furthermore, this same term is responsible, e.g., for the high frequency oscillation characterizing the geodesic acoustic mode (GAM) [33] and, therefore, it cannot be included into the definition of a macroscopic equilibrium consistent with usual transport time scale orderings [5]. Interested readers can find linear as well as nonlinear GAM/EGAM physics discussed in detail in A
### Nonlinear equilibrium as renormalized particle response
In the previous subsection, we showed that the concept of PSZS is intrinsically related to the integrable "equilibrium" guiding center motion and, thus, it is naturally described using \(P_{\phi}\) as phase space coordinate. In particular, the governing equations for the different components of the zonal distribution function take very compact expressions. However, when describing the self-consistent evolution of ZFs, we must adopt standard
flux coordinates \((\psi,\theta,\zeta)\). We can define the associated change of coordinates between these two representations noting that \(P_{\phi}=P_{\phi}(\psi,\theta,{\cal E},\mu)=-(e/c)(\psi-\delta\tilde{\psi}(\psi, \theta,{\cal E},\mu))\). Thus, consistently with previous works [34, 1, 5], one can introduce a shift operator, formally represented as \(e^{iQ}\), accounting for the gyrocenter equilibrium magnetic drift motion, which therefore provides the push-forward transformation from gyrocenter to magnetic drift/oscillating-centers [35]. Then, the (equilibrium) orbit average of a scalar function \(\hat{H}(P_{\phi},\theta)=H(\bar{\psi}+\delta\tilde{\psi},\theta)\) reads:
\[\oint\frac{d\theta}{\dot{\theta}}\hat{H}(P_{\phi},\theta)=\oint\frac{d\theta} {\dot{\theta}}H(\bar{\psi}+\delta\tilde{\psi},\theta)=\oint\frac{d\theta}{\dot {\theta}}e^{iQ}H(\bar{\psi},\theta) \tag{11}\]
where \(\bar{\psi}\equiv-(c/e)P_{\phi}\) and the \(\delta\tilde{\psi}\) dependence on \(\theta\) and the other phase space variables is implicit. It follows by direct inspection that, as expected, (equilibrium) orbit averaging is equivalent to a bounce/transit average combined with the action of the shift operator \(e^{iQ}\). As a further remark, we recall that bounce/transit averaging is also connected with flux surface averaging of velocity space integrals and, consequently, with the standard representation of plasma (radial) transport equations. For this reason, the PSZS governing equation is particularly relevant for describing plasma transport and allows recovering well known results [7], and further generalizing them [5]. In order to see the equivalence between orbit averaging and "shifted bounce averaging" more clearly, let us define \(\overline{(\dots)}=\tau_{b}^{-1}\oint d\theta(...)/\dot{\theta}\), with \(\tau_{b}=\oint d\theta/\dot{\theta}\), where, now, the closed poloidal orbit integral follows the constant-\(\theta\) projection of the actual guiding center orbit on the \(\bar{\psi}\) flux surface. No uncertainty exists in the definition of \(\tau_{b}\) with respect to the orbit averaging approach, since it is uniquely defined being \(\theta\) a dummy integration variable. Then,
\[\overline{(\dots)}^{\rm(O)}\bigg{|}_{P_{\phi}}=\overline{e^{iQ}(\dots)} \bigg{|}_{\bar{\psi}}\, \tag{12}\]
where, for further clarity, we have explicitly denoted by the additional subscripts the reference value of \(P_{\phi}\) on the LHS, and of \(\bar{\psi}\) on the RHS. Rephrasing this concept, Eq. (12) states that orbit average for given \(P_{\phi}\) (implicitly assuming given \({\cal E}\) and \(\mu\)) corresponds to a proper shifted bounce averaging on the flux surface labeled by \(\bar{\psi}\).
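As a simple illustration of Eq. (11), recalling from the definitions above that \(\bar{\psi}=\psi-\delta\tilde{\psi}\) with \(\delta\tilde{\psi}\simeq RB_{\phi}v_{\parallel}/\Omega\) at the leading order, the shift operator can be read as a translation (Taylor) series in the drift-orbit width:

\[e^{iQ}H(\bar{\psi},\theta)=\sum_{n\geq 0}\frac{\left(\delta\tilde{\psi}\right)^{n}}{n!}\frac{\partial^{n}H}{\partial\bar{\psi}^{n}}(\bar{\psi},\theta)=H(\bar{\psi}+\delta\tilde{\psi},\theta)\simeq H\left(\bar{\psi}+\frac{RB_{\phi}v_{\parallel}}{\Omega},\theta\right)\,,\]

which makes explicit that orbit averaging at fixed \(P_{\phi}\) samples the integrand along the actual drift orbit rather than on the unperturbed flux surface labeled by \(\bar{\psi}\).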
Due to the equivalence between orbit and bounce/transit averaging, the governing equations for PSZS introduced in the previous subsection are consistent with those obtained in Refs. [1, 5], with the further inclusion of the effects of the so-called parallel nonlinearity. In what follows, we re-derive the governing equations for the different components of \(F_{z}\) using \((\psi,\theta,\zeta,{\cal E},\mu)\) as phase space coordinates for two main reasons: first, to establish close contact with the representation that is most conveniently adopted for writing the equations describing mode structures, either ZFs or symmetry breaking perturbations; and second, to demonstrate that the PSZS definition adopted in this and earlier works [1, 2, 3, 5] corresponds to a renormalization of the equilibrium particle response.
As a first step toward the second goal, we note that the decomposition of Eq. (9) can be written introducing the drift/banana center pull-back operator \(e^{-iQ_{z}}\) where
\(Q_{z}=RB_{\phi}\left(v_{\parallel}/\Omega\right)k_{z}/(d\psi/dr)\)[5, 6, 22, 23, 34]. This is the explicit expression for the shift operator introduced previously, and the subscript \(z\) on the radial wave number \(k_{z}\equiv-i\partial_{r}\) indicates that it acts on the toroidally symmetric response. As noted above, \(Q_{z}\) is the leading order expression of \(Q\), and more accurate expressions for \(Q_{z}\) could be given by means of correspondingly more accurate expressions of \(P_{\phi}\), consistent with the adopted gyrokinetic description [24, 25]. In particular, \(F_{z}\) can be written as:
\[F_{z}\equiv\overline{F}_{0}+e^{-iQ_{z}}\left(\overline{\delta F_{Bz}}\Big{|}_{ F}+\delta\tilde{F}_{Bz}\right)=\overline{F}_{0}+e^{-iQ_{z}}\left(\overline{e^{ iQ_{z}}\delta F_{z}}\Big{|}_{F}+\delta\tilde{F}_{Bz}\right) \tag{13}\]
where the function \(\delta F_{Bz}\) is the drift/banana center particle response, the bar stands for bounce/transit averaging, the tilde denotes the response with vanishing bounce/transit average, and \(\bar{F}_{0}\) is, from now on, a short notation for \(\overline{F}_{0}^{\rm(O)}\) and, thus, describes the PSZS component. Note the one-to-one correspondence between Eq. (13) and Eq. (9), which also illuminates the notation. Following the previous subsection, we now proceed to derive the governing equations for the different terms of this decomposition. In particular, we recall the gyrokinetic expression for the time variation of the energy per unit mass, i.e.:
\[\delta\dot{\mathcal{E}}=-\frac{e}{m}\dot{\mathbf{X}}_{0}\cdot\nabla\left\langle \delta L_{g}\right\rangle_{z}\,, \tag{14}\]
where angular brackets denote gyro-phase averaging, \(\delta L=\delta\phi-\mathbf{v}\cdot\delta\mathbf{A}/c\), \(\delta\phi\) is the scalar potential, \(\delta\mathbf{A}\) is the vector potential with \(\delta L_{g}=e^{\mathbf{\rho}\cdot\mathbf{\nabla}}\delta L(\mathbf{X})=\delta L(\mathbf{X}+\bm {\rho})\), \(\mathbf{\rho}=\mathbf{b}_{0}\times\mathbf{v}/\Omega\) and \(\Omega=eB_{0}/(mc)\). Note that, \(\left\langle\delta L_{g}\right\rangle_{z}=J_{0}(\lambda)(\delta\phi_{z}-v_{ \parallel}\delta A_{\parallel z})+(2/\lambda)(m/e)\mu J_{1}(\lambda)\delta B_{ \parallel z}\), \(\lambda^{2}=2(\mu B_{0}/\Omega^{2})k_{\perp}^{2}\) and \(J_{0,1}\) are Bessel functions. We also recall the conservation of the toroidal component of the canonical angular momentum in the presence of toroidally symmetric perturbations:
\[\delta\dot{\theta}\frac{\partial P_{\phi}}{\partial\theta}+\delta\psi\frac{ \partial P_{\phi}}{\partial\psi}+\delta\dot{\mathcal{E}}\frac{\partial P_{ \phi}}{\partial\mathcal{E}}=-\frac{e}{c}\left(\partial_{t}+\dot{\mathbf{X}}_{0} \cdot\nabla\right)\frac{RB_{\phi}\langle\delta A_{\parallel g}\rangle_{z}}{B_ {0}}\;. \tag{15}\]
Thus, we can re-write the toroidally symmetric component of Eq. (1) as follows:
\[D(\partial_{t}+\dot{\mathbf{X}}_{0}\cdot\mathbf{\nabla})\left(F_{z}- \frac{e}{m}\langle\delta L_{g}\rangle_{z}\frac{\partial\bar{F}_{0}}{\partial \mathcal{E}}\Big{|}_{\bar{\psi}}+\frac{RB_{\phi}}{B_{0}}\frac{\partial\bar{F} _{0}}{\partial\bar{\psi}}\langle\delta A_{\parallel g}\rangle_{z}\right)+\] \[-D\frac{RB_{\phi}}{B_{0}}\langle\delta A_{\parallel g}\rangle_{z} \frac{\partial}{\partial\psi}\partial_{t}\bar{F}_{0}+D\frac{e}{m}\frac{ \partial}{\partial t}\left(\left.\frac{\partial\bar{F}_{0}}{\partial\mathcal{E }}\right|_{\bar{\psi}}\langle\delta L_{g}\rangle_{z}\right)+\] \[+\frac{1}{\mathcal{J}}\frac{\partial}{\partial\theta}(\mathcal{ J}D\delta\dot{\theta}\delta F)+\frac{1}{\mathcal{J}}\frac{\partial}{\partial\psi}( \mathcal{J}D\delta\dot{\psi}\delta F)+\frac{\partial}{\partial\mathcal{E}}(D \delta\dot{\mathcal{E}}\delta F)=D(C^{g}+\mathcal{S}). \tag{16}\]
This equation, consistently with Refs. [6, 21], suggests introducing the following definition:
\[G_{z}\equiv F_{z}-\left.\frac{e}{m}\left\langle\delta L_{g}\right\rangle_{z} \frac{\partial\bar{F}_{0}}{\partial\mathcal{E}}\right|_{\bar{\psi}}+\frac{RB_{ \phi}}{B_{0}}\left\langle\delta A_{\parallel g}\right\rangle_{z}\frac{ \partial\bar{F}_{0}}{\partial\bar{\psi}} \tag{17}\]
where, as radial coordinate, we are using \(\bar{\psi}\equiv-(c/e)P_{\phi}\) introduced earlier. From the definition above, the role of \(\bar{F}_{0}\) as renormalized reference distribution function taking into
account nonlinear plasma behaviors (self-interactions) consistently with the theoretical framework introduced in [1, 5] is made clear. In fact, consistently with Eqs. (9) and Eq. (13), no distinction is made in Eqs. (16) and (17) between the \(\overline{\delta F}_{z}^{\rm(O)}=\overline{e^{iQ_{z}}\delta F_{z}}\) contribution that should be kept distinct from \(\bar{F}_{0}\) and the one that can be reabsorbed in it. Thus, the distinction can be made for convenience of identification of a reference magnetic equilibrium involving macro- and meso-scale kinetic profiles only (cf. next subsection); but, the physics analysis of phase space structures that are undamped by linear collisionless processes is "full-\(F\)" by construction. We may also note that, consistently with Eqs. (9) and Eq. (13), the reference state appearing in Eq. (17) generally includes a spatio-temporal micro-scale contribution. However, consistently with the gyrokinetic ordering [21, 25], this term can be neglected at the relevant leading order. We can now write \(G_{z}\) in terms of the drift/banana shift operator, i.e. \(G_{z}=e^{-iQ_{z}}G_{Bz}\), substitute this expression in Eq. (16) and apply \(e^{iQ_{z}}\) on both sides. We find:
\[e^{iQ_{z}}D\dot{\mathbf{X}}_{0}\cdot\mathbf{\nabla}G_{z}=e^{iQ_{z}}\frac{v_{\parallel}} {\mathcal{J}|v_{\parallel}|}\left[1-\frac{\partial}{\partial\psi}\frac{RB_{ \phi}v_{\parallel}}{\Omega}\right]e^{-iQ_{z}}\frac{\partial G_{Bz}}{\partial \theta}=\frac{v_{\parallel}}{\mathcal{J}_{\mathcal{P}_{\phi}}|v_{\parallel}|} \frac{\partial}{\partial\theta}G_{Bz}\,, \tag{18}\]
where \(\mathcal{J}_{P_{\phi}}\) is computed at the actual gyrocenter particle position and, considering the effect of the shift operator on \(D\), we obtain the following kinetic equation:
\[\mathcal{J}_{P_{\phi}}D\partial_{t}(G_{Bz})+\frac{v_{\parallel}} {|v_{\parallel}|}\partial_{\theta}G_{Bz}=\\ e^{iQ_{z}}\Big{[}-\frac{e}{m}\mathcal{J}_{P_{\phi}}D\frac{ \partial}{\partial t}\left(\langle\delta L_{g}\rangle_{z}\frac{\partial\bar{F }_{0}}{\partial\mathcal{E}}\Big{|}_{\bar{\psi}}\right)+\mathcal{J}_{P_{\phi}} D\frac{RB_{\phi}}{B_{0}}\langle\delta A_{\parallel g}\rangle_{z}\frac{\partial}{ \partial\bar{\psi}}\partial_{t}\bar{F}_{0}\\ -\frac{\partial}{\partial\theta}(\mathcal{J}_{P_{\phi}}D\delta \dot{\theta}\delta F)-\frac{\partial}{\partial\psi}(\mathcal{J}_{P_{\phi}}D \delta\dot{\psi}\delta F)-\frac{\partial}{\partial\mathcal{E}}(\mathcal{J}_{P _{\phi}}D\delta\dot{\mathcal{E}}\delta F)+\mathcal{J}_{P_{\phi}}D(C_{g}+ \mathcal{S})\Big{]}. \tag{19}\]
It can be shown that the shift operator can be commuted with the partial derivatives up to the required level of accuracy in the nonlinear terms involving the \(n\neq 0\) toroidal symmetry breaking fluctuations. The nonlinearities caused by ZFs, meanwhile, require special attention, since \(\delta\dot{\psi}_{z}\) vanishes at the leading order. We will come back to this technical but important point while describing some applications of this theory in Appendix A. Integrating over \(\theta\) on a closed trajectory, the \(\theta\) derivatives can be annihilated except for the ZFs terms. Meanwhile, recalling the bounce average definition introduced previously, the following expression is finally obtained:
\[\partial_{t}\overline{G_{Bz}}= -\overline{e^{iQ_{z}}\frac{e}{m}\partial_{t}\left[\left\langle \delta L_{g}\right\rangle_{z}\frac{\partial\bar{F}_{0}}{\partial\mathcal{E}} \right|_{\bar{\psi}}\right]}+\overline{e^{iQ_{z}}\frac{RB_{\phi}}{B_{0}}\left \langle\delta A_{\parallel g}\right\rangle_{z}\frac{\partial}{\partial\bar{ \psi}}\partial_{t}\bar{F}_{0}}+\overline{e^{iQ_{z}}\left[C_{g}+\mathcal{S} \right]}\Big{|}_{z}\] \[-\overline{\left[e^{iQ_{z}}\left(\delta\dot{\psi}_{z}\partial_{ \psi}+\delta\dot{\theta}_{z}\partial_{\theta}+\delta\dot{\mathcal{E}}_{z} \partial_{\mathcal{E}}\right)\delta F_{z}\right]}\] \[-\frac{1}{\tau_{b}}\frac{\partial}{\partial\psi}\left[\tau_{b} \overline{e^{iQ_{z}}\delta\dot{\psi}\delta F}\right]_{z}-\frac{1}{\tau_{b}} \frac{\partial}{\partial\mathcal{E}}\left[\tau_{b}\overline{e^{iQ_{z}}\delta \dot{\mathcal{E}}\delta F}\right]_{z} \tag{20}\]
Again, for the sake of clarity, note that we have explicitly separated nonlinear responses due to ZFs from those due to \(n\neq 0\) symmetry breaking perturbations. This is a generalization of Eq. (25) in Ref. [5] written in conservative form and retaining the role of parallel nonlinearity, collisions and source terms. Consequently, recalling the relationship between bounce/transit and flux surface averaging, from this expression it is possible to derive all the usual flux surface averaged transport equations [5]. The governing equation for \(\bar{F}_{0}\) follows directly from Eq. (20) and is consistent with Eq. (6):
\[\partial_{t}\overline{e^{iQ_{z}}\bar{F}_{0}}= -\left.\overline{e^{iQ_{z}}\frac{RB_{\phi}}{B_{0}}\partial_{t} \left<\delta A_{\parallel g}\right>_{z}\frac{\partial}{\partial\bar{\psi}}\bar {F}_{0}}\right|_{S}+\left.\overline{e^{iQ_{z}}\left[C_{g}+\mathcal{S}\right]} \right|_{zS}\] \[-\left[\overline{e^{iQ_{z}}\left(\delta\dot{\psi}_{z}\partial_{ \psi}+\delta\dot{\theta}_{z}\partial_{\theta}+\delta\dot{\mathcal{E}}_{z} \partial_{\mathcal{E}}\right)\delta F_{z}}\right]_{S}\] \[-\frac{1}{\tau_{b}}\frac{\partial}{\partial\psi}\left[\tau_{b} \overline{e^{iQ_{z}}\delta\dot{\psi}\delta F}\right]_{zS}-\frac{1}{\tau_{b}} \frac{\partial}{\partial\mathcal{E}}\left[\tau_{b}\overline{e^{iQ_{z}}\delta \dot{\mathcal{E}}\delta F}\right]_{zS}. \tag{21}\]
Here, it is worthwhile noting that, except for the ZFs inductive term on the RHS, Eq. (21) shows that PSZS evolution is either caused by nonlinear interactions or by sources/collisions. Physically, the ZFs inductive term is due to the externally or nonlinearly generated perturbation of the magnetic flux function. Thus, the present definition of PSZS, consistent with earlier works [1, 2, 3, 5], describes the renormalized reference distribution function taking into account nonlinear plasma behaviors (self-interactions). The orbit averaged fast spatiotemporal deviation of the plasma response about the PSZS is given by:
\[\partial_{t}\left.\overline{\delta g_{Bz}}\right|_{F}= -\left.\overline{e^{iQ_{z}}\frac{e}{m}\partial_{t}\left[\left< \delta L_{g}\right>_{z}\frac{\partial}{\partial\mathcal{E}}\right|_{\bar{\psi} }\bar{F}_{0}}\right]\right|_{F}+\left.\overline{e^{iQ_{z}}\frac{RB_{\phi}}{B_{ 0}}\left<\delta A_{\parallel g}\right>_{z}\frac{\partial}{\partial\bar{\psi}} \partial_{t}\bar{F}_{0}}\right|_{F}\] \[+\left.\overline{e^{iQ_{z}}\left[C_{g}+\mathcal{S}\right]} \right|_{zF}-\left[\overline{e^{iQ_{z}}\left(\delta\dot{\psi}_{z}\partial_{ \psi}+\delta\dot{\theta}_{z}\partial_{\theta}+\delta\dot{\mathcal{E}}_{z} \partial_{\mathcal{E}}\right)\delta F_{z}}\right]_{F}\] \[-\frac{1}{\tau_{b}}\frac{\partial}{\partial\psi}\left[\tau_{b} \overline{e^{iQ_{z}}\delta\dot{\psi}\delta F}\right]_{zF}-\frac{1}{\tau_{b}} \frac{\partial}{\partial\mathcal{E}}\left[\tau_{b}\overline{e^{iQ_{z}}\delta \dot{\mathcal{E}}\delta F}\right]_{zF}. \tag{22}\]
where \(\delta g_{z}=e^{-iQ_{z}}\delta g_{Bz}\), consistent with Eq. (17), is the nonadiabatic particle response that is connected with \(\delta F_{z}\) by:
\[\delta g_{z}\equiv\delta F_{z}-\left.\frac{e}{m}\left<\delta L_{g}\right>_{z} \frac{\partial\bar{F}_{0}}{\partial\mathcal{E}}\right|_{\bar{\psi}}+\frac{RB_ {\phi}}{B_{0}}\left<\delta A_{\parallel g}\right>_{z}\frac{\partial\bar{F}_{0}} {\partial\bar{\psi}}. \tag{23}\]
Similarly, after some lengthy but straightforward algebra, one can obtain the governing equation for \(\delta\tilde{g}_{Bz}=\delta g_{Bz}-\overline{\delta g_{Bz}}\)[6]. Equation (21), or the equivalent Eq. (6), and Eq. (22) completely describe the ZS, introduced and defined in the previous subsection, once the reference magnetic equilibrium and the evolution equations for the ZFs are given along with the symmetry breaking fluctuation spectrum. This is done in the next subsection. In particular, the ZS, consisting of neighboring nonlinear equilibria [23]
which can be thought of as an ensemble of different realizations of the system [5], can be written as [6]
\[F_{0*}\equiv\bar{F}_{0}+e^{-iQ_{z}}\left.\overline{e^{iQ_{z}}\delta F_{z}}\right|_ {F}. \tag{24}\]
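For orientation, the action of the shift operator on a single radial harmonic can be spelled out explicitly; this is a leading-order sketch that treats \(RB_{\phi}v_{\parallel}/\Omega\) and \(d\psi/dr\) as slowly varying across the orbit width:

\[e^{iQ_{z}}\,e^{ik_{z}r}=\exp\left[ik_{z}\left(r+\frac{RB_{\phi}v_{\parallel}}{\Omega\,(d\psi/dr)}\right)\right]=\exp\left[ik_{z}\left(r+\frac{\delta\tilde{\psi}}{d\psi/dr}\right)\right]\,,\]

i.e., the push-forward \(e^{iQ_{z}}\) radially displaces the harmonic by the gyrocenter drift-orbit width, while the pull-back \(e^{-iQ_{z}}\) in Eqs. (13) and (24) undoes this displacement when mapping the drift/banana-center response back to gyrocenters.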
### Chew Goldberger Low reference equilibrium and zonal fields
The motivation for analyzing phase space features of transport processes in low collisionality burning plasmas is that PSZS could significantly deviate from a given model plasma equilibrium, e.g. Maxwellian, over long time scales and, thus, the usual transport description as evolution of macroscopic radial profiles may become inadequate [5]. Consistently with these motivations, it is necessary to describe the modification of the reference magnetic equilibrium self-consistently with PSZS. Following Refs. [2, 15, 25], we recall that the transformation from the gyrocenter to the particle distribution function can be cast as:
\[f= e^{-\rho\cdot\nabla}\left[F-\frac{e}{m}\left(\frac{\partial \overline{F_{0}}}{\partial\mathcal{E}}+\frac{1}{B_{0}}\frac{\partial\overline {F_{0}}}{\partial\mu}\right)\langle\delta L_{g}\rangle\right]+\frac{e}{m} \left[\frac{\partial\overline{F_{0}}}{\partial\mathcal{E}}\delta\phi+\frac{1} {B_{0}}\frac{\partial\overline{F_{0}}}{\partial\mu}\delta L\right]. \tag{25}\]
Using this expression, we can write every fluid moment in terms of its so called push-forward representation [25]. In the following, differently from the usual approach, we will describe the guiding center transformation using the Dirac delta formalism instead of the \(e^{-\boldsymbol{\rho}\cdot\boldsymbol{\nabla}}\) operator, extending to phase space the velocity space integrals on the particle distribution function, expressed as in Eq. (25). As a simple example, the toroidally symmetric plasma current density \(\mathbf{J}_{z}\) reads:
\[\mathbf{J}_{z}(\mathbf{r}) = e\int d\mathcal{E}d\mu d\alpha d^{3}\mathbf{X}\,D\left(T_{gc}^{ -1}\mathbf{v}\right)\delta(\mathbf{X}+\boldsymbol{\rho}-\mathbf{r})\Big{[} \bar{F}_{0}+\delta F_{z}-\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial \mathcal{E}}\langle\delta L_{g}\rangle_{z}+ \tag{26}\] \[-\frac{e}{m}\frac{1}{B_{0}}\frac{\partial\bar{F}_{0}}{\partial \mu}\langle\delta L_{g}\rangle_{z}\Big{]}+\frac{e^{2}}{m}\int d\mathcal{E}d \mu d\alpha d^{3}\mathbf{X}\,\mathbf{v}D\Big{[}\frac{\partial\bar{F}_{0}}{ \partial\mathcal{E}}\delta\phi_{z}+\frac{1}{B_{0}}\frac{\partial\bar{F}_{0}}{ \partial\mu}\delta L_{z}\Big{]}\;,\]
where \(\alpha\) is the gyrophase, \(T_{gc}^{-1}\mathbf{v}\) represents the guiding-center transformation of the velocity \(\mathbf{v}\) and the argument of the delta function accounts for the relation between the particle position \(\mathbf{r}\) and the guiding center position \(\mathbf{X}\)[15]. The pressure tensor can be derived analogously. In the present approach, ZFs are considered explicitly as a distortion of the nonlinear equilibrium, that is, of the zonal state. Thus, the reference magnetic equilibrium must be computed assuming only the PSZS as describing the reference state; i.e., only the \(\propto\bar{F}_{0}\) term in the push forward representation of the fluid moments such as Eq. (26). Due to the macro-/meso-scopic nature of PSZS, and applying the usual multipole expansion in the push-forward representation of the fluid moments [25, 15], one can obtain a CGL pressure tensor and a toroidally symmetric current satisfying the following force balance equation:
\[\sigma\frac{\mathbf{J}_{z}\times\mathbf{B}_{0}}{c}=\nabla P_{\parallel}+( \sigma-1)\nabla\left(\frac{B_{0}^{2}}{8\pi}\right)+\frac{B_{0}^{2}}{4\pi} \nabla_{\perp}\sigma \tag{27}\]
where \(\perp\) and \(\parallel\) denote the components perpendicular and parallel to \(\mathbf{B}_{0}\) and
\[\sigma=1+\frac{4\pi}{B_{0}^{2}}\left(P_{\perp}-P_{\parallel}\right). \tag{28}\]
It is well-known that, assuming \(\mathbf{B}_{0}=\hat{F}\nabla\phi+\nabla\phi\times\nabla\psi\), the radial component of this expression reads:
\[\Delta^{*}\psi+\mathbf{\nabla}\ln\sigma\cdot\mathbf{\nabla}\psi=-\frac{4\pi R^{2}}{ \sigma}\frac{\partial P_{\parallel}}{\partial\psi}-\frac{1}{\sigma^{2}}\frac{ \partial G}{\partial\psi} \tag{29}\]
where \(\Delta^{*}\) is the usual Grad-Shafranov operator and \(G(\psi)=(\sigma\hat{F})^{2}/2\) is a flux function. Meanwhile, pressure components and \(\hat{F}(\psi,B_{0})\) function are connected by the parallel
\[\frac{\partial P_{\parallel}}{\partial B_{0}}=\frac{P_{\parallel}-P_{\perp}}{ B_{0}} \tag{30}\]
and bi-normal
\[\frac{\partial P_{\perp}}{\partial B_{0}}=\frac{P_{\perp}-P_{\parallel}}{B_{ 0}}-\sigma\frac{B_{0}^{2}}{4\pi}\frac{\partial\ln\hat{F}}{\partial B_{0}} \tag{31}\]
components of Eq. (27). The resulting solution of Eqs. (27) to (31) defines the magnetic equilibrium that is consistent with the presence of PSZS, from which \(\mathbf{J}_{z}\) as well as \(P_{\perp}\) and \(P_{\parallel}\) have been computed. More precisely, \(P_{\perp}\) and \(P_{\parallel}\) can be calculated by integrating the PSZS distribution function and, then, \(\hat{F}(\psi,B_{0})\) is obtained from the expression of the poloidal plasma current, which is also computed from \(\bar{F}_{0}\), and from Eq. (31). Finally, a standard Grad-Shafranov problem must be solved. This produces, as expected [15, 25, 36], an anisotropic MHD equilibrium. It is worth noting that this result holds at the leading order in the multipole expansion. At higher order, we could compute the macro-/meso-scopic deviations from the CGL pressure tensor, which are expected to become relevant when the multipole expansion does not hold; _e.g._, when steep gradient regions are encountered, viz., near the last closed magnetic surface. More generally, whenever the length scale of the gradients becomes comparable with the characteristic length of particle orbits, the proposed separation of scales to isolate the PSZS is no longer applicable and a "full-\(F\)" approach is mandatory [37].
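As a consistency check of Eqs. (27) to (31), consider the isotropic limit; this is a sketch assuming \(P_{\perp}=P_{\parallel}=P(\psi)\), so that \(\sigma=1\) and \(G=\hat{F}^{2}/2\). Equation (29) then reduces to the standard Grad-Shafranov equation,

\[\Delta^{*}\psi=-4\pi R^{2}\frac{dP}{d\psi}-\hat{F}\frac{d\hat{F}}{d\psi}\,,\]

while Eqs. (30) and (31) are trivially satisfied by a \(B_{0}\)-independent pressure together with \(\hat{F}=\hat{F}(\psi)\).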
The neighboring nonlinear equilibria [6, 23, 5]; that is, the micro spatiotemporal deviation from the reference state given by PSZS and the just derived anisotropic (CGL) reference magnetic equilibrium, can also be self-consistently determined along with the ZFs; _i.e._, \(\delta\phi_{z}\), \(\delta A_{\parallel z}\) and \(\delta B_{\parallel z}\) are obtained by means of quasineutrality and Ampere equations following the well known theoretical framework described in [2]:
\[\sum_{s}\left\langle\frac{e^{2}}{m}\frac{\partial\bar{F}_{0s}}{ \partial\mathcal{E}}\right\rangle_{v}\!\!\delta\phi_{z}\!+\nabla\cdot\sum_{s} \left\langle\frac{e^{2}}{m}\frac{2\mu}{\Omega^{2}}\frac{\partial\bar{F}_{0s}}{ \partial\mu}\left(\frac{J_{0}^{2}-1}{\lambda^{2}}\right)\right\rangle_{v}\! \nabla_{\perp}\delta\phi_{z}\] \[\qquad+\sum_{s}\left\langle eJ_{0}(\lambda)\delta g_{z}\right\rangle _{v}+\sum_{s}\left\langle eJ_{0}(\lambda)\bar{F}_{0s}\right\rangle_{v}=0\, \tag{32}\] \[\frac{\partial}{\partial t}\delta A_{\parallel z}=-\left[\frac{1} {B_{0}}\mathbf{b}_{0}\times\mathbf{\nabla}\delta A_{\parallel}\cdot\mathbf{\nabla}\left( \nabla_{\parallel}^{-1}\partial_{t}\delta A_{\parallel}\right)\right]_{z},\] (33) \[\nabla_{\perp}\delta B_{\parallel z}=\mathbf{\kappa}_{0}\delta B_{ \parallel z}+\nabla_{\parallel}\delta\mathbf{B}_{\perp z}+\nabla\mathbf{b}_ {0}\cdot\delta\mathbf{B}_{\perp z}+\frac{4\pi}{c}\delta\mathbf{J}_{\perp z}. \tag{34}\]
Here, \(\sum_{s}\) denotes summation on all particle species, \(\left\langle...\right\rangle_{v}\) stands for velocity space integration, \(\nabla_{\parallel}^{-1}\) is the inverse operator of \(\nabla_{\parallel}\), \(\mathbf{\kappa}_{0}\equiv\mathbf{b}_{0}\cdot\mathbf{\nabla}\mathbf{b}_{0}\) is the magnetic field curvature, \(\delta\mathbf{J}_{\perp z}\) is readily obtained from Eq. (26), \(\delta\mathbf{B}_{\perp z}\) and \(\delta B_{\parallel z}\) are expressed in terms of the fluctuating vector potential as in Ref. [2], and the Coulomb gauge \(\mathbf{\nabla}\cdot\delta\mathbf{A}=0\) is assumed. Thus,
\[\delta\mathbf{B}_{\perp z} = \mathbf{\nabla}_{\perp}\delta A_{\parallel z}\times\mathbf{b}_{0}+\mathbf{b} _{0}\times\mathbf{\kappa}_{0}\delta A_{\parallel z} \tag{35}\] \[+\mathbf{b}_{0}\times\nabla_{\parallel}\delta\mathbf{A}_{\perp z}+(\mathbf{ b}_{0}\times\mathbf{\nabla}\mathbf{b}_{0})\cdot\delta\mathbf{A}_{\perp z}\;,\]
and, therefore, Eqs. (32) to (34) can be solved for \(\delta\phi_{z}\), \(\delta A_{\parallel z}\) and \(\delta B_{\parallel z}\) as independent field variables uniquely defining ZFs [6]. Note that we have maintained the last term on the RHS of Eq. (32) even though it usually vanishes when equilibrium quasineutrality is assumed. However, in the present theoretical framework, where \(\bar{F}_{0}\) is assumed to vary consistently with Eq. (21), equilibrium quasineutrality is not imposed separately, while plasma quasineutrality is satisfied overall. This means that ZFs are allowed to develop slow spatiotemporal mean field structures due to fluctuation induced transport.
PSZS, with their micro spatiotemporal scale counterpart, and the ZFs constitute the zonal state, introduced in Sec. 2.1, which is consistent with the finite level of \(n\neq 0\) symmetry breaking fluctuations. Here, the \(n\neq 0\) fluctuation spectrum is assumed as given, but can generally be computed by means of standard nonlinear gyrokinetic theory. We will address this point in a separate work [20].
## 3 Zonal state self-consistent evolution
In order to illuminate the evolution of the zonal state, let us focus on the case where a given \(n\neq 0\) spectrum is assumed.
Following [2], we adopt the low-\(\beta\) ordering with good separation between shear Alfven wave (SAW) and compressional Alfven wave frequencies, and we calculate \(\delta B_{\parallel}\) from the perpendicular pressure balance equation:
\[\nabla_{\perp}\left(B_{0}\delta B_{\parallel z}+4\pi\delta P_{\perp z}\right) \simeq 0, \tag{36}\]
where \(\delta P_{\perp z}\) represents the perpendicular pressure perturbation. Having solved for \(\delta B_{\parallel z}\) explicitly, the fluctuation spectrum is entirely described by the scalar potential \(\delta\phi_{z}\) and the parallel vector potential \(\delta A_{\parallel z}\). Furthermore, for the sake of simplicity, we also assume that the ZS is predominantly characterized by \(\delta\phi_{z}\). Consequently, in what follows, we will describe the zonal state by means of the scalar potential ZFs only. Thus, we are left with the solution of the zonal component of the quasineutrality condition, i.e., Eq. (32), and of the corresponding particle responses. The equations obtained below may thus be readily adopted for discussing electrostatic turbulence and, in particular, can be used to describe GAM/EGAM (energetic particle induced GAM [38, 39]) physics (cf. A). To this aim, we rewrite the gyrokinetic equation for the non adiabatic drift/banana center distribution function, i.e.:
\[(\partial_{t}+v_{\parallel}\nabla_{\parallel})\delta g_{Bz}=-e^{iQ_{z}}\left[ \frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial\mathcal{E}}J_{0}\partial_{t} \delta\phi_{z}+N.L.\right] \tag{37}\]
where, for the sake of brevity, the nonlinear terms \(\delta\dot{\mathbf{X}}\cdot\mathbf{\nabla}\delta F+\delta\dot{\cal E}\partial_{\cal E}\delta F\) have been indicated as \(N.L.\). Introducing the lifting of a generic scalar field to the particle phase space [1, 35] and the action angle coordinates \(\vartheta_{c}\) and \(\zeta_{c}\), i.e., such that \(\omega_{b}=\dot{\vartheta}_{c}\) and \(\dot{\zeta}_{c}=\bar{\omega}_{d}\), where \(\omega_{b}\) and \(\bar{\omega}_{d}\) are, respectively, the bounce/transit frequency and the precession drift frequency, we obtain:
\[(\partial_{t}+\omega_{b}\partial_{\vartheta_{c}})\delta g_{Bz}=-e^{iQ_{z}}\left[\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial{\cal E}}J_{0}\partial_{t}\delta\phi_{z}+N.L.\right]_{z}\;. \tag{38}\]
For now, we neglect the nonlinear term and Fourier decompose the RHS with respect to the \(\vartheta_{c}\) coordinate. Meanwhile, we introduce the \(\delta\hat{G}_{l}\) function which is connected to the Fourier series of the scalar potential by the following definition:
\[-e^{iQ_{z}}\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial{\cal E}}J_{0} \partial_{t}\delta\phi_{z}\equiv\sum_{l}e^{il\vartheta_{c}}\partial_{t}\delta \hat{G}_{l}. \tag{39}\]
Here, the coefficients in the Fourier series, \(\delta\hat{G}_{l}\), can be calculated as:
\[\delta\hat{G}_{l}\equiv\frac{1}{2\pi}\oint d\vartheta_{c}e^{-il\vartheta_{c}} \left[-\frac{e}{m}e^{iQ_{z}}\frac{\partial\bar{F}_{0}}{\partial{\cal E}}J_{0} \delta\phi_{z}\right]=-\overline{e^{-il\vartheta_{c}+iQ_{z}}\frac{e}{m}\frac{ \partial\bar{F}_{0}}{\partial{\cal E}}J_{0}\delta\phi_{z}}. \tag{40}\]
It can be readily shown that the spectral representation of the linear solution of Eq. (38) reads:
\[\delta g_{Bz}=\sum_{l}\frac{\omega_{z}}{\omega_{z}-l\omega_{b}}e^{il\vartheta_ {c}}\delta\hat{G}_{l}\;, \tag{41}\]
where \(\omega_{z}\equiv i\partial_{t}\) is the characteristic ZF frequency, which must be understood as an operator. Thus, \((\omega_{z}-l\omega_{b})^{-1}\) in Eq. (41) must be understood as the inverse of the operator \((\omega_{z}-l\omega_{b})\). We emphasize that this is a formal solution, since it requires integration along characteristics and, therefore, it involves an integral equation. The same procedure can be straightforwardly applied to solve the equation including the nonlinear term, and the corresponding solution can be substituted into Eq. (32). Thus, restoring the species index \(s\) and explicitly denoting summation over particle species as well as summation over \(\hat{\sigma}=\pm\), where \(\hat{\sigma}=v_{\parallel}/|v_{\parallel}|\) for circulating particles, while, for magnetically trapped particles, \(\hat{\sigma}=\pm\) represents the right-/left-handed rotation of the particles on the outer leg of their poloidal orbit, the magnetic flux surface averaged Eq. (32) reads4:
Footnote 4: By magnetic flux surface average we mean \([...]_{\psi}=(2\pi/V_{\psi}^{\prime})\int_{0}^{2\pi}{\cal J}(...)d\theta\) with \(V_{\psi}^{\prime}=2\pi\int_{0}^{2\pi}{\cal J}d\theta\). The following equation, for simplicity, does not report a factor \((4\pi^{2}/V_{\psi}^{\prime})\) that should appear in front of the double sum, \(\sum_{s}\sum_{\hat{\sigma}}\), after flux surface averaging of Eq. (32).
\[\sum_{s}\sum_{\hat{\sigma}}\int d{\cal E}d\mu\tau_{bs}\frac{e_{s} ^{2}}{m_{s}}\left(\frac{\partial\bar{F}_{0s}}{\partial{\cal E}}\delta\phi_{z }-\sum_{l}\frac{\omega_{z}}{\omega_{z}-l\omega_{bs}}\overline{e^{il\vartheta_{c }-iQ_{zs}}\overline{J_{0}}e^{-il\vartheta_{c}+iQ_{zs}}\frac{\partial\bar{F}_{0s }}{\partial{\cal E}}J_{0}\delta\phi_{z}}\right)\] \[+\sum_{s}\sum_{\hat{\sigma}}\int d{\cal E}d\mu\frac{1}{d\psi/dr} \frac{\partial}{\partial r}\Biggl{[}\frac{e_{s}^{2}}{m_{s}}\frac{2\mu}{\Omega_ {s}^{2}}\frac{\partial\bar{F}_{0s}}{\partial\mu}\left(\frac{J_{0}^{2}-1}{ \lambda_{s}^{2}}\right)\tau_{bs}\frac{d\psi}{dr}\frac{\partial}{\partial r} \delta\phi_{z}\Biggr{]}=\] \[\sum_{s}\sum_{\hat{\sigma}}\int d{\cal E}d\mu\tau_{bs}e_{s}\sum_{ l}\frac{i}{\omega_{z}-l\omega_{bs}}\overline{e^{il\vartheta_{c}-iQ_{zs}}J_{0}}\, \overline{e^{-il\vartheta_{c}+iQ_{zs}}N.L.}. \tag{42}\]
Note that, here, \(\sum_{\hat{\sigma}}\) applies to circulating particles only, since it is reabsorbed by the bounce averaging for trapped particles. Furthermore, \(\tau_{bs}=2\pi/\omega_{bs}\) and, for simplicity, we have ignored the possible contribution due to breaking the PSZS quasineutrality, discussed above. That contribution can be easily restored, if needed, along with the contribution of sources and collisions, by letting \(N.L.\to N.L.-(C_{g}+{\cal S})\) on the RHS of Eq. (42). This expression has been derived by using a minimal set of assumptions that quite reasonably describe the self-consistent evolution of the ZS and, therefore, its generality makes it suitable for various applications, since it allows one to describe an arbitrary \(\overline{F}_{0}\) while retaining realistic magnetic geometry effects. We will illustrate some of these applications in Appendix A.
In the following, we explore the low-frequency response obtained by focusing on the \(l=0\) component. This is clearly the response that is directly connected with transport. In particular, the resulting linear terms can be expressed in compact form by introducing the plasma polarizability of species \(s\), \(\chi_{zs}\)3, defined as:
Footnote 3: \(\chi_{zs}\) is connected with the usual definition of susceptibility, \(\chi_{s}\), via the relation \(\chi_{zs}=(1+\chi_{s})k_{r}^{2}\lambda_{D}^{2}\), with \(\lambda_{D}^{2}=T/(4\pi ne^{2})\) the square of the Debye length.
\[\chi_{zs}\left[\delta\phi_{z}\right]_{\psi} \equiv -\frac{4\pi^{2}}{V_{\psi}^{\prime}}\frac{T_{s}}{n_{s}m_{s}}\sum_ {\hat{\sigma}}\int d{\cal E}d\mu\tau_{bs}\left\{\frac{\partial\bar{F}_{0s}}{ \partial{\cal E}}\delta\phi_{z}-\overline{e^{-iQ_{zs}}J_{0}}\overline{e^{iQ_{ zs}}J_{0}\frac{\partial\bar{F}_{0s}}{\partial{\cal E}}\delta\phi_{z}}\right. \tag{43}\] \[\left.+\frac{1/\tau_{bs}}{d\psi/dr}\frac{\partial}{\partial r} \overline{\left[\frac{2\mu}{\Omega_{s}^{2}}\frac{\partial\bar{F}_{0s}}{ \partial\mu}\left(\frac{J_{0}^{2}-1}{\lambda_{s}^{2}}\right)\tau_{bs}\frac{d \psi}{dr}\frac{\partial}{\partial r}\delta\phi_{z}\right]}\right\}\.\]
This equation becomes a closed expression for \(\chi_{zs}\) once \(\delta\hat{\phi}_{z}\equiv\delta\phi_{z}-\left[\delta\phi_{z}\right]_{\psi}\) is given1. This can be obtained from the component of the quasineutrality condition that is varying along the flux surface, and it can be shown that \(\left|\delta\hat{\phi}_{z}\right|\ll\left|\left[\delta\phi_{z}\right]_{\psi}\right|\) in the long wavelength limit, \(\left|Q_{zs}\right|\ll 1\) (cf., e.g., Ref. [40, 41, 42, 43]). In particular, \(\delta\hat{\phi}_{z}\to 0\) for \(T_{e}/T_{i}\to 0\). Equation (43), valid for arbitrary wavelength, generalizes to arbitrary geometry and distribution functions the plasma polarizability expressions at short wavelengths studied recently [16, 17, 18, 19]. Introducing the \(s\)-species polarization density
Footnote 1: Please note the difference between bounce averaging, \(\overline{(...)}\), and flux surface averaging, \(\left[...\right]_{\psi}\), although they coincide at the lowest order for well circulating particles. The difference between the corresponding fluctuating parts, \(\widetilde{(...)}\) and \(\hat{(...)}\), follows consequently.
\[\left[\delta n_{\rm pols}\right]_{\psi}=-\frac{n_{s}e_{s}}{T_{s}}\chi_{zs} \left[\delta\phi_{z}\right]_{\psi}\,\]
the \(l=0\) flux surface averaged quasineutrality condition, Eq. (42), can be rewritten as
\[\sum_{s}e_{s}\partial_{t}[\delta n_{\rm pols}]_{\psi}=\sum_{s}e_{s}\left[ \boldsymbol{\nabla}\cdot\boldsymbol{\Gamma}_{N.L.s}\right]_{\psi}\, \tag{44}\]
where we have introduced the flux surface averaged divergence of the \(s\)-species particle flux due to nonlinear interactions:
\[\left[\boldsymbol{\nabla}\cdot\boldsymbol{\Gamma}_{N.L.s}\right]_{\psi}=\frac {4\pi^{2}}{V_{\psi}^{\prime}}\sum_{\hat{\sigma}}\int d{\cal E}d\mu\tau_{bs} \overline{e^{-iQ_{zs}}J_{0}}\,\overline{e^{iQ_{zs}}N.L.}. \tag{45}\]
Recalling that, at the leading order, \(\delta\dot{\psi}=(B_{0}/B_{\parallel}^{*})c\partial_{\zeta}\left\langle\delta L_{g}\right\rangle\), the last equation demonstrates that only toroidal symmetry breaking fluctuations drive a finite flux surface averaged particle transport in tokamaks. Thus, the present analysis must assume a prescribed spectrum of \(n\neq 0\) fluctuations (cf. Sec. 2.3). Physically, Eq. (44) is readily interpreted as the nonlinear charge density modification compensating the polarization charge to ensure quasineutrality; that is, \(\sum_{s}e_{s}\partial_{t}[\delta n_{\mathrm{p}ols}]_{\psi}=-\sum_{s}e_{s}\partial_{t}[\delta n_{N.L.s}]_{\psi}\). Meanwhile, without summing over all particle species, it is possible to cast the same equation in the form of a flux surface averaged particle continuity equation:
\[\partial_{t}\left[n_{s}\right]_{\psi}=\partial_{t}[\delta n_{\mathrm{p}ols}]_{ \psi}-\left[\mathbf{\nabla}\cdot\mathbf{\Gamma}_{N.L.s}\right]_{\psi}\;, \tag{46}\]
where collisional neoclassical transport in the banana regime as well as sources/sinks can be readily included in the expression above by letting \(N.L.\to N.L.-(C_{g}+\mathcal{S})\), as discussed below Eq. (42). Note that \(\left[n_{s}\right]_{\psi}\) on the LHS of Eq. (46) is the total particle density of the \(s\)-species, since the fluctuation induced nonlinear particle flux includes both micro- as well as meso- and macro-scale spatio-temporal behaviors, consistent with Eq. (32). Thus, Eq. (46) describes the variety of spatiotemporal scales involved in particle transport. Consistently with the analysis of Ref. [5], polarization effects become important only on sufficiently short scales, \(k_{z}L>\delta^{-1/2}\); i.e., the meso-scales, with \(L\) the characteristic plasma macro-scale, \(\rho_{L}\) the Larmor radius and \(\delta=\rho_{L}/L\) the gyrokinetic ordering parameter. Meanwhile, the flux surface averaged particle flux for symmetry breaking fluctuations becomes
\[\left[\mathbf{\nabla}\cdot\mathbf{\Gamma}_{N.L.s}\right]_{\psi}=\frac{1}{V_{\psi}^{ \prime}}\frac{\partial}{\partial\psi}\left[\left\langle V_{\psi}^{\prime}( \overline{e^{-iQ_{zs}}J_{0}})\,\overline{[ce^{iQ_{zs}}R^{2}\nabla\phi\cdot \nabla\left\langle\delta L_{gs}\right\rangle\delta g_{s}]}\right\rangle_{v} \right]_{\psi}\;. \tag{47}\]
Again, in the \(k_{z}L<\delta^{-1/2}\) long wavelength limit, this expression reduces to the well-known form adopted in classical analyses of fluctuation-induced evolution of macroscopic plasma profiles [7, 44, 45, 46, 47, 29]. Following Ref. [5], the same argument can be repeated to show that classical forms of momentum and energy transport equations are reproduced.
This demonstrates that the PSZS transport equations, Eqs. (21) and (22) derived in the previous section, along with the equations for the self-consistent determination of the ZFs, Eqs. (32) to (34), fully characterize the ZS and, at the same time, the multi-spatiotemporal-scale nature of phase space transport in collisionless burning plasmas and their possible deviation from local thermodynamic equilibrium. This description reduces to the previous gyrokinetic theory of phase space transport [5, 48] within the framework of the Frieman-Chen nonlinear gyrokinetic equation [21] and recovers earlier works in the proper limit [7, 44, 45, 46, 47, 29]. Based on the nonlinear gyrokinetic theory with Hamiltonian description of particle motion accurate up to \(\mathcal{O}(\delta^{2})\) [24, 25], the present PSZS transport equations are valid even on time scales longer than the characteristic transport time scale, \(\mathcal{O}(\delta^{-3})\Omega^{-1}\), and typically hold on times \(<\mathcal{O}(\delta^{-4})\Omega^{-1}\).
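To make the orderings quoted above concrete, the following short calculation is included as an illustration only; the machine parameters (deuterium, \(T=1\) keV, \(B=2.5\) T, macro-scale \(L=1\) m) are assumptions and are not taken from this work. It evaluates the meso-scale radial length corresponding to \(k_{z}L\sim\delta^{-1/2}\), i.e., \(\lambda\sim\sqrt{\rho_{L}L}\), and the \(\mathcal{O}(\delta^{-3})\Omega^{-1}\) and \(\mathcal{O}(\delta^{-4})\Omega^{-1}\) time scales.

```python
# Illustrative order-of-magnitude check of the orderings quoted above.
# All machine parameters are assumptions (deuterium, T = 1 keV, B = 2.5 T, L = 1 m).
import math

e, m_D = 1.602e-19, 3.344e-27        # elementary charge [C], deuteron mass [kg]
T = 1.0e3 * e                        # 1 keV in Joules
B, L = 2.5, 1.0                      # magnetic field [T], plasma macro-scale [m]

v_t = math.sqrt(2.0 * T / m_D)       # thermal speed [m/s]
Omega = e * B / m_D                  # cyclotron frequency [rad/s]
rho_L = v_t / Omega                  # Larmor radius [m]
delta = rho_L / L                    # gyrokinetic ordering parameter

meso_length = math.sqrt(rho_L * L)   # k_z L ~ delta^(-1/2)  <=>  lambda ~ sqrt(rho_L * L)
tau_transport = delta**-3 / Omega    # characteristic transport time scale
tau_validity = delta**-4 / Omega     # upper bound quoted for the PSZS equations

print(f"rho_L ~ {rho_L*1e3:.1f} mm, delta ~ {delta:.1e}")
print(f"meso-scale length ~ {meso_length*1e2:.1f} cm")
print(f"delta^-3/Omega ~ {tau_transport:.2f} s, delta^-4/Omega ~ {tau_validity:.0f} s")
```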
## 4 Conclusions and discussion
In this article, we have presented a comprehensive study of plasma transport processes in fusion plasmas using the PSZS transport theory [5, 6]. We addressed the limitations of current numerical frameworks which are computationally expensive and often limited in their ability to capture long-time scale dynamics and non-local behaviors. To overcome these challenges, we developed the PSZS transport theory, which provides a proper definition of the plasma nonlinear equilibrium distribution function by considering slowly evolving structures in the phase space. In this way, we generalize the concept of plasma transport into the phase space. The usual transport equations, e.g., particle and energy, can be readily obtained in the proper limit [5].
Applying the PSZS transport theory, we derived the evolution equation for the ZS, representing the renormalized nonlinear equilibrium consistent with toroidal symmetry breaking fluctuations and transport time scale ordering. Specifically, we defined the two components of the ZS, namely the PSZS and the ZFs. Moreover, applying the Chew Goldberger Low (CGL) description, we derived the self-consistent modifications of the reference magnetic equilibrium using the push forward representation of the macro-/meso-scopic component moments of the PSZS.
As an illustration of the theoretical framework, we discussed the self-consistent evolution of the ZS with a given spectrum of toroidal symmetry breaking perturbations and ZFs dominated by the scalar potential response. We derived expressions for the plasma polarizability that are applicable to arbitrary geometry and equilibrium distribution functions, and discussed the features of the transport equations on the different spatial scales involved in the problem. Since the equations discussed in the present work have direct application to GAM/EGAM physics, we have added a detailed Appendix on this problem, where interested readers can find a discussion of the peculiar GAM/EGAM physics. In particular, we give a general expression for the linear dielectric response of GAMs. Furthermore, we illustrate examples of GAM/EGAM nonlinear dynamics, which could be readily adopted to investigate problems of practical interest in general geometry and with arbitrary EP distribution functions, such as the EGAM decay into two GAMs recently observed in low-collisionality LHD plasmas [49] and the self-consistent EGAM frequency sweeping [50].
In conclusion, the PSZS transport theory provides a promising approach to understanding EP transport processes in fusion plasmas. The derived equations for the ZS and the associated modifications to the equilibrium provide a comprehensive framework for studying the plasma nonlinear equilibrium and its evolution due to transport processes. This theoretical framework opens new possibilities for developing advanced reduced EP transport models capable of capturing the long-time scale evolution of burning plasmas and providing insight into the non-locality of transport processes. Notably, a recent advancement in this field is the proposed Dyson-Schrödinger transport model (DSM) [6]. PSZS fluxes have been computed with the DAEPS-FALCON suite of codes [51, 52, 53] within the LIGKA EP workflow [54, 55] for realistic tokamak configurations. This is a crucial step towards the practical implementation of the PSZS transport theory in realistic geometry. Based on a gyrokinetic description of the underlying perturbations, employing general EP distribution functions and using saturation rules obtained from nonlinear gyrokinetic codes will allow us to construct a quantitative and predictive reduced EP transport model for the interpretation of present-day experimental results and the investigation of future burning plasmas.
Future research directions include the derivation of general orbit-averaged source and collision terms on the analytical side. On the numerical side, PSZS diagnostics have been developed for global gyrokinetic and hybrid codes such as HMGC and ORB5 [13], enabling the study of phase space transport processes during nonlinear gyrokinetic simulations. At present, the EP workflow calculates the PSZS evolution within the kick model [56] approximation. Further development of reduced transport models for the PSZS involves the development of a solver for the DSM [6] and the inclusion of nonlinear corrections into the governing equations of the EP workflow. More generally, a comprehensive gyrokinetic transport solver on long time scales can be developed by means of subcycling and restarting nonlinear gyrokinetic simulations in the updated ZS, computed within the present theoretical framework adopting the numerically computed phase space fluxes. These advancements will contribute to a deeper understanding of EP transport in fusion plasmas and facilitate the development of more accurate predictive models including core turbulent transport.
This work was carried out within the framework of the EUROfusion Consortium and received funding from Euratom research and training programme 2014-2018 and 2019-2020 under Grant Agreement No. 633053 (Project No. WP19-ER/ENEA-05). This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 - EUROfusion; Projects No. ENR-MOD.01.MPG and AC-TSVV.10.MPG). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. This work was also supported by the National Science Foundation of China Project No. 12261131622. This work was supported in part by the Italian Ministry of Foreign Affairs and International Cooperation, grant number CN23GR02.
## Appendix A Applications to geodesic acoustic mode physics
In this appendix, as a further illustration of the strength and usefulness of the present theoretical framework, we focus on selected applications of Eq. (42) derived above. Although these results are not entirely novel, their compact derivation and their validity for general geometry and distribution functions demonstrate the practical implications of the present approach. Firstly, we derive an expression for the linear dielectric response of GAM (Geodesic Acoustic Mode) oscillations, which yields the well-known results for Maxwellian distribution functions and circular equilibria [57, 42, 58]; next, we extend the results of Ref. [43] describing the generation of zero frequency ZFs by GAM oscillations. Additionally, we investigate the modulation of GAMs by ZFs, showing that the presence of energetic particles (EPs) or of higher-order thermal plasma finite orbit width effects is necessary for non-vanishing nonlinear interactions as well as for second harmonic GAM generation. This generalizes the results of [43, 50, 59, 60, 61]. Finally, we explore the nonlinear dynamics of EGAMs, highlighting the importance of PSZS and the ZS by considering the nonlinear term induced by PSZS as a nonlinear equilibrium. We describe the evolution of the zonal state, accounting for the action of sources, collisions, and the emission and re-absorption of the GAM/EGAM fluctuations.
### Linear dielectric response of energetic particle driven geodesic acoustic mode
In order to calculate the linear dispersion response, we rewrite Eq. (41) taking the decomposition \(\delta\phi_{z}=\delta\phi_{G}=\left[\delta\phi_{G}\right]_{\psi}+\delta\hat{\phi}_{G}\) explicitly into account, where the subscript \(G\) denotes that the scalar potential refers to the GAM/EGAM. More precisely,
\[\delta g_{BG}= -e^{iQ_{G}}\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial\mathcal{E}}J_{0}\delta\phi_{G}+\sum_{l}\left[\frac{l\omega_{b}\omega_{G}}{\omega_{G}^{2}-l^{2}\omega_{b}^{2}}i\sin l\vartheta_{c}+\frac{l^{2}\omega_{b}^{2}}{\omega_{G}^{2}-l^{2}\omega_{b}^{2}}\cos l\vartheta_{c}\right] \tag{A.1}\] \[\times\left[-\overline{e^{iQ_{G}}\cos l\vartheta_{c}\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial\mathcal{E}}J_{0}}\right]\left[\delta\phi_{G}\right]_{\psi}+\sum_{l}\left[\frac{l\omega_{b}\omega_{G}}{\omega_{G}^{2}-l^{2}\omega_{b}^{2}}i\sin l\vartheta_{c}\right.\] \[\left.+\frac{l^{2}\omega_{b}^{2}}{\omega_{G}^{2}-l^{2}\omega_{b}^{2}}\cos l\vartheta_{c}\right]\left[-\overline{e^{iQ_{G}}\cos l\vartheta_{c}\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial\mathcal{E}}J_{0}\delta\hat{\phi}_{G}}\right]\] \[+\sum_{l}\left[\frac{l\omega_{b}\omega_{G}}{\omega_{G}^{2}-l^{2}\omega_{b}^{2}}\cos l\vartheta_{c}+\frac{l^{2}\omega_{b}^{2}}{\omega_{G}^{2}-l^{2}\omega_{b}^{2}}i\sin l\vartheta_{c}\right]\] \[\times\left[\overline{e^{iQ_{G}}i\sin l\vartheta_{c}\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial\mathcal{E}}J_{0}\delta\hat{\phi}_{G}}\right]\,\]
where \(\delta g_{BG}\) denotes the \(\delta g_{Bz}\) response to the GAM, the subscript \(G\) in \(Q_{G}\) indicates that the radial shift operator acts on the GAM, and we have assumed an up-down symmetric equilibrium for simplicity but without loss of generality, since the general case could be readily restored at the expense of more complicated formal expressions. Substituting Eq. (A.1) back into the linearized Eq. (32) for the varying component on the considered
magnetic flux surface, we can write
\[\left[-\frac{n_{e}e^{2}}{T_{e}}+\sum_{s}\left\langle\frac{e_{s}^{2}}{ m_{s}}\frac{\partial\bar{F}_{0s}}{\partial\mathcal{E}}\right\rangle_{v}\right]\delta \hat{\phi}_{G}+\sum_{s}\sum_{l}\left\langle\left[\frac{l\omega_{bs}\omega_{G}}{ \omega_{G}^{2}-l^{2}\omega_{bs}^{2}}i\sin l\vartheta_{c}J_{0}e^{-iQ_{Gs}}\right.\right.\] \[\left.\left.+\frac{\omega_{G}^{2}}{\omega_{G}^{2}-l^{2}\omega_{bs} ^{2}}\left(\cos l\vartheta_{c}J_{0}e^{-iQ_{Gs}}-\overline{\cos l\vartheta_{c} J_{0}e^{-iQ_{Gs}}}\right)\right]\right.\] \[\left.\times\left[-e^{iQ_{Gs}}\cos l\vartheta_{c}\frac{e_{s}^{2}} {m_{s}}\frac{\partial\bar{F}_{0s}}{\partial\mathcal{E}}J_{0}\right]\right\rangle _{v}\left.\left[\delta\phi_{G}\right]_{\psi}+\sum_{s}\sum_{l}\left\langle \left[\frac{l\omega_{bs}\omega_{G}}{\omega_{G}^{2}-l^{2}\omega_{bs}^{2}}\right.\right.\right.\] \[\left.\left.\times\left.\left.i\sin l\vartheta_{c}J_{0}e^{-iQ_{Gs }}+\frac{\omega_{G}^{2}}{\omega_{G}^{2}-l^{2}\omega_{bs}^{2}}\left(\cos l \vartheta_{c}J_{0}e^{-iQ_{Gs}}-\overline{\cos l\vartheta_{c}J_{0}e^{-iQ_{Gs}}} \right)\right]\right.\right.\] \[\left.\left.\times\left[-e^{iQ_{Gs}}\cos l\vartheta_{c}\frac{e_{s }^{2}}{m_{s}}\frac{\partial\bar{F}_{0s}}{\partial\mathcal{E}}J_{0}\delta\hat {\phi}_{G}\right]\right\rangle_{v}+\sum_{s}\sum_{l}\left\langle\left[\frac{l \omega_{bs}\omega_{G}}{\omega_{G}^{2}-l^{2}\omega_{bs}^{2}}\right.\right.\right.\] \[\left.\left.\left.\times\left.\left(\cos l\vartheta_{c}J_{0}e^{- iQ_{Gs}}-\overline{\cos l\vartheta_{c}J_{0}e^{-iQ_{Gs}}}\right)+\frac{\omega_{G}^{2}}{ \omega_{G}^{2}-l^{2}\omega_{bs}^{2}}i\sin l\vartheta_{c}J_{0}e^{-iQ_{Gs}} \right]\right.\right.\] \[\left.\left.\times\left[e^{iQ_{Gs}}i\sin l\vartheta_{c}\frac{e_{ s}^{2}}{m_{s}}\frac{\partial\bar{F}_{0s}}{\partial\mathcal{E}}J_{0}\delta\hat{ \phi}_{G}\right]\right\rangle_{v}=0\;,\] (A.2)
where, for simplicity, we have assumed Maxwellian electrons. Equation (A.2) is readily solved for \(\delta\hat{\phi}_{G}\) as a function of \(\left[\delta\phi_{G}\right]_{\psi}\) reducing to well known results, e.g. [62, 63, 41], for Maxwellian ions in the long wavelength limit. Meanwhile, the flux surface averaged quasineutrality condition can be written as
\[\frac{1}{V_{\psi}^{\prime}}\frac{1}{d\psi/dr}\frac{\partial}{ \partial r}\left[V_{\psi}^{\prime}\sum_{s}\rho_{Ls}^{2}\frac{n_{s}e_{s}^{2}}{ T_{s}}D_{Gs}\frac{d\psi}{dr}\frac{\partial}{\partial r}\left[\delta\phi_{G} \right]_{\psi}\right]=\frac{4\pi^{2}}{V_{\psi}^{\prime}}\sum_{s}\sum_{\hat{ \sigma}}\int d\mathcal{E}d\mu\tau_{bs}e_{s}\] \[\qquad\times\sum_{l}\frac{i}{\omega_{G}-l\omega_{bs}}\overline{ \cos l\vartheta_{c}J_{0}e^{-iQ_{Gs}}}\,e^{-il\vartheta_{c}+iQ_{Gs}}N.L.\;,\] (A.3)
where \(\rho_{Ls}^{2}=(T_{s}/m_{s})/\bar{\Omega}_{s}^{2}\), the temperature \(T_{s}\) is defined as \(T_{s}\equiv n_{s}^{-1}\left\langle 2m_{s}\mu B_{0}\bar{F}_{0s}\right\rangle_{v}\) for a generic non-Maxwellian distribution function, \(\bar{\Omega}_{s}\) is the cyclotron frequency computed at the on-magnetic-axis magnetic field \(B_{0}=\bar{B}_{0}\), and \(D_{Gs}\) is the \(s\)-species contribution to the GAM/EGAM dispersion response, expressed as, noting Eq. (A.2):
\[D_{Gs}\left[\delta\phi_{G}\right]_{\psi}=\frac{4\pi^{2}}{V_{\psi }^{\prime}}\sum_{\hat{\sigma}}\int d\mathcal{E}d\mu\tau_{bs}\left\{\left[ \overline{\left[\frac{2\mu\bar{B}_{0}^{2}}{n_{s}B_{0}}\left(\frac{J_{0}^{2}-1}{ \lambda_{s}^{2}}\right)\left(\frac{\partial\bar{F}_{0s}}{\partial\mathcal{E}}+ \frac{1}{B_{0}}\frac{\partial\bar{F}_{0s}}{\partial\mu}\right)\right]}\right.\right.\] \[\left.\left.+\sum_{l}\frac{l^{2}\omega_{bs}^{2}}{\omega_{G}^{2}-l ^{2}\omega_{bs}^{2}}\overline{\cos l\vartheta_{c}J_{0}e^{-iQ_{Gs}}}\overline{ \left[e^{iQ_{Gs}}\cos l\vartheta_{c}\frac{\bar{\Omega}_{s}^{2}}{n_{s}k_{r}^{2}} \frac{\partial\bar{F}_{0s}}{\partial\mathcal{E}}J_{0}\right]}\right]\left[ \delta\phi_{G}\right]_{\psi}\right.\] \[\left.\left.+\sum_{l}\frac{\omega_{G}^{2}}{\omega_{G}^{2}-l^{2} \omega_{bs}^{2}}\overline{\cos l\vartheta_{c}J_{0}e^{-iQ_{Gs}}}\overline{ \left[e^{iQ_{Gs}}\cos l\vartheta_{c}\frac{\bar{\Omega}_{s}^{2}}{n_{s}k_{r}^{2}} \frac{\partial\bar{F}_{0s}}{\partial\mathcal{E}}J_{0}\delta\hat{\phi}_{G}\right]}\right]\right.\] \[\left.\left.-\sum_{l}\frac{l\omega_{bs}\omega_{G}}{\omega_{G}^{2}-l ^{2}\omega_{bs}^{2}}\overline{\cos l\vartheta_{c}J_{0}e^{-iQ_{Gs}}}\overline{ \left[e^{iQ_{Gs}}i\sin l\vartheta_{c}\frac{\bar{\Omega}_{s}^{2}}{n_{s}k_{r}^{2}} \frac{\partial\bar{F}_{0s}}{\partial\mathcal{E}}J_{0}\delta\hat{\phi}_{G}\right]} \right]\right\}\right\}\;.\] (A.4)
Note that electrons do not contribute to the GAM/EGAM dispersion response since, as is well known, they cannot respond to \(n=0\) perturbations in the GAM/EGAM frequency range. Equation (A.4) generalizes previously derived expressions of the GAM/EGAM dispersion relation [57, 42, 58] (cf. Ref. [50] for a recent review) to the case of general geometry and distribution functions, and recovers them in the proper limit; e.g., for circular cross section tokamak equilibria, where, upon expanding \(e^{iQ_{Gs}}\simeq 1+iQ_{Gs}\) in the long wavelength limit,
\[iQ_{Gs}\simeq\left[1+\left(1+\frac{\mu\bar{B}_{0}}{\bar{v}_{\parallel}^{2}} \right)\frac{r}{R_{0}}\cos\theta\right]\frac{qR_{0}}{r}\frac{\bar{v}_{ \parallel}}{\bar{\Omega}_{s}}\partial_{r}\;,\] (A.5)
with \(R=R_{0}\) denoting the magnetic axis and \(\bar{v}_{\parallel}\) the parallel velocity at \(\bar{B}_{0}\); we also have \(V_{\psi}^{\prime}=4\pi^{2}qR_{0}/\bar{B}_{0}\) and \(\tau_{b}=2\pi qR_{0}/|\bar{v}_{\parallel}|\) for well circulating particles. In fact, assuming \(|\omega_{G}|\gg\omega_{b}\) and a single ion species, Eq. (A.2) yields
\[\delta\hat{\phi}_{G}\simeq 2\frac{T_{e}}{T_{i}}\frac{T_{i}/m_{i}}{\Omega_{i} \omega_{G}}\frac{i}{R_{0}}\sin\theta\frac{\partial}{\partial r}\left[\delta \phi_{G}\right]_{\psi}\;,\]
having noted that \(\vartheta_{c}\simeq\hat{\sigma}\theta\) for well circulating particles, while Eq. (A.4) reduces to
\[D_{Gi}\simeq 1-\frac{2T_{i}/m_{i}}{R_{0}^{2}\omega_{G}^{2}}\left(\frac{7}{4}+ \frac{T_{e}}{T_{i}}\right)\;,\]
from which the leading order GAM frequency can be obtained. The GAM collisionless damping and/or resonant EGAM excitation by phase space anisotropic EPs can be obtained from the wave-particle resonances embedded in Eq. (A.4). Meanwhile, GAM/EGAM collisional damping is readily restored by letting \(N.L.\to N.L.-\left(C_{g}+\mathcal{S}\right)\) on the RHS of Eq. (A.3) (cf. Eq. (42) in Sec. 3). Most importantly, however, the formally nonlinear term on the RHS of Eq. (A.3) allows us to discuss the relative role of different processes contributing to GAM/EGAM nonlinear dynamics [43, 50], which are addressed in the next two subsections.
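As a minimal worked example of how the leading order GAM frequency follows from the reduced dispersion function \(D_{Gi}\) above (added here purely for illustration, with \(v_{ti}\equiv\sqrt{2T_{i}/m_{i}}\) a shorthand introduced for this purpose), setting \(D_{Gi}=0\) and solving for \(\omega_{G}\) gives

\[\omega_{G}^{2}\simeq\frac{2T_{i}/m_{i}}{R_{0}^{2}}\left(\frac{7}{4}+\frac{T_{e}}{T_{i}}\right)\,,\qquad\omega_{G}\simeq\frac{v_{ti}}{R_{0}}\sqrt{\frac{7}{4}+\frac{T_{e}}{T_{i}}}\;,\]

i.e., the familiar leading order GAM frequency scaling with the ion thermal speed over the major radius.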
### Zero frequency zonal flow generation by geodesic acoustic modes
Consider the generation of zero frequency zonal flow by self-modulation of interacting GAMs. In particular, let us look at the flux surface averaged quasineutrality condition, Eq. (42), in the form of Eq. (44); i.e.,
\[\sum_{s}\frac{n_{s}e_{s}^{2}}{T_{s}}\chi_{zs}\left[\delta\phi_{z}\right]_{\psi }=-\sum_{s}e_{s}\partial_{t}^{-1}\left[\mathbf{\nabla}\cdot\mathbf{\Gamma}_{N.L.s} \right]_{\psi}\;,\] (A.6)
where \(\chi_{zs}\) is the general polarizability expression derived above in Eq. (43). In order to calculate the nonlinear flux due to the GAM, we assume that the mode frequency is much larger than the bounce/transit frequency, \(|\omega_{G}|\gg\omega_{b}\), and, thus, from Eq. (A.1),
\[\delta g_{BG}\simeq-e^{iQ_{G}}\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial\mathcal{E}}J_{0}\delta\phi_{G}+i\sum_{l}\frac{l\omega_{b}}{\omega_{G}}\sin l\vartheta_{c}\left[-\overline{e^{iQ_{G}}\cos l\vartheta_{c}\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial\mathcal{E}}J_{0}}\right]\left[\delta\phi_{G}\right]_{\psi}\;,\] (A.7)
up to first order in the \(\omega_{b}/\omega_{G}\) expansion. Note that all bounce harmonics are retained, since GAMs are characterized by a finite frequency, and that, similarly to Eq. (16), we have considered an up-down symmetric equilibrium for simplicity but without loss of generality. Now, let us note that \(\delta\hat{\phi}_{G}={\cal O}(k_{z}\rho_{L})[\delta\phi_{G}]_{\psi}\) for GAMs [40, 41, 42, 43] and, thus, that linear as well as nonlinear dynamics are dominated by finite orbit width effects. As a consequence, the nonlinear flux due to the GAM on the RHS of Eq. (A.6) is dominated by the \(\propto e^{iQ_{z}}\delta\dot{\theta}_{z}\partial_{\theta}\delta F_{z}\) term. Noting also that
\[\delta\dot{\theta}_{z}=-\frac{cRB_{\phi}}{{\cal J}B_{0}B_{\parallel}^{*}(d \psi/dr)}(J_{0}\delta E_{rz})\simeq-\frac{cRB_{\phi}}{{\cal J}B_{0}^{2}(d\psi/ dr)}(J_{0}\delta E_{rz})\]
at the leading order, where \(\delta E_{rz}\) is the GAM radial electric field, we have
\[\delta E_{rz}\simeq\frac{1}{2}\left(\delta E_{rG}(r,t)e^{-i\omega_{G}t}+\delta E_{rG}^{*}(r,t)e^{i\omega_{G}t}\right)\;; \tag{A.8}\]
and, thus,
\[\overline{e^{iQ_{z}}N.L.} = \Bigg[\hat{\sigma}\sum_{l}\overline{e^{iQ_{G}}\cos l\vartheta_{c}\frac{cRB_{\phi}}{4{\cal J}B_{0}^{2}(d\psi/dr)}(J_{0}\delta E_{rG})^{*}}\,\frac{il^{2}\omega_{b}}{(\omega_{G}+i\partial_{t})}\,\overline{e^{iQ_{G}}\cos l\vartheta_{c}\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial{\cal E}}J_{0}\delta\phi_{G}}+c.c.\Bigg] \tag{A.9}\] \[\simeq \partial_{t}\,\hat{\sigma}\sum_{l}\frac{l^{2}\omega_{b}}{\omega_{G}^{2}}\,\overline{e^{iQ_{G}}\cos l\vartheta_{c}\frac{cRB_{\phi}}{4{\cal J}B_{0}^{2}(d\psi/dr)}(J_{0}\delta E_{rG})^{*}}\;\overline{e^{iQ_{G}}\cos l\vartheta_{c}\frac{e}{m}\frac{\partial\bar{F}_{0}}{\partial{\cal E}}J_{0}\delta\phi_{G}}\;,\]
where we have assumed that \(\partial_{\theta}\sin l\vartheta_{c}\simeq l\hat{\sigma}\cos l\vartheta_{c}\) for well circulating particles [1]1, \(c.c.\) stands for complex conjugate, and \((\omega_{G}+i\partial_{t})^{-1}\) is the inverse of the operator \((\omega_{G}+i\partial_{t})\). Equation (A.6) thus becomes
Footnote 11: As for the case of up-down symmetric equilibria, this assumption simplifies notations but can be generally relaxed when carrying out numerical quadratures, which allow using the general map \(\theta\mapsto\vartheta_{c}\) for given constants of motion \((P_{\phi},{\cal E},\mu)\).
\[\sum_{s}\frac{n_{s}e_{s}^{2}}{T_{s}}\chi_{zs}\left[\delta\phi_{z}\right]_{\psi} = -\frac{4\pi^{2}}{V_{\psi}^{\prime}}\sum_{s}\sum_{\hat{\sigma}}\int d{\cal E}d\mu\,\tau_{bs}\,\overline{e^{-iQ_{zs}}J_{0}} \tag{A.10}\] \[\times\ \hat{\sigma}\sum_{l}\frac{l^{2}\omega_{bs}}{\omega_{G}^{2}}\,\overline{e^{iQ_{Gs}}\cos l\vartheta_{c}\frac{cRB_{\phi}}{4{\cal J}B_{0}^{2}(d\psi/dr)}(J_{0}\delta E_{rG})^{*}}\;\overline{e^{iQ_{Gs}}\cos l\vartheta_{c}\frac{e_{s}^{2}}{m_{s}}\frac{\partial\bar{F}_{0s}}{\partial{\cal E}}J_{0}\delta\phi_{G}}\;.\]
This expression generalizes that derived in Ref. [43] and reduces to it upon expanding \(e^{iQ_{Gs}}\simeq 1+iQ_{Gs}\) in the long wavelength limit and noting Eq. (A.5) for a high aspect-ratio tokamak equilibrium. Consistent with [43], Eq. (A.10) suggests that efficient generation of zero frequency zonal flow by GAMs occurs at short wavelength due to
finite orbit width effects. Meanwhile, in the long wavelength limit, the leading order response is finite only for distribution functions that are not even in \(\hat{\sigma}\). For distribution functions that are symmetric in \(\hat{\sigma}\), retaining higher order contributions in the \(e^{iQ_{Gs}}\) and \(e^{-iQ_{zs}}\) expansions is necessary for computing the leading order non-vanishing term on the RHS of Eq. (A.10), as shown in Ref. [64] for the case of a bi-Maxwellian \(\bar{F}_{0}\), which is readily recovered from Eq. (A.10) in the proper limit.
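For completeness, the \(\hat{\sigma}\)-parity argument behind the last statement can be spelled out schematically (this is an added illustration, not part of the original derivation). In the long wavelength limit, \(e^{iQ_{Gs}},e^{-iQ_{zs}}\to 1\) and, for \(\bar{F}_{0}\) even in \(\hat{\sigma}\), the integrand of Eq. (A.10) reduces to an explicit factor \(\hat{\sigma}\) times a function even in \(\hat{\sigma}\), so that

\[\sum_{\hat{\sigma}=\pm 1}\hat{\sigma}\,H(\mathcal{E},\mu)=0\;.\]

Since \(Q_{zs}\) and \(Q_{Gs}\) are proportional to \(v_{\parallel}\) and hence odd in \(\hat{\sigma}\), the first corrections in the \(e^{iQ_{Gs}}\) and \(e^{-iQ_{zs}}\) expansions supply the additional odd factor needed for a finite result, which is why they must be retained for \(\hat{\sigma}\)-symmetric distribution functions.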
### Null modification of GAM by ZFs and null GAM second harmonic generation
Let us first consider the GAM modulation by the ZFs generated either by the process discussed in Sec. A.2 or by \(n\neq 0\) toroidal symmetry breaking fluctuations. In Eq. (24), with \(|\omega_{G}|\gg\omega_{b}\), the relevant nonlinear term reads
\[\frac{i}{\omega_{G}}\int d\mathcal{E}\tau_{bs}\overline{J_{0}N.L.}=\frac{i}{\omega_{G}}\int d\mathcal{E}\oint d\theta J_{0}\left\{\left[\frac{cRB_{\phi}}{B_{0}v_{\parallel}}\frac{\partial}{\partial\psi}\left(J_{0}\left[\delta\phi_{z}\right]_{\psi}\right)\frac{\partial}{\partial\theta}\right.\right.\] \[\left.-\frac{\partial}{\partial\theta}\left(\frac{cRB_{\phi}v_{\parallel}}{B_{0}}\right)\frac{\partial}{\partial\psi}\left(J_{0}\left[\delta\phi_{z}\right]_{\psi}\right)\frac{\partial}{\partial\mathcal{E}}\right]\delta F_{Gs}\] \[+\left[\frac{cRB_{\phi}}{B_{0}v_{\parallel}}\frac{\partial}{\partial\psi}\left(J_{0}\left[\delta\phi_{G}\right]_{\psi}\right)\frac{\partial}{\partial\theta}-\frac{\partial}{\partial\theta}\left(\frac{cRB_{\phi}v_{\parallel}}{B_{0}}\right)\right.\] \[\left.\left.\times\frac{\partial}{\partial\psi}\left(J_{0}\left[\delta\phi_{G}\right]_{\psi}\right)\frac{\partial}{\partial\mathcal{E}}\right]\delta F_{zs}\right\}\;, \tag{A.11}\]
at the leading order in the \(\mathcal{O}(k_{z}q\rho_{L}),\mathcal{O}(k_{G}\rho_{L})\) expansion, where \(\delta F_{G}\) denotes the GAM particle response, while \(\delta F_{z}\) is the low frequency particle response consistent with the ZFs generated nonlinearly in Eq. (A.10). Here, the first terms in the square brackets represent \(\delta\dot{\theta}_{z,G}\partial_{\theta}\), respectively, while the second ones stand for \(\delta\dot{\mathcal{E}}_{z,G}\partial_{\mathcal{E}}\). Integrating by parts in \(\theta\) the first terms and by parts in \(\mathcal{E}\) the second ones, it can be recognized that these expressions vanish at the leading order, which means that GAMs cannot be modulated by ZFs, either generated by self-modulation or by \(n\neq 0\) toroidal symmetry breaking fluctuations, on the parallel nonlinearity time scale. This result is consistent with the findings of Ref. [43].
Let us now reconsider the GAM self-modulation and compute the generation of the GAM second harmonic. Equation (A.11) with \(|\omega_{G}|\gg\omega_{b}\) can be specialized to this case and becomes, at the leading order in the \(\mathcal{O}(k_{G}\rho_{L})\) expansion,
\[\frac{i}{\omega_{GII}}\int d\mathcal{E}\tau_{bs}\overline{J_{0}N.L.}=\frac{i}{\omega_{GII}}\int d\mathcal{E}\oint d\theta J_{0}\left[\frac{cRB_{\phi}}{B_{0}v_{\parallel}}\frac{\partial}{\partial\psi}\left(J_{0}\left[\delta\phi_{G}\right]_{\psi}\right)\frac{\partial}{\partial\theta}\right.\] \[\left.-\frac{\partial}{\partial\theta}\left(\frac{cRB_{\phi}v_{\parallel}}{B_{0}}\right)\frac{\partial}{\partial\psi}\left(J_{0}\left[\delta\phi_{G}\right]_{\psi}\right)\frac{\partial}{\partial\mathcal{E}}\right]\delta F_{Gs}\;. \tag{A.12}\]
Here, \(\omega_{GII}\simeq 2\omega_{G}\) stands for GAM second harmonic possibly driven by the considered finite amplitude GAM. Again, integrating by parts in \(\theta\) the first term in square brackets and by parts in \(\mathcal{E}\) the second one, this expression vanishes at the leading order, which means that GAM self-modulation cannot generate second harmonic GAM on the
parallel nonlinearity time scale. Second harmonic GAM generation becomes possible by including EP nonlinear dynamics in the GAM self-modulation or higher order thermal plasma finite orbit width effects. Equation (A.12) generalizes to shaped geometry and arbitrary distribution functions the original results of Refs. [43, 50, 59, 60, 61].
### Nonlinear dynamics of energetic particle driven geodesic acoustic modes
When looking at GAMs excited by EPs, the assumption \(\left|\omega_{G}\right|\gg\omega_{b}\) underlying the derivations in Sec. A.3 is no longer applicable, and this bears consequences for the GAM-ZF and GAM-PSZS interactions. Let us reconsider Eq. (13) with the most general nonlinear interaction term on the RHS. We analyze first the PSZS induced nonlinear term, noting that, in the low-frequency limit,
\[\delta F_{z}=e^{-iQ_{z}}\overline{e^{iQ_{z}}\delta F_{z}}\;.\]
This means that the corresponding low-frequency \(\delta F_{z}\) is a function only of \((P_{\phi},\mathcal{E},\mu)\). Thus, when looking at its nonlinear interaction with a generic fluctuation structure, including the \(n=0\) GAM/EGAM, we have
\[\left(\delta\dot{\psi}_{G}\partial_{\psi}+\delta\dot{\theta}_{G}\partial_{\theta}+\delta\dot{\mathcal{E}}_{G}\partial_{\mathcal{E}}\right)\delta F_{z}=-\left(\partial_{t}+\dot{\mathbf{X}}_{0}\cdot\mathbf{\nabla}\right)\left(\frac{e}{m}\left\langle\delta L_{gG}\right\rangle\right)\frac{\partial\delta F_{z}}{\partial\mathcal{E}}\] \[\qquad+\left(\partial_{t}+\dot{\mathbf{X}}_{0}\cdot\mathbf{\nabla}\right)\left(\frac{RB_{\phi}\left\langle\delta A_{\parallel gG}\right\rangle}{B_{0}}\right)\frac{\partial\delta F_{z}}{\partial\bar{\psi}}+\frac{e}{m}\partial_{t}\left\langle\delta L_{gG}\right\rangle\frac{\partial\delta F_{z}}{\partial\mathcal{E}}\;, \tag{A.13}\]
where we have noted Eqs. (14) and (15). Now recall Eq. (18) along with Eqs. (37) and (38). Thus, when computing the contribution of the first term on the RHS above to the nonlinear interaction term in Eq. (13), we have
\[\overline{e^{-il\vartheta_{c}+iQ_{G}}N.L.}=i(\omega_{G}-l\omega_{b})\overline{e^{-il\vartheta_{c}+iQ_{G}}\left(\frac{e}{m}\left\langle\delta L_{gG}\right\rangle\right)}\frac{\partial}{\partial\mathcal{E}}\overline{e^{iQ_{z}}\delta F_{z}}\;. \tag{A.14}\]
A similar equation can be derived for the nonlinear contribution of the second term on the RHS in Eq. (13). As a consequence, we can incorporate the low frequency response into the zonal state, by allowing a fast spatial variation, consistent with Eq. (24) as well as Eqs. (17) and (23), preserving the structure of the governing equations. Physically, this means that the low frequency distortion in the particle distribution function can be treated as a nonlinear equilibrium and corresponds to the renormalization of particle response discussed in Sec. 2.2. It also further illuminates the physical meaning of PSZS and zonal state.
We analyze now the effect of ZFs on GAM/EGAM. At the leading order, we have
\[e^{iQ_{G}}\left(\delta\dot{\psi}_{z}\partial_{\psi}+\delta\dot{\theta}_{z}\partial_{\theta}+\delta\dot{\mathcal{E}}_{z}\partial_{\mathcal{E}}\right)e^{-iQ_{G}}\delta F_{BG}=\left[e^{iQ_{z}}\left(\delta\dot{\theta}_{z}\partial_{\theta}+\delta\dot{\mathcal{E}}_{z}\partial_{\mathcal{E}}\right)\right]\delta F_{BG}\;. \tag{A.15}\]
Therefore, the propagator \((\omega_{z}-l\omega_{b})^{-1}\) in Eq. (41) is renormalized as
\[\left(\omega_{G}+i\partial_{t}-l\omega_{b}-\Delta_{1}\right)^{-1}\;, \tag{A.16}\]
where
\[\Delta_{1}=-ie^{-il\vartheta_{c}}\left[e^{iQ_{z}}\left(\delta\dot{\theta}_{z}\partial_{\theta}+\delta\dot{\mathcal{E}}_{z}\partial_{\mathcal{E}}\right)\right]e^{il\vartheta_{c}}\;. \tag{A.17}\]
Here, \(\{...\}^{-1}\) denotes the inverse operator and, for simplicity, we have assumed isolated resonances, near which, given the results of Sec. A.3, we can expect the dominant nonlinear effects to occur. The overlapping resonance case can be handled by a similar approach at the price of additional technical complications. Note that Eq. (A.17) describes the "shearing" effect of the ZFs, that is, the wave-particle decorrelation effect near resonance due to the poloidal flow, as well as the low-frequency axisymmetric energy redistribution due to ZFs. The latter is typically negligible for toroidal symmetry breaking fluctuations but, in general, needs to be taken into account for a proper treatment of the effect of \(n=0\) GAM/EGAM. Following a similar argument and Dupree's classical approach to resonance broadening theory [65], we can also evaluate the effect of the renormalization of the propagator in Eq. (A.16) due to the generation of a second harmonic component in the particle distribution functions. Noting that, near the \(\omega_{G}=l\omega_{b}\) resonance, where \(\delta F_{BG}\simeq e^{il\vartheta_{c}}\delta F_{BG}^{(l)}\),
\[\delta F_{BGII}\simeq-i\sum_{l^{\prime}}\frac{e^{il^{\prime}\vartheta_{c}}}{\left(\omega_{GII}-l^{\prime}\omega_{b}\right)}\overline{e^{-il^{\prime}\vartheta_{c}}\left[e^{iQ_{G}}\left(\delta\dot{\theta}_{G}\partial_{\theta}+\delta\dot{\mathcal{E}}_{G}\partial_{\mathcal{E}}\right)\right]e^{il\vartheta_{c}}}\,\delta F_{BG}^{(l)}\;, \tag{A.18}\]
where, noting Eq. (A.8),
\[\delta\dot{\theta}_{G} = -\frac{cRB_{\phi}}{\mathcal{J}B_{0}B_{\parallel}^{*}}\frac{J_{0}\delta E_{rG}}{2d\psi/dr}\,,\] \[\delta\dot{\mathcal{E}}_{G} = \frac{cv_{\parallel}}{\mathcal{J}B_{\parallel}^{*}}\frac{\partial}{\partial\theta}\left(\frac{RB_{\phi}v_{\parallel}}{B_{0}}\right)\frac{J_{0}\delta E_{rG}}{2d\psi/dr}\,.\]
Thus, the further renormalization of the propagator in Eq. (A.16) yields
\[\left(\omega_{G}+i\partial_{t}-l\omega_{b}-\Delta_{1}-\Delta_{2}\right)^{-1}\;, \tag{A.19}\]
with
\[\Delta_{2} = -\sum_{l^{\prime}}\overline{e^{-il\vartheta_{c}}\left[e^{iQ_{G}}\left(\delta\dot{\theta}_{G}\partial_{\theta}+\delta\dot{\mathcal{E}}_{G}\partial_{\mathcal{E}}\right)^{*}\right]e^{il^{\prime}\vartheta_{c}}}\frac{1}{\left(\omega_{GII}-l^{\prime}\omega_{b}\right)} \tag{A.20}\] \[\times\overline{e^{-il^{\prime}\vartheta_{c}}\left[e^{iQ_{G}}\left(\delta\dot{\theta}_{G}\partial_{\theta}+\delta\dot{\mathcal{E}}_{G}\partial_{\mathcal{E}}\right)\right]e^{il\vartheta_{c}}}\;.\]
Equation (A.20) accounts for the nonlinear frequency shift as well as for resonance broadening [65], acting as a spontaneous nonlinear regulation of the minimum resonance width for a coherent nearly-periodic spectrum [10, 11]. In summary, the GAM/EGAM nonlinear problem is formally linear and given by Eq. (A.3) with vanishing RHS, where, however, in Eq. (A.4) the PSZS is given by Eq. (24) and the renormalized propagator by Eq. (A.19). This is consistent with the findings of Appendix A.3, predicting null nonlinear interactions in the GAM-ZF and GAM-GAM system when wave-particle interactions are neglected [43]. The nonlinear system is closed by the PSZS evolution equation, Eqs.
(21) and (22), which, near the \(\omega_{G}\simeq l\omega_{b}\) resonance, can be combined as:
\[\partial_{t}\overline{e^{iQ_{z}}F_{0*}}=\left.\overline{e^{iQ_{z}}\left[C_{g}+\mathcal{S}\right]}\right|_{z}+\left[\overline{e^{iQ_{G}}\sin l\vartheta_{c}\frac{cv_{\parallel}}{\mathcal{J}B_{\parallel}^{*}}\frac{\partial}{\partial\theta}\left(\frac{RB_{\phi}v_{\parallel}}{B_{0}}\right)\frac{J_{0}|\delta E_{rG}|}{2d\psi/dr}}\right] \tag{A.21}\] \[\times\frac{\partial}{\partial\mathcal{E}}\left\{l\omega_{b}\left[\left(\omega_{G}-l\omega_{b}\right)^{2}+\partial_{t}^{2}\right]^{-1}\partial_{t}\left[\overline{e^{iQ_{G}}\cos l\vartheta_{c}\frac{e}{m}\frac{\partial\bar{F}_{0*}}{\partial\mathcal{E}}J_{0}\left|\delta\phi_{G}\right|}\right]\right\}\;.\]
Here, for simplicity, we have dropped the \(\Delta_{1}\) and \(\Delta_{2}\) terms in Eq. (A.19) and noted the result of Appendix A.2 to neglect the \(\propto\dot{\theta}_{G}\partial_{\theta}\) contribution to the nonlinear response for \(F_{0*}\) symmetric in \(\hat{\sigma}\). We have also assumed \(T_{e}/T_{i}\ll 1\), without loss of generality, in order to drop \(\delta\hat{\phi}_{G}\) with respect to \([\delta\phi_{G}]_{\psi}\), following Ref. [50]. Equation (A.21) represents the evolution of the zonal state under the action of sources and collisions, as well as of the emission and re-absorption of the GAM/EGAM fluctuations. In this respect, neglecting sources and collisions, Eq. (A.21) is a Dyson-like equation [66, 67], as noted earlier [1, 2, 3, 6], and its solution, which can be formally represented as a Dyson series, describes the evolution of the ZS. Equation (A.21) is given in the time representation and is the extension to general geometry of the analogous equation for the evolution of the renormalized fast ion distribution function given in Ref. [50] using the frequency representation. Equations (A.3) and (A.21) are perhaps one of the simplest possible illustrations of the application of the DSM to the self-consistent evolution of the ZS. More detailed analyses of Eqs. (A.3) and (A.21) are beyond the present scope, which is the illustration of simple applications of the general theoretical framework; thus, they will be reported elsewhere.
|
2310.15693 | Towards Automated Recipe Genre Classification using Semi-Supervised
Learning | Sharing cooking recipes is a great way to exchange culinary ideas and provide
instructions for food preparation. However, categorizing raw recipes found
online into appropriate food genres can be challenging due to a lack of
adequate labeled data. In this study, we present a dataset named the
``Assorted, Archetypal, and Annotated Two Million Extended (3A2M+) Cooking
Recipe Dataset" that contains two million culinary recipes labeled in
respective categories with extended named entities extracted from recipe
descriptions. This collection of data includes various features such as title,
NER, directions, and extended NER, as well as nine different labels
representing genres including bakery, drinks, non-veg, vegetables, fast food,
cereals, meals, sides, and fusions. The proposed pipeline named 3A2M+ extends
the size of the Named Entity Recognition (NER) list to address missing named
entities like heat, time or process from the recipe directions using two NER
extraction tools. 3A2M+ dataset provides a comprehensive solution to the
various challenging recipe-related tasks, including classification, named
entity recognition, and recipe generation. Furthermore, we have demonstrated
traditional machine learning, deep learning and pre-trained language models to
classify the recipes into their corresponding genre and achieved an overall
accuracy of 98.6\%. Our investigation indicates that the title feature played a
more significant role in classifying the genre. | Nazmus Sakib, G. M. Shahariar, Md. Mohsinul Kabir, Md. Kamrul Hasan, Hasan Mahmud | 2023-10-24T10:03:27Z | http://arxiv.org/abs/2310.15693v1 | # Towards Automated Recipe Genre Classification using Semi-Supervised Learning
###### Abstract
Sharing cooking recipes is a great way to exchange culinary ideas and provide instructions for food preparation. However, categorizing raw recipes found online into appropriate food genres can be challenging due to a lack of adequate labeled data. In this study, we present a dataset named the "Assorted, Archetypal, and Annotated Two Million Extended (3A2M+) Cooking Recipe Dataset" that contains two million culinary recipes labeled in respective categories with extended named entities extracted from recipe descriptions. This collection of data includes various features such as title, NER, directions, and extended NER, as well as nine different labels representing genres including bakery, drinks, non-veg, vegetables, fast food, cereals, meals, sides, and fusions. The proposed pipeline named 3A2M+ extends the size of the Named Entity Recognition (NER) list to address missing named entities like heat, time or process from the recipe directions using two NER extraction tools. 3A2M+ dataset provides a comprehensive solution to the various challenging recipe-related tasks, including classification, named entity recognition, and recipe generation. Furthermore, we have demonstrated traditional machine learning, deep learning
and pre-trained language models to classify the recipes into their corresponding genre and achieved an overall accuracy of 98.6%. Our investigation indicates that the title feature played a more significant role in classifying the genre.
Keywords:Named Entity Recognition (NER), Annotation, Active Learning, 3A2M, Recipe dataset, Human-in-the-loop (HITL), Recipe classification, RecipeNLG, RoBERTa, DistilBERT
## 1 Introduction
Food recipe classification is a crucial aspect of understanding and organizing recipe data, particularly in the field of machine learning and deep learning. With the growing availability of large food recipe datasets (RecipeNLG [1] and Recipe1M+ [2]) on the internet, researchers have been able to train and test deep learning models in the culinary domain [3; 4]. Recipe data contains a wealth of information, including ingredients, cooking instructions, recipe titles, and categories, that can be used to train machine learning models.
The primary goal of recipe classification is to make it easier for users to find alternative recipes based on their preferred genre [5]. The classification of food recipes into genres makes it easier for users to find recipes based on their preferences and tastes, as well as enhancing the search functionality of online recipe databases. Recipe genre classification allows the creation of recommendation and recipe generation systems that can make better suggestions to users based on their preferred cuisine or ingredients. However, classifying recipes into different genres can be subjective and based on personal opinions, which can lead to inconsistencies in the classification process. Classifying recipes into too few genres can result in oversimplification, while classifying recipes into too many genres can make it difficult for users to find the recipe they are looking for [5; 6; 7; 8]. Therefore, in this study, we have used a machine learning-based approach to classify a large recipe dataset, taking advantage of the ability of machine learning algorithms to handle large amounts of data and identify patterns and relationships in the data [8]. Methods used to classify food recipes include traditional machine learning, deep learning models and pre-trained language models.
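To make the classification setting concrete, the snippet below sketches a minimal bag-of-words baseline over recipe titles. This is an illustrative sketch only: the titles, genre assignments, and model choice are placeholders and do not reflect the pipeline or results reported in this paper.

```python
# Illustrative genre-classification baseline; titles and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "chocolate chip banana bread",
    "grilled chicken with lemon butter",
    "iced mint green tea",
    "roasted garlic mashed potatoes",
]
genres = ["bakery", "non-veg", "drinks", "vegetables"]  # labels drawn from the 9-genre scheme

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),  # word + bigram features from the title
    LogisticRegression(max_iter=1000),
)
clf.fit(titles, genres)
print(clf.predict(["strawberry oat muffins"]))  # expected to lean towards 'bakery'
```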
This study has made several significant contributions to the applications of deep learning and natural language processing by proposing a standardized framework for recipe genre classification.
* To begin with, we created a pipeline that generated a vast annotated dataset consisting of two million culinary recipes with extended named entities extracted from recipe descriptions. Additionally, we extended the Named Entity Recognition (NER) list to cover a wider range of named entities and their variations. By extending the named entity list of the 3A2M [9] dataset extracted from recipe descriptions (directions), we were able to address the issue of missing named entities like temperature of food processing, time of cooking and process of cooking or preserving. We named our resulting dataset the "Assorted,
Archetypal, and Annotated Two Million Extended (3A2M+) Cooking Recipe Dataset".
* Secondly, we applied conventional machine learning, deep learning, and pre-trained language models to classify recipes into their corresponding genres, achieving an overall accuracy of 98.6%. This confirms the efficacy of pre-trained language models such as DistilBERT and RoBERTa in the classification of recipe genres (a minimal fine-tuning sketch is given after this list).
* Finally, in addition to classification, named entity recognition, and recipe generation, the dataset we have developed could also be useful in other recipe-related tasks such as recipe recommendation, ingredient substitution, dietary analysis, and recipe summarization. Therefore, the "Assorted, Archetypal, and Annotated Two Million Extended (3A2M+) Cooking Recipe Dataset" can serve as a valuable resource for various applications in the domain of culinary research and development. The 3A2M+ dataset will be available at this URL: [https://t.ly/tnhB](https://t.ly/tnhB).
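The following is a minimal sketch of the kind of pre-trained language model fine-tuning referred to in the second contribution above, using the Hugging Face transformers library. The checkpoint, toy in-memory data, and hyperparameters are illustrative assumptions, not the configuration or data splits used in this study.

```python
# Minimal DistilBERT fine-tuning sketch for 9-way genre classification (illustrative only).
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

genres = ["bakery", "drinks", "non-veg", "vegetables", "fast food",
          "cereals", "meals", "sides", "fusions"]
label2id = {g: i for i, g in enumerate(genres)}

# Toy in-memory examples standing in for (title, genre) pairs from the dataset.
data = Dataset.from_dict({
    "text": ["chocolate chip banana bread", "iced mint green tea"],
    "label": [label2id["bakery"], label2id["drinks"]],
})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
data = data.map(lambda b: tokenizer(b["text"], truncation=True,
                                    padding="max_length", max_length=64), batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(genres),
    id2label={i: g for g, i in label2id.items()}, label2id=label2id)

args = TrainingArguments(output_dir="genre-clf", num_train_epochs=1,
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=data).train()
```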
The remainder of the paper is organized as follows: Section 2 examines the necessary contextual information and previous research. The dataset description and the process of named entity extraction are discussed in Section 3, while Section 4 presents the proposed methodology. In Section 5, the experimental setup is introduced, followed by the classification findings in Section 6. The conclusion of the study and potential future work is discussed in Section 7.
## 2 Literature Review
One of the research challenges in this field is the automatic classification of recipes into different categories such as desserts, soups, or salads. Another challenge is the provision of personalized recipe recommendations based on user preferences and dietary restrictions. To address these issues, researchers have developed a variety of machine learning methods, including transfer learning and active learning techniques. Moreover, the availability of large-scale recipe datasets has facilitated the development of advanced models capable of capturing complex relationships between ingredients, flavors, and cooking methods. This section covers a review of the literature on existing recipe datasets, recipe generation, machine learning techniques for recipe genre classification.
### Recipe Dataset
The availability of online recipe datasets has significantly transformed the way individuals discover and obtain recipes. These datasets, which are accessible through the internet, offer a vast collection of recipes, enabling people to conveniently search for recipes according to their preferences. Some of the commonly used online recipe datasets include Recipe1M+ [2], RecipeNLG [1], Food.com1, RecipeDB2, and Recipe5K3. With these datasets, people can easily find and access an extensive range of recipes, providing them with valuable information about ingredients,
instructions, and even nutrition information. As such, these datasets have become a valuable resource for both home cooks and professional chefs alike.
Salvador et al. [10] proposed a method to learn cross-modal embeddings that can represent both food images and cooking recipes in a shared space, enabling the model to reason about the relationships between them. The authors used a deep neural network architecture to learn the cross-modal embeddings. The network consists of three components: a vision module that processes food images, a language module that processes cooking recipes, and a joint embedding module that combines the output of the vision and language modules into a shared space. The joint embedding module was trained to minimize the distance between the embeddings of corresponding food images and cooking recipes. To evaluate the effectiveness of their method, the authors conducted experiments on two datasets: Recipe1M and Food-101. The Recipe1M dataset contains over one million recipes with corresponding images, while the Food-101 dataset contains images of 101 food categories. The authors showed that their method outperformed several baseline methods in both the recipe-to-image and image-to-recipe retrieval tasks, indicating that their cross-modal embeddings were able to capture meaningful correlations between food images and cooking recipes.
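The two-branch joint-embedding idea described above can be sketched as follows. This is a schematic PyTorch example over precomputed image and recipe features; the dimensions and the in-batch contrastive loss are illustrative choices, not the exact architecture or training objective of Ref. [10].

```python
# Schematic two-branch joint embedding (illustrative; not the model of Ref. [10]).
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=1024, emb_dim=512):
        super().__init__()
        # Project precomputed image and recipe features into a shared space.
        self.img_proj = nn.Sequential(nn.Linear(img_dim, emb_dim), nn.ReLU(),
                                      nn.Linear(emb_dim, emb_dim))
        self.txt_proj = nn.Sequential(nn.Linear(txt_dim, emb_dim), nn.ReLU(),
                                      nn.Linear(emb_dim, emb_dim))

    def forward(self, img_feat, txt_feat):
        z_img = F.normalize(self.img_proj(img_feat), dim=-1)
        z_txt = F.normalize(self.txt_proj(txt_feat), dim=-1)
        return z_img, z_txt

def contrastive_loss(z_img, z_txt, temperature=0.07):
    # In-batch negatives: matching image/recipe pairs sit on the diagonal.
    logits = z_img @ z_txt.t() / temperature
    targets = torch.arange(z_img.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

model = JointEmbedding()
img, txt = torch.randn(8, 2048), torch.randn(8, 1024)   # stand-ins for CNN / text features
loss = contrastive_loss(*model(img, txt))
loss.backward()
print(float(loss))
```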
Marin et al. [2] introduced a new dataset called Recipe1M+ for learning cross-modal embeddings for cooking recipes and food images. The authors argued that food is a highly visual and sensory experience, and that there is a need for models that can understand the relationship between food images and the corresponding recipes. The Recipe1M+ dataset [2] is created by combining two existing datasets, Recipe1M and the ECCV 2018 Food Image Recognition Challenge dataset. Recipe1M contains one million recipes with associated metadata, while the ECCV dataset contains 55,000 food images with associated labels. The authors used a combination of automatic and manual methods to align the recipes and images in the dataset. To demonstrate the usefulness of the Recipe1M+ dataset, the authors trained several models for cross-modal retrieval, including a recipe-to-image model and an image-to-recipe model. The authors showed that the models were able to retrieve relevant images and recipes based on a query, and that the quality of the retrieval was improved with the use of cross-modal embeddings learned from the dataset. The authors also provided a detailed analysis of the dataset, including statistics on the number of recipes, images, and unique ingredients in the dataset. They also provided examples of the types of relationships that can be learned from the dataset, such as the similarity between different types of pizza.
Generating natural language text that follows a semi-structured format, such as a cooking recipe, requires models that can understand the underlying structure and content of the text. In order to develop and evaluate such models, large and diverse datasets are required. Bien et al. [1] presented RecipeNLG, a new dataset for semi-structured text generation in the domain of cooking recipes. The authors described their methodology for collecting and annotating the dataset. They first collected a large number of recipes from various sources, such as recipe websites and cookbooks. They then annotated the recipes by identifying and labeling the key components of the recipe, such as ingredients, cooking actions, and cooking times. The authors also
provided statistics on the size and diversity of the dataset, including the number of unique ingredients, cooking actions, and recipe structures. The authors of this dataset conducted experiments on two tasks, namely recipe generation and recipe rewriting, to demonstrate the utility of the RecipeNLG dataset. To generate new recipes, they employed a neural language model, while for translating recipes from one language to another, they used a neural machine translation model. The authors were able to prove that the RecipeNLG dataset is valuable and can be utilized for training and assessing semi-structured text generation models for both tasks. They also compared the RecipeNLG dataset to other cooking recipe datasets and found that it is larger and more varied than previous datasets. They suggested that future studies can use the RecipeNLG dataset to enhance the quality and diversity of semi-structured text generation models.
Recipe5k1 is a popular publicly available dataset of recipes that has been widely used in various research studies. The dataset contains around 5,000 recipes in English, each with an associated ingredient list and cooking instructions. Created by the Computer Vision Center at the Universitat Autonoma de Barcelona, Recipe5k has been used as a benchmark dataset to evaluate the performance of various recipe-related algorithms. One common use of Recipe5k is for recipe categorization, in which recipes are classified into different categories such as cuisine type, dietary restrictions, or meal type. It can be utilized to develop a recipe recommendation system that is based on the similarity of ingredients, demonstrating the effectiveness of the system in generating personalized recipe recommendations. Another important use of Recipe5k can be ingredient recognition, in which a system is trained to recognize ingredients from the text of the recipe. Overall, Recipe5k has become a widely used dataset in the field of recipe-related research, offering valuable resources and opportunities for researchers to evaluate and develop new algorithms and systems.
Footnote 1: [http://www.ub.edu/cvub/recipes5k/](http://www.ub.edu/cvub/recipes5k/)
The 3A2M dataset [9] is based on the RecipeNLG dataset and incorporates all of its data and features. Its attributes include the title, directions, NER list, and a genre label. Three human experts classified 300K recipes into nine categories based on NER, and the remaining 1900K recipes were automatically classified using active learning with a query-by-committee approach. The dataset includes over two million recipes, each assigned to one of the nine predefined genres. Natural Language Processing (NLP) preprocessing techniques, such as unique word discovery, genre-specific word matching, and conversion to lowercase English letters, were applied to the original unlabeled data. The Fleiss' Kappa score across the nine genres is around 0.56026.
We believe that the 3A2M+ dataset introduced in this study is a valuable resource for those working on recipe classification and named entity recognition (NER) tasks, as well as other applications of recipe data. Its consistent format helps extract relevant information for classification and NER tasks, and the pre-processed annotations can save time and effort in preparing the data. The flexibility of the dataset also allows exploration of applications of recipe data beyond classification and NER. Overall, the 3A2M+ dataset provides a diverse set of recipes for training and testing models on various recipe-related tasks.
### Recipe Classification
The absence of openly available recipe datasets suitable for machine learning methods has obstructed progress in data-driven culinary research [11]. Although certain online recipe databases have used data-driven techniques to promote culinary research, there is no sizable annotated dataset of recipes categorized by genre that can support dependable machine learning models and advance this area of research.
Britto et al. [12] proposed a text analysis method to classify Brazilian Portuguese cooking recipes due to the need for personalized recipe recommendations and improved nutritional profiles. The authors manually categorized 1,080 recipes into seven categories and applied text mining techniques such as stemming, stop words removal, and TF-IDF to extract features. Machine learning algorithms including Naive Bayes, Random Forest, and Support Vector Machines were used to classify the recipes. The proposed approach achieved an accuracy of up to 94%, outperforming similar studies.
In a later work, Britto et al. [13] introduced a multi-label classification method to identify food restrictions in recipes, motivated by the difficulty that people with allergies, intolerances, or dietary preferences face in finding suitable recipes. The authors manually labeled 1,080 Brazilian Portuguese recipes with food restrictions and used text mining techniques such as stemming, stop-word removal, and TF-IDF to extract features. Multi-label classification algorithms, such as Binary Relevance and Classifier Chains, were used to predict the food restriction labels. The proposed method achieved an F1-score of up to 0.93, surpassing baseline methods, and an ablation study was conducted to analyze each feature's contribution to performance.
Jayaraman et al. [14] aimed to analyze classification models for cuisine prediction using machine learning due to the increasing interest in multiculturalism and personalized food recommendations. They used a dataset of 20,000 recipes from 20 cuisines and applied text mining techniques and machine learning algorithms such as Naive Bayes, Decision Trees, and Support Vector Machines to classify the recipes. The proposed approach achieved up to 80% accuracy, outperforming baseline methods, and the authors compared the algorithms in terms of accuracy, precision, recall, and F1-score. To give an overview of recipe genre categorization, Table 1 provides a summary of previous studies on recipe classification.
Pre-trained language models, such as BERT (Bidirectional Encoder Representations from Transformers) [15] have shown great success in a wide range of natural language processing tasks. However, to the best of our knowledge, currently we have not found any study that utilized pre-trained language models for food recipe genre classification. One possible reason is the lack of large-scale datasets specifically designed for this task, which makes it difficult to train and fine-tune such models. Additionally, recipe classification may require domain-specific knowledge and understanding of cooking terminology and techniques, which may not be effectively captured by pre-training on general text corpora. Moreover, traditional feature extraction techniques and machine learning algorithms have shown good performance in this task, which may lead to a preference for these methods. Therefore, in this work,
we focus on how large pre-trained language models perform on the recipe genre classification task.
## 3 3A2M+ Dataset
This section provides a detailed description of the 3A2M+ dataset, the construction process of the extended Named Entity Recognition (NER) list, dataset statistics, and a comparison with other existing datasets. The 3A2M+ dataset incorporates all the data and features of the 3A2M dataset and extends its named entity recognition list.
### Data Collection
Our data was sourced from the 3A2M dataset, which served as the base for our collection efforts. As we discovered some missing named entities in this dataset, we extended the NER list by including previously absent ingredients, process names, temperature data, and cooking materials. We named the extended dataset 3A2M+ to reflect these changes. The 3A2M dataset [9] contains a vast collection of 2,231,142 culinary recipes, making it the largest publicly available dataset of its kind. One limitation of the dataset is that it lacks specific genre categorization for the recipes. Each recipe consists of a title, a list of ingredients with quantities, and step-by-step instructions. The recipe title describes the dish, ingredient quantities are adjusted for serving sizes with the corresponding measurement units linked, and the instructions outline the steps involved in preparing the dish with the correct amount of each ingredient. However, the structure of recipe titles in the 3A2M dataset [9] has some limitations. Although the authors did not define a consistent classification system, they provided a Named Entity Recognition (NER) list of ingredients; this list is not comprehensive, since the same ingredient often appears in multiple recipes in different forms. Expanding the NER list is one way to improve the 3A2M dataset, as the current list covers a limited number of ingredients and may fail to recognize some. By increasing the coverage of the NER list, the dataset can identify ingredients more accurately and overcome this limitation.
| Article | Description | Model Used | Number of Instances | Accuracy |
|---|---|---|---|---|
| [12] | Proposed a text analysis method to classify Brazilian Portuguese cooking recipes due to the need for personalized recipe recommendations and improved nutritional profiles. | Naive Bayes, Random Forest, and Support Vector Machines | 1,080 | 94% |
| [13] | Introduced a multi-label classification method to identify food restrictions in recipes due to the difficulty people with allergies, intolerances, or dietary preferences face in finding suitable recipes. | Text mining techniques (stemming, stop-word removal, TF-IDF) with multi-label classifiers | 1,080 | 93% |
| [14] | Analyzed classification models for cuisine prediction using machine learning due to the increasing interest in multiculturalism and personalized food recommendations. | Naive Bayes, Decision Trees, and Support Vector Machines | 20,000 recipes across 20 cuisines | 80% |

Table 1: Summary of Previous Works on Recipe Classification
### Extended NER Generation
Named Entity Recognition (NER) is a subtask of Natural Language Processing (NLP) that aims to identify named entities in text and classify them into predefined categories, such as people, organizations, locations, and products. Named Entity Recognition (NER) plays an essential role in 3A2M dataset [9], which is a collection of over 2 million recipes crawled from the web. The 3A2M dataset includes information such as recipe titles, ingredients, cooking instructions, nutritional information, and more. NER is used to extract entities from the recipe text, such as ingredients and cooking actions, and to classify them into different categories, such as food types, cooking methods, and tools.
The NER list of 3A2M dataset [9] may need enhancement because it currently only covers a limited set of named entities, such as ingredients, measurements, cooking actions, and tools. While these are important entities in the context of recipe generation, there may be other relevant entities that are not currently included in the list. For example, the dataset does not include information on the origin or cultural background of a dish, which could be useful in generating more personalized recipe recommendations for users. Additionally, the current NER list may not capture all possible variations of the named entities, which could lead to inaccurate or incomplete results in the recipe generation process. Therefore, enhancing the NER list to include a broader range of named entities and their variations could improve the overall quality and relevance of the generated recipes. With this motivation, in this study, we propose a pipeline to extend the NER list of 3A2M dataset [9] and include it in our 3A2M+ dataset. The pipeline is depicted in Figure 1.
At first, we maintain the Named Entity Recognition (NER) list obtained from the 3A2M dataset. We then go through each recipe direction and pass it to both NLTK2 and spaCy3 toolkits, which produce two separate sets of named entities. Finally, we combine the original NER list with the newly generated NER list, creating a single set without any duplicates.
Figure 1: Workflow of Extended NER Generation Procedure.
The Natural Language Toolkit (NLTK) performs named entity recognition (NER) using machine learning algorithms. Specifically, NLTK's NER module uses a maximum entropy classifier to identify entities within a text. The process starts with tokenizing the input text, i.e., splitting the text into individual words and punctuation marks. Next, the module labels each token with its corresponding part of speech (POS) tag using a POS tagger. The NER module then extracts features from each token, such as the word itself, its POS tag, and its context within the text. The maximum entropy classifier is then trained on a labeled dataset, such as the CoNLL 2003 dataset, which contains pre-labeled examples of named entities in text. The classifier uses these labeled examples to learn patterns in the features and to predict the label of new, unlabeled text. During the prediction phase, the classifier considers the features of each token in the text and predicts whether it belongs to one of the pre-defined named entity types, such as person, location, or organization. Finally, the module produces an output in which named entities are marked with their respective entity types. NLTK also provides the option to train the maximum entropy classifier on user-defined datasets, allowing for customization to specific domains or tasks.
We compare the entities produced by NLTK and spaCy for a particular example recipe direction. Table 3 presents the final extended NER list obtained from our proposed pipeline.
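As a rough, self-contained sketch of the merging step in Figure 1, the snippet below extracts entities from a recipe direction with both NLTK and spaCy and unions them with the original NER list. The example direction, the original NER set, and the specific NLTK resources and spaCy model (`en_core_web_sm`) are illustrative assumptions, not the exact configuration used to build 3A2M+.

```python
# Sketch of the extended-NER merging step: union the original NER list with
# entities produced by NLTK's maximum-entropy chunker and spaCy's NER.
import nltk
import spacy

# Standard public resources (assumed); download once before running.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("maxent_ne_chunker")
nltk.download("words")
nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def nltk_entities(text):
    tokens = nltk.word_tokenize(text)
    tree = nltk.ne_chunk(nltk.pos_tag(tokens))
    # Named-entity chunks are the non-root subtrees of the chunk tree.
    return {" ".join(w for w, _ in st.leaves()) for st in tree.subtrees() if st.label() != "S"}

def spacy_entities(text):
    return {ent.text for ent in nlp(text).ents}

direction = "Preheat oven to 350 degrees. Melt butter, then stir in flour and milk."
original_ner = {"flour", "milk", "butter"}  # hypothetical entry from the 3A2M list

# Final set without duplicates, as in the pipeline of Figure 1.
extended_ner = original_ner | nltk_entities(direction) | spacy_entities(direction)
print(sorted(extended_ner))
```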
### Dataset Properties
The Assorted, Archetypal and Annotated Two Million Extended dataset (3A2M+ dataset) is built on top of the 3A2M dataset [9], and we have incorporated and utilized all of its data along with the respective features. The recipe titles, step-by-step cooking directions, ingredients, and recipe sources are of the same type as in the base dataset. The 3A2M+ dataset has six attributes in total: _title, directions, NER, Extended NER, genre_, and _label_, among which the data for the _title, directions, NER, genre_, and _label_ attributes are directly incorporated from the 3A2M dataset.
Table 4 shows the number of instances per genre in the 3A2M+ dataset, and Table 5 displays some sample instances.
### 3A2M and 3A2M+ Dataset Comparison
The differences and advantages of the 3A2M+ dataset over the 3A2M dataset [9] are listed below:
* **Extended Named Entity Recognition**: 3A2M has a limited list of named entities, which can lead to inaccurate or incomplete extraction of entities from recipe texts. In contrast, the 3A2M+ dataset has an extended list of named entities, which includes more specific ingredients and cooking techniques. This can improve the accuracy of NER tasks performed on the dataset.
| Genre ID | Genre Name | No. of instances in 3A2M+ | Human Annotated Data | Machine Annotated Data |
|---|---|---|---|---|
| 1 | Bakery | 160712 | 28481 | 132231 |
| 2 | Drinks | 353938 | 45113 | 308825 |
| 3 | NonVeg | 315828 | 40757 | 275070 |
| 4 | Vegetables | 398677 | 56245 | 342432 |
| 5 | Fast Food | 177108 | 31476 | 145633 |
| 6 | Cereal | 340495 | 45677 | 294818 |
| 7 | Meal | 53257 | 7009 | 46248 |
| 8 | Sides | 338497 | 37210 | 301287 |
| 9 | Fusion | 92630 | 8028 | 84602 |
| | **Total Data** | **2231142** | **299996** | **1931146** |

Table 4: Total Number of Instances in the 3A2M+ Dataset

| Title | Direction | NER | Ext_NER |
|---|---|---|---|
| Pannu (Finnish Oven Pancake) | "Preheat oven to 350 degrees.", "Melt butter in oven.", "Meanwhile, mix other ingredients like hell till very frothy. Pour batter into pan with melted butter.", "Bake 40 minutes. Eat immediately." | "flour", "sugar", "eggs", "milk", "vanilla" | "flour", "sugar", "eggs", "milk", "vanilla", "Bake 350 degrees" |

Table 3: Final NER List Achieved from Proposed Extended NER Pipeline.
* **Recipe Recommendation**: Since both the datasets are labeled with genre categories, machine learning models can be used for genre classification. This means that given a new recipe, a model can predict which genre category it belongs to based on the features of the recipe. This can have several benefits in recipe recommendation and personalization. For example, a user may have a preference for a specific genre of recipes, such as Italian or Mexican. By using a genre classification model trained on the 3A2M dataset [9], a recipe recommendation system can personalize its recommendations to the user's preferred genre. Additionally, a user may be looking for recipes for a specific occasion or meal type, such as a breakfast recipe or a recipe for a dinner party. By using the genre classification model, the recommendation system can narrow down the list of possible recipes to those that are most appropriate for the user's specific needs.
* **Availability**: Like 3A2M, the 3A2M+ dataset is also publicly available, making it more accessible for researchers and developers.
In summary, the 3A2M+ dataset provides significant enhancements that can provide more accurate and granular information about recipes, allowing for more targeted analysis and modeling based on specific aspects of recipes.
## 4 Methodology for Recipe Genre Classification
In this study, we have used various traditional machine learning models, such as logistic regression, support vector machines, random forests, and naive Bayes, as well as a five-layer convolutional neural network to classify recipes into nine distinct genres. Additionally, we have employed two pre-trained language models, RoBERTa
| title | directions | NER | Extended NER | label |
|---|---|---|---|---|
| No Bake Cheesecake | Mix cream cheese and sugar with electric mixer on medium speed until well blended. Gently stir in Cool Whip. Spoon into crust. Refrigerate 3 hours or overnight. | "cream cheese", "graham cracker crust" | (not recovered) | 1 |
| Lime Sherbet | Dissolve Jell-O in boiling water and lemon juice. When cool add milk. Freeze. If needed beat with mixer until smooth. | "lime Jell-O", "boiling water", "sugar", "lemons", "milk" | (not recovered) | 2 |
| Chicken Pot Pie | Cook chicken until no longer pink and cut up. Stir all ingredients together and put in pie shell (use deep dish pie pan). Cut slits in top crust. Bake at 375° for 40 minutes. | "cream of chicken soup", "cream of potato soup", "chicken breasts" | (not recovered) | 3 |
| Brais | Heat oven to 350°. Mix all ingredients together except chocolate chips until moistened. Stir in chocolate chips. Press mixture evenly in ungreased 9 x 13-inch pan. Bake until center is set, 15 to 17 minutes. | "Bisquick baking mix", "brown sugar", "margarine", "egg", "semisweet chocolate chips", "raisins" | (not recovered) | 6 |

Table 5: 3A2M+ Dataset Structure
and DistilBERT, and to the best of our knowledge we are the first to use transformer-based language models for classifying cooking recipe genres. In this section, we discuss our proposed methodology for cooking recipe genre classification on the 3A2M+ dataset.
The proposed methodology depicted in Figure 2 for classifying cooking recipes into nine genres combines machine learning, transformer, and deep learning models. The input data, which can include recipe title, directions, NER list, and extended NER list is preprocessed and transformed using Count Vectorizer before being used to train and test different models. The final prediction is made by selecting the genre with the highest probability obtained through the softmax layer. This approach can be useful for recipe recommendation and menu planning.
To perform the classification task, the input data is preprocessed to ensure it is in a format that can be effectively analyzed. This step involves data normalization, cleaning, and tokenization to transform the raw data into processed data. The data is then divided into three subsets, training, validation, and testing sets, which are used to generate various features such as word embeddings and a document-term matrix.
After generating the features, machine learning models such as Logistic Regression, Random Forest, Naive Bayes, and Support Vector Machine are trained using the features obtained from the count vectorizer. Transformer models like RoBERTa and DistilBERT are also employed to generate input data representations based on pre-trained language models, which are then used as features for the softmax layer. A deep learning model is also used, which employs keras dense layer embeddings to
Figure 2: Framework for Recipe Genre Classification
train a Convolutional Neural Network (CNN) that recognizes patterns and associations in the input data. The final predictions are obtained by selecting the genre with the highest probability score from the softmax layer.
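A minimal sketch of the deep-learning branch just described is given below: a Keras model with a trainable embedding layer followed by convolutional and dense layers ending in a nine-way softmax. The vocabulary size, sequence length, layer sizes, and dummy data are illustrative assumptions, not the authors' exact five-layer architecture.

```python
# Illustrative Keras text CNN for nine-genre classification (assumed sizes).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, max_len, n_genres = 20000, 64, 9
model = models.Sequential([
    layers.Embedding(vocab_size, 128),          # trainable word embeddings
    layers.Conv1D(128, 5, activation="relu"),   # local n-gram pattern detector
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_genres, activation="softmax"),  # genre probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Dummy integer-encoded titles and genre labels, only so the sketch runs end to end.
X = np.random.randint(1, vocab_size, size=(32, max_len))
y = np.random.randint(0, n_genres, size=(32,))
model.fit(X, y, epochs=1, verbose=0)
```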
Overall, the workflow illustrated in Figure 2 allows for accurate classification of cooking recipes into nine different genres using a combination of machine learning, transformer, and deep learning models, which can be useful for various applications such as recipe recommendation and menu planning.
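To make the classical branch of this workflow concrete, the sketch below feeds Count Vectorizer features to several of the classifiers mentioned above and picks the genre with the highest predicted probability. The example titles and genre labels are hypothetical placeholders, not records from the 3A2M+ dataset.

```python
# Sketch of the classical branch: Count Vectorizer features -> classifiers ->
# genre with the highest class probability.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier

titles = ["banana bread", "iced lemon tea", "grilled salmon steak", "roasted vegetable salad"]
genres = ["Bakery", "Drinks", "NonVeg", "Vegetables"]  # hypothetical labels

vec = CountVectorizer()
X = vec.fit_transform(titles)  # document-term matrix

models = {
    "LR": LogisticRegression(max_iter=1000),
    "NB": MultinomialNB(),
    "RF": RandomForestClassifier(n_estimators=50, random_state=0),
    # SVC(probability=True) from sklearn.svm could be added here in the same way.
}
query = vec.transform(["strawberry milkshake"])
for name, model in models.items():
    model.fit(X, genres)
    probs = model.predict_proba(query)[0]
    print(name, model.classes_[np.argmax(probs)])  # predicted genre per model
```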
## 5 Experimental Design
This section discusses the selection of baseline models, preprocessing techniques, and evaluation metrics for the baseline. The dataset was evaluated after construction to determine if machine learning models could distinguish between the classes. Two attention-based models were used, and a strong baseline performance was established. Analyzing the entire 2231K dataset took more than six months and involved over a thousand hours of work. Google Colab Pro Plus was used to tackle the computational challenge, but each set of results still took a long time to produce. After annotating the dataset, a machine learning model was created and applied to annotate the remaining 1900K rows of the dataset automatically. Traditional machine learning and language models were evaluated in the second phase. In the final phase, a new approach was taken using pre-trained models from a recipe dataset to contribute to NER. The combination of the old and new NER was a notable achievement. The collected data was analyzed using various machine learning algorithms.
### Selection of Baseline Models
The baseline performance on the 3A2M+ dataset is assessed using traditional machine learning models such as Logistic Regression [16], SVM [17], Naive Bayes [18], and Random Forest [19], as well as a deep convolutional neural network (CNN) [20]. Additionally, we have utilized smaller, optimized pre-trained language models, RoBERTa [21] and DistilBERT [22], which are built on the BERT [15] architecture. This was done in consideration of the large size of the dataset and resource constraints. We have chosen the RoBERTa and DistilBERT models in this study for the following reasons:
1. BERT models are able to grasp the context of each word by taking into account the words before and after it. This contextual understanding is critical for classifying recipe texts, which makes BERT models more likely to be effective than traditional deep learning models. Traditional deep learning models include LSTM, BiLSTM, and unidirectional transformer models like GPT; in a unidirectional transformer, each token only carries information from previous tokens in the self-attention layers [23].
2. Previous research has demonstrated that fine-tuning BERT models can produce remarkable results in tasks like text categorization and question-answering, due to the large amount of unlabeled data used in their pre-training through self-supervised learning. These BERT-based models have proven to be particularly
effective in categorizing social media posts and comments, for example, sentiment analysis of social media posts [24], political social media message categorization [25], and rumor identification from tweets [26]. These models mitigate several limitations of previous state-of-the-art language models such as ELMo [27] by adopting the transformer encoder instead of a recurrent neural network architecture.
3. Implementing a recipe genre classification system for recipe titles and direction texts on devices with limited computing power can be challenging because of the high number of parameters in BERT (Base: 110 million), which increases the computation and time demands during both training and inference. To mitigate this challenge, DistilBERT can be used, as it offers performance comparable to BERT while having almost 40% fewer parameters and 60% less inference time, making it possible to run the system on edge devices; RoBERTa offers a more robustly pre-trained alternative of comparable size.
### Experiments
Cooking recipe genre classification is a challenging task due to the large variations in recipe styles and formats. In this paper, we perform four different experiments to explore the performance of various models on cooking recipe genre classification. Through these four experiments, we aim to develop an accurate and efficient genre classification model that can be utilized in recipe search engines, recommendation systems, and other food-related applications. Our experimental results shed light on the effectiveness of traditional machine learning models, deep learning models, and pre-trained language models in cooking recipe genre classification, and the contribution of different features in improving genre classification accuracy.
* In Experiment I, we aim to investigate the effectiveness of traditional machine learning models and a deep learning model for genre classification. We use Logistic Regression, Naive Bayes, Support Vector Machine, Random Forest, and Convolutional Neural Network (CNN) models with several combinations of the title, Named Entity Recognition (NER), and Extended NER features. Our objective is to explore the contribution of different features to genre classification, specifically the title, the title with NER, and the title with Extended NER. The NER and Extended NER features consist of ingredient and process names. Since the task is text-based, we also employ deep learning models and pre-trained language models so that our genre classification pipeline can classify recipes from any genre.
* In Experiment II, we aim to improve the performance of genre classification by using the DistilBERT classifier with several combinations of features. We explore the impact of title, NER, Extended NER, and Directions features on genre classification accuracy, with a random distribution of the recipe data.
* In Experiment III, we investigate the effect of equalizing the distribution of recipe data on genre classification accuracy. We use the DistilBERT classifier and several combinations of title, NER, Extended NER, and Directions features, but this time with an equalized distribution of the recipe data.
* In Experiment IV, we use the RoBERTa pre-trained language model with a single feature, Directions, to classify cooking recipe genres. Our objective is to determine the effectiveness of a pre-trained language model on genre classification and if equalizing the distribution of the recipe data would impact the accuracy of the classification.
### Experimental Settings and Pre-processing
The effectiveness of BERT-based models in text categorization is due to their pre-training on vast collections of text from various domains. As a result, the proposed dataset only needs minimal preprocessing to eliminate unnecessary white spaces or tabs from the recipe title, NER, Extended NER, and Directions texts. Important variables are then defined, and a Dataset class is created to specify how text is processed before being sent to the neural network. A DataLoader is also defined, which supplies data to the neural network in appropriate batches for training and processing. The Dataset and DataLoader are both PyTorch components that manage data preprocessing and transport to the neural network, with control options such as _batch size_ and _max_length_. The dataloaders are used in the training, validation, and testing phases. The training dataset, consisting of 80% of the original data, is used for fine-tuning the model. The validation dataset is used to evaluate the model's performance and contains data that the model has not seen during training. Because these models were pre-trained on data from different sources, fine-tuning the pre-trained model weights to the task at hand, namely the recipe title, NER, and directions texts along with their annotated labels, is important for optimizing classification performance. The training parameters are established first, and then the input is prepared using a specific methodology. The recipe titles and directions are properly formatted before being fed into the pre-trained models for embedding. To represent the entire text input as a single vector, BERT-based models use tokenizers to divide the input sequence into whole words or word fragments. These token strings are used to identify related words and comprehend the context of the input sequence. Several types of special token strings are used to indicate the task type, the start of the input sequence, the mask, and other factors [28].
* '[SEP]' refers to the end of one input sequence and the beginning of the following.
* '[PAD]' is employed to denote the required padding.
* '[UNK]' is an unidentified token.
* '[CLS]' refers to the classification task.
The classifiers require input sequences of the same length, which means that each title-NER combination formed for cross-embedding must have the same number of tokens after being converted into token strings. If an input has fewer than 256 tokens, "[PAD]" tokens are added to reach the 256-token limit. The maximum length for directions is set to 512. During fine-tuning, some input substrings may not be included in DistilBERT's and RoBERTa's pre-trained token vocabulary; in these cases, the new input substring is replaced with "[UNK]". Finally, the token
strings are converted into token IDs represented as integers to generate the final input vector for the models.
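The following sketch illustrates this preprocessing path with the Hugging Face DistilBERT tokenizer wrapped in a PyTorch Dataset and DataLoader. The way the title and NER list are joined with "[SEP]", the example records, and the label indices are assumptions made for illustration; only the 256-token padding/truncation behaviour follows the description above.

```python
# Sketch: text -> token IDs with padding/truncation -> batched tensors.
import torch
from torch.utils.data import Dataset, DataLoader
from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

class RecipeDataset(Dataset):
    def __init__(self, texts, labels, max_length=256):
        self.texts, self.labels, self.max_length = texts, labels, max_length

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        # Short inputs are padded with [PAD]; long ones are truncated to max_length tokens.
        enc = tokenizer(self.texts[idx], padding="max_length", truncation=True,
                        max_length=self.max_length, return_tensors="pt")
        return {"input_ids": enc["input_ids"].squeeze(0),
                "attention_mask": enc["attention_mask"].squeeze(0),
                "label": torch.tensor(self.labels[idx])}

# Hypothetical title + NER records joined with [SEP]; labels are genre indices.
texts = ["No Bake Cheesecake [SEP] cream cheese, graham cracker crust",
         "Lime Sherbet [SEP] lime Jell-O, boiling water, sugar, lemons, milk"]
labels = [0, 1]
loader = DataLoader(RecipeDataset(texts, labels), batch_size=2, shuffle=True)
batch = next(iter(loader))
print(batch["input_ids"].shape)  # (batch_size, 256)
```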
### Hyper-Parameter Settings
The proposed dataset is split into three parts to facilitate model evaluation and fine-tuning: training, validation, and testing. 80% of the recipes from each class are randomly selected for the training set, while the remaining recipes are divided equally between the validation and testing sets. Base-uncased45 versions of the pre-trained models, with 768 hidden output states, are used for fine-tuning. The categorical cross-entropy loss is preferred over other loss functions for classification tasks due to its superior performance [29]. The 'AdamW' optimizer [30] is chosen because it is efficient and works well with a fixed weight decay. The learning rate is set to \(1\times 10^{-5}\), and 20% of the steps are designated as warm-up steps, so the first 20% of training steps gradually increase the learning rate from \(0\) to \(1\times 10^{-5}\). The total number of times the model weights are updated during fine-tuning is the number of steps. Both models were fine-tuned in a supervised manner for 10 epochs on the proposed dataset with a training batch size of 128 to predict the recipe genre from the recipe NER, Extended NER, and Directions, and they achieved excellent performance on all nine classes.
Footnote 4: [https://huggingface.co/docs/transformers/model_doc/distilbert](https://huggingface.co/docs/transformers/model_doc/distilbert)
Footnote 5: [https://huggingface.co/docs/transformers/model_doc/roberta](https://huggingface.co/docs/transformers/model_doc/roberta)
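A compact sketch of the fine-tuning setup described above (AdamW with weight decay, a learning rate of 1e-5 warmed up over the first 20% of steps, cross-entropy loss, 10 epochs) is shown below. The dummy loader stands in for the real training DataLoader (batch size 128 in the paper), the weight-decay value is an assumption, and this is not the authors' exact training script.

```python
# Sketch of supervised fine-tuning of DistilBERT for nine recipe genres.
import torch
from transformers import DistilBertForSequenceClassification, get_linear_schedule_with_warmup

model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=9)  # nine recipe genres
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.01)  # assumed decay

# Tiny dummy loader so the sketch runs end to end (2 samples instead of batch size 128).
train_loader = [{"input_ids": torch.randint(0, 30000, (2, 256)),
                 "attention_mask": torch.ones(2, 256, dtype=torch.long),
                 "label": torch.tensor([0, 1])}]

epochs = 10
total_steps = epochs * len(train_loader)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=int(0.2 * total_steps), num_training_steps=total_steps)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(epochs):
    for batch in train_loader:
        optimizer.zero_grad()
        logits = model(input_ids=batch["input_ids"],
                       attention_mask=batch["attention_mask"]).logits
        loss = loss_fn(logits, batch["label"])
        loss.backward()
        optimizer.step()
        scheduler.step()  # linear warm-up then decay of the learning rate
```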
### Evaluation Metrics
In predictive classification, evaluation metrics play a crucial role in determining a model's performance. However, using only metrics such as precision, accuracy, or recall may lead to incorrect conclusions, particularly in severely imbalanced datasets, where high accuracy can be achieved without making any meaningful predictions [31][32]. In such cases, using multiple evaluation metrics, including recall, is necessary to provide a comprehensive assessment of the model's performance. Unlike accuracy or precision, recall considers false negatives, which can be a more crucial factor in highly unbalanced datasets. Therefore, using only a single metric can be misleading, and a combination of evaluation metrics is necessary for accurate assessment [33][34].
This study uses ROC curve and AUC-ROC as evaluation metrics to determine how well the models can distinguish between different classes. The ROC curve calculates the True Positive Rate (TPR) and False Positive Rate (FPR) for a series of predictions at various thresholds made by the model. The TPR shows the number of positive class samples correctly classified by the classifier, while the FPR shows the number of negative class samples misclassified by the classifier [35]. This data may be used to evaluate the model's ability to distinguish across classes.
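The sketch below shows how such a class-wise ROC/AUC evaluation, together with macro recall, can be computed with scikit-learn using one-vs-rest binarized labels. The labels and scores are randomly generated stand-ins for the real test labels and model probabilities.

```python
# Sketch: per-genre ROC curves / AUC and macro recall from predicted probabilities.
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc, recall_score

n_classes = 9
rng = np.random.default_rng(0)
y_true = rng.integers(0, n_classes, size=200)   # hypothetical true genres
y_score = rng.random((200, n_classes))          # hypothetical predicted probabilities
y_score /= y_score.sum(axis=1, keepdims=True)

y_bin = label_binarize(y_true, classes=list(range(n_classes)))
for c in range(n_classes):
    fpr, tpr, _ = roc_curve(y_bin[:, c], y_score[:, c])  # FPR vs TPR over thresholds
    print(f"genre {c + 1}: AUC = {auc(fpr, tpr):.2f}")

y_pred = y_score.argmax(axis=1)
print("macro recall:", recall_score(y_true, y_pred, average="macro"))
```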
## 6 Experimental Results
The baseline classification results on the 3A2M+ dataset are presented in this section. The experiments described in Section 5.2 are divided into two parts. The first part involves experiments using traditional classifiers (Experiment I). The second part comprises experiments using language-model-based classifiers, DistilBERT and RoBERTa, divided into three experiments (Experiments II, III, and IV).
### Experiment I - Classification Performance Result on Traditional Machine Learning Models
To overcome the computational time and processing difficulties resulting from the environment setup, the data is separated into multiple blocks for the experiments. Five traditional machine learning models were used to analyze the data. The training set consists of 1100K machine-annotated instances, divided into 11 data blocks, and 300K human-annotated instances were used for testing. The training-to-testing ratio was mostly 80:20. The configuration details of Experiment I are given in Table 6. Table 7 shows the performance of the different traditional machine learning models on the _Title_, _Title with NER_, and _Title with Extended NER_ features. Class-wise ROC and recall graphs for the _Title with NER_ and _Title with Extended NER_ features are shown in Fig. 3 and Fig. 4, respectively. In addition, Figure 5 and Fig. 6 show the accuracy and loss curves of the neural network training for the _Title with NER_ and _Title with Extended NER_ features, respectively.
The classification performance of traditional machine learning on 1100K machine data for training and 300K human annotated data for testing is summarized in Table 8.
| Features | Machine Learning Models | Training Instances | Validation Instances | Testing Instances |
|---|---|---|---|---|
| Title; Title + NER; Title + Extended NER | Logistic Regression (LR), Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Convolutional Neural Network (CNN) | 1100K machine-annotated instances, separated into 11 files of 100K instances each because of the massive computational cost | 10% of the training instances, also organized into 11 equally sized files | 300K human-annotated instances |

Table 6: Experiment-I Configuration Details
| Feature | LR Train | LR Test | SVM Train | SVM Test | NB Train | NB Test | CNN Train | CNN Test | RF Train | RF Test |
|---|---|---|---|---|---|---|---|---|---|---|
| Title + NER | 91.49% | 74.79% | 89.94% | 81.11% | 59.23% | 60.65% | 99.63% | 95.30% | 93.71% | 56.84% |
| Title + Extended NER | 99.96% | 95.28% | 90.67% | 81.09% | 78.20% | 61.43% | 99.61% | 95.33% | 97.43% | 76.89% |

Table 7: Accuracy of Different Machine Learning Models for Different Features.
Figure 4: Class-wise Recall-ROC Graph of Support Vector Machine(SVM) on Title with Extended NER Feature
Figure 5: Training and Validation Accuracy and Loss Graph on Title with NER Feature
Figure 3: Class-Wise Recall-ROC Graph of Logistic Regression (LR) on Title with NER Feature
Among the 1100K training instances, 10% of the data is used for validation. Table 8 lists the best-performing model for each feature along with the highest accuracy achieved in the training and testing sessions. From the numerical results, we found that the title is the most contributing feature. The analysis and justification of the experimental results are discussed in the "Discussion on Experiments" section, which provides a comprehensive overview of the findings and explains their implications.
### Experiment II - Classification Performance Result on DistilBERT Model over Random Distributed Data
In this part of the experiment, the DistilBERT model is used to analyze the data. 1100K machine-annotated instances are used for training and 300K human-annotated instances for testing. A split ratio of 80:20 is maintained for training and testing, and the training data is further split 90:10 for validation. The experiment configuration details are presented in Table 9. Four sets of features were used in this experiment, beginning with _Title_ and followed by _Title with NER_, _Title with Extended NER_, and the _Directions_ feature from the 3A2M+ dataset.
The classification performance of the language-based DistilBERT model is summarized in Table 10. Analyzing and discussing these results is important for drawing meaningful conclusions and insights. The table clearly shows that the
| Features | Training Performance | Testing Performance | Best Performance Model |
|---|---|---|---|
| Title | 99.03% | 98.11% | Support Vector Machine |
| Title + NER | 99.41% | 95.31% | Convolutional Neural Network |
| Title + Extended NER | 99.61% | 95.33% | Convolutional Neural Network |

Table 8: Dataset Analysis Summary on Classical Machine Learning Models
Figure 6: Training and Validation Accuracy and Loss Graph on Title with Extended NER Feature
_Title with NER_ feature achieved 98.31% training accuracy, whereas the _Title with Extended NER_ feature achieved 99.26%. The validation results follow the same trend, and in the testing phase, where human-annotated data were used, the _Title with NER_ feature achieved 99.10% accuracy while the _Title with Extended NER_ feature achieved 99.45%, which is 0.35% higher. The detailed analysis and evaluation of these results are given in the "Discussion on Experiments" section.
In terms of percentage change, it is quite minor, but in terms of total dataset performance, it is a substantial shift and a major reflection of the study we have conducted.
Figure 7 illustrates the significance of using the _Title with NER_ feature versus the _Title with Extended NER_ feature; the outcome of this study is reflected in that figure. The dataset was processed with the DistilBERT model using a maximum length of 512, the system's maximum capacity, while the longest directions text in the dataset is 2,416 words. Because the data volume is huge, multiple runs were performed and the average value over the test runs was taken.
### Experiment III - Classification Performance Result on DistilBERT Model over Equalized Data
Some ambiguity was noticed when using the Directions feature on the whole dataset. Among the 1900K machine-annotated instances, only 46K instances were available in one of the genres. To address this, the training configuration for this experiment takes 46K instances from each of the 9 genres, resulting in a total of 414K instances. The training data was split 90:10, with 10% used for validation. For testing, 27K instances from the human-annotated data were used. This experiment analyzes four features, _Title, NER, Extended NER_, and _Directions_, with the DistilBERT model over the equalized data. For the first three features, a maximum length of 256 words was used; for the Directions feature, which is much longer, 512 words were used. To handle the large amount of data for the Directions feature, we divided
| Features | Training Performance | Validation Performance | Testing Performance |
|---|---|---|---|
| Title | 97.89% | 98.17% | 99.14% |
| Title + NER | 98.31% | 97.99% | 98.86% |
| Title + Extended NER | 99.26% | 98.13% | 99.45% |
| Directions | 86.32% | 51.06% | 48.79% |

Table 10: Experiment II on Dataset using DistilBERT Model
| Title Feature | Data | Remarks |
|---|---|---|
| Training Dataset Size | 1100000 | Machine Annotated Data |
| Train Data | 1000000 | Machine Annotated Data |
| Validation Data | 100000 | Machine Annotated Data |
| Test Data | 299998 | Human Annotated Data |

Table 9: Configuration Details of Experiment II using the DistilBERT Model
the 414K instances into 4 groups and processed each group of 103,500 instances for 10 epochs in a single-fold operation. The experimental setup is illustrated in Table 11.
This work reports the precision, recall, and F1-score, separated by genre, for classification with the _Title with NER_ feature in Table 12. Table 13 presents the corresponding precision, recall, and F1-score for classification with the _Title with Extended NER_ feature.
The experiment with an equalized data distribution uses a total of 414K instances. In this setting, the _Title with NER_ feature achieved 98.99% training accuracy, whereas the _Title with Extended NER_ feature achieved 99.08%. While the validation accuracy decreased very slightly, the
| Title Feature | Data | Remarks |
|---|---|---|
| Training Dataset Size | 414000 | Machine Annotated Data |
| Train Data | 103500 | Machine Annotated Data |
| Validation Data | 10350 | Machine Annotated Data |
| Test Data | 27000 | Human Annotated Data |

Table 11: Configuration of Experiment III using DistilBERT over Equally Distributed Data
| Genre ID | Genre Name | Precision | Recall | F1-Score |
|---|---|---|---|---|
| 1 | Bakery | 0.98 | 0.98 | 0.98 |
| 2 | Drinks | 0.98 | 0.98 | 0.98 |
| 3 | NonVeg | 0.99 | 0.99 | 0.99 |
| 4 | Vegetables | 0.99 | 1.00 | 0.99 |
| 5 | Fast Food | 0.99 | 0.99 | 0.99 |
| 6 | Cereal | 0.99 | 0.99 | 0.99 |
| 7 | Meal | 1.00 | 0.97 | 0.99 |
| 8 | Sides | 1.00 | 0.99 | 1.00 |
| 9 | Fusion | 0.98 | 0.99 | 0.98 |

Table 12: Genre-wise Precision, Recall and F1-Score of Title with NER Feature Classification
Figure 7: Comparison of Title with NER and Extended NER
testing accuracy for the _Title with NER_ feature was 98.86%, while the _Title with Extended NER_ feature achieved 98.98%, which is 0.12% higher. The fact that the _Title with Extended NER_ feature achieves greater accuracy at any data scale is a significant contribution. Table 14 and Figure 8 show the details of the difference between the _Title with NER_ and _Title with Extended NER_ feature experiments.
The language-based classification uses 414K machine-annotated instances for training and 27K human-annotated instances for testing, with about 41K of the training instances used for validation. The experiment results are summarized in Table 15. A comprehensive understanding of the performance of the models and features requires analyzing and discussing the experimental results, which can be found in the "Discussion on Experiments" section.
| Feature | Title + NER | Title + Extended NER | Remarks |
|---|---|---|---|
| Training Dataset Size | 416000 | 416000 | Machine Data |
| Feature | Title, NER | Title, Extended NER | Updates |
| Maxlen | 256 | 256 | Cover NER |
| Train Data | 374400 | 374400 | Machine Data |
| Validation Data | 41600 | 41600 | Machine Data |
| Test Dataset | 27000 | 27000 | Human Data |
| Embedding | Cross | Cross | Position |
| Train Accuracy | 98.99% | 99.08% | +0.09 |
| Validation Accuracy | 97.55% | 97.49% | -0.06 |
| Test Accuracy | 98.86% | 98.98% | +0.12 |
| Ratio | 90:10 | 90:10 | Increase |

Table 14: Comparative Analysis of _Title with NER_ and _Title with Extended NER_ Features with DistilBERT Model over Equalized Distribution Data
| Features | Training Performance | Validation Performance | Testing Performance |
|---|---|---|---|
| Title | 96.83% | 97.06% | 97.21% |
| Title + NER | 98.99% | 97.55% | 98.86% |
| Title + Extended NER | 99.08% | 97.49% | 98.98% |
| Directions | 85.98% | 49.23% | 37.35% |

Table 15: Experiment III on Dataset using DistilBERT Model
### Experiment IV - Classification Performance Result on RoBERTa Model over Equalized Data
The _Directions_ feature was first evaluated with 1100K instances for training and 300K instances for testing, and a generalization problem was found, with a large difference between the training and testing results in Table 10. To address this, an equal distribution of the genre-based data was taken and the results were re-evaluated in Table 15, which showed an even larger gap than the previous analysis. As a result, another pre-trained model, RoBERTa, was used for further analysis. The results of using RoBERTa, a BERT-based model, with 416K instances for training and 27K instances for testing are shown in Table 16. However, the average result was not promising and was consistent with the previous poor results from the DistilBERT analysis. The "Discussion on Experiments" section provides an in-depth analysis and evaluation of the findings, allowing for a deeper understanding of the strengths and limitations of the models and features, as well as potential areas for future work and improvement.
### Discussion on Experiments
In the analysis, various features such as _Title_, _NER_, _Extended NER_, and _Directions_ were used. The DistilBERT and RoBERTa models were used to evaluate the performance of the _Directions_ feature. The analysis shows that the performance of the models improves as the amount of data increases. However, the results of the
| Results | Simulation-1 | Simulation-2 | Simulation-3 | Simulation-4 | Average |
|---|---|---|---|---|---|
| Training Accuracy | 49.56% | 11.34% | 34.27% | 47.23% | **35.60%** |
| Validation Accuracy | 49.22% | 10.46% | 33.12% | 46.12% | **34.73%** |
| Test Accuracy | 37.84% | 11.74% | 32.57% | 45.57% | **31.93%** |

Table 16: Direction Feature Classification by RoBERTa
Figure 8: Comparison of Title with NER and Extended NER over Equalized Data
RoBERTa model were poor compared to the DistilBERT model for the Direction feature.
* In Experiment I, five traditional machine learning models were used to analyze the title and genre, obtaining 99% accuracy during training and 98% accuracy during testing. Using these five models to analyze the title with NER and the title with Extended NER resulted in 95% accuracy during training and 90% accuracy during testing. Our experimentation with various machine learning algorithms revealed that logistic regression, support vector machines (SVM), naive Bayes, and random forest were well suited to the task of text classification. Additionally, a large training dataset of 100,000 examples enabled our models to generalize well to unseen data.
* The high accuracy achieved by our recipe genre classification model can be attributed to several factors, including the quality and nature of the dataset, the machine learning algorithms employed, and the features extracted from the text data. In natural language processing (NLP), the features extracted from the recipe titles play a crucial role in the performance of the model in Experiment I. Due to the short length and limited unique words in recipe titles, along with the similarity within titles of the same genre, our model can easily learn genre-specific patterns.
* In Experiment II, using the BERT-based model DistilBERT, we achieved 99% accuracy in testing and 97% accuracy in training when analyzing the title. In Experiment III, with the same number of instances in each of the 9 genres (416,000 instances in total), we achieved about 97% accuracy in testing and close to 97% accuracy in training.
* In Experiment II, analyzing the title with NER and the title with Extended NER using the DistilBERT model yielded 98% and 99% training accuracy, respectively. In the same experiment, we found 98.86% and 99.45% testing accuracy for the title with NER and the title with Extended NER features, respectively.
* Relying solely on Named Entity Recognition (NER) or Extended NER for recipe classification proved to be insufficient, as ingredients can be common to multiple recipes across different genres. Instead, combining the food title with the NER list produced more accurate genre classification results. Despite their short length, recipe titles contain significant words that can indicate the genre, whereas NER or Extended NER lists tend to be highly similar across recipes even though they can contain more than 47 words. By merging the title with the NER or Extended NER, we created a longer word sequence in Experiments II and III, which improved the performance of the title with NER and title with Extended NER features, yielding accuracy only slightly lower than the title feature alone. Additionally, the Extended NER list's inclusion of process type, temperature, and cooking style information further improved the model's classification performance.
* In Experiment II, the Directions feature was selected for analysis using the BERT-based model DistilBERT, with a maximum length of 512, in accordance with
the system's maximum throughput. Test accuracy was less than 50% while train accuracy was around 85%, indicating a significant problem.
* In Experiment III, for the Directions feature, 27,000 human-annotated test instances were used, with an equal number from each of the nine genres, along with 103,500 training instances, also equally distributed across genres. Training accuracy was 87% and testing accuracy was 47%.
* The analysis with an equal distribution of 414,000 instances also showed an accuracy improvement for the title with Extended NER feature over the analysis that utilized only the title with NER (up to 0.35% in Experiment II), which we regard as a distinguishing contribution of this research.
## 7 Limitations
A broad analysis was performed with the features in the 3A2M+ dataset. Fig. 3 shows that the classes become more compact as the annotators' agreement approaches completion. The study benefited from the expertise of experienced annotators from the culinary and computer science domains, ongoing supervision, and communication between the annotators and subject-matter experts. Some limitations of this work that could be addressed in future work are listed below.
* The annotated dataset would likely have been of higher quality if the annotators had used the Extended NER.
* Domain experts resolved tie situations. Redoing the Extended NER process for the entire 1900K machine-annotated portion would make the dataset more robust.
* Due to the runtime limitations of the Google Colab Pro+ hardware, this work had to limit the word-embedding dimensions for the NER and Extended NER derived from the Directions.
## 8 Conclusion and Future work
The present study presents a novel annotated dataset that incorporates an advanced feature for Named Entity Recognition (NER) of recipes. The dataset was created using robust reference sources and validated by food experts. The broader impact of this research could lead to personalized food selection and nutrition management, with dietary specific menus and menu designs created by nutritionists and culinary experts to meet the needs of users.
The initial classification results for the dataset were obtained by fine-tuning pre-existing models, specifically DistilBERT. It is important to note that certain features of the dataset, such as directions and Extended NER, were not considered during the annotation process. Addressing the dataset's class imbalance, incorporating more training examples, cleaning the data prior to training through pre-processing, and using more robust pre-trained models could all contribute to a more precise classification result.
Individuals can choose items from their kitchen or store and come up with meal names or categories that can be made using those ingredients. Some recipes might fit into multiple categories due to similarities in ingredients; however, they have been
annotated based on expert opinions. The normalization of ingredient lists can help to address the issue of the same ingredient appearing in different forms. Adding additional metadata, such as cuisine type, meal type, or level of difficulty, could also improve genre categorization and enable the development of more specialized models. With its large size and categorization by genre, the dataset can also support the medical and nutrition fields, for example by suggesting a range of suitable meals from the collection.
## 9 Declarations
### Ethical Approval and Consent to participate
Not Applicable.
### Consent for publication
Not Applicable.
### Human and Animal Ethics
Not Applicable.
### Competing interests
The authors declare that they have no competing interests.
### Funding
No funding was received for conducting this study.
|
2305.11643 | Time Optimal Ergodic Search | Robots with the ability to balance time against the thoroughness of search
have the potential to provide time-critical assistance in applications such as
search and rescue. Current advances in ergodic coverage-based search methods
have enabled robots to completely explore and search an area in a fixed amount
of time. However, optimizing time against the quality of autonomous ergodic
search has yet to be demonstrated. In this paper, we investigate solutions to
the time-optimal ergodic search problem for fast and adaptive robotic search
and exploration. We pose the problem as a minimum time problem with an ergodic
inequality constraint whose upper bound regulates and balances the granularity
of search against time. Solutions to the problem are presented analytically
using Pontryagin's conditions of optimality and demonstrated numerically
through a direct transcription optimization approach. We show the efficacy of
the approach in generating time-optimal ergodic search trajectories in
simulation and with drone experiments in a cluttered environment. Obstacle
avoidance is shown to be readily integrated into our formulation, and we
perform ablation studies that investigate parameter dependence on optimized
time and trajectory sensitivity for search. | Dayi Dong, Henry Berger, Ian Abraham | 2023-05-19T12:48:23Z | http://arxiv.org/abs/2305.11643v1 | # Time Optimal Ergodic Search
###### Abstract
Robots with the ability to balance time against the thoroughness of search have the potential to provide time-critical assistance in applications such as search and rescue. Current advances in ergodic coverage-based search methods have enabled robots to completely explore and search an area in a fixed amount of time. However, optimizing time against the quality of autonomous ergodic search has yet to be demonstrated. In this paper, we investigate solutions to the time-optimal ergodic search problem for fast and adaptive robotic search and exploration. We pose the problem as a minimum time problem with an ergodic inequality constraint whose upper bound regulates and balances the granularity of search against time. Solutions to the problem are presented analytically using Pontryagin's conditions of optimality and demonstrated numerically through a direct transcription optimization approach. We show the efficacy of the approach in generating time-optimal ergodic search trajectories in simulation and with drone experiments in a cluttered environment. Obstacle avoidance is shown to be readily integrated into our formulation, and we perform ablation studies that investigate parameter dependence on optimized time and trajectory sensitivity for search.
## I Introduction
The ability for robots to effectively balance time against the thoroughness of search under strict time conditions is vital for providing timely assistance in many search and rescue applications [1, 2]. For example, it is often desired to have robots quickly survey large areas in minimal time and then execute a refined search based on any information gathered. This approach can better assist rescue personnel in providing immediate assistance as needed. While recent algorithmic advances have made it possible to generate robot trajectories that provide effective coverage of search areas [3, 4, 5], few consider the explicit dependence on time in the problem. What makes the problem of reasoning about time versus coverage difficult is the inherent duality between time spent covering an area and the thoroughness of the coverage. Therefore, in this work, we are interested in addressing the question: is it possible to balance time and coverage quality in a single optimization problem?
Autonomous search and exploration has largely been studied from the perspective of coverage-based methods [6, 7, 8, 9]. These problems optimize a path for the robot to follow that visits a discretized set of nodes (or way-points) defined over a work-space (i.e., search area). Similar problems exist in continuous spaces and are solved through some form of spatial approximation [10, 11] or coverage based on the sensor envelope [12, 13] with the use of multiple robots. However, few works include time considerations, i.e., how long a robot spends in an area and how quickly the robot navigates and explores a space. Methods that do consider time will often do so via bi-level optimization or as hybrid approaches that still require some form of node-based discretization [14, 15]. In contrast, recent advances in ergodic coverage-based search methods have demonstrated it is possible to consider time more explicitly in autonomous coverage problems [16, 17, 18, 19, 20, 21].
Ergodic search methods optimize continuous robot search trajectories by minimizing the distance between how long a robot spends in a given region and a measure of information distributed in the region [16, 22]. This distance is measured using the ergodic metric [16, 23] which can be directly optimized against robot trajectories and arbitrary measures of "information". As a result, ergodic trajectories spend more time in high-information areas while quickly exploring in low-information regions given enough time [16, 24, 23, 19, 25]. However, these methods optimize trajectories over a fixed time horizon, resulting in a lack of control over the granularity of how a robot searches an area. Therefore, in this paper, we pose and investigate solutions to the time-optimal ergodic search problem for generating time-optimal robotic search trajectories that sufficiently explore an area.

Fig. 1: **Example Time-Optimal Ergodic Search Trajectories.** The proposed work investigates solutions to the time-optimal ergodic search problem for generating time-optimal coverage trajectories for search and exploration. a) Planned time-optimal trajectory for coverage in a cluttered environment in optimal time. b) Experimental drone trajectory execution of time-optimal ergodic trajectory through the cluttered environment. Trajectory was optimized to uniformly explore the environment in \(13.5\)s. Multimedia demonstration provided in [https://sites.google.com/view/time-optimal-ergodic-search](https://sites.google.com/view/time-optimal-ergodic-search) and code [https://github.com/ialab-yale/time_optimal_ergodic_search](https://github.com/ialab-yale/time_optimal_ergodic_search).
This paper proposes a trajectory optimization routine for scenarios where robots need to generate dynamic trajectories that optimize continuous coverage in minimum time. Our approach is to formulate this problem as a time-optimal ergodic search problem where the ergodic metric imposes a coverage constraint. Satisfying the ergodic metric constraint is shown to yield sufficient coverage requirements based on an upper bound value that can be afforded by the robot [23]. Because the metric is defined over Fourier spectral modes, a constraint permits optimizing time against trajectories that provide varying levels of continuous coverage over a space. We investigate computing trajectory solutions to the proposed time-optimal ergodic search problem by 1) analytically demonstrating the existence of conditions of optimality based on Pontryagin's maximum principle [26], and 2) numerically using a direct transcription-based approach [27, 28, 20]. Furthermore, we demonstrate time-optimal trajectories in simulation and in drone experiments in cluttered environments through the integration of safety-based obstacle avoidance constraints [29, 30, 20]. In summary, our contributions are as follows:
1. A novel time-optimal ergodic trajectory optimization method for producing time-optimal coverage trajectories for autonomous search and exploration;
2. Proof of analytical conditions of optimality for the time-optimal ergodic search problem; and
3. Demonstration of time-optimal search trajectories on a drone system in a cluttered environment (see Fig. 1).
The paper is structured as follows: Section II overviews related work. Section III describes preliminary information on ergodic search and time-optimal control. Section IV poses the time-optimal ergodic search problem and presents solutions to the problem. Section V then presents various simulated and experimental results for the proposed solution to generate time-optimal ergodic search trajectories. Last, Section VI provides conclusions and an outlook on future work.
## II Related Work
### _Coverage-Based and Ergodic Search Methods_
Prior work on coverage-based planning for search and exploration has largely been focused on specifying paths or assigning robots to locations that maximizes sensor coverage in a bounded space [6, 31]. These solutions provide guaranteed coverage over a grid and provide robot paths using algorithms such as lawnmower algorithms [32, 7, 33, 34] and traveling sales-person problems [35, 36, 33]. Recent extensions have moved away from the limited grid approximations and worked on continuous work-spaces using cellular decomposition or continuous potential-field methods [37, 12, 38]. In addition, information-based extensions to these search methods have provided effective strategies for robots exploring unstructured areas [4, 39, 40]. However, as search-based methods moved towards continuous spaces, coverage guarantees become more difficult to obtain, especially under the presence of distributed information.
Novel work on ergodic search methods has emerged to compute continuous coverage trajectories of an area given enough time [16]. Ergodic search methods optimize robot trajectories against some underlying distributed information over an area which the robot can explore [24, 22, 18, 3]. The success of ergodic search methods compared to prior methods is attributed to the unique ergodic metric used in the trajectory optimization. The ergodic metric quantifies the effectiveness of a trajectory in exploring a region based on the time a robot spends in an area. A trajectory is ergodic (i.e., optimizes the ergodic metric) if the time spent along the trajectory in each region is proportional to the measure of information distributed in that region. As a result, optimized ergodic trajectories are significantly more robust to external sensor disturbances [22] and have been shown to be an optimal strategy for information-gathering tasks [41, 42] with real-world application [3]. However, prior work typically has planning horizons that are fixed and are not considered part of the optimization. In this work, we investigate the time-optimal extension of the ergodic search problem and its solutions.
### _Time Optimal Planning and Control_
Time-optimality in planning and control is a well-studied problem going back to the original statement of Pontryagin's maximum principle [26]. The canonical problem minimizes time subject to continuous-time system dynamics and constraints on the state of the system. For select systems, the conditions of optimality generate a closed-form control solution to reach desired states in optimal time [43, 26]. Recent research on time-optimal planning has since extended the work for fast drone flight at the level of human drone pilots [27, 44]. These methods solve time-optimal trajectories over specified control "knot" points [27] given a trajectory tracking cost. In this work, we use a variation of the solution in [27] to directly compute ergodic trajectory solutions from our time-optimal ergodic search problem formulation.
With respect to search and exploration, past work on time-optimal search has typically used various "hybrid" formulations to solve the problem of trajectory generation [45]. These approaches restrict robot trajectories on discrete node-based structures and then optimize for time along each node, which takes the form of Shortest Watchman Tour problems [46], Art Gallery problems [47], and Traveling Sales-person problems [36]. Other time-optimal trajectory planning methods have used a two-step optimization approach that first generates a motion path [48, 49] and then refines the time of the found path [50, 51, 52]. However, these methods do not consider
search in the problem, nor do they consider the coupling between continuous trajectory planning, time, and the physical robot's dynamic constraints.
## III Preliminaries
In this section, we provide an overview of ergodic search methods and outline the necessary nomenclature used throughout this paper. The canonical ergodic trajectory optimization problem is formulated, and then we briefly define the time-optimal control problem statement for reference.
### _Ergodic Search_
Let us first define a robot trajectory at time \(t\) with state \(x(t):\mathbb{R}^{+}\rightarrow\mathcal{X}\subseteq\mathbb{R}^{n}\) and control input \(u(t):\mathbb{R}^{+}\rightarrow\mathcal{U}\subseteq\mathbb{R}^{m}\) where \(\mathcal{X},\mathcal{U}\) are the state and control spaces of dimensionality \(n\) and \(m\) respectively. Next, define \(\dot{x}=f(x(t),u(t))\) where \(f(x,u):\mathcal{X}\times\mathcal{U}\rightarrow\mathcal{T}_{\mathcal{X}}\) is the continuous-time (potentially nonlinear) dynamics of the robot. In addition, let us define a map \(g(x):\mathcal{X}\rightarrow\mathcal{W}\) such that \(\mathcal{W}=[0,L_{0}]\times\ldots\times[0,L_{v-1}]\) where \(v\leq n\), and \(L_{i}\) are the bounds of the workspace \(\mathcal{W}\) which we denote as the exploration space.1 The map \(g\) then takes us from state space \(\mathcal{X}\) to exploration space \(\mathcal{W}\).
Footnote 1: For example, \(g(x)=\mathbf{I}_{p}x\) where \(\mathbf{I}_{p}\) is a selection matrix with all zeros except for the parts of the state \(x\) that correspond to an exploration space in the subset of the robot’s global position in the world.
A trajectory \(x(t),\forall t\in[t_{0},t_{f}]\) is _ergodic_ with respect to a measure \(\phi(w):\mathcal{W}\rightarrow\mathbb{R}^{+}\) if and only if
\[\lim_{t_{f}\rightarrow\infty}\frac{1}{t_{f}}\int_{t_{0}}^{t_{f}}\mu(g(x(t))) dt=\int_{\mathcal{W}}\phi(w)\mu(w)dw \tag{1}\]
for all Lebesgue integrable functions \(\mu\in\mathcal{L}^{1}\)[23]. Because we cannot run a robot for \(t_{f}\rightarrow\infty\), we consider \(t_{f}<\infty\), where trajectories are sub-ergodic. For a finite \(t_{f}\), where \(x(t)\) is a deterministic trajectory, we define the left-hand side of Eq. (1) as the time-averaged trajectory statistics
\[c(w,x(t))=\frac{1}{t_{f}}\int_{t_{0}}^{t_{f}}\delta[w-g(x(t))]dt \tag{2}\]
where \(\delta\) is the Dirac delta function (in place of \(\mu\)) and \(w\in\mathcal{W}\) is a point in the exploration space. Using Eq. (2) as part of an optimization routine is not possible in the current form as the delta function is not differentiable. To define the ergodic metric for optimization, we use spectral methods and construct a metric in the Fourier space [16, 23, 22].
Let us define the \(k^{\text{th}}\in\mathbb{N}^{v}\) cosine Fourier basis function as
\[F_{k}(w)=\frac{1}{h_{k}}\prod_{i=0}^{v-1}\cos\left(\frac{w_{i}k_{i}\pi}{L_{i}}\right) \tag{3}\]
where \(h_{k}\) is a normalizing factor (see [22, 16]). Then, the ergodic metric is defined as
\[\mathcal{E}(x(t),\phi)=\sum_{k\in\mathcal{K}^{v}}\Lambda_{k}\left( c_{k}-\phi_{k}\right)^{2} \tag{4}\] \[=\sum_{k\in\mathcal{K}^{v}}\Lambda_{k}\left(\frac{1}{t_{f}}\int_{t _{0}}^{t_{f}}F_{k}(g(x(t)))dt-\int_{\mathcal{W}}\phi(w)F_{k}(w)dw\right)^{2}\]
where \(k\in\mathcal{K}^{v}\subset\mathbb{N}^{v}\) is the set of all fundamental frequencies, \(c_{k}\) and \(\phi_{k}\) are the \(k^{\text{th}}\) Fourier decomposition of \(c(w,x(t))\) and \(\phi(w)\), respectively, and \(\Lambda_{k}=(1+\|k\|)^{-\frac{v+1}{2}}\) is a weight coefficient that places higher importance on lower-frequency modes.
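To make these definitions concrete, the sketch below evaluates the cosine basis functions of Eq. (3) and the ergodic metric of Eq. (4) for a sampled 2-D trajectory. The uniform information measure, the workspace bounds, the omission of the normalization factor \(h_{k}\), and the simple quadrature of the integrals are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of Eqs. (3)-(4): cosine Fourier basis and the ergodic metric for a
# sampled 2-D trajectory. Assumes a workspace [0, L0] x [0, L1], a uniform phi,
# Riemann-sum integrals, and drops the normalization h_k for brevity.
import numpy as np

L = np.array([1.0, 1.0])                               # workspace bounds (assumed)
K = [(i, j) for i in range(5) for j in range(5)]       # fundamental frequencies

def fk(w, k):
    """Cosine basis prod_i cos(w_i * k_i * pi / L_i), Eq. (3) without h_k."""
    return np.prod(np.cos(np.array(k) * np.pi * np.asarray(w) / L))

def phi_k(k, n=50):
    """Fourier coefficient of a uniform phi over the workspace (grid quadrature)."""
    xs, ys = np.linspace(0, L[0], n), np.linspace(0, L[1], n)
    return np.mean([[fk((x, y), k) for y in ys] for x in xs])

def ergodic_metric(traj):
    """Eq. (4) for a trajectory sampled at equal time steps (traj has shape N x 2)."""
    metric = 0.0
    for k in K:
        lam = (1.0 + np.linalg.norm(k)) ** (-(2 + 1) / 2.0)   # Lambda_k with v = 2
        ck = np.mean([fk(w, k) for w in traj])                # (1/t_f) * sum_t F_k dt
        metric += lam * (ck - phi_k(k)) ** 2
    return metric

traj = np.random.rand(200, 2)   # placeholder trajectory samples in the workspace
print(ergodic_metric(traj))
```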
We formulate the ergodic trajectory optimization problem as the following minimization problem over state and control trajectories \(x(t),u(t)\):
_Ergodic Trajectory Optimization:_
\[\min_{x(t),u(t)}\left\{\mathcal{E}(x(t),\phi)+\int_{t_{0}}^{t_{f}}u(t)^{\top} \mathbf{R}u(t)dt\right\}\] (5a) s.t. \[\begin{cases}x\in\mathcal{X},u\in\mathcal{U},g(x)\in\mathcal{W}\\ x(t_{0})=\bar{x}_{0},x(t_{f})=\bar{x}_{f}\\ \dot{x}=f(x,u)\\ h_{1}(x,u)\leq 0,h_{2}(x,u)=0\end{cases} \tag{5b}\]
_where \(h_{1}\), \(h_{2}\) are inequality and equality constraints, and \(\mathbf{R}\in\mathbb{R}^{m\times m}\) is a diagonal positive-definite matrix that penalizes control._
### _Time-Optimal Control Problem Statement_
Given the same robot trajectories \(x(t),u(t)\) and dynamics \(\dot{x}=f(x,u)\) defined previously, we define the time-optimal control problem as minimizing time \(t_{f}\). However, this alone renders an ill-posed problem with a trivial solution \(t_{f}=0\). To circumvent this issue, it is common to include some terminal state condition \(x(t_{f})=x_{f}\) which needs to be satisfied. In addition, constraints are commonly included to further restrict the solution space as one can end up with "infinite" control input which is not feasible on robotic systems. The formulation of the time-optimal control problem is then
Time-Optimal Control Problem:
\[\min_{x(t),u(t),t_{f}} t_{f}\] (6a) s.t. \[\begin{cases}x\in\mathcal{X},u\in\mathcal{U},t_{f}>0\\ x(t_{0})=x_{0},x(t_{f})=x_{f},\\ \dot{x}=f(x,u)\\ h_{1}(x,u)\leq 0,h_{2}(x,u)=0\end{cases} \tag{6b}\]
where we optimize over \(x(t),u(t)\) and \(t_{f}\), \(h_{1},h_{2}\) are inequality and equality constraints respectively, and \(\bar{x}_{f}\) is a terminal state.
As an aside, it is worth noting that time-optimal control problems are often similar in formulation to time-optimal trajectory problems. The different use cases depend on the time horizon settings. In long-time horizons, it is often preferred to solve for robot trajectories directly and use them to track points [27]. It is possible to find closed-form feedback-control solutions based on the conditions of optimality from Pontryagin's maximum principle [26]. In this work, we focus on showing that one can prove the conditions of optimality for the time-optimal ergodic control problem and obtain solutions using a direct trajectory optimization method. We leave computing closed-form solutions to future work.
Numerical solutions to Eq. 6 can be obtained by discretizing trajectories over \(N\) "knot" points where a discrete time is calculated as \(\Delta t=\frac{t_{f}}{N}\)[27]. The continuous-time dynamics of the robot are then transcribed using an integration method (e.g., forward Euler, implicit Euler, Runge-Kutta) where \(x_{t+\Delta t}=x_{t}+\Delta tf(x_{t},u_{t})\) denotes an explicit Euler method and the subscripts refer to discrete time points. In the following section, we formalize the time-optimal ergodic search problem and derive analytical solutions using conditions of optimality and numerical solutions using direct trajectory transcription.
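As a small illustration of this knot-point transcription, the snippet below rolls out \(x_{t+\Delta t}=x_{t}+\Delta tf(x_{t},u_{t})\) for a 2-D single integrator; the dynamics, horizon, and control sequence are placeholders chosen only to show how \(\Delta t\) is tied to \(t_{f}\) and \(N\).

```python
# Sketch of the knot-point discretization: Delta_t = t_f / N with explicit Euler steps.
# The single-integrator dynamics f(x, u) = u and the numbers below are assumptions.
import numpy as np

def rollout(x0, controls, tf):
    """Integrate x_{t+1} = x_t + dt * f(x_t, u_t) over N knot points."""
    dt = tf / len(controls)
    xs = [np.asarray(x0, dtype=float)]
    for u in controls:
        xs.append(xs[-1] + dt * np.asarray(u))   # f(x, u) = u for a single integrator
    return np.array(xs)

traj = rollout([0.1, 0.1], np.random.uniform(-1, 1, size=(200, 2)), tf=10.0)
print(traj.shape)   # (201, 2): N + 1 knot points
```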
## IV Time-Optimal Ergodic Search
In this section, we formulate and pose the time-optimal ergodic search problem. Solutions to the problem are presented in two manners: 1) analytically through conditions of optimality; and 2) numerically through a direct transcription approach. The analytical approach is used to establish conditions of optimality (which can be seen as continuous time analogies of the KKT-conditions [53, 54]). We derive these results as purely analytical, with the intent that these conditions are to be used to further analyze the structure of the time-optimal ergodic search problem in future work. The numerical approach provides a direct form of calculating robot trajectories and control solutions for the time-optimal ergodic search problem.
### _Problem Formulation_
Let us consider the same robot with state and control trajectories \(x(t),u(t)\) with continuous time (nonlinear) dynamics \(\dot{x}=f(x,u)\). In addition, consider the bounded exploration space \(\mathcal{W}\) with map \(g:\mathcal{X}\rightarrow\mathcal{W}\) and information measure \(\phi(w)\). Our goal is to optimize search time \(t_{f}\) while minimizing the ergodic metric Eq. (4). Let us first make more explicit the dependence of the ergodic metric on the time \(t_{f}\)
\[\mathcal{E}(x(t),\phi,t_{f})=\sum_{k\in\mathcal{K}^{v}}\Lambda_{k }\left(c_{k}(x(t),t_{f})-\phi_{k}\right)^{2} \tag{7}\] \[=\sum_{k\in\mathcal{K}^{v}}\Lambda_{k}\left(\frac{1}{t_{f}}\int_ {t_{0}}^{t_{f}}F_{k}(g(x(t)))dt-\int_{\mathcal{W}}\phi(w)F_{k}(w)dw\right)^{2},\]
where \(c_{k}(x(t),t_{f})\) is the term that depends on time.
According to Eq. (1), a trajectory can become ergodic as \(t_{f}\rightarrow\infty\). This makes the problem of time-optimal ergodic search ill-posed as \(t_{f}\) will be significantly large if the ergodic metric is to be minimized. To solve for this, we propose to include the ergodic metric as an inequality constraint \(\mathcal{E}(x(t),\phi,t_{f})\leq\gamma\), where \(\gamma\in\mathbb{R}^{+}\) is an upper bound on ergodicity. As an example of how \(\gamma\) can still provide sufficient coverage, consider that the ergodic metric is defined over Fourier modes. Satisfying the ergodic constraint then requires minimizing the distance between the \(k^{\text{th}}\) modes of \(c_{k}\) and \(\phi_{k}\) such that the sum of squares is less than \(\gamma\). Because we are working with _spectral_ Fourier modes that span the exploration space \(\mathcal{W}\), this implies that \(\gamma\) imposes a lower bound on ergodic coverage based on the spectral bands with the highest amplitudes. Therefore, we can use the ergodic inequality constraint as an additional condition for time optimization so the problem is well-posed.
Including a terminal state condition \(x(t_{f})=x_{f}\) as a secondary boundary condition, the time-optimal ergodic search problem using (6) and (5) is defined as:
Time-Optimal Ergodic Trajectory Optimization: \[\min_{x(t),u(t),t_{f}} t_{f}\] (8a) s.t. \[\begin{cases}x\in\mathcal{X},u\in\mathcal{U}\\ x(t_{0})=x_{0}\\ \dot{x}=f(x,u)\\ x(t_{f})=x_{f},g(x)\in\mathcal{W}\\ h_{1}(x,u)\leq 0,h_{2}(x,u)=0\\ \mathcal{E}(x(t),\phi,t_{f})\leq\gamma,t_{f}>0\end{cases}\] (8b)
where the last set of constraints ensures that solutions are minimizing the ergodic metric up to \(\gamma\) and time is always positive. In this problem, \(h_{1}\) and \(h_{2}\) often encode additional control constraints, e.g., so that \(u(t)\) is bounded or that \(x(t)\) avoids obstacles in the environment. The following subsections propose solutions to the time-optimal ergodic search problem.
### _Indirect Solution via Pontryagin's Maximum Principle_
We can show that there exist analytical conditions of optimality (as done with the original time-optimal control results) for the time-optimal ergodic search problem in (8). To show this, we first express the problem (8) without constraints \(h_{1},h_{2}\) (these can be later introduced, but for now, we are interested in the simpler problem). In addition, we formulate an objective function using Lagrange multipliers \(\lambda(t)\), \(\rho_{1}\), and \(\rho_{2}\):
\[\mathcal{J}(x(t),u(t),t_{f}) =\rho_{1}\bar{\mathcal{E}}(x(t),t_{f})+\rho_{2}^{\top}(x-\bar{x})\mid_{ t_{f}}\] \[+\int_{t_{0}}^{t_{f}}1+\lambda^{\top}\left(f(x,u)-\dot{x}\right)dt \tag{9}\]
where \(\bar{\mathcal{E}}(x(t),t_{f})=-\log(-\mathcal{E}(x(t),\phi,t_{f})+\gamma)\) is a log barrier term that represents the inequality constraint [55]. The representation in Eq. (9) appears to be in a Bolza form where the cost of time \(t_{f}\) is introduced with the added \(1\) under the integral, i.e., \(\int_{0}^{t_{f}}1dt=t_{f}\). Typically, it is sufficient to apply the maximum principle to the Bolza form and obtain conditions of optimality. However, note that \(\bar{\mathcal{E}}\) requires the full trajectory and not simply the terminal time which makes the problem not in Bolza form. As a result, we need to further simplify the problem.
To do so, we define an extended ergodic state [56]:
**Definition 1**.: _Extended ergodic state. The ergodic metric (4) can be equivalently expressed as_
\[\mathcal{E}(x(t),\phi,t_{f}) =\sum_{k\in\mathcal{K}^{v}}\Lambda_{k}\left(c_{k}(x(t),t_{f})- \phi_{k}\right)^{2}\] \[=\frac{1}{t_{f}^{2}}\|z(t_{f})\|_{\mathbf{\Lambda}}^{2}\]
_where \(z(t_{f})=[z_{0},z_{1},\ldots,z_{|\mathcal{K}^{v}|}]^{\top}\) is the solution to_
\[\dot{z}_{k} =F_{k}(g(x(t)))-\phi_{k} \tag{10}\]
_with initial condition \(z(t_{0})=\mathbf{0}\), and \(\mathbf{\Lambda}=\text{diag}(\Lambda)\) is a diagonal matrix consisting of the weights \(\Lambda=[\Lambda_{0},\ldots,\Lambda_{|\mathcal{K}^{v}|}]\)._
Proof.: From [56], it can be shown that by multiplying time \(t\) we can define
\[z_{k}(t) =c_{k}(x(t),t)-t\phi_{k}\] \[=\int_{0}^{t}F_{k}(g(x(\tau)))d\tau-t\int_{\mathcal{W}}F_{k}(w) \phi(w)dw. \tag{11}\]
When \(t=t_{f}\), it can be readily shown that \(\sum_{k\in\mathcal{K}^{v}}\Lambda_{k}\left(c_{k}(x(t),t_{f})-\phi_{k}\right)^{2 }=\frac{1}{t_{f}^{2}}\|z(t_{f})\|_{\mathbf{\Lambda}}^{2}\). Taking the derivative of Eq. (11) with respect to time, we get \(\dot{z}_{k}=F_{k}(g(x(t)))-\phi_{k}\) which we define as the governing differential equation for the extended ergodic state.
With the extended ergodic state, we are able to extend the dynamics of the system
\[\dot{\bar{x}}=\bar{f}(\bar{x},u)=\begin{bmatrix}f(x,u)\\ \mathbf{F}_{k}(g(x(t)))-\Phi_{k}\end{bmatrix} \tag{12}\]
where \(\bar{x}=[x^{\top},z^{\top}]^{\top}\) is the extended state, \(\mathbf{F}_{k}(w)=[F_{0}(w),F_{1}(w),\ldots,F_{|\mathcal{K}^{v}|}(w)]^{\top}\) is a vector of all the Fourier basis functions, and \(\Phi_{k}=[\phi_{0},\phi_{1},\ldots,\phi_{|\mathcal{K}^{v}|}]^{\top}\) is a vector of all the Fourier coefficients of \(\phi\). The objective function can then be written compactly as
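Before forming the objective, it can help to see Definition 1 numerically: the short sketch below integrates the extended ergodic state with explicit Euler steps and recovers the ergodic metric from \(z(t_{f})\). The basis helpers mirror the earlier ergodic-metric sketch, and the trajectory is again a placeholder.

```python
# Sketch of Definition 1: integrate z_k' = F_k(g(x(t))) - phi_k (Eq. (10)) and recover
# E = (1 / t_f^2) * ||z(t_f)||_Lambda^2. Basis helpers mirror the earlier sketch.
import numpy as np

L = np.array([1.0, 1.0])
K = [(i, j) for i in range(5) for j in range(5)]
fk = lambda w, k: np.prod(np.cos(np.array(k) * np.pi * np.asarray(w) / L))

def phi_k(k, n=50):
    xs, ys = np.linspace(0, L[0], n), np.linspace(0, L[1], n)
    return np.mean([[fk((x, y), k) for y in ys] for x in xs])

def extended_rollout(traj, tf):
    """traj: equally spaced workspace samples (N x 2); returns z(t_f)."""
    dt = tf / len(traj)
    z = np.zeros(len(K))
    for w in traj:
        z += dt * np.array([fk(w, k) - phi_k(k) for k in K])   # Euler step of Eq. (10)
    return z

tf, traj = 10.0, np.random.rand(200, 2)
z_tf = extended_rollout(traj, tf)
lam = np.array([(1.0 + np.linalg.norm(k)) ** -1.5 for k in K])
print(np.sum(lam * z_tf ** 2) / tf ** 2)   # equals the ergodic metric for these samples
```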
\[\mathcal{J}(\bar{x}(t),u(t),t_{f}) =\rho^{\top}\psi(\bar{x},t_{f})\mid_{t_{f}}\] \[+\int_{t_{0}}^{t_{f}}1+\lambda^{\top}\left(\bar{f}(\bar{x},u)-\dot{\bar{x}}\right)dt \tag{13}\]
where \(\psi(\bar{x},t_{f})=[-\log(-\frac{1}{t_{f}^{2}}\|z(t_{f})\|_{\mathbf{\Lambda} }^{2}+\gamma),(x-\bar{x})^{\top}]^{\top}\). Defining the Hamiltonian of the control system as
\[H(\bar{x},u,\lambda)=1+\lambda^{\top}\bar{f}(\bar{x},u) \tag{14}\]
we are able to apply the maximum principle and obtain conditions of optimality.
**Theorem 1**.: _Conditions of (Local) Optimality. For a control system that follows the dynamics \(\dot{\bar{x}}=\bar{f}(\bar{x},u)\) (Def. 1) and Hamiltonian_
\[H(\bar{x},u,\lambda)=1+\lambda^{\top}\bar{f}(\bar{x},u),\]
_the tuple \((\bar{x}(t),u(t),\lambda(t),t_{f})\) is a locally optimal solution to the time-optimal ergodic search problem (8) over the free time interval \(t\in[0,t_{f}]\) if the following conditions are satisfied:_
\[\psi(\bar{x}(t_{f}),t_{f})=0 \tag{15a}\] \[\bar{x}(t_{0})=\bar{x}_{0}\] (15b) \[\dot{\bar{x}}=\frac{\partial H}{\partial\lambda}^{\top}\] (15c) \[\dot{\lambda}=-\frac{\partial H}{\partial\bar{x}}^{\top}\] (15d) \[u^{\star}=\operatorname*{arg\,min}_{u\in\mathcal{U}}H(\bar{x}^{ \star},u,\lambda^{\star})\] (15e) \[\lambda(t_{f})=\frac{\partial\psi}{\partial\bar{x}}^{\top}\rho \bigg{|}_{t_{f}}\] (15f) \[H(\bar{x}(t_{f}),u(t_{f}),\lambda(t_{f}),t_{f})=-\rho^{\top}\frac {\partial\psi}{\partial t_{f}}\bigg{|}_{t_{f}} \tag{15g}\]
Proof.: See Appendix A.
This theorem provides evidence that time-optimal ergodic solutions (if they exist) can satisfy a set of continuous-time conditions for optimality. In practice, it is possible to use these conditions to generate control solutions to the time-optimal ergodic search problem. However, we found that they do not work for long time horizons due to the numerical instabilities when computing the two-point boundary value problem from equations (15)(c) and (d). Instead, we use an approximate direct optimization method using the KKT conditions over \(N\) discrete knot points to solve for time-optimal ergodic trajectories which we describe in the following subsection.
### _Direct Solutions via Transcription_
In this section, we outline a direct transcription method for numerically solving (8). Our approach is similar to that of prior time-optimal planning methods [27, 57].
We first begin by defining the continuous-time dynamics \(\dot{x}=f(x,u)\) as a discrete-time system over a sequence of \(N\) discretized "knot" points:
\[x_{t+1}=x_{t}+\Delta tf(x_{t},u_{t}) \tag{16}\]
where \(\Delta t=\frac{t_{f}}{N}\) and the subscripts define a discrete time point. Note that we depict an Explicit Euler integration scheme, but this is not specific to our method and can be changed to a Runge-Kutta or Implicit integration scheme. Next, we define the optimization variables as \(\mathbf{x}=\{x_{0},\ldots,x_{N}\}\), \(\mathbf{u}=\{u_{0},\ldots,u_{N-1}\}\). The ergodic metric in discrete-time becomes
\[\mathcal{E}(x(t),\phi,t_{f})\approx\hat{\mathcal{E}}(\mathbf{x}, \phi,t_{f})=\sum_{k\in\mathcal{K}^{v}}\Lambda_{k}\left(c_{k}(\mathbf{x},t_{f})- \phi_{k}\right)^{2}\] \[=\sum_{k\in\mathcal{K}^{v}}\Lambda_{k}\left(\frac{1}{t_{f}}\sum_{ t=0}^{N-1}F_{k}(g(x_{t}))\Delta t-\phi_{k}\right)^{2} \tag{17}\]
Since we describe the time discretization \(\Delta t\) as derived from \(t_{f}\), we can directly write the optimization problem over \(\mathbf{x},\mathbf{u},t_{f}\) as a nonlinear program (NLP):
\[\min_{\mathbf{x},\mathbf{u},t_{f}}\quad t_{f}\] (18a) s.t. \[\begin{cases}\Delta t=\frac{t_{f}}{N}\\ x_{t}\in\mathcal{X},u_{t}\in\mathcal{U}\\ x_{0}=\bar{x}_{0},x_{t+1}=x_{t}+\Delta tf(x_{t},u_{t})\\ x_{N}=x_{f},g(x_{t})\in\mathcal{W}\\ h_{1}(\mathbf{x},\mathbf{u})\leq 0,h_{2}(\mathbf{x},\mathbf{u})=0\\ \hat{\mathcal{E}}(\mathbf{x},\phi,t_{f})\leq\gamma,t_{f}>0\end{cases} \tag{18b}\]
The optimization in (18) is solved as a direct-collocation problem, i.e., the optimization variables are free while constraints impose physical robot limitations and dynamics. Note that as a result of this implementation, the initial conditions may be chosen arbitrarily and the chosen solver will "stitch" together a trajectory based on the relevant constraints. We use an NLP solver (specifically a custom variation of an augmented Lagrangian constrained optimization solver [55, 54]) that directly solves (18). Solutions are verified (to establish convergence) using the KKT conditions of the NLP problem [54]. Note that it is possible to use the optimality conditions in Theorem 1; however, these will only be approximate conditions due to the time discretization. Furthermore, the choice of initial condition determines whether the solver converges. Because the ergodic metric is highly nonlinear and non-convex, the dependence of solutions as a function of initial conditions may vary drastically (as shown in [17]). We fix the initial trajectory conditions to be a linear interpolation between the initial and final conditions for all examples. In the following section, we demonstrate simulated and experimental results for solutions to (18).
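For illustration, the sketch below sets up a tiny instance of (18) with SciPy's SLSQP routine in place of the custom augmented Lagrangian solver used here; the single-integrator dynamics, the coarse discretization, the small basis, and the boundary states are assumptions made to keep the example short rather than a faithful reproduction of the paper's solver.

```python
# Toy instance of the NLP in (18): minimize t_f over (x, u, t_f) subject to Euler
# dynamics, boundary conditions, and an ergodicity upper bound. SciPy SLSQP is used
# instead of the paper's augmented Lagrangian solver; f(x, u) = u is assumed.
import numpy as np
from scipy.optimize import minimize

N, gamma = 30, 0.1
K = [(i, j) for i in range(4) for j in range(4)]
fk = lambda w, k: np.prod(np.cos(np.array(k) * np.pi * np.asarray(w)))   # L = [1, 1]
phi = {k: np.mean([[fk((x, y), k) for y in np.linspace(0, 1, 30)]
                   for x in np.linspace(0, 1, 30)]) for k in K}          # uniform phi

def unpack(v):
    x = v[:2 * (N + 1)].reshape(N + 1, 2)
    u = v[2 * (N + 1):-1].reshape(N, 2)
    return x, u, v[-1]

def ergodicity(v):
    x, _, _ = unpack(v)
    return sum((1 + np.linalg.norm(k)) ** -1.5 *
               (np.mean([fk(w, k) for w in x[:-1]]) - phi[k]) ** 2 for k in K)

def dynamics_residual(v):
    x, u, tf = unpack(v)
    return (x[1:] - x[:-1] - (tf / N) * u).ravel()      # x_{t+1} = x_t + dt * u_t

def boundary_residual(v):
    x, _, _ = unpack(v)
    return np.concatenate([x[0] - [0.1, 0.1], x[-1] - [0.9, 0.9]])

cons = [{"type": "eq", "fun": dynamics_residual},
        {"type": "eq", "fun": boundary_residual},
        {"type": "ineq", "fun": lambda v: gamma - ergodicity(v)}]        # E <= gamma

v0 = np.concatenate([np.linspace([0.1, 0.1], [0.9, 0.9], N + 1).ravel(),
                     np.zeros(2 * N), [5.0]])
bounds = [(None, None)] * (4 * N + 2) + [(1e-2, None)]                   # t_f > 0
sol = minimize(lambda v: v[-1], v0, method="SLSQP",
               constraints=cons, bounds=bounds, options={"maxiter": 300})
print("optimized t_f:", sol.x[-1])
```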
## V Results
In this section, we demonstrate simulated and experimental results for time-optimal ergodic search in several scenarios. The proposed approach is evaluated as a trajectory optimizer in settings where obstacles in the environment are known and the utility of coverage over specific areas is provided to the robot. 2 Specifically, we are interested in scenarios that allow us to investigate aspects of the proposed approach that answer the following questions:
Footnote 2: The construction of the coverage utility \(\phi\) is often a function of new measurements collected from executing the time-optimal trajectories and independent of how coverage trajectories are optimized.
* Can we generate time-optimal ergodic trajectories, and how do they compare to fixed-time ergodic trajectories?
* Do we retain the ability to bias the search with a non-uniform \(\phi\)?
* What influence does the ergodic upper bound \(\gamma\) have on optimized time (and control)?
* How much do trajectory solutions change with changes in discretizing knot points \(N\) and how do initial conditions affect generated solutions?
* Is it possible to include constraints in the problem formulation (18) to explore in more realistic scenarios, e.g., a cluttered environment?
* And last, do optimized trajectories transfer to real-world drone applications for search in a cluttered environment?
We organize the results section such that each of these questions are answered sequentially (with A#:) and through the figures starting from Fig. 2. Implementation details are provided in the text and in Appendix B.
### _Simulated Results_
**A1: Comparison to Fixed-Time Ergodic Search.** Our first result compares the proposed direct solution (18) to the original ergodic trajectory optimization problem (5). For this result, we use a 2-D double integrator (or point-mass) dynamical system as our simulated robotic system whose goal is to uniformly explore a bounded exploration space \(\mathcal{W}\). We specify \(\phi(w)\) as a uniform distribution and use initial solver parameters \(t_{f,\text{init}}=10s\), \(N=200\), and control penalization matrix \(\mathbf{R}=0\) for (5).
Fig. 2 is a side-by-side comparison against the fixed-time ergodic trajectory solved using (5) and the proposed time-optimal approach (18). Note that with the fixed-time ergodic search method, planned trajectories are limited in how they explore an area based on the initial planning time \(t_{f}\). In contrast, optimizing time alongside ergodic trajectories can be seen to provide a range of granularity in how much the trajectory uniformly covers the space. Specifically, as one decreases the value of \(\gamma\), optimized trajectories focus more on being ergodic and have less emphasis on optimizing time. With larger values of \(\gamma\), time is prioritized with less emphasis on ergodicity so long as trajectories satisfy the ergodic upper bound.
**A2: Biasing Time-Optimal Search with Inf. Distribution.** We can further investigate the efficacy of time-optimal search with respect to a non-uniform \(\phi\). This is of interest because time optimization may impact how much time trajectories spend on high-information areas. In addition, resulting ergodic trajectories can overlook important high-information areas that are critical for search.

Fig. 2: **Ergodic Trajectory Sensitivity Analysis.** Trajectory solutions found using a uniform distribution \(\phi\). (a) A fixed time \(t_{f}=10\)s ergodic trajectory solution with resulting ergodicity \(\mathcal{E}=0.007\). (b) Solutions to time-optimal ergodic trajectories Eq.(8) with varying \(\gamma\). Time solutions range from \(5\)-\(10\)s depending on \(\gamma\) with equivalent coverage to the fixed-time ergodic trajectory being obtained with \(<6\)s.
To test this, we define a non-uniform distribution for \(\phi\) as illustrated in Fig. 3. The distribution consists of four identical Gaussian peaks placed over the exploration space \(\mathcal{W}\) (see Appendix B for more detail). We use the same 2-D point mass dynamics but test only \(\gamma=0.1\) and \(\gamma=0.001\), which correspond to a coarse search with an emphasis on optimizing time and a finer search with less emphasis on time, respectively. Trajectories illustrated in Fig. 3 show that even with a high \(\gamma\), the generated trajectory still visits each Gaussian peak. However, the trajectory does not spend too much time in the area and brushes past the first peak. This is the result of the ergodic inequality constraint and the balance of time versus coverage. This can be seen in the difference in elapsed optimal times of \(9.86\)s and \(19.59\)s, respectively (almost a \(2\times\) increase in time in response to a two-order-of-magnitude reduction in \(\gamma\)).
**A3,4: Parameter Ablation Studies.** Given the drastic change in performance of optimized time against the change of \(\gamma\) for optimized ergodic trajectories, we investigate parameter sensitivity through two ablation studies. The first study looks at the change of optimal time against changes in \(\gamma\). The second study investigates the effect of initial conditions and the influence of the time discretization introduced by the knot points \(N\). For both studies, the same 2-D point-mass system is used and \(\phi\) defines a uniform distribution over the bounded exploration space.
Figure 4 illustrates the results of the first study. As \(\gamma\to 0\), the ergodic inequality constraint becomes more of an equality constraint. This is due to the ergodic metric being lower-bounded by \(0\) by definition. As a result, the smaller \(\gamma\) becomes, the more the optimized time tends towards \(\infty\). This exactly corresponds to the statement of ergodicity (1) that defines a trajectory as being ergodic only in the limit of \(t_{f}\rightarrow\infty\). What is interesting is the control trade-off shown in Fig. 4. Plotted is the time-normalized control \(\frac{1}{t_{f}}\int_{0}^{t_{f}}\|u(t)\|dt\) where the value of \(u(t)\) is bounded by \(u_{\text{max}}\) through added constraints. At low \(\gamma\) values (large optimized \(t_{f}\)), less control effort is needed to be ergodic. We suspect this is because the robot leverages its dynamics to slowly navigate an area without the need to change direction abruptly. At the other end of the \(\gamma\) spectrum, control actuation begins to fall as time is prioritized. This is due to \(\gamma\) reaching an upper bound on the ergodic metric (as the metric is composed of only cosine functions). Therefore, there is less emphasis on being ergodic and more emphasis on optimizing time (fewer direct changes in actuation). The balance between optimizing time and being ergodic is then shown to require more actuation.

Fig. 3: **Time Opt. Ergodic Search Over Info. Distribution.** Using the information distribution \(\phi\), it is possible to solve biased time-optimal ergodic search problems. Shown above are time-optimal trajectory solutions for (a) \(\gamma=0.1\), and (b) \(\gamma=0.001\). To compensate for the increased coverage requirements imposed by \(\phi\), ergodic trajectories are solved to optimize time spent over areas of high information (illustrated as the lighter regions). With tighter requirements on ergodicity with \(\gamma=0.001\), it can be seen that the optimized trajectories spend more time proportionally in the areas of high information.

Fig. 4: **Time-Opt. Variations to Ergodicity Upper Bound.** With an increased ergodicity upper bound \(\gamma\) (i.e., less emphasis on coverage), time is prioritized. Aligned with ergodic theory [16], as \(\gamma\to 0\), the optimized time asymptotically approaches infinity which is required for complete ergodic coverage. Interestingly, the time-normalized control cost \(\frac{1}{t_{f}}\int_{t}\|u(t)\|dt\) curve implies increased actuation when trading off between coverage requirements and minimizing time. Trajectories are solved using uniform coverage distribution \(\phi\).

Fig. 5: **Initial Trajectory Ablation.** Here, we show the dependence of time-optimal ergodic solutions on the initial trajectory that was provided to the solver. From left to right we show optimized solutions for linear interpolation (Lerp) from initial to final condition, the linear interpolated trajectory with added normally distributed zero-mean noise with standard deviation \(0.02,\Delta=0.02\), sinusoidal initial condition, and a uniformly random initial condition. Each solution satisfies an ergodicity of \(\mathcal{E}=0.01\) with initial time \(t_{f}=10\) and control knots \(N=200\). We find that the deviation in final optimized time depends on path smoothness. Non-smooth initial trajectories tend to fall into equally ergodic local minima, causing worse optimized time.
We further investigate the dependence of the optimized time against the initial condition and the discretizing knot points \(N\). Experimental runs are done using the same point-mass dynamics in the bounded environment with a uniform distribution as \(\phi\) with \(\gamma=0.05\). We vary the initial time \(t_{f,\text{init}}\) between \(4\)-\(8\)s with \(1\)s intervals and \(N=200\), which we found to be a range where the solver would provide solutions within acceptable tolerances. In addition, we varied the number of knot points between \(50\)-\(600\) with a resolution of \(100\) after \(50\), with \(t_{f,\text{init}}=10\). In Table I, the optimized time solutions along with the standard deviation are provided. Optimized time solutions tended to stay near \(5\)s with a standard deviation of \(\pm 0.23\)s. We found that the number of knot points has more of an effect on optimized time, with \(\pm 0.39\)s of standard deviation. The difference in solutions is anticipated, as ergodic trajectories parameterize the time-averaged distribution (2), which can have an infinite number of solutions that yield the same distribution. As a result, deviations in trajectories may satisfy the ergodic inequality, but provide different time-optimal solutions \(t_{f}\).
Last, we studied the dependence of solutions as a function of the initial trajectory that was provided to the solver. The initial trajectory is varied based on common choices (e.g., randomly added noise and sinusoidal paths). Illustrated in Fig. 5 is the resulting optimized time-optimal ergodic trajectory subject to the initial trajectory condition. We find that the more regular and smooth the initial trajectory is, the more well-behaved and consistent the optimized solution. Non-smooth initial trajectories provided high variability in the solution. We believe this is caused by the non-linearity of the ergodic metric and that there exist infinitely many trajectory solutions that satisfy the same ergodicity (all solutions maintain an ergodicity of \(\mathcal{E}=0.01\)) as shown previously in [17].
### _Time-Optimal Ergodic Search in a Cluttered Environment_
In this subsection, we investigate more realistic settings for which to use the proposed time-optimal ergodic search. Specifically, we consider the case of time-optimal exploration in a cluttered environment where the goal is for a robot to navigate around obstacles in the environment and cover the whole area. We first demonstrate the results in simulation and show that it is possible to add in safety-based collision constraints [30] without impeding the coverage performance. Then we execute the time-optimal trajectories on a drone.
**A5: Integrating Safety-Based Collision Constraints.** To successfully navigate and explore in a cluttered environment in optimal time, safety-based constraints are required. We introduce safety here through control-barrier functions (CBFs) [30, 29]. CBFs provide an inequality constraint that, when satisfied, guarantees state trajectories remain within a predefined safe set of states. For more information, please see Appendix B. The constraints are integrated such that each CBF is centered around an object scattered in the environment (see [20]). In this example, we assume that we know the location of each obstacle and the goal is to uniformly explore the cluttered area. We use a constrained 2-D single integrator system (kinematic system) as it closely matches the Crazyflie 2.0 drone movements which have limits on how fast they can fly. The CBF constraints are integrated through \(h_{1}\) found in (18) where more details can be found in Appendix B.
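As an illustration of how such a constraint can enter \(h_{1}\), the snippet below encodes a standard discrete-time control barrier condition for a single circular obstacle with a single-integrator step; the barrier form, obstacle data, and class-\(\mathcal{K}\) gain are generic textbook choices rather than the exact formulation of Appendix B.

```python
# Sketch of a discrete-time CBF-style safety constraint for one circular obstacle.
# Barrier b(x) = ||x - c||^2 - r^2 >= 0; enforce b(x_{t+1}) >= (1 - alpha) * b(x_t).
# Obstacle center/radius, alpha, and the single-integrator step are assumptions.
import numpy as np

obstacle_c, obstacle_r, alpha = np.array([0.5, 0.5]), 0.15, 0.5

def barrier(x):
    return np.sum((x - obstacle_c) ** 2) - obstacle_r ** 2

def cbf_constraint(x_t, u_t, dt):
    """Must be >= 0 at every knot point to keep the trajectory outside the obstacle."""
    x_next = x_t + dt * u_t                       # single-integrator Euler step
    return barrier(x_next) - (1.0 - alpha) * barrier(x_t)

print(cbf_constraint(np.array([0.2, 0.2]), np.array([1.0, 1.0]), dt=0.05))
```

In the transcription of (18), one such inequality can be stacked into \(h_{1}\) per obstacle and per knot point.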
Results for time-optimal trajectories are illustrated in Fig. 7 for \(\gamma=0.2,0.1,0.01,0.002\). As the value of \(\gamma\) decreases, the optimized time increases. This can be seen in Fig. 7, where the trajectory becomes more ergodic and explores each area between the obstacles (taking more time to search carefully as \(\gamma\) decreases). The CBF constraints prevent the trajectory from getting too close to any obstacle while allowing the solver to reach the required value of ergodicity.
Additionally, we evaluate the proposed method on a non-linear aircraft dynamics model in a cluttered environment (see Fig. 6 and Appendix B for more detail). In this example, the coverage problem is defined in \(\mathcal{W}\subset\mathbb{R}^{3}\) with uniformly random ellipsoids distributed over the search space. We find the proposed solver is capable of providing uniform coverage trajectories subject to the nonlinear dynamics constraints. Note that with the dimensionality increasing, so will the computational complexity as described in prior work [19, 21, 58].
**A6: Time-Opt. Ergodic Search in Clutter.** Using the Crazyflie 2.0 drone, we demonstrate the ability of a drone to execute time-optimal ergodic search trajectories in a cluttered environment. As shown in Fig. 8 (a), the experimental setup is created so we can track the position of the drone using two Lighthouse trackers. The drone is tasked to explore the environment uniformly, starting at the initial position and ending at the final position in optimal time. As in simulations, the obstacle position and shape are assumed known and a respective CBF safety constraint is used for each obstacle in the environment. Velocity constraints are imposed through the problem constraints specified in (18).
Fig. 6: **Time-Opt. Uniform Ergodic Search with Nonlinear Aircraft Dynamics.** The proposed optimization method is capable of incorporating nonlinear dynamics in \(\mathcal{W}\subset\mathbb{R}^{3}\) with safety-based collision avoidance constraints [20]. Time-optimal coverage trajectories can be computed ahead of time and executed on the physical system. Collected information can be used to update and bias search.
We test the drone's ability to execute the time-optimal ergodic trajectories in three levels: 1) fast search with \(\gamma=0.1\), 2) balanced search with \(\gamma=0.05\), and 3) long search with \(\gamma=0.01\). The rendered trajectories (bottom) closely match the drone's trajectory (top) in Fig. 8 (b). The elapsed time is shown on the top figures. Note that the drone execution time and the optimized simulation time are within \(1.5s\) which demonstrates good tracking and that the constraints are approximating the drone's behavior closely. Future work will consider real-time control implementation of time-optimal ergodic search where obstacles in the environment are unknown.
## VI Conclusion
In conclusion, we demonstrated a novel time-optimal ergodic trajectory method for synthesizing time-optimal autonomous search and exploration trajectories. We posed the problem of time-optimal ergodic search from the perspective of time-optimal control. Analytical conditions of optimality were proven through a Bolza formulation of the problem and using Pontryagin's maximum principle. A solution based on direct numerical optimization was presented and analyzed for producing time-optimal ergodic trajectories. We show that it is possible to balance time against the granularity of search in several different scenarios that include search in a cluttered environment. The proposed optimization was shown to handle additional constraints without loss of coverage performance. Last, we demonstrated minimum-time search and exploration trajectories in a cluttered environment on a physical drone.
Future work will consider real-time optimization routines for online planning and control. A limitation of the proposed work is that the solutions are not globally optimal, but locally optimal solutions. This is due to the ergodic metric being highly nonlinear and non-convex. Interestingly, many trajectory solutions can satisfy the same ergodic metric yielding the same value. Future work will explore the conditions of optimality and the class of trajectories that are considered equivalently optimal. Furthermore, future directions will include environmental uncertainty that has the potential to be integrated into the search and exploration approach.
|
2304.06016 | PD-ADSV: An Automated Diagnosing System Using Voice Signals and Hard
Voting Ensemble Method for Parkinson's Disease | Parkinson's disease (PD) is the most widespread movement condition and the
second most common neurodegenerative disorder, following Alzheimer's. Movement
symptoms and imaging techniques are the most popular ways to diagnose this
disease. However, they are not accurate and fast and may only be accessible to
a few people. This study provides an autonomous system, i.e., PD-ADSV, for
diagnosing PD based on voice signals, which uses four machine learning
classifiers and the hard voting ensemble method to achieve the highest
accuracy. PD-ADSV is developed using Python and the Gradio web framework. | Paria Ghaheri, Ahmadreza Shateri, Hamid Nasiri | 2023-04-11T17:24:25Z | http://arxiv.org/abs/2304.06016v1 | PD-ADSV: An Automated Diagnosing System Using Voice Signals and Hard Voting Ensemble Method for Parkinson's Disease
## Abstract
Parkinson's disease (PD) is the most widespread movement condition and the second most common neurodegenerative disorder, following Alzheimer's. Movement symptoms and imaging techniques are the most popular ways to diagnose this disease. However, they are not accurate and fast and may only be accessible to a few people. This study provides an autonomous system, i.e., PD-ADSV, for diagnosing PD based on voice signals, which uses four machine learning classifiers and the hard voting ensemble method to achieve the highest accuracy. PD-ADSV is developed using Python and the Gradio web framework.
## Keywords
Gradient Boosting; LightGBM; Parkinson's disease; XGBoost;
## Code metadata
| Nr | Code metadata description | Value |
| --- | --- | --- |
| C1 | Current code version | V1.0 |
| C2 | Permanent link to code/repository used for this code version | [https://github.com/Ahmadreza-Shateri/PD_ADSV](https://github.com/Ahmadreza-Shateri/PD_ADSV) |
| C3 | Permanent link to reproducible capsule | [https://codeocean.com/capsule/8141825/tree/v1](https://codeocean.com/capsule/8141825/tree/v1) |
| C4 | Legal code license | GNU General Public License v3.0 |
| C5 | Code versioning system used | git |
| C6 | Software code languages used | Python |
| C7 | Compilation requirements, operating environments and dependencies | Python 3.8 or later; Keras, TensorFlow, Pandas, NumPy, Gradio, Scikit-learn, XGBoost, LightGBM, Altair |
| C8 | If available, link to developer documentation/manual | — |
| C9 | Support email for questions | [email protected] |
### Software metadata
| Software metadata description | Value |
| --- | --- |
| Current software version | 1.5.3 |
| Permanent link to executables of this version | [https://github.com/Ahmadreza-Shateri/PD_ADSV](https://github.com/Ahmadreza-Shateri/PD_ADSV) |
| Legal Software License | GNU General Public License v3.0 |
| Operating System | Microsoft Windows 7 (or later), Mac OS 10.12.6 (Sierra or later), Linux |
| Installation requirements & dependencies | 4 GB of memory, 1 GB of free disk space |
## 1 Introduction
Parkinson's disease (PD) is the second most prevalent neurodegenerative disorder after Alzheimer's and a leading cause of neurological morbidity worldwide [1, 2]. In most cases, Parkinson's disease can be diagnosed based on the patient's motor symptoms [3] or through alternative neuroimaging methods such as PET scans and MRI [4]; however, in addition to being costly, time-consuming, and inaccessible to much of the general public, these procedures are not remarkably accurate when diagnosing patients. Recent studies indicate that nearly 90 percent of PD patients suffer from vocal disorders as one of the first symptoms [5]. Voice and speech issues are characterized by decreased absolute speech volume and pitch variation, breathiness, tremor, hoarse voice quality (roughness), variable speech rates, and imprecise articulation [6]. Therefore, analyzing the voice signals of Parkinson's patients is a vital step in the early diagnosis of this disorder.
According to previous studies [7, 8, 9], Replicated Acoustic Features of the voice signals of Parkinson's disease patients have been shown to provide crucial and valuable information for diagnosing PD. Consequently, these features were implemented into the software introduced in this paper. PD-ADSV employs the method proposed by Ghaheri et al. [9], which classified extracted features using Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), Gradient Boosting, and Bagging. Furthermore, the Hard Voting Ensemble method was used based on the performance of the four classifiers.
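A minimal sketch of this weighted hard-voting setup with scikit-learn, XGBoost, and LightGBM is shown below; the placeholder data, per-classifier weights, and hyperparameters are assumptions for illustration, with the actual configuration following [9].

```python
# Sketch of the PD-ADSV classifier stack: XGBoost, LightGBM, Gradient Boosting, and
# Bagging combined by weighted hard voting. Data, weights, and settings are placeholders.
import numpy as np
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

X = np.random.rand(240, 32)        # 32 acoustic features per recording (placeholder data)
y = np.random.randint(0, 2, 240)   # 1 = PD, 0 = healthy (placeholder labels)

ensemble = VotingClassifier(
    estimators=[
        ("xgb", XGBClassifier(n_estimators=200, eval_metric="logloss")),
        ("lgbm", LGBMClassifier(n_estimators=200)),
        ("gb", GradientBoostingClassifier(n_estimators=200)),
        ("bag", BaggingClassifier(n_estimators=50)),
    ],
    voting="hard",
    weights=[2, 2, 1, 1],          # assumed weights reflecting per-classifier performance
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```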
## 2 Gradient Boosting Decision Tree
An ensemble of weak learners, primarily Decision Trees, is utilized in Gradient boosting to increase the performance of a machine learning model [10]. The Gradient boosting decision tree (GBDT) technique enhances classification and regression tree models using gradient boosting. Data scientists frequently employ GBDT to achieve state-of-the-art results in various machine learning challenges [11].
## 3 Extreme Gradient Boosting
Extreme Gradient Boosting (XGBoost) is an improved gradient tree boosting system presented by Chen and Guestrin [12] featuring algorithmic advances (such as approximate greedy search and parallel learning [13, 14]) and hyper-parameters to enhance learning and control overfitting [15, 16]. In recent years, XGBoost has been widely utilized by researchers, and its performance in a range of Machine Learning (ML) challenges has been remarkable [17, 18, 19, 20, 21, 22, 23, 24].
## 4 LightGBM
Researchers from Microsoft and Peking University initially developed the LightGBM [25] to address the efficiency and scalability issues with GBDT (Gradient Boosting Decision Tree) and XGBoost when applied to problems with high-dimensional input features and large datasets [26]. The LightGBM algorithm incorporates two innovative strategies: gradient-based one-side sampling (GOSS) [27] and exclusive feature bundling (EFB) [28, 29].
## 5 Bagging
Leo Breiman [30] proposed bagging in 1994 as a resampling technique for driving single classifiers using bootstrap samples. Bagging reduces variance and overfitting, improves ML algorithm accuracy and consistency, and preserves DT bias [31].
## 6 Software features
PD-ADSV is built on four machine learning classifiers: XGBoost, LightGBM, Gradient Boosting, and Bagging. The Hard Voting Ensemble Method has also been used to achieve the highest accuracy using patients' voice signals. This software implements machine learning algorithms utilizing Python and the Gradio web-based visual interface, providing maximum performance and user-friendliness [32]. The developed software uses Python version 3.9.7 and Gradio framework version 3.11.0.
To train the models, a dataset of replicated acoustic features of the voice signals [7] was collected as follows. The vocal task is to maintain a steady phonation of the /a/ vowel at a comfortable pitch and volume; this phonation must be held for a minimum of five seconds for every breath. Each individual repeats the exercise three times, and each repetition is considered a replication. The voice data is recorded using a portable computer equipped with an external sound card (TASCAM US322) and a cardioid-pattern headband microphone (AKG 520). The Audacity software (release 2.0.5) makes a digital recording with a sampling rate of 44.1 kHz and a resolution of 16 bits/sample. In total, 32 acoustic features are extracted from the voice signals: 5 harmonic-to-noise ratios (HNR), 13 derivatives of Mel-frequency cepstral coefficients (Delta), 13 Mel-frequency cepstral coefficients (MFCC), and the glottal-to-noise excitation ratio (GNE).
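To give a sense of how such features can be computed, the sketch below extracts 13 MFCCs and their deltas from a recording with `librosa`; HNR and GNE typically require glottal/Praat-style analysis (e.g., via the `parselmouth` package) and are only indicated here, since the exact extraction pipeline of [7] is not reproduced.

```python
# Sketch of per-recording feature extraction: 13 MFCCs and 13 delta-MFCCs averaged over
# frames. HNR (5 bands) and GNE would need Praat-style glottal analysis and are only
# noted here; the file path and parameters are placeholders.
import numpy as np
import librosa

def acoustic_features(wav_path):
    y, sr = librosa.load(wav_path, sr=44100)             # matches the 44.1 kHz recordings
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, frames)
    delta = librosa.feature.delta(mfcc)                  # derivatives of the MFCCs
    feats = np.concatenate([mfcc.mean(axis=1), delta.mean(axis=1)])
    # Append 5 HNR bands and GNE here (e.g., computed with parselmouth/Praat) for 32 total.
    return feats

print(acoustic_features("sustained_a_vowel.wav").shape)   # (26,) for MFCC + delta only
```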
In general, this software has two steps: 1) receiving the user's voice signals; 2) performing classification, i.e., detecting whether the person has Parkinson's disease signs or not. In the first step, the user uploads the voice signal sample (Fig. 1). Then, in the second step, the input is classified by four trained classifiers, including XGBoost, LightGBM, Gradient Boosting, and Bagging. Moreover, to utilize the advantageous characteristics of each classifier to enhance accuracy, the weighting was set depending on each classifier's performance. Finally, Hard Voting Ensemble Method determined the final prediction (Fig. 2). According to [9], the model utilized in PD-ADSV based on "Parkinson Dataset with Replicated Acoustic Features" [7] achieved an accuracy of 85.42%.
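The two-step flow can be wired up with a Gradio interface along the following lines; the saved-model path, the expected CSV feature file, and the label wording are assumptions made for illustration.

```python
# Sketch of the PD-ADSV front end: upload a feature file, run the weighted hard-voting
# ensemble, and report the prediction. Model path, CSV format, and labels are assumptions.
import joblib
import pandas as pd
import gradio as gr

ensemble = joblib.load("pd_adsv_ensemble.joblib")   # pretrained voting classifier (assumed)

def diagnose(feature_file):
    feats = pd.read_csv(feature_file.name)           # rows of acoustic features
    pred = ensemble.predict(feats.values)[0]
    return "Signs of Parkinson's disease" if pred == 1 else "No signs of Parkinson's disease"

demo = gr.Interface(fn=diagnose,
                    inputs=gr.File(label="Voice-signal feature file (CSV)"),
                    outputs=gr.Textbox(label="Prediction"))
demo.launch()
```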
## 7 Impact overview
As previously mentioned, this software can diagnose Parkinson's disease using voice signals. Thanks to its simple user interface, it is accessible to a broad range of users. Anyone can record their voice and use this program to determine whether or not they show signs of PD. As depicted in Fig. 1, the user uploads their dataset of voice signals, and the result is displayed in less than 0.7 seconds.
Motor symptoms and imaging tests, such as MRI, brain ultrasonography, and PET scans, are usually used to diagnose this disease [33]. However, in addition to their low accuracy and high prices, these techniques are prohibitively hard to perform and are not easily accessible to the general public. In contrast, PD-ADSV, with its remarkable speed and accuracy, can significantly assist healthcare providers, particularly neurologists, in detecting Parkinson's disease.
As mentioned, recording the voices of PD patients for this program does not require special equipment and may be readily offered to patients in any hospital or medical center. Consequently, patients record their voices using the equipment described previously. After classifying the extracted acoustic features of voice signals, PD-ADSV determines whether or not the individual has PD signs.
## 8 Conclusion
This article introduces PD-ADSV, an automated diagnosing system that utilizes voice signals to detect PD. Owing to its user-friendliness, high accuracy, and broad availability, this software can be used in any healthcare center and greatly assist doctors. It is implemented based on machine learning methods and reached an accuracy of 85.42% on the "Parkinson Dataset with Replicated Acoustic Features."
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## CRediT authorship contribution statement
**Paria Ghaheri:** Conceptualization, Methodology, Software, Investigation, Writing - Original Draft. **Ahmadreza Shateri:** Conceptualization, Methodology, Software, Validation, Investigation, Visualization. **Hamid Nasiri:** Conceptualization, Methodology, Validation, Writing - Review & Editing, Supervision.
|
2304.09325 | Dynamic Chunk Convolution for Unified Streaming and Non-Streaming
Conformer ASR | Recently, there has been an increasing interest in unifying streaming and
non-streaming speech recognition models to reduce development, training and
deployment cost. The best-known approaches rely on either window-based or
dynamic chunk-based attention strategy and causal convolutions to minimize the
degradation due to streaming. However, the performance gap still remains
relatively large between non-streaming and a full-contextual model trained
independently. To address this, we propose a dynamic chunk-based convolution
replacing the causal convolution in a hybrid Connectionist Temporal
Classification (CTC)-Attention Conformer architecture. Additionally, we
demonstrate further improvements through initialization of weights from a
full-contextual model and parallelization of the convolution and self-attention
modules. We evaluate our models on the open-source Voxpopuli, LibriSpeech and
in-house conversational datasets. Overall, our proposed model reduces the
degradation of the streaming mode over the non-streaming full-contextual model
from 41.7% and 45.7% to 16.7% and 26.2% on the LibriSpeech test-clean and
test-other datasets respectively, while improving by a relative 15.5% WER over
the previous state-of-the-art unified model. | Xilai Li, Goeric Huybrechts, Srikanth Ronanki, Jeff Farris, Sravan Bodapati | 2023-04-18T22:18:40Z | http://arxiv.org/abs/2304.09325v2 | # Dynamic chunk convolution for unified streaming and non-streaming conformer ASR
###### Abstract
Recently, there has been an increasing interest in unifying streaming and non-streaming speech recognition models to reduce development, training and deployment cost. The best-known approaches rely on either window-based or dynamic chunk-based attention strategy and causal convolutions to minimize the degradation due to streaming. However, the performance gap still remains relatively large between non-streaming and a full-contextual model trained independently. To address this, we propose a dynamic chunk-based convolution replacing the causal convolution in a hybrid Connectionist Temporal Classification (CTC)-Attention Conformer architecture. Additionally, we demonstrate further improvements through initialization of weights from a full-contextual model and parallelization of the convolution and self-attention modules. We evaluate our models on the open-source Voxpopuli, LibriSpeech and in-house conversational datasets. Overall, our proposed model reduces the degradation of the streaming mode over the non-streaming full-contextual model from 41.7% and 45.7% to 16.7% and 26.2% on the LibriSpeech _test-clean_ and _test-other_ datasets respectively, while improving by a relative 15.5% WER over the previous state-of-the-art unified model.
Xilai Li, Goeric Huybrechts, Srikanth Ronanki, Jeff Farris, Sravan Bodapati
AWS AI Labs, {lixilai, huybrech, ronanks, jjfarris, sravanb}@amazon.com
**Keywords:** End-to-end speech recognition, Unified ASR, Streaming ASR, Conformer
## 1 Introduction
End-to-end (E2E) automatic speech recognition (ASR) models such as attention-based encoder-decoder (AED) [1, 2], CTC [3, 4] and Transducer [5, 6, 7] have gained a lot of attention over the past decade due to their simplicity in the integration of the pronunciation, language and acoustic models into a single neural network. While state-of-the-art E2E models work remarkably well in a non-streaming fashion, they suffer from degradation when operating in a streaming manner as the requirement of transcribing text in real time poses an extra challenge. Numerous works try to bridge the gap with non-streaming ASR by training a model specific for a streaming ASR task [8, 9], focusing on mitigating the trade-off between latency and accuracy. While some slight improvements have been observed, the gap with models that take the full acoustic sequence as input (a.k.a. full-contextual models) remains non-negligible [10, 11].
In recent years, efforts have been made to unify streaming and non-streaming into a single model [12, 13, 14, 15, 16, 17], which helps reduce development, training and deployment cost. A commonly explored solution is to expose the unified model to various contexts at training time, thereby making the model less susceptible to accuracy degradation at inference time under different latency conditions. In [15], a dynamic chunk training technique is adopted where the input is split into several fixed-size chunks and the audio frames within each chunk attend to themselves and to frames from all the previous chunks. They vary the chunk size dynamically from 1 to the maximum utterance length in the batch, so the trained model learns to predict with arbitrary chunk size. [13] and [18] present a quite similar dynamic chunk-based attention strategy, but other methods exist too. [19], for instance, first processes input features with a streaming encoder before passing these to a non-streaming encoder. A single decoder then learns to decode using either the output of the streaming or the non-streaming encoder. [14] introduced dual-mode ASR with shared weights for both streaming and full-context speech recognition to further optimize the performance of streaming ASR. Similarly, the Dual Causal/Non-causal (DCN) self-attention network proposed in [16] processes two sequences of causal and non-causal frames in parallel and prevents the overall context from growing beyond the look-ahead of a single layer. The authors in [17] employ self-supervised pre-training with wav2vec 2.0 [20] and fine-tune the model through dual-mode training.
In general, streaming and non-streaming ASR systems are trained independently for optimized performance. These recent works on unified ASR are a great step towards a single, easy-to-use solution regardless of the inference mode. However, we identify two main shortcomings in the literature: First, the performance gap between streaming and non-streaming of an unified model still remains significant, especially when a Conformer encoder is used [13, 16]. Second, the gap between non-streaming and a full-contextual model enlarges with the increase in the amount of training data [13, 15].
In this work, we propose a _dynamic chunk convolution_ (DCConv), a non-causal convolution with an improved training strategy for unified non-streaming and streaming Conformers [21]. This builds further upon the dynamic chunk training (DCT) from [15], in which the core idea is to divide the input into chunks with a chunk size that gets dynamically generated at training time. The difference lies in our novel convolution which better mimics the inference conditions at training time while keeping a rich acoustic representation and therefore results in superior performance. Besides the proposed DCConv, we extend the original DCT with two other key contributions: a) we demonstrate a better performance in both streaming and non-streaming when the model is fine-tuned from a baseline full-contextual model; b) we further optimize the streaming performance by parallelizing the convolution and self-attention modules within each Conformer block. An extensive ablation study is performed varying chunk size, overlapping chunk ratio and left context size. Empirical evaluations measured on Voxpopuli showcase the efficacy of our proposed approach in terms of the accuracy vs latency trade-off. Overall, the proposed model achieves an average relative improvement of 15.5% WER over the previous state-of-the-art [15] and obtained an absolute WER of 2.0% and 2.4% on the LibriSpeech _test-clean_ dataset in the non-streaming and streaming mode, respectively.
## 2 Approach
### Model architecture
We consider a joint CTC-Attention framework [22, 23] for training our unified models. It consists of three components: a _Shared Encoder_, a _CTC Decoder_ and an _Attention Decoder_. For our experiments, we only use the _CTC Decoder_ at inference time and therefore we configure the _Attention Decoder_ with a shallow transformer [24]. For the _Shared Encoder_, we consider two variants: the Conformer architecture [21] and a parallel Conformer [25]. The Conformer architecture consists of a regular Conformer encoder block [21] in which the convolution module appears after the multi-head self-attention (MSA). We demonstrate that using a parallel Conformer (P-Conf) encoder block [25] instead, which places the convolution and MSA next to each other, is beneficial for the unified streaming scenario. The advantage of the P-Conf is that it captures global and local context explicitly in the MSA and convolution branches, respectively. As opposed to the recently proposed Branchformer [25], which also uses the parallel structure, we leverage the block for the streaming application too. The P-Conf reduces the overall receptive field due to its parallel nature while maintaining the same model capacity, resulting in more robust streaming performance.
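To make the distinction concrete, the following rough sketch (PyTorch-style, not the exact ESPNet modules used in this work; the feed-forward design and branch-merging details are assumptions) applies the MSA and convolution branches to the same input and sums their outputs, rather than applying them sequentially.

```python
# A rough sketch of a parallel Conformer-style block: the self-attention (global
# context) and depthwise convolution (local context) branches read the same input.
import torch.nn as nn

class ParallelConformerBlock(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, kernel_size: int = 31):
        super().__init__()
        self.ffn1 = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.SiLU(),
                                  nn.Linear(4 * d_model, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size // 2, groups=d_model)
        self.ffn2 = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.SiLU(),
                                  nn.Linear(4 * d_model, d_model))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x, attn_mask=None):          # x: (batch, time, d_model)
        x = x + 0.5 * self.ffn1(x)
        attn_out, _ = self.attn(x, x, x, attn_mask=attn_mask)    # global branch
        conv_out = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local branch
        x = x + attn_out + conv_out                              # parallel merge
        x = x + 0.5 * self.ffn2(x)
        return self.norm(x)
```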
### Dynamic Chunk Training for Self-Attention
For unified models to perform well, they must be exposed to both limited and full context during their training. To accomplish this, [15] propose dynamic chunk training (DCT) for self-attention layers. The DCT idea involves varying the chunk size dynamically from 1 to the max utterance length for different batches in training. This is achieved by applying a dynamic chunk mask to the attention score matrix for each self-attention layer, which is illustrated in Eq. 1:
\[\text{Attn}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{Softmax}(\textbf{Mask}(\mathbf{Q}\mathbf{K}^{ T})/\sqrt{d})\mathbf{V} \tag{1}\]
where \(Q\), \(K\), \(V\) and \(d\) denote the queries, keys, values and embedding dimension respectively. Unlike the window mask (Fig. 1b), the chunk mask (Fig. 1a) strictly enforces the look-ahead size by setting the chunk size, while the receptive field with window masking grows linearly with the stacking of more layers. In this work, we randomly sample the chunk size between 8 (=320ms) and 32 (=1280ms) frames and the left context size between 0 and all left chunks, so that the model becomes robust to numerous sizes at inference time.
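A minimal sketch of how such a chunk mask can be materialized is shown below (an illustration under our own assumptions, not the paper's code): frame \(i\) may attend to every frame of its own chunk and to frames from a configurable number of preceding chunks.

```python
# Boolean chunk mask for the Mask(.) operator in Eq. (1): True = attention allowed.
# The chunk size and number of left chunks are sampled dynamically at training time.
import torch

def chunk_attention_mask(seq_len: int, chunk_size: int, num_left_chunks: int = -1) -> torch.Tensor:
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    for i in range(seq_len):
        chunk_idx = i // chunk_size
        if num_left_chunks < 0:                                  # unlimited left context
            start = 0
        else:
            start = max(0, (chunk_idx - num_left_chunks) * chunk_size)
        end = min(seq_len, (chunk_idx + 1) * chunk_size)         # right edge of current chunk
        mask[i, start:end] = True
    return mask

# e.g. a chunk size sampled between 8 (=320ms) and 32 (=1280ms) frames:
# mask = chunk_attention_mask(seq_len=100, chunk_size=int(torch.randint(8, 33, (1,))))
```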
### Dynamic Chunk Convolution
The convolution operator is a key component of Conformer ASR models [21]. However, the conventional convolution results in significant accuracy degradation due to the mode mismatch between training and inference, as shown in Fig. 2a. Indeed, the chunk's rightmost frames can see context from the next chunk on their right during training, whereas at inference this right-chunk context is not available, causing a discrepancy. This inter-chunk correlation is further magnified when stacking more Conformer blocks.
One solution that is adopted in [15] consists of using a causal convolution, as shown in Fig. 2b. The left-shifted convolution kernel restricts its span from having access to any frames beyond the chunk's right boundary. This still leads to performance degradation though, as the lack of within-chunk future context for the frame being processed results in a poorer acoustic representation.
In this work, we therefore propose a novel non-causal _dynamic chunk convolution_ (Fig. 2c). As opposed to the conventional convolution (Fig. 2a), the chunk convolution operator has no access to any future context beyond its right boundary. This trick allows training to more closely match the streaming inference setting where no future context beyond the right chunk boundary is available either. As opposed to causal convolution (Fig. 2b), the DCConv chunk has access to a limited within-chunk future context of the current frame. This extra within-chunk future context results in more accurate acoustic modeling and therefore better overall accuracy. The authors of [26] introduce a similar non-causal convolution. Unlike their convolution though, ours caches the output of the preceding chunk(s) and pads it to the current chunk, resulting in a superior representation. The non-causal convolution of [26] is also used in the Emformer [27] architecture for streaming use-cases with a fixed chunk size, while ours is implemented in a Conformer architecture with DCT and is applicable to both streaming and non-streaming ASR. These distinctions enable our model to be utilized in a wider range of settings.
\[\mathbf{X}_{C}^{i}=\mathbf{X}_{[iC-L:(i+1)C]}=[\mathbf{X}_{[iC-L:iC]},\mathbf{X}_{[iC:(i+1)C]}] \tag{2}\]
\[\mathbf{X}_{C}^{i^{\prime}}=\text{Conv}(\mathbf{X}_{C}^{i})\rightarrow\mathbf{X}^{{}^{ \prime}}=\text{Concat}(\mathbf{X}_{C[L:]}^{i^{\prime}}) \tag{3}\]
As shown in Eq. 2-3, we implement the DCConv by splitting the input sequence \(\mathbf{X}\) into chunks \(\mathbf{X}_{C}^{i}\), where \(i\) denotes the index of the chunk within the sequence and \(C\) the chunk size. Each chunk has a left context size \(L=(kernel\_size-1)/2\). After the convolution is applied to every chunk, we concatenate \(\mathbf{X}_{C}^{i^{\prime}}\) from which we have removed the first \(L\) output frames that correspond to the input left context. This DCConv operator does not slow down the training since all the chunks are independent of each other. Furthermore, we synchronize the size of both the chunk mask for the self-attention layers and for the DCConv such that the overall look-ahead size of the encoder is strictly set to the specified common size.
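The following sketch illustrates one way Eqs. 2-3 can be realized (our own simplified assumption using a plain 1-D convolution rather than the Conformer's full convolution module): each chunk is prepended with its \(L\) cached left-context frames, convolved independently with zero padding beyond the chunk's right boundary, and the outputs corresponding to the left context are dropped before concatenation.

```python
# A minimal sketch of the dynamic chunk convolution, not the paper's implementation.
import torch
import torch.nn.functional as F

def dynamic_chunk_conv(x: torch.Tensor, weight: torch.Tensor, chunk_size: int) -> torch.Tensor:
    """x: (batch, channels, time); weight: (out_ch, in_ch, kernel_size), odd kernel."""
    kernel_size = weight.shape[-1]
    left = (kernel_size - 1) // 2                    # L = (kernel_size - 1) / 2
    time = x.shape[-1]
    outputs = []
    for start in range(0, time, chunk_size):
        end = min(start + chunk_size, time)
        ctx_start = max(0, start - left)
        chunk = x[:, :, ctx_start:end]               # chunk plus cached left-context frames
        pad_left = left - (start - ctx_start)        # zero-pad only the very first chunk
        # 'same'-style padding of the [left-context, chunk] segment, so the chunk's
        # rightmost frames only see zeros beyond the boundary (no future-chunk leakage)
        y = F.conv1d(F.pad(chunk, (pad_left + left, left)), weight)
        outputs.append(y[:, :, left:])               # drop outputs owned by the left context
    return torch.cat(outputs, dim=-1)
```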
### Fine-tuning Baseline Full-Contextual Model
Lastly, we showcase that fine-tuning from a baseline full-contextual model results in better overall performances. Instead of training a unified model from scratch, we initialize the weights from a pre-trained full-contextual model. This approach allows to leverage the non-streaming performance of the full-contextual model. Simultaneously, the model will perform better in the streaming mode too as it can transfer common speech recognition knowledge gained from the non-streaming pre-training.
Figure 1: (a) An example of a chunk mask of size 4, left context size 8 and sequence length 20. (b) An example of a window mask of right context size 3, left context size 8 and sequence length 20.
Figure 2: Different types of convolutions: (a) regular convolution, (b) causal convolution, and (c) chunk convolution, with example kernel size 5 and chunk size 16.
## 3 Experimental Settings
### Datasets
**Training data:** We consider 3 different speech corpora varying in size for training our models: A _large-scale_ 50k+ hour English corpus and a _small-scale_ 5k hour subset, sampled from in-house paired audio and text data. Both corpora include audio files with a good mix of accents, speakers, sampling rates and background noise. The third dataset is the open-source _LibriSpeech_[28] corpus, for which we combine _train-clean-100_, _train-clean-360_ and _train-other-500_ to have 960 hours of training data. These 3 data regimes are representative of a wide range of end-to-end ASR systems for various speech applications.
**Benchmarking:** For the LibriSpeech experiments in section 4.3, we evaluate our models on _test-clean_ and _test-other_. For the small- and large-scale experiments in sections 4.1 and 4.2 respectively, we use the following test sets: (1) _Conversational_: A 10+ hour in-house dataset with utterances resembling user inputs to goal oriented conversational dialog systems. The average utterance length is roughly 10 words; (2) _Multi-accent_: A 100+ hour in-house long-form audio dataset, composed of 12 different accents spoken across the US. The average utterance length is roughly 16 words after segmentation; (3) _Wall Street Journal (WSJ)_: We use WSJ's eval_test92 [29], prepared using Kaldi's [30] WSJ recipe. The dataset is 0.7h long. The average utterance length is 16 words; (4) _Voxpopuli_[31]: We use the English test partition, which is 4.9h long. The average utterance length is 24 words. We report absolute word error rate (WER) on WSJ and Voxpopuli and relative WER (WERR) on in-house datasets.
### Experiment Setup
For training, we use the AED architecture with a Conformer as the encoder, and a shallow single-layer transformer [24] as the attention-based decoder. For _LibriSpeech experiments_, we use a Conformer-12x512x8, which consists of 12 encoder layers with 512 feature dimensions and 8 self-attention heads. We train a 24-layered transformer-based neural LM on the _librispeech-train_ dataset to use for rescoring. For the _small-scale experiments_ we use a Conformer-16x512x4, whereas for the _large-scale experiments_ we use a Conformer-20x512x8. The kernel size of our convolution modules is 31. We optimise our model via the hybrid CTC and attention losses. All of our models are trained using ESPNet [32], with the Adam optimizer [33] and a warm-up learning rate scheduler.
For the front-end, we use 80-dimensional log-mel features and SpecAugment [34] to perform data augmentation. The BPE embedding is 1024 and 2048 for the small- and large-scale experiments respectively. We train a 4-gram LM on the training text for shallow fusion. For evaluation, we discard the attention-based decoder and only use the CTC decoder to generate outputs with a CTC prefix beam search and beam size of 50. A CTC decoder improves the real-time factor (RTF) compared to the attention-based decoder, as the latter is autoregressive, also needs triggered attention [35] for streaming inference, and is therefore slower. We opt for the CTC decoder as ensuring a low RTF is key for streaming applications.
## 4 Results
### Ablation Study on Small-Scale Model
In the next subsections, we start with discussing the different contributions of our work by analyzing the small-scale results in Table 1.
#### 4.1.1 DCConv vs Causal and Normal Convolutions
We demonstrate the effectiveness of our novel DCConv by performing an ablation study on three different models: a DCT model with normal convolution (D), a DCT model with causal convolution (E), and a DCT model with our own DCConv (F). Our model outperforms the two baselines on every dataset in the streaming mode. In the non-streaming application, on the other hand, our model always beats the DCT model with causal convolution, but falls slightly behind the model with regular convolution. Devising our convolution such that within-chunk future context is used for a more informative acoustic representation (as opposed to (E)), while refraining from using outside-chunk future context to match the training and inference modes more closely (as opposed to (D)), is therefore empirically shown to be advantageous in most cases.
#### 4.1.2 Fine-tuning from Full-Contextual Model
Fine-tuning (G) from a pre-trained full-contextual model instead of training a DCConv model from scratch (F) leads to improvements in both the non-streaming and streaming mode for every single dataset. We observe an average relative WER improvement of 5.5% and 2.4% in the non-streaming and streaming modes respectively. Furthermore, with the exception of the WSJ dataset, we now always outperform the regular convolution model in the non-streaming mode. Fine-tuning helps as the model relies on previously gained knowledge from a pre-trained model instead of learning from scratch. It maintains and even improves the non-streaming performance of the full-contextual model, while boosting the streaming performance as it leverages previously learned speech recognition knowledge common to both streaming modes.
\begin{table}
\begin{tabular}{l l c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{**Relative WER**} & \multicolumn{4}{c}{**Absolute WER**} \\ \cline{3-10} & & \multicolumn{2}{c}{Conversational} & \multicolumn{2}{c}{Multi-accent} & \multicolumn{2}{c}{WSJ} & \multicolumn{2}{c}{Voxpopuli} \\ Model & Training Strategy & Non-streaming & Streaming & Non-streaming & Streaming & Non-streaming & Streaming & Non-streaming & Streaming \\ \hline \multirow{2}{*}{Transformer} & (A) Full-context & - & - & - & - & 7.1 & 9.7 & 13.9 & 18.2 \\ & (B) DCT & -11.0\% & 10.2\% & -4.2\% & 6.9\% & 7.8 & 8.9 & 14.7 & 17.1 \\ \hline \multirow{5}{*}{Conformer} & (C) Full-context & **16.0\%** & 11.6\% & **10.4\%** & 10.9\% & **6.1** & 8.5 & 12.5 & 16.1 \\ & (D) DCT & 14.0\% & 14.3\% & 8.9\% & 12.6\% & **6.1** & 7.6 & **12.4** & 15.6 \\ \cline{1-1} & (E) DCT w/ Causal Conv & 6.0\% & 23.1\% & 0.5\% & 8.1\% & 7 & 9.2 & 15 & 18.9 \\ \cline{1-1} & (F) DCT w/ DCConv & 9.0\% & 27.2\% & 2.6\% & 18.6\% & 6.5 & 7.3 & 13.1 & 14.2 \\ \cline{1-1} & (G) + Fine-tune & **16.0\%** & 29.3\% & 9.9\% & 21.9\% & 6.4 & 7.2 & **12.4** & 14.0 \\ \hline \multirow{3}{*}{P-Conformer} & (H) Full-context & **16.0\%** & 14.3\% & **10.4\%** & 12.6\% & 6.3 & 8.3 & **12.4** & 16 \\ \cline{1-1} & (I) DCT w/ DCConv & 8.0\% & 27.9\% & 2.1\% & 19.4\% & 6.5 & 7.3 & 13.0 & 14.0 \\ \cline{1-1} & (J) + Fine-tune & **16.0\%** & **30.6\%** & 9.9\% & **22.3\%** & 6.2 & **6.9** & 12.6 & **13.8** \\ \hline \hline \multirow{2}{*}{Conformer-Large} & (K) Full-context & - & - & - & - & **4.5** & 6.2 & 9.2 & 12 \\ \cline{1-1} & (L) DCT w/ DCConv + Fine-tune & **1.9\%** & **22.9\%** & **0.0\%** & **11.0\%** & 4.6 & **5.6** & **9.1** & **10.5** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Small- and Large-Scale experiments with different architectures and training strategies. All streaming evaluations are done with a 640ms chunk size, 50% overlapping, 1280ms left context and averaged encoder latency of roughly 480ms.
#### 4.1.3 P-Conf vs Conformer
Our experiments indicate that using a P-Conf instead of a regular Conformer is beneficial in the streaming mode, while it shows similar results in non-streaming. This holds for the full-contextual (H), DCConv (I) and fine-tuned DCConv (J) models, where using a P-Conf (J) leads to a relative WER improvement of 1.9%, 0.5%, 4.2% and 1.4% respectively across the 4 datasets over Conformer (G). We hypothesize that the smaller receptive field of the P-Conf, resulting from the parallel MSA and convolution modules during training, makes the model more robust in the streaming mode, as it more closely matches the inference conditions.
#### 4.1.4 Ablation Study with Different Streaming Parameters
We take a closer look at the latency vs. accuracy trade-off under different model parameters that can be controlled during streaming inference. In Fig. 3, we run an ablation study on the DCT hyper-parameters. Fig. 3a demonstrates improvement in WER with the increase in chunk size. Likewise, Fig. 3b and Fig. 3c reveal better WER performance with more overlap between successive chunks and greater left context size. All are the result of a wider and therefore superior acoustic representation. However, these improvements come at the expense of speed due to the latency-accuracy trade-off. Depending on the use-case, we can then select one or the other setting. More importantly, we observe for all settings that our fine-tuned DCConv model performs the best, illustrating the robustness of our approach. In our subsequent experiments, we opt for a 640ms chunk with a 1280ms left context in order to keep those values low to mimic a real-life streaming setting. Furthermore, we stick to a 50% overlapping ratio as this ratio performs only slightly worse than the 75% ratio but provides better latency.
### Result with Large-Scale Model
In Table 1, we also compare a fine-tuned DCConv Conformer model (L) to a baseline full-contextual model (K) on a large-scale dataset. The results show no degradation (except for WSJ), and even minor improvements, in the non-streaming mode. This is despite the fact that our model was trained in a unified fashion. In the streaming mode, we observe an average WERR improvement of 14.0% across all datasets. This validates the gains of our suggested contributions.
### Results on LibriSpeech
In Table 2, we illustrate the performance of our proposed approach when trained and tested on the widely used public LibriSpeech dataset. First, we compare our DCConv model (C) to a full-contextual model trained without DCT (A) and a DCT model with a regular causal convolution (B) in a non-streaming and a 50% overlapping streaming mode. We observe a 28.9% WERR improvement compared to the full-contextual model in the streaming mode, emphasizing the utility of DCT training. Additionally, we notice an average 7.9% WERR improvement compared to the DCT model with regular causal convolution for both streaming modes, proving the effectiveness of our devised convolution. Furthermore, we demonstrate that fine-tuning (D) instead of training a DCConv model from scratch is especially helpful if one wishes to preserve high non-streaming performance of the unified model. It even outperforms the full-contextual model by a WERR improvement of 5.3% in the non-streaming mode, while keeping the overlapping streaming performance almost intact. Lastly, we observe a minor 2.8% relative WER improvement when using a P-Conf (F) instead of the conventional one (D) in the streaming mode.
Overall, compared to the full-contextual Conformer model (A), our final fine-tuned DCConv P-Conf model (F) improves the WER by 32.1% in the streaming mode and reduces the degradation of that mode over the non-streaming full-contextual model from 41.7% and 45.7% to 16.7% and 26.2% on the _test-clean_ and _test-other_ datasets respectively. Moreover, we further improve on the state-of-the-art model (B) that reduces this streaming gap too by an average 15.5% WERR across the 4 settings.
## 5 Conclusion
In this work, we propose a novel _dynamic chunk convolution_ that further improves the existing dynamic chunk training. We achieve this as our convolution better mimics the inference conditions at training time, while keeping a rich acoustic representation. Additionally, we introduce a fine-tuning mechanism and a parallel Conformer block for the unified ASR setting. Our results demonstrate that our unified model closes, and in some cases even reverses, the gap with a full-contextual model operating in the non-streaming mode, while also showcasing improvements in the streaming mode under different latency constraints. Overall, we outperform the previous state-of-the-art by an average 15.5% WERR across the LibriSpeech datasets.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{test-clean} & \multicolumn{2}{c}{test-other} \\ & Non-streaming & Streaming & Non-streaming & Streaming \\ \hline (A) Conformer (Full-context) & 2.1 & 3.6 & 5.1 & 9.4 \\ (B) + DCT w/ Causal Conv & 2.6 & 2.9 & 5.8 & 6.8 \\ (C) + DCT w/ DCConv & 2.3 & 2.6 & 5.4 & **6.6** \\ (D) + Fine-tune & **2.0** & **2.5** & **4.8** & **6.6** \\ \hline (E) P-Conformer (Full-context) & 2.1 & 3.4 & 4.9 & 9.1 \\ (F) + DCT w/ DCConv + Fine-tune & **2.0** & **2.4** & **4.8** & **6.5** \\ \hline \hline \end{tabular}
\end{table}
Table 2: LibriSpeech experiment. Chunk size of 640ms and left context size of 1280ms for 50% overlapping chunk streaming.
Figure 3: Ablation study on small-scale models on how the chunk size, overlapping ratio and left context size affect the streaming decoding WER on the Voxpopuli testset. (a) Different chunk sizes with 50% overlapping and 1280ms left context. (b) Different overlapping ratios with 640ms chunk size and 1280ms left context. (c) Different left context sizes with 640ms chunk size and 50% overlapping chunk decoding. |
2306.11313 | Deep graph kernel point processes | Point process models are widely used for continuous asynchronous event data,
where each data point includes time and additional information called "marks",
which can be locations, nodes, or event types. This paper presents a novel
point process model for discrete event data over graphs, where the event
interaction occurs within a latent graph structure. Our model builds upon
Hawkes's classic influence kernel-based formulation in the original
self-exciting point processes work to capture the influence of historical
events on future events' occurrence. The key idea is to represent the influence
kernel by Graph Neural Networks (GNN) to capture the underlying graph structure
while harvesting the strong representation power of GNNs. Compared with prior
works focusing on directly modeling the conditional intensity function using
neural networks, our kernel presentation herds the repeated event influence
patterns more effectively by combining statistical and deep models, achieving
better model estimation/learning efficiency and superior predictive
performance. Our work significantly extends the existing deep spatio-temporal
kernel for point process data, which is inapplicable to our setting due to the
fundamental difference in the nature of the observation space being Euclidean
rather than a graph. We present comprehensive experiments on synthetic and
real-world data to show the superior performance of the proposed approach
against the state-of-the-art in predicting future events and uncovering the
relational structure among data. | Zheng Dong, Matthew Repasky, Xiuyuan Cheng, Yao Xie | 2023-06-20T06:15:19Z | http://arxiv.org/abs/2306.11313v3 | # Deep graph kernel point processes
###### Abstract
Point process models are widely used to analyze asynchronous events occurring within a graph that reflect how different types of events influence one another. Predicting future events' times and types is a crucial task, and the size and topology of the graph add to the challenge of the problem. Recent neural point process models unveil the possibility of capturing intricate inter-event-category dependencies. However, such methods utilize an unfiltered history of events, including all event categories in the intensity computation for each target event type. In this work, we propose a graph point process method where event interactions occur based on a latent graph topology. The corresponding undirected graph has nodes representing event categories and edges indicating potential contribution relationships. We then develop a novel deep graph kernel to characterize the triggering and inhibiting effects between events. The intrinsic influence structures are incorporated via the graph neural network (GNN) model used to represent the learnable kernel. The computational efficiency of the GNN approach allows our model to scale to large graphs. Comprehensive experiments on synthetic and real-world data show the superior performance of our approach against the state-of-the-art methods in predicting future events and uncovering the relational structure among data.
**Keywords:** Point Processes with Graph, Non-Stationary Influence Kernel, Graph Neural Networks
## 1 Introduction
Point processes represent a prominent category of stochastic models designed to capture discrete events occurring over time. Modern applications are often characterized by the presence of separate categories of events, known as _marks_, which encapsulate attributes such as specific locations or types of events. The occurrence of one type of event can influence the occurrence of future events of the same or another type, and the identification of dependencies among events plays a crucial role in predicting and understanding upcoming asynchronous events. For example, crime linkage analysis [46] provides insight for criminal pattern detection and decision-making to practitioners. Historical interactions between users and groups on social networks favor accurate future recommendations aligned with their individual interests. When these influence structures can be represented by a _graph_, such models can be referred to as graph point processes.
There exists a wide range of methods for modeling graph point processes. Classical multivariate Hawkes processes [29] assume a parametric form of the conditional intensity. Many modern approaches focus on modeling a more general form of the intensity of events incorporated with the graph structure, adopting function approximators such as neural networks. For example, recent approaches utilize self-attention with respect to the latent graph structure to reflect inter-node influence on the graph [40; 47]. However, such approaches often assume some parametric restriction on the form of the conditional intensity, potentially leading to degraded performance and hindering interpretability. Furthermore, these methods only model single-hop influence on the latent graph based on the adjacency matrix.
Graph neural networks (GNNs) have been a rapidly developing tool for extracting informative patterns from graph-structured data [37]. For instance, message passing GNNs have been widely used in the context of point processes for relationship inference [43], temporal interaction prediction [38], and event propagation modeling [36]. A specific approach would be convolutional GNNs, which generalize the operation of convolution in neural networks from grid data to graph data. They allow models to capture cyclic mutual dependencies of graph nodes and are efficient enough to be scaled to large graphs. Convolutional GNNs can thereby provide a framework for flexible modeling of point processes on graphs, yielding superior prediction performance and interpretability.
Contribution:In this paper, we propose a novel framework for modeling general influence kernels in point processes on latent graphs based on GNNs, which inform potential event category relationships and provide reliable predictions for future events. Specifically, the kernel integrates a unified framework for convolutional GNNs based on localized graph filter basis functions, and is flexible enough to capture non-stationary inter-node event promotion, inhibition, and multi-hop effects. Our model with the proposed influence kernel provides an efficient, effective, and interpretable object for point process modeling on large graphs.
The contributions in this paper can be summarized as follows:
1. Our proposed method explicitly models the influence kernel in point processes via convolutional GNNs as opposed to typical intensity-based models. This permits greater expressivity of inter-event-category contributions, including non-stationary, multi-hop exciting, and inhibiting effects. Furthermore, the graph kernel can be directly interpreted, yielding clear information about the relational structure in the modeled graph point process.
2. The proposed convolutional-GNN-based deep kernel can be efficiently scaled to large graphs by taking advantage of the localized graph filter basis. The basis allows the deep kernel to go beyond simple distance-based influence for graphs representing events in space, providing a model structure for non-spatial graphs such as social networks. Meanwhile, a larger class of GNN models can be incorporated within our framework, enabling broader versatility in real-world applications.
3. Comprehensive experiments demonstrate that including the latent graph structure in the deep kernel modeling yields benefits over the states-of-the-art in both simulated and real data settings. Our method is applicable to a wide array of point process data settings, including events generated by infrastructural, climatic, and social phenomena.
### Related Works
Using machine learning to model influence between different event types is a long-studied area in the point process literature. Seminal point process works construct parametric models of the conditional intensity [15, 25], which are often not expressive enough to capture complex influence mechanisms. Common approaches to achieve more expressive conditional intensity models [13, 22] utilize recurrent neural networks (RNNs). Due to advances in attention models for sequential data modeling [34], RNN approaches have been surpassed by self-attention approaches, which include the Transformer Hawkes Process (THP) [47] and the Self-Attentive Hawkes Process (SAHP) [40]. These RNN and self-attention methods provide expressive models for the conditional intensity, however they often suffer from high computational cost and lack of model interpretability. Instead of modeling the conditional intensity, many recent approaches recover the influence kernel in point processes [12, 26, 45]. Learned influence kernels provide a direct means for interpreting inter-event influence in point processes; however, prior works have not exploited graph structure in kernel models, while we incorporate graph topology in the construction of the influence kernel. Alternative approaches have been proposed [27, 31] for flexible modeling of the conditional probability or cumulative probability function. Nonetheless, they decouple the distributions of event time and type and the model expressiveness is still limited.
In contemporary applications, the collection of point process data often reveals an underlying latent graph structure, leading to the widespread adoption of models incorporating graph structures for a variety of purposes. A common goal is to infer the topology of the latent graph existing in asynchronous discrete event data, such as for individual event dependence modeling [20] or for relation inference between event types in [2, 6, 43]. Our proposed model aims for learning point processes with influence kernels and has the capability to recover the kernel (graph) structure, discover event dependency, and predict events simultaneously. The Geometric Hawkes Process (GHM) [30] combines the Hawkes process with graph convolution RNNs, exploiting graph structure in order to learn marked Hawkes processes and predict future events. However, this approach still assumes a parametric form for the conditional intensity function. Recent studies adopt attention-based mechanisms for point process modeling and lend themselves naturally to a graphical interpretation of the event influence structure. The learned attention weights in the Attentive Neural Hawkes Process (A-NHP) [39] can be interpreted to represent the underlying graph structure. Another study [41] extends SAHP to events on a learned latent graph, and event influences are modeled according to the edges of the graph. However, these approaches consider only single-hop, adjacency-based influence on the latent graph. In our work, the incorporation of localized graph filters in graph convolution permits the recovery of complex event dependency, such as multi-hop influence mechanisms, according to the graph topology, and holds the potential to integrate either spatial- or spectral-based graph neural network structures.
Our work is related to GNNs, which have seen wide applications to areas including temporal phenomena on graphs [21, 37]. The popularity of graph convolutions in GNNs has been rapidly growing in recent years. They incorporate spatial convolutions considering propagation according to the adjacency structure in the graph or spectral convolutions based upon the graph Laplacian [5]. An early attempt applying spatial convolutions is the Diffusion-convolutional neural networks (DCNN) [1] by graph diffusion processes. Other spatial-based approaches include attention models such as the Graph Attention Network (GAT) [4, 35] and the Graph Transformer [14]. Prototypical
spectral convolutions based on Chebyshev polynomials of the graph Laplacian are utilized in Chebnet [10] and graph convolutional networks (GCN) [19], with recent extensions including auto-regressive moving average (ARMA) spectral convolutions [3] and the Simple Spectral Graph Convolution (SSGC) constructed via a Markov diffusion kernel [44]. Modern approaches incorporate both local and global features, such as the flexible combination of spatial and spectral convolutions in L3Net [8] and of local filters and global attention in the General, Powerful, Scalable (GPS) Graph Transformer [28]. Practically, GNN models can often be applied to model event occurrences on graphs, such as anomaly detection using GCNs [7], graph attention [11], and combined spatial/spectral techniques [33] and change point detection using transformer convolutions [42] and GATs [32].
## 2 Background
Temporal point process (TPP).A TPP [29] models the occurrence of discrete events that depend on the observed history in a continuous time domain. Let \(\mathcal{H}=\{t_{1},\ldots,t_{n}\}\) be an observed event sequence, where \(t_{i}\in[0,T]\subset\mathbb{R}\) is the time of \(i\)-th event. We denote the history before a given time \(t\) as \(\mathcal{H}_{t}=\{t_{i}|t_{i}<t\}\). The conditional intensity of events is defined as \(\lambda(t)=\lim_{\Delta t\downarrow 0}\mathbb{E}\left[\mathbb{N}([t,t+\Delta t ])|\mathcal{H}_{t}\right]/\Delta t\), where the counting measure \(\mathbb{N}\) is defined as the number of events occurring in \([t,t+\Delta t]\). For notational simplicity, we omit the dependency of history \(\mathcal{H}_{t}\) in \(\lambda(t)\). The well-known Hawkes process [15] models the self-excitation effect from history in an additive manner. The conditional intensity function is defined as
\[\lambda(t)=\mu+\sum_{t^{\prime}\in\mathcal{H}_{t}}k(t^{\prime},t),\]
where \(\mu\) is the background intensity, and \(k\) is the so-called influence kernel measuring the effects of historical events.
In a _marked point process_, each event is associated with an additional attribute called _mark_ denoted by \(v\in V\). The mark represents specific characteristics of the event and can be either continuous or categorical, such as event location or event type. Let \(\mathcal{H}=\{(t_{i},v_{i})\}_{i=1}^{n}\) and \(\mathcal{H}_{t}=\{(t_{i},v_{i})|t_{i}<t\}\) be the observed event sequence and history before time \(t\), respectively. The conditional intensity with influence kernel \(k\) can be written as:
\[\lambda(t,v)=\mu+\sum_{(t^{\prime},v^{\prime})\in\mathcal{H}_{t}}k(t^{\prime},t,v^{\prime},v). \tag{1}\]
The influence kernel is crucial when learning the conditional intensity \(\lambda(t,v)\) from event sequences. Our kernel goes beyond the parametric kernel in the classic Hawkes process and leverages latent data structures, enabling us to better capture the underlying event generating mechanism.
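For intuition, a minimal sketch of evaluating the conditional intensity in (1) is given below; it is illustrative only, with the kernel and history supplied as generic callables and lists.

```python
# lambda(t, v) = mu + sum over past events (t', v') of k(t', t, v', v).
def conditional_intensity(t, v, history, mu, kernel):
    """history: list of (t_i, v_i) with t_i < t; kernel: callable k(t', t, v', v)."""
    return mu + sum(kernel(t_i, t, v_i, v) for (t_i, v_i) in history if t_i < t)
```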
Graph convolution. Graph convolutions in graph neural networks [37] extend the convolution strategy to the graph and address the problem of cyclic mutual dependencies architecturally. Graph convolutions fall into two categories: spectral- and spatial-based models. Spectral graph convolutions introduce graph filters \(g_{\theta}\) based on the full eigen-decomposition of the graph Laplacian. The graph signal \(X\) is convolved by \(X*_{G}g_{\theta}=Ug_{\theta}U^{T}X\), where \(U\) is the matrix of the eigenvectors
of the graph Laplacian ordered by eigenvalues. For instance, in Spectral Convolutional GNNs [5], the graph filter \(g_{\theta}=\Theta_{i,j}\) contains a set of learnable parameters that characterize the relations between node pairs. On the other hand, spatial-based graph convolution is performed by information propagation along edges. The weight matrix in each layer is constructed based on the node's spatial relations (_i.e._, adjacency matrix). Either the localized filter or the weight matrix plays a pivotal role in capturing the nodal dependencies. Various structures of graph convolutions, both spectral and spatial, can be integrated into our proposed influence kernel to describe a wide spectrum of intricate inter-event-category dependencies.
## 3 Point processes on graphs
### Problem definition
The objective of this study is to construct a point process model for the occurrence of multiple types of events within a latent graph structure. Let \(G=(V,E)\) denote the underlying graph, where each node \(v\in V\) represents one event type. An undirected edge connecting nodes \(u\) and \(v\) indicates the existence of potential interaction between type-\(u\) and type-\(v\) events. Note that the edges merely suggest the support of possible inter-event-category interactions without dictating the directions.
Consider a set of event sequences \(\mathcal{S}=\{\mathcal{H}^{1},\mathcal{H}^{2},\ldots,\mathcal{H}^{|\mathcal{ S}|}\}\), where each \(\mathcal{H}^{s}=\{(t^{s}_{i},v^{s}_{i})\}_{i=1}^{n_{s}}\) is a collection of events \((t^{s}_{i},v^{s}_{i})\) occurring on node \(v^{s}_{i}\) at time \(t^{s}_{i}\). Our proposed graph point process is expected to: (i) jointly predict the times and types of forthcoming events based on the observed historical data and (ii) provide an interpretable understanding of the event generation process by revealing the interdependences among multiple types of events. Toward this end, we adopt the statistical formulation of conditional intensity in (1) and introduce an influence kernel built on convolutional GNN components, aiming to explicitly characterize the complicated contributing relationship between any binary event pair (_e.g._, excitation, inhibition, or other dynamic influences).
### Deep temporal graph kernel
Modeling the multi-dimensional influence kernel \(k\) for intricate event dependency is crucial yet challenging. To go beyond simple parametric forms of the kernel while maintaining the model efficiency, we represent the multi-dimensional kernel by taking advantage of the kernel singular value decomposition (SVD) [23, 24]. Specifically, the influence kernel \(k(t^{\prime},t,v^{\prime},v)\) in (1) is decomposed into basis kernel functions as follows:
\[k(t^{\prime},t,v^{\prime},v)=\sum_{d=1}^{D}\sigma_{d}g_{d}(t^{\prime},t-t^{ \prime})h_{d}(v^{\prime},v), \tag{2}\]
where \(\{g_{d},h_{d}\}_{d=1}^{D}\) are sets of basis kernels in terms of event time and type, respectively. The scalar \(\sigma_{d}\) is the corresponding weight (or "singular value") at each rank \(d\). Instead of directly learning the multi-dimensional event dependency, we simplify the task by "separately" modeling specific modes of event dependency over time or graph using different basis kernels. It is worth noting that the weighted combination of basis kernels covers a broad range of non-stationary influence kernels used in point processes, and our kernel \(k\) is not decoupled over time and graph space. While functional
SVD is usually infinite-dimensional, in practice we can truncate the decomposition and consider only a finite-rank representation as long as the singular values \(\sigma_{d}\) decay sufficiently fast.
The temporal basis kernels are carefully designed to capture the heterogeneous temporal dependencies between past and future events. First, the parametrization of temporal kernels \(\{g_{d}\}_{d=1}^{D}\) using displacements \(t-t^{\prime}\) instead of \(t\) provides us a low-rank way to approximate general kernels [12]. To proceed, we approximate \(\{g_{d}\}_{d=1}^{D}\) using shared basis functions:
\[g_{d}(t^{\prime},t-t^{\prime})=\sum_{l=1}^{L}\beta_{dl}\psi_{l}(t^{\prime})\varphi_{l}(t-t^{\prime}),\quad\forall d=1,\ldots,D.\]
Here \(\{\psi_{l},\varphi_{l}:[0,T]\rightarrow\mathbb{R}\}_{l=1}^{L}\) are two sets of one-dimensional basis functions characterizing the temporal impact of an event occurring at \(t^{\prime}\) and the decaying pattern of that impact spread over \(t-t^{\prime}\). The scalar \(\beta_{dl}\) is the corresponding weight. Each of the basis functions \(\{\psi_{l},\varphi_{l}\}_{l=1}^{L}\) is represented by a fully-connected neural network. The universal approximation power of neural networks enables the model to go beyond specific parametric forms of the influence kernel or conditional intensity.
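For illustration, a minimal sketch of one such basis function as a small fully-connected network (matching the two-hidden-layer, width-32, softplus design described in Section 4, under the assumption of a PyTorch implementation) could look as follows.

```python
# One temporal basis function psi_l or varphi_l: R -> R, two hidden layers of width 32
# with softplus activations and a linear output layer.
import torch.nn as nn

def make_temporal_basis(hidden: int = 32) -> nn.Module:
    return nn.Sequential(
        nn.Linear(1, hidden), nn.Softplus(),
        nn.Linear(hidden, hidden), nn.Softplus(),
        nn.Linear(hidden, 1),            # no activation on the output layer
    )
```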
### Graph kernel with localized graph filters
We develop a novel framework for the graph basis kernels by leveraging the localized graph filters in graph convolution to extract informative inter-event-category patterns from graph-structured data. Specifically, the basis kernels \(\{h_{d}\}_{d=1}^{D}\) are represented as follows:
\[h_{d}(v^{\prime},v)=\sum_{r=1}^{R}\gamma_{dr}B_{r}(v^{\prime},v),\quad\forall d =1,\ldots,D,\]
where \(\{B_{r}(v^{\prime},v):V\times V\rightarrow\mathbb{R}\}_{r=1}^{R}\) are \(R\) bases of localized graph filters, and \(\gamma_{dr}\) is the corresponding weight for each \(B_{r}\). The bases can be constructed either from a spatial or a spectral approach, corresponding to two categories of commonly-seen graph convolutions.
To showcase the model flexibility of our graph basis kernel, we present four examples of incorporating various localized graph filters in spectral- or spatial-based GNNs into our proposed frameworks. These include (Details are given in Appendix A):
1. _Chebnet_[10]: we let \(B_{r}=T_{r-1}(\tilde{L})\), where \(T_{r}\) is the Chebyshev polynomial of order \(r\) evaluated at the scaled and normalized graph Laplacian \(\tilde{L}=2L/\lambda_{\max}-I\). Here \(L=I-D^{-1/2}AD^{-1/2}\), \(D\) is the degree matrix, \(A\) is the adjacency matrix, and \(\lambda_{\max}\) is the largest eigenvalue of \(L\).
2. _L3Net_[8]: we choose an integer \(o_{r}\) for each \(B_{r}\), and \(B_{r}(v^{\prime},v)\neq 0\) only if \(v\in N_{v^{\prime}}^{(o_{r})}\), where \(N_{v^{\prime}}^{(o_{r})}\) denotes the set of \(o_{r}\)-th order neighbors of \(v^{\prime}\). Note that the neighborhood orders \((o_{1},\ldots,o_{R})\) can be adjusted accordingly with duplication, and all bases \(\{B_{r}\}_{r=1}^{R}\) are trainable.
3. _GAT_[35] with \(R\) attention heads: each \(B_{r}(v^{\prime},v)\) is a learnable localized graph filter with positive entries and column-wise summation normalized to one (i.e., a learnable affinity matrix).
4. _GPS Graph Transformer_ [28] with \(R\) attention and MPNN layers: we introduce \(\{B_{r}(v^{\prime},v)\}_{r=1}^{2R}\). For each \(r\in\{1,\ldots,R\}\), \(B_{r}\) is a learnable affinity matrix, and \(B_{2r}\) is the adjacency matrix \(A\).
By integrating the idea of localized graph filters in GNNs, the benefits of our design for the influence kernel \(k\) lie in the following: (i) The kernel enables the adoption of various spectral and spatial filter bases, and the combination of \(R\) bases allows us to represent complex local and global patterns of inter-node influence with great model expressiveness. (ii) Our framework substantially reduces the number of model parameters to \(\mathcal{O}(RC|V|)\) for modeling graph-structured point process data with \(|V|\) event types, while classic multivariate point processes and other neural point processes typically require more than \(\mathcal{O}(|V|^{2})\) parameters. Here \(C\) represents the average local patch size [8]. In practice, we have \(C,R\ll|V|\) when dealing with sparse graphs and considering only up to \(o\)-hop influence (commonly 2 or 3), which significantly improves the scalability of our model when applied to large graphs. Details of the complexity analysis can be found in Appendix A.
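As one concrete way to instantiate such filter bases (the Chebnet-style choice in example 1 above), a minimal sketch under our own assumptions is given below; the learnable L3Net-style bases used in our experiments are structured differently but play the same role.

```python
# Build R localized graph filter bases as Chebyshev polynomials T_{r-1}(L_tilde)
# of the scaled, normalized graph Laplacian.
import numpy as np

def chebyshev_bases(A: np.ndarray, R: int) -> list[np.ndarray]:
    """A: |V| x |V| adjacency matrix; returns R localized filter bases."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(len(A)) - d_inv_sqrt @ A @ d_inv_sqrt          # normalized Laplacian
    lam_max = np.linalg.eigvalsh(L).max()
    L_tilde = 2.0 * L / lam_max - np.eye(len(A))              # scaled to [-1, 1]
    bases = [np.eye(len(A)), L_tilde]                         # T_0, T_1
    for _ in range(2, R):
        bases.append(2.0 * L_tilde @ bases[-1] - bases[-2])   # Chebyshev recurrence
    return bases[:R]
```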
Formally, the temporal graph influence kernel \(k\) can be represented as:
\[k(t^{\prime},t,v^{\prime},v)=\sum_{r=1}^{R}\sum_{l=1}^{L}\alpha_{rl}\psi_{l}(t ^{\prime})\varphi_{l}(t-t^{\prime})B_{r}(v^{\prime},v), \tag{3}\]
where \(\alpha_{rl}=\sum_{d=1}^{D}\sigma_{d}\beta_{dl}\gamma_{dr}\). For the experiments in this paper, we adopt the bases of localized graph filters in L3Net, as it provides a unified framework for both spatial- and spectral-based graph convolutions. Figure 1 illustrates how the graph filter bases capture event dependencies when modeling sequential events on an 8-node graph. The neighborhood orders are highlighted as the superscripts of each graph filter basis.
Figure 1: An example of modeling sequential events on an 8-node graph using graph filter bases in L3Net: (a) The latent graph structure. Blue and red nodes represent the 1st and 2nd order neighbors of \(v_{0}\), denoted by \(N_{v_{0}}^{(1)}\) and \(N_{v_{0}}^{(2)}\), respectively. (b) Three graph filter bases \(B^{(0)}\), \(B^{(1)}\), and \(B^{(2)}\) capture the dependencies between events. Hollow circles are events observed on each node. Colored lines indicate the potential influence of the earliest type-\(v_{0}\) event on future events captured by different bases.
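To make the assembled kernel in (3) concrete, the sketch below evaluates \(k(t^{\prime},t,v^{\prime},v)\) from the temporal basis networks, the graph filter bases, and the weights \(\alpha_{rl}\); the module and variable names are our own illustrative assumptions.

```python
# Evaluate k(t', t, v', v) = sum_{r,l} alpha[r,l] * psi_l(t') * varphi_l(t-t') * B_r(v',v).
import torch

def influence_kernel(t_prime, t, v_prime, v, psi, varphi, bases, alpha):
    """psi, varphi: lists of L small networks mapping a scalar time to a scalar;
    bases: tensor of shape (R, |V|, |V|); alpha: weight tensor of shape (R, L)."""
    tp = torch.as_tensor([float(t_prime)])
    dt = torch.as_tensor([float(t - t_prime)])
    psi_vals = torch.stack([p(tp).squeeze() for p in psi])        # shape (L,)
    varphi_vals = torch.stack([q(dt).squeeze() for q in varphi])  # shape (L,)
    graph_vals = bases[:, v_prime, v]                             # shape (R,)
    return torch.einsum("rl,l,l,r->", alpha, psi_vals, varphi_vals, graph_vals)
```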
### Model estimation
To learn the model from data, we adopt the widely used maximum likelihood estimation (MLE) approach [29], maximizing the log-likelihood of observing the event sequences \(\mathcal{S}\) on \([0,T]\times V\):
\[\max_{\theta}\ell(\theta):=\frac{1}{|\mathcal{S}|}\sum_{s=1}^{|\mathcal{S}|} \left(\sum_{i=1}^{n_{s}}\log\lambda\left(t_{i}^{s},v_{i}^{s}\right)-\sum_{v \in V}\int_{0}^{T}\lambda(t,v)dt\right). \tag{4}\]
Note that the model parameter \(\theta\) is incorporated into the intensity function \(\lambda\). Since negative values of the influence kernel are allowed for indicating inhibiting effects from past events, an additional constraint for the non-negativity of the conditional intensity function is required during model estimation. For this purpose, we use the log-barrier method for optimization in point processes [12], which maintains the model interpretability of the conditional intensity function with influence kernel while being computationally efficient. To be precise, we introduce an additional term \(p(\theta,b)\) to the optimization problem that penalizes the value of intensity on a dense enough grid over space, denoted \(\mathcal{U}_{\text{bar},t}\times V\) where \(\mathcal{U}_{\text{bar},t}\subset[0,T]\). The final optimization problem is formulated as
\[\begin{split}\min_{\theta}\mathcal{L}(\theta):=-\ell(\theta)+\frac{1}{w}p(\theta,b)&=-\frac{1}{|\mathcal{S}|}\sum_{s=1}^{|\mathcal{S}|}\left(\sum_{i=1}^{n_{s}}\log\lambda\left(t_{i}^{s},v_{i}^{s}\right)-\sum_{v\in V}\int_{0}^{T}\lambda(t,v)dt\right)\\ &\quad-\frac{1}{w|\mathcal{S}||\mathcal{U}_{\text{bar},t}\times V|}\sum_{s=1}^{|\mathcal{S}|}\sum_{t\in\mathcal{U}_{\text{bar},t}}\sum_{v\in V}\log(\lambda(t,v)-b),\end{split} \tag{5}\]
which is a combination of model log-likelihood and log-barrier penalization. Here, the scalar \(w>0\) is a weight to control the trade-off between log-likelihood and log-barrier, and \(b>0\) is a lower bound of the intensity value over space to guarantee the feasibility of logarithm. Both the weight \(w\) and lower bound \(b\) can be adjusted accordingly during optimization. Note that the optimization problem with log-barrier penalty can be computed efficiently. More details for model learning and computation are summarized in Appendix B and Appendix C Algorithm 1.
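A minimal sketch of how the penalized objective in (5) might be evaluated numerically is given below; it is illustrative only (the intensity interface, the Riemann-sum approximation of the integral, and the variable names are our own assumptions).

```python
# Negative log-likelihood plus a log-barrier keeping lambda(t, v) > b on a time grid.
import torch

def log_barrier_loss(intensity, sequences, nodes, T, grid, w=1.0, b=1e-3):
    """intensity(t, v, history) -> scalar tensor; sequences: list of [(t_i, v_i), ...]."""
    nll, barrier = 0.0, 0.0
    for events in sequences:
        # log-likelihood: log-intensity at observed events minus the compensator integral
        loglik = sum(torch.log(intensity(t_i, v_i, events)) for (t_i, v_i) in events)
        integral = sum(intensity(t, v, events) for t in grid for v in nodes) * (T / len(grid))
        nll = nll - (loglik - integral)
        # log-barrier penalty evaluated on the grid, for every node
        barrier = barrier - sum(torch.log(intensity(t, v, events) - b)
                                for t in grid for v in nodes)
    n = len(sequences)
    return nll / n + barrier / (w * n * len(grid) * len(nodes))
```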
## 4 Experiment
In this section, we compare our method with the deep graph kernel, referred to as GraDK, against five state-of-the-art point process methods on large-scale synthetic and real-world data sets. The superior performance of our model against baselines is demonstrated in terms of both recovering underlying event dependencies and predicting future events. More details and additional results are presented in Appendix D.
Baselines.Three of the baselines do not explicitly consider graph structure, including the (i) Recurrent Marked Temporal Point Process (RMTPP) [13], which uses a recurrent neural network to encode dependence through time; the (ii) Fully Neural Network model (FullyNN) [27], which models the cumulative distribution via a neural network; and the (iii) Deep Non-Stationary Kernel (DNSK) [12], which produces a low-rank neural-network-based influence kernel. We also include two baselines that encode graph information, including the (iv) Structured Transformer Hawkes
Process (THP-S) [47] and the (v) Graph Self-Attentive Hawkes Process (SAHP-G) [40] with a given graph structure, which both use self-attention mechanisms to represent the conditional intensity.
Experimental setup. We choose our one-dimensional temporal basis functions to be fully-connected neural networks with two hidden layers of width 32. Each layer is equipped with a softplus activation function except the output layer. The bases of the localized graph filters take the form of the learnable bases in L3Net [8]. For each data set, all the models are trained using 80% of the data and tested on the remaining 20%. Our model parameters are estimated through (5) using the Adam optimizer [18] with a learning rate of \(10^{-2}\) and batch size of 32. Details about the experimental setup for baselines can be found in Appendix D.
### Synthetic data
We first evaluate the efficacy of our model on synthetic data. We generate three data sets using point processes with the following kernels and latent graph structures: (i) a non-stationary temporal kernel on a 3-node graph, (ii) a non-stationary temporal kernel on a 16-node graph with ring connectivity structure and 2-hop graph influence, and (iii) an exponentially decaying temporal kernel on a 50-node graph. Data sets are simulated using the thinning algorithm [9]. Each data set contains 1,000 sequences with an average length of 50.9, 105.8, and 386.8, respectively. Details regarding synthetic data are presented in Appendix D.
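For reference, a minimal sketch of the thinning procedure used for simulation is shown below (a simplified constant-upper-bound variant under our own assumptions, not the exact simulator; `lam_bar` must upper-bound the total intensity over all nodes).

```python
# Ogata-style thinning: propose candidate times from a homogeneous process with rate
# lam_bar, then accept or reject according to the true conditional intensity.
import numpy as np

def simulate_thinning(intensity, nodes, T, lam_bar, seed=0):
    """intensity(t, v, history) -> float; returns a list of accepted (t, v) events."""
    rng = np.random.default_rng(seed)
    history, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_bar)          # candidate arrival time
        if t > T:
            break
        rates = np.array([intensity(t, v, history) for v in nodes])
        total = rates.sum()
        if total > 0 and rng.uniform() <= total / lam_bar:   # accept w.p. lambda(t)/lam_bar
            v = rng.choice(nodes, p=rates / total)           # sample the event type (mark)
            history.append((t, v))
    return history
```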
Kernel and intensity recovery.The first row of Figure 2 displays the true graph influence kernel in the 50-node synthetic data set and the learned graph kernels by GraDK, SAHP-G, and DNSK. Our method and DNSK directly learn the kernel, and the graph kernel in SAHP-G is constructed as in the original paper [40] by computing the empirical mean of the learned attention weights between nodes. While SAHP-G exaggerates the self-exciting influence of graph nodes and DNSK only learns some semblance of the graph kernel behavior, our method accurately recovers both self- and inter-node influence patterns, resulting in a faithful model of the true graph kernel. The conditional intensity via each method for one testing trajectory is displayed in the third row of Figure 2, which demonstrates the capability of our model to capture the temporal dynamics of events.
Similarly, Figure 3 contains the recovered graph kernel by each method for the synthetic data generated by the kernel on a 16-node ring graph with 2-hop influence. It is worth noting that our model learns an accurate representation, reconstructing the self-exciting and multi-hop influential structures in the ring graph, while SAHP-G only recovers the mutual dependencies within one-hop neighbors, restricted by their model formulation. The multi-hop influence along the graph structure is also reflected in the true and recovered event intensity by GraDK (the bottom row of Figure 3). The conditional intensities of SAHP-G and DNSK, however, either fail to capture the magnitude of this interdependence or do not accurately decay node dependence along the ring-structure connections.
Event dependency.Our model also exhibits exceptional performance in capturing sequential event dependencies. The second row of Figure 2 visualizes the learned inter-event dependency given a sample sequence from the testing set. The dependency between a prior and a future event
is characterized by the influence kernel (2) in GraDK, DNSK, and the true model. For SAHP-G, the event dependency is indicated by the scaled self-attention weight (Equation 10 [40]). While SAHP-G is capable of discovering long-term dependencies, the decaying influence of recent events is not represented. The event dependency of DNSK does well to capture the decaying influence of recent events, but fails to capture long-term effects by certain event types. Our method learns both of these features, capturing long-term dependence and decaying influence similar to that of the true model. Similarly, the second row of Figure 3 shows the inter-event dependency for the data on the 16-node ring graph with 2-hop influence. Still, SAHP-G erroneously presents some long-term effects and DNSK fails to capture intermediate-time influence from past events, whereas GraDK captures the influence at all proper timescales.
Predictive ability.The superior predictive performance of GraDK is further substantiated through a comprehensive evaluation. Apart from assessing the fitted log-likelihood (\(\ell\)) of the testing data, for each data set, we generate 100 event sequences using each learned model (one of three independent runs) and provide two metrics: (i) the mean absolute error of predicted event frequency (_Time MAE_) compared to that in the testing data, and (ii) the Kullback-Leibler
Figure 2: Graph kernel, inter-event dependence, and conditional intensity recovery for the 50-node synthetic data set. The first column reflects the ground truth, while the subsequent columns reflect the results obtained by GraDK (our method), SAHP-G, and DNSK, respectively.
Divergence of predicted event types (_Type KLD_), which compares the empirical distributions of event types (nodes) in the testing data and generated sequences. These metrics, proposed in a previous study [17], reflect the model's predictive capacity for future events, as opposed to individual event prediction accuracy, which tends to be noisy when applied to large graphs. The quantitative results in Table 1 demonstrate that the GraDK method excels in fitting sequential data on a latent graph. It achieves the highest log-likelihood across all datasets and significantly outperforms all baseline methods in predicting future events, which holds immense importance within the domain of point process modeling.
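To clarify how the two metrics can be computed, the sketch below gives one plausible implementation: _Time MAE_ compares the average number of events per sequence and _Type KLD_ compares empirical node-type histograms. The exact definitions follow [17]; the smoothing constant and the per-sequence reading of "event frequency" are our assumptions.

```python
import numpy as np

def time_mae(test_seqs, gen_seqs):
    """MAE between average event counts per sequence (generated vs. testing)."""
    return abs(np.mean([len(s) for s in test_seqs]) - np.mean([len(s) for s in gen_seqs]))

def type_kld(test_seqs, gen_seqs, num_nodes, eps=1e-12):
    """KL divergence between empirical event-type (node) distributions."""
    def node_dist(seqs):
        counts = np.full(num_nodes, eps)
        for s in seqs:
            for _, v in s:          # events are (time, node-index) pairs
                counts[v] += 1.0
        return counts / counts.sum()
    p, q = node_dist(test_seqs), node_dist(gen_seqs)
    return float(np.sum(p * np.log(p / q)))
```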
### Real data
Traffic congestion data.The Georgia Department of Transportation (GDOT) provides traffic volume data for sensors embedded on roads and highways in the state. We have access to such data for 5 sensors at the interchange of interstates 75 and 85 in Midtown Atlanta from September 2018 to March 2019. Traffic volume is measured in 15-minute intervals, and congestion events are detected when the traffic count exceeds the third quartile of the daily traffic volume. The result is
Figure 3: Graph kernel, inter-event dependence, and conditional intensity recovery for the 16-node synthetic data set with 2-hop graph influence. The first column reflects the ground truth, while the subsequent columns reflect the results obtained by GraDK, SAHP-G, and DNSK, respectively.
3,830 events which are split into 24-hour trajectories (with an average of 24 events per day). The latent graph connects 5 sensors based on the flow of traffic and proximity.
Wildfire data.The California Public Utilities Commission (CPUC) maintains a large-scale multi-modal wildfire incident dataset. We extract a total of 2,428 wildfire occurrences in California from 2014 to 2019. The latitude-longitude coordinates of incidents are bounded by the rectangular region [34.51, -123.50] \(\times\) [40.73, -118.49]. Note that the majority of the region has no fire in the 5-year horizon due to the fact that fire incidents are likely to cluster in space. Therefore, we apply the K-means algorithm to extract 25 clusters of wildfire incidents. The latent graph is constructed such that each node represents one cluster and is connected to geographically adjacent nodes. The entire dataset is split into one-year sequences with an average length of 436 events.
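A sketch of this preprocessing step is given below. The use of scikit-learn's KMeans and the distance-threshold notion of "geographically adjacent" clusters are illustrative assumptions on our part; the data construction used in the experiments is described in Appendix D.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_wildfire_graph(coords, n_clusters=25, adjacency_km=80.0):
    """Cluster incident coordinates (lat, lon) into graph nodes and connect
    geographically close clusters. The adjacency threshold is illustrative."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(coords)
    centers = km.cluster_centers_
    deg2km = 111.0   # crude degrees-to-km conversion, sufficient at this scale
    edges = []
    for i in range(n_clusters):
        for j in range(i + 1, n_clusters):
            if np.linalg.norm(centers[i] - centers[j]) * deg2km < adjacency_km:
                edges.append((i, j))
    return km.labels_, centers, edges   # node label per incident, centroids, graph edges
```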
Theft data.The proprietary crime data collected by the police department in Valencia, Spain records the crime incidents that happened from 2015 to 2019, including incident location, crime category, and distance to various landmarks within the city. We analyze 9,372 sustraccions (smooth thefts) that happened near 52 pubs within the Valencia city area. The graph is constructed from the street network, with each node representing a pub. Two pubs are connected if the distance between them along the street is less than 1 km. Each sustraccion is assigned to the closest pub. We partition the data into quarter-year-long sequences with an average length of 469 events.
Results in Table 2 underscore the efficacy of the GraDK approach in acquiring knowledge about graph point processes across a diverse array of real-world domains, including a small traffic graph of 5 nodes up to a large crime network of 52 nodes. These settings encompass diverse event dependency dynamics, as the influence mechanisms include infrastructure (roadways for traffic patterns), nature (weather and climate for wildfire patterns), and social dynamics (criminal behavior for theft patterns). Despite the complexity inherent in these scenarios, our method excels in providing a robust framework capable of capturing the intricate dependencies and facilitating accurate predictions, demonstrated by the low Time MAE and Type KLD from our method in each setting, which is better than or comparable to the best baselines in each of the three real data sets.
In Figure 4, the learned graph kernels of (a) GraDK, (b) SAHP-G, and (c) DNSK are visualized for the theft data set. The second panel reveals that SAHP-G learns a very noisy graph kernel, resulting in a conditional intensity that depends very slightly on inter-event influence. In fact, this approach learns a homogeneous Poisson process for each node with a relatively high likelihood.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{3-node graph with negative influence} & \multicolumn{3}{c}{16-node graph with 2-hop influence} & \multicolumn{3}{c}{50-node graph} \\ \cline{2-10} Model & Testing \(\ell\) & Time MAE & Type KLD & Testing \(\ell\) & Time MAE & Type KLD & Testing \(\ell\) & Time MAE & Type KLD \\ \hline RMTPP & \(-3.473_{(0.087)}\) & 0.528 & 0.093 & \(-7.239_{(0.193)}\) & 0.301 & 0.142 & \(-27.915_{(1.251)}\) & 37.666 & 0.103 \\ FullyNN & \(-2.086_{(0.000)}\) & 0.291 & 0.006 & \(-3.347_{(0.018)}\) & 0.198 & 0.018 & \(-1.736_{(0.019)}\) & 13.295 & 0.058 \\ DNSK & \(-2.127_{(0.003)}\) & 0.149 & 0.012 & \(-3.005_{(0.002)}\) & 0.085 & 0.002 & \(-1.165_{(0.003)}\) & 1.074 & 0.076 \\ \hline THP-S & \(-2.089_{(0.008)}\) & 0.413 & 0.006 & \(-3.079_{(0.004)}\) & 0.108 & 0.011 & \(-1.091_{(0.005)}\) & 3.940 & 0.019 \\ SAHP-G & \(-2.113_{(0.005)}\) & 0.172 & 0.003 & \(-3.036_{(0.008)}\) & 0.155 & 0.005 & \(-1.099_{(0.004)}\) & 1.119 & 0.014 \\ \hline GraDK & \(-2.055_{(0.003)}\) & **0.123** & **0.001** & \(-2.990_{(0.002)}\) & **0.054** & \(<\)**0.001** & \(-1.058_{(0.002)}\) & **0.453** & **0.003** \\ \hline \hline \end{tabular}
*Numbers in parentheses are standard errors for three independent runs.
\end{table}
Table 1: Synthetic data results.
The third panel shows that DNSK fails to present meaningful or discernible patterns of self-influence or event-type interdependence. Lastly, GraDK captures self-influence and inter-node dependencies with the aid of the flexible influence kernel, indicating the complex inhomogeneous dynamics in real data with great model interpretability.
## 5 Discussion
We develop a novel deep kernel for graph point processes using graph convolution filters in convolutional GNNs. This construction permits the efficient learning of intricate and non-stationary event dynamics on a latent graph structure. The modeling of the kernel enhances model interpretability, as one can parse the learned kernel to understand event type interdependence. We empirically demonstrate that our approach outperforms existing methods in terms of dependency recovery and event prediction across various data settings, including large graph structures and those with multi-hop influence.
While our approach adopts convolutional GNNs via local filters, we provide a flexible framework that can conveniently incorporate alternative GNN architectures. Extensions can explore the advantages of such other architectures, _e.g._, recurrent graph neural networks [16] and graph attention networks [35]. Furthermore, our method has the potential to be integrated into different problem formulations to serve diverse research objectives, such as Granger causality for point processes and latent graph structure inference without access to the existing graph.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{**Traffic congestion**} & \multicolumn{3}{c}{**Wildfire**} & \multicolumn{3}{c}{**Theft**} \\ \cline{2-10} Model & Testing \(\ell\) & Time MAE & Type KLD & Testing \(\ell\) & Time MAE & Type KLD & Testing \(\ell\) & Time MAE & Type KLD \\ \hline RMTPP & \(-5.197_{(0.062)}\) & \(2.348\) & \(0.021\) & \(-6.155_{(1.589)}\) & \(1.180\) & \(0.178\) & \(-11.496_{(1.474)}\) & \(5.871\) & \(0.124\) \\ FullyNN & \(-3.292_{(0.108)}\) & \(0.511\) & \(0.012\) & \(-4.717_{(0.119)}\) & \(0.817\) & \(0.026\) & \(-3.468_{(0.068)}\) & \(6.457\) & \(1.169\) \\ DNSK & \(-2.401_{(0.011)}\) & \(0.934\) & \(0.010\) & \(-3.706_{(0.008)}\) & \(0.711\) & \(0.083\) & \(-3.347_{(0.012)}\) & \(0.507\) & \(0.177\) \\ \hline THP-S & \(-2.254_{(0.007)}\) & \(0.378\) & \(0.003\) & \(-4.523_{(0.018)}\) & \(1.183\) & \(0.134\) & \(-2.982_{(<0.001)}\) & \(0.739\) & \(0.189\) \\ SAHP-G & \(-2.453_{(0.013)}\) & \(0.729\) & \(0.021\) & \(-3.919_{(0.040)}\) & \(0.551\) & \(0.032\) & \(-2.970_{(0.032)}\) & \(0.464\) & \(0.096\) \\ \hline GraDK & \(-2.286_{(0.002)}\) & \(\mathbf{0.314}\) & \(\mathbf{0.001}\) & \(-\mathbf{3.625}_{(0.009)}\) & \(\mathbf{0.358}\) & \(\mathbf{0.024}\) & \(-3.039_{(0.011)}\) & \(1.052\) & \(\mathbf{0.082}\) \\ \hline \hline \end{tabular}
*Numbers in parentheses are standard errors for three independent runs.
\end{table}
Table 2: Real data results.
Figure 4: Learned graph kernels for theft data set. The columns present the recovered kernels on the theft data set of GraDK, SAHP-G, and DNSK, respectively.
## Acknowledgement
The work is supported by NSF DMS-2134037. Z.D., M.R. and Y.X. are partially supported by an NSF CAREER CCF-1650913, NSF DMS-2134037, CMMI-2015787, CMMI-2112533, DMS-1938106, and DMS-1830210. X.C. is partially supported by NSF CAREER DMS-2237842, Simons Foundation 814643, and the Alfred P. Sloan Foundation.
|
2307.06546 | Forward and inverse energy cascade and fluctuation relation in fluid
turbulence adhere to Kolmogorov's refined similarity hypothesis | We study fluctuations of the local energy cascade rate $\Phi_\ell$ in
turbulent flows at scales ($\ell$) in the inertial range. According to the
Kolmogorov refined similarity hypothesis (KRSH), relevant statistical
properties of $\Phi_\ell$ should depend on $\epsilon_\ell$, the viscous
dissipation rate locally averaged over a sphere of size $\ell$, rather than on
the global average dissipation. However, the validity of KRSH applied to
$\Phi_\ell$ has not yet been tested from data. Conditional averages such as
$\langle \Phi_\ell|\epsilon_{\ell}\rangle$ as well as of higher-order moments
are measured from Direct Numerical Simulations data, and results clearly adhere
to the predictions from KRSH. Remarkably, the same is true when considering
forward ($\Phi_\ell>0$) and inverse ($\Phi_\ell<0$) cascade events separately.
Measured ratios of forward and inverse cascade probability densities further
show that a fluctuation relation adhering to the KRSH can be observed, raising
the hope that important features of turbulence may be described using concepts
from non-equilibrium thermodynamics. | H. Yao, P. K. Yeung, T. A. Zaki, C. Meneveau | 2023-07-13T03:44:42Z | http://arxiv.org/abs/2307.06546v3 | # Forward and inverse energy cascade and fluctuation relation in fluid turbulence
###### Abstract
We study fluctuations of the local energy cascade rate \(\Phi_{\ell}\) in turbulent flows at scales (\(\ell\)) in the inertial range. According to the Kolmogorov refined similarity hypothesis (KRSH), relevant statistical properties of \(\Phi_{\ell}\) should depend on \(\epsilon_{\ell}\), the viscous dissipation rate locally averaged over a sphere of size \(\ell\), rather than on the global average dissipation. However, the validity of KRSH applied to \(\Phi_{\ell}\) has not yet been tested from data. Conditional averages such as \(\langle\Phi_{\ell}|\epsilon_{\ell}\rangle\) as well as of higher-order moments are measured from Direct Numerical Simulations data, and results clearly adhere to the predictions from KRSH. Remarkably, the same is true when considering forward (\(\Phi_{\ell}>0\)) and inverse (\(\Phi_{\ell}<0\)) cascade events separately. Measured ratios of forward and inverse cascade probability densities further show that a fluctuation relation adhering to the KRSH can be observed, raising the hope that important features of turbulence may be described using concepts from non-equilibrium thermodynamics.
The classic description of the energy cascade in turbulent flows states that the turbulent kinetic energy is extracted from large-scale eddies, transferred to smaller scale eddies (the forward cascade), and finally dissipated into heat due to viscous friction [1]. What is known from the Navier-Stokes equations is that a global averaging of the Kolmogorov equation for two-point longitudinal velocity increments connects third-order moments to the overall mean rate of viscous dissipation via the celebrated \(-4/5\) law [2; 3], \(\langle\delta u_{L}^{3}\rangle\equiv\langle([{\bf u}({\bf x}+{\bf r})-{\bf u}( {\bf x})]\cdot{\bf r}/r)^{3}\rangle=-(4/5)\,r\,\langle\epsilon\rangle\). Here \(\langle..\rangle\) denotes statistical averaging, \(\delta u_{L}\) is the longitudinal velocity increment, \(\epsilon\) is the viscous dissipation rate, and \(r=|{\bf r}|\) is assumed to be well inside the inertial range of turbulence. The 4/5th law means that in the inertial range, \(-(5/4)\langle\delta u_{L}^{3}\rangle/r\) can be interpreted as the energy transfer rate, that it is constant with \(r\), and that the average transfer direction is from large to small scales. However, it is well known that this relation does not hold locally, as \(\delta u_{L}\) and the dissipation rate display strong variability and intermittency [3; 4; 5]. In order to describe intermittency and anomalous scaling, Kolmogorov's second refined similarity hypothesis (KRSH) [4] connects the statistical distributions of velocity increments to those of the local rate of dissipation \(\epsilon_{r}\), defined as the point-wise dissipation averaged in a ball of diameter \(r\). Specifically, the KRSH states that velocity increments and \(\epsilon_{r}\) are connected via a universal random variable \(V\) according to \(\delta u_{L}=V(r\epsilon_{r})^{1/3}\). The second hypothesis, for \(r\) in the inertial range, states that the statistics of \(V\) are independent of \(r\) and \(\epsilon_{r}\) when \(r\) is in the inertial range of high Reynolds number turbulence. The validity of KRSH has received strong support from early experimental measurements in which the dissipation \(\epsilon_{r}\) had to be approximated by lower-dimensional data (e.g., [6; 7]) and also from later analyses based on 3D data, in which \(\epsilon_{r}\) could be evaluated fully, from simulations [8; 9; 10] as well as more recently based on 3D experimental data [11].
Most prior studies have started out with the KRSH formulated as a hypothesis inspired by dimensional analysis, but direct connections between KRSH and first-principles Navier-Stokes equations have often been lacking. A connection can be made based on the work by Hill [12; 13] who derived an equation (denoted as "generalized Kolmogorov-Hill equation", GKHE) that is particularly helpful in describing local fluctuations in energy cascade rates. Written with no mean flow, for scales at which forcing can be neglected, and before averaging it reads:
\[\begin{split}\frac{\partial\delta u_{i}^{2}}{\partial t}& +u_{j}^{*}\frac{\partial\delta u_{i}^{2}}{\partial x_{j}}=-\frac{ \partial\delta u_{j}\delta u_{i}^{2}}{\partial r_{j}}-\frac{8}{\rho}\frac{ \partial p^{*}\delta u_{i}}{\partial r_{i}}\\ &\quad+\nu\frac{1}{2}\frac{\partial^{2}\delta u_{i}\delta u_{i}}{ \partial x_{j}\partial x_{j}}+2\nu\frac{\partial^{2}\delta u_{i}\delta u_{i} }{\partial r_{j}\partial r_{j}}-4\epsilon^{*}\end{split} \tag{1}\]
where \(\delta u_{i}=\delta u_{i}({\bf x};{\bf r})=u_{i}^{+}-u_{i}^{-}\) is the velocity increment vector in the \(i^{\rm th}\) Cartesian direction. The superscripts \(+\) and \(-\) represent two points \({\bf x}+{\bf r}/2\) and \({\bf x}-{\bf r}/2\) in the physical domain that have a separation vector \(r_{i}=x_{i}^{+}-x_{i}^{-}\) and middle point \(x_{i}=(x_{i}^{+}+x_{i}^{-})/2\). The superscript \(*\) denotes the average value between two points, e.g., the average dissipation is defined as \(\epsilon^{*}=(\epsilon^{+}+\epsilon^{-})/2\). In this paper \(\epsilon\) denotes the "pseudo-dissipation", defined at every point as \(\epsilon=\nu(\partial u_{i}/\partial x_{j})^{2}\). Many variants of
this equation have been studied [14; 15; 16; 17], also focusing on effects of mean shear, anisotropy and spatial inhomogeneity. As noted by Hill [§3.5 in 13], in order to make connection to the KRSH and \(\epsilon_{\ell}\) at some scale \(r=\ell\), Eq. 1 at any point \({\bf x}\) can be integrated over a sphere in scale \({\bf r}\)-space up to a diameter \(r=\ell\) (radius \(\ell/2\)). The resulting equation describes the evolution of local kinetic energy up to scale \(\ell\) defined as \(k_{\ell}=(1/2\Omega_{\ell})\int_{\Omega_{\ell}}\frac{1}{2}\delta u_{i}^{2}d^{3}{\bf r}_{s}\) (the factor 2 in the denominator in front of the integration volume accounts for the double-counting of point pairs). When divided by the volume of the sphere (\(\Omega_{\ell}=\frac{4}{3}\pi(\ell/2)^{3}\)) and a factor of 4, the locally integrated form of Eq. 1 becomes
\[\frac{\tilde{dk}_{\ell}}{dt}=\Phi_{\ell}+P_{\ell}+D_{\ell}-\epsilon_{\ell},\ \ \mbox{where} \tag{2}\]
\[\epsilon_{\ell}({\bf x})\equiv\frac{1}{\Omega_{\ell}}\int\!\!\!\!\!\!\!\!\! \int\limits_{\Omega_{\ell}}\epsilon^{*}({\bf x},{\bf r})d^{3}{\bf r}_{s} \tag{3}\]
is the \(\ell\)-averaged rate of dissipation envisioned in the KRSH with the radius vector \({\bf r}_{s}={\bf r}/2\) being integrated up to magnitude \(\ell/2\), and
\[\Phi_{\ell}\equiv-\frac{3}{4\,\ell}\frac{1}{S_{\ell}}\oint\limits_{S_{\ell}} \delta u_{i}^{2}\,\delta u_{j}\,\hat{n}_{j}dS \tag{4}\]
is interpreted as the local energy cascade rate in the inertial range at position \({\bf x}\). Note that Gauss theorem is used to integrate the first term on the RHS of Eq. 1 over the \(r\)-sphere's surface, with area element \(\hat{n}_{j}dS\), with \(\hat{\bf n}={\bf r}/|{\bf r}|\), and \(S_{\ell}=4\pi(\ell/2)^{2}\) the sphere's overall area (care must be taken as the Gauss theorem applies to the sphere's radius vector \({\bf r}_{s}={\bf r}/2\) and \(\partial_{r}=2\,\partial_{r_{s}}\)). Eq. 2 also includes \(\tilde{dk}_{\ell}/dt\), the local average of advective rate of change of kinetic energy \(k_{\ell}\) defined as
\[\frac{\tilde{dk}_{\ell}}{dt}\equiv\frac{1}{2\,\Omega_{\ell}}\int\!\!\!\!\!\! \!\!\!\int\limits_{\Omega_{\ell}}\left(\frac{\partial\delta u_{i}^{2}/2}{ \partial t}+u_{j}^{*}\frac{\partial\delta u_{i}^{2}/2}{\partial x_{j}}\right) d^{3}{\bf r}_{s}, \tag{5}\]
and
\[P_{\ell}\equiv-\frac{6}{\ell}\frac{1}{S_{\ell}}\oint\limits_{S_{\ell}}\frac{ 1}{\rho}\,p^{*}\,\delta u_{j}\,\hat{n}_{j}\,dS \tag{6}\]
a surface averaged pressure work term at scale \(\ell\). The term \(D_{\ell}=\nu/(8\Omega_{\ell})\int_{\Omega_{\ell}}[\partial^{2}\delta u_{i}^{2 }/\partial x_{j}^{2}+4\,\partial^{2}\delta u_{i}^{2}/\partial r_{j}^{2}]d^{3} {\bf r}_{s}\) represents viscous diffusion of kinetic energy both in position and scale space. We consider \(\ell\) to be in the inertial range and will therefore expect \(D_{\ell}\) to be negligible. Equation 2 is local (valid at any point \({\bf x}\) and time \(t\)), and each of the terms in the equation can be evaluated from data according to their definition using a sphere centered at any middle point \({\bf x}\). Without the viscous, pressure and unsteady terms, Eq. 2 is similar to the "local 4/3-law" obtained by Duchon & Robert [18] and discussed by Eyink [19; 20], relating \(\Phi_{\ell}\) and \(\epsilon_{\ell}\) in the context of energy dissipation in the limit of zero viscosity.
A reformulation of the KRSH in the present context is that the statistics of \(\Phi_{\ell}\) (i.e., of \(-3/(4\ell)\) times the surface average of \(\delta u_{i}^{2}\delta u_{j}\hat{n}_{j}\)) only depend on the statistics of \(\epsilon_{\ell}\) (i.e. that \(\Phi_{\ell}=V_{\Phi}\epsilon_{\ell}\) with \(V_{\Phi}\) being a random variable with universal statistics independent of \(\ell\) and \(\epsilon_{\ell}\)) in the inertial range. In particular, the conditional average of the cascade rate should obey \(\langle\Phi_{\ell}|\epsilon_{\ell}\rangle=\epsilon_{\ell}\). In fact, referring back to the local GKHE (Eq. 2), we may take its conditional average and write (neglecting \(D_{\ell}\))
\[\langle\tilde{dk}_{\ell}/dt\,|\epsilon_{\ell}\rangle=\langle\Phi_{\ell}| \epsilon_{\ell}\rangle+\langle P_{\ell}|\epsilon_{\ell}\rangle-\epsilon_{\ell}, \tag{7}\]
considering that of course \(\langle\epsilon_{\ell}|\epsilon_{\ell}\rangle=\epsilon_{\ell}\). Hence, a consequence of the KRSH together with the GKHE is that the conditional average of \(W_{\ell}\equiv\widetilde{dk}_{\ell}/dt-P_{\ell}\) must vanish, i.e. \(\langle W_{\ell}|\epsilon_{\ell}\rangle=0\), either when both terms are combined in \(W_{\ell}\), or perhaps they vanish also individually, \(\langle\widetilde{dk}_{\ell}/dt|\epsilon_{\ell}\rangle=0\) and \(\langle P_{\ell}|\epsilon_{\ell}\rangle=0\).
Prior measurements of \(\Phi_{\ell}\) [21], as well as earlier ones based on similarly defined local cascade rates (see references in [21]), show that \(\Phi_{\ell}\) can be both positive and negative locally, even if on average it must be positive. Negative values of \(\Phi_{\ell}\) can be interpreted as "inverse cascade from small to large scales". Furthermore, a recent analysis [22] shows that the ratio \(\Psi_{\ell}=\Phi_{\ell}/k_{\ell}\) can be understood as an entropy generation (or phase-space contraction) rate, where \(k_{\ell}\) is interpreted as the "temperature of turbulence". A prediction about entropy generation rates in non-equilibrium thermodynamics is the "fluctuation relation" (FR) [23; 24; 25]. When written for turbulence entropy generation rates it states that the ratio of probability densities of positive and negative \(\Psi_{\ell}\) follows the exponential behavior \(P(\Psi_{\ell})/P(-\Psi_{\ell})=\exp(\Psi_{\ell}\tau_{\ell})\), where \(\tau_{\ell}\) is a characteristic time-scale. The recent results of [22] confirm the FR relationship for isotropic turbulence, that is to say, what may appear to be "second-law violations" are entirely consistent with expectations from non-equilibrium thermodynamics. However, in the prior analysis [22] the time-scale \(\tau_{\ell}\) was defined using the average dissipation, \(\tau_{\ell}=\langle\epsilon\rangle^{-1/3}\ell^{2/3}\). In other words, it did not take into account effects of intermittency in which different regions of the flow with different \(\epsilon_{\ell}\) values could behave differently.
In this letter, we investigate whether data support the KRSH in the context of the dynamics of turbulence kinetic energy at scale \(\ell\) as described by the GKHE, i.e., whether predictions from KRSH hold for (i) conditional moments of \(\Phi_{\ell}\), (ii) for the combined unsteady and pressure terms \(W_{\ell}\), and finally, (iii) for positive and negative cascade rates as well as for the fluctuation relation from non-equilibrium thermodynamics. We evaluate terms in the GKHE using data from Direct Numerical Simulation of isotropic turbulence at \(R_{\lambda}\approx\)1,250 that used 8,192\({}^{3}\) grid-points in a \((2\pi)^{3}\) periodic domain (data obtained from the JHTDB database [26; 27]).
Surface averages needed to evaluate \(\Phi_{\ell}\) are measured by discretizing the outer surface of diameter \(\ell\) into 500
point pairs (\(+\) and \(-\) points) that are approximately uniformly distributed on the sphere. Velocities for \(\delta u_{i}\) are downloaded from JHTDB. When this solid angular averaging yields a negative result, i.e. when the average of the \(\delta u_{i}^{2}\)-weighted radial component of \(\delta u_{j}\) is negative and thus points inwards, \(\Phi_{\ell}>0\) and the energy flux transfers energy towards smaller scales within the integration surface, indicating a local forward cascade. Conversely, when \(\Phi_{\ell}<0\), the energy flux transfers outward from the spherical surface, indicating a local inverse cascade. On a global average, \(\langle\Phi_{\ell}\rangle=\langle\epsilon\rangle>0\) as dictated by the -4/5 relation. Volume integrals \(\epsilon_{\ell}\) and \(k_{\ell}\) are evaluated similarly by integrating over five concentric spheres. The accuracy of this method of integration has been tested by increasing the number of points used in the discretization and confirming indistinguishable results are obtained. For \(\epsilon\) we use JHTDB's GetVelocityGradient method with 4th-order centered finite differencing. Taking dissipation as an example, panel (a) in figure 1 shows point-wise normalized dissipation \(\epsilon(\mathbf{x})/\langle\epsilon\rangle\) computed on a planar cut across the data (the plane shown corresponds to \(500\times 500\) grid points). Panel (b) of figure 1 shows a sphere with diameter \(\ell\).
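For illustration, the following numpy sketch computes \(\Phi_{\ell}\) at a single midpoint following Eq. (4) and the procedure just described. The Fibonacci-lattice construction of approximately uniform directions and the `velocity(points)` interface (returning velocities at an array of positions, e.g., via a database query) are our assumptions; the text only specifies that roughly 500 approximately uniformly distributed point pairs are used.

```python
import numpy as np

def fibonacci_sphere(n):
    """Approximately uniform unit vectors on the sphere (Fibonacci lattice)."""
    k = np.arange(n) + 0.5
    polar = np.arccos(1.0 - 2.0 * k / n)
    azimuth = np.pi * (1.0 + 5.0 ** 0.5) * k
    return np.stack([np.sin(polar) * np.cos(azimuth),
                     np.sin(polar) * np.sin(azimuth),
                     np.cos(polar)], axis=1)

def local_cascade_rate(velocity, x, ell, n_pairs=500):
    """Estimate Phi_ell at midpoint x: -(3/(4 ell)) times the surface average
    of |delta u|^2 (delta u . n_hat) over a sphere of diameter ell (Eq. 4)."""
    n_hat = fibonacci_sphere(n_pairs)                    # unit separation directions
    r = ell * n_hat                                      # separation vectors with |r| = ell
    du = velocity(x + 0.5 * r) - velocity(x - 0.5 * r)   # velocity increments delta u_i
    integrand = np.einsum('ij,ij->i', du, du) * np.einsum('ij,ij->i', du, n_hat)
    return -3.0 / (4.0 * ell) * integrand.mean()
```

The volume-averaged quantities \(\epsilon_{\ell}\) and \(k_{\ell}\) can be estimated analogously by averaging over points filling the concentric spheres.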
Figure 2 shows the measured conditional average \(\langle\Phi_{\ell}|\epsilon_{\ell}\rangle\) as a function of \(\epsilon_{\ell}\). Statistics are computed using 2,000,000 randomly distributed spheres across the entire \(8192^{3}\) isotropic turbulence dataset (isotropic8192) [27]. The analysis considers two length scales in the inertial range and one approaching the viscous range, namely \(\ell=0.024L=60\eta\), \(\ell=0.018L=45\eta\), and \(\ell=0.012L=30\eta\), respectively, where \(L=1.24\) is the integral scale and the Kolmogorov scale is \(\eta=(\nu^{3}/\langle\epsilon\rangle)^{1/4}=4.98\times 10^{-4}\). It is immediately apparent that the dominant terms are \(\langle\Phi_{\ell}|\epsilon_{\ell}\rangle\) (black dots) and \(\epsilon_{\ell}\) itself (red dashed line with unit slope). These are equal for most of the range of \(\epsilon_{\ell}\) for which reliable statistics can be collected. The good collapse \(\langle\Phi_{\ell}|\epsilon_{\ell}\rangle\approx\epsilon_{\ell}\) provides further support for the KRSH in the context of terms appearing in the GKHE. Also plotted in Fig. 2 are the conditional averages of the pressure term, \(\langle P_{\ell}|\epsilon_{\ell}\rangle\), and the viscous term, \(\langle D_{\ell}|\epsilon_{\ell}\rangle\). We can see that the contribution of the pressure term (yellow squares) is negligible at all three length scales over most of the range. The viscous term (solid blue lines) is also negligibly small as expected. Approaching the largest values of \(\epsilon_{\ell}/\langle\epsilon\rangle\) we observe saturation of \(\Phi_{\ell}\) that is compensated by a small rise of the pressure term. We note that identical results are obtained when using the full viscous dissipation \(\nu(\partial u_{i}/\partial x_{j})(\partial u_{i}/\partial x_{j}+\partial u_{j}/\partial x_{i})\) instead of the pseudo-dissipation \(\nu(\partial u_{i}/\partial x_{j})^{2}\) when computing \(\epsilon_{\ell}\). In fact the correlation coefficient between the \(\epsilon_{\ell}\) evaluated either way is very large (0.996 at \(\ell/\eta=45\)).
It should be noted that the isotropic8192 dataset does not contain snapshots sufficiently close in time to compute time derivatives to evaluate \(\langle\widetilde{d}k_{\ell}/dt|\epsilon_{\ell}\rangle\). However, based on Eq. 7, we can conclude that \(\langle\widetilde{d}k_{\ell}/dt|\epsilon_{\ell}\rangle\approx 0\), given that \(\langle P_{\ell}|\epsilon_{\ell}\rangle\approx 0\), \(\langle D_{\ell}|\epsilon_{\ell}\rangle\approx 0\), and \(\langle\Phi_{\ell}|\epsilon_{\ell}\rangle\approx\epsilon_{\ell}\). To verify this result via explicit measurement, we computed the terms in Eq. 7 using the isotropic1024 [26] dataset. It has a smaller size of \(1024^{3}\) grid points and a lower Reynolds number \(R_{\lambda}=430\), but includes temporally consecutive snapshots allowing us to calculate time derivatives. Fig. 2 (open symbols) shows that \(\langle\Phi_{\ell}|\epsilon_{\ell}\rangle\approx\epsilon_{\ell}\) still holds very well at this lower Reynolds number, that the pressure and viscous terms are again close to zero and that \(\langle\widetilde{d}k_{\ell}/dt|\epsilon_{\ell}\rangle\approx 0\). We conclude that the data provide strong direct support to the KRSH relating \(\Phi_{\ell}\) and \(\epsilon_{\ell}\) and that the conditional averages of the pressure and unsteadiness terms vanish in the inertial range, as required from the conditionally averaged GKH equation. As mentioned above, at large values, a saturation of the KRSH prediction is observed. Such deviations can be expected since \(\delta u\) cannot exceed significantly
Figure 1: (a) Spatial distribution of local dissipation rate normalized by \(\langle\epsilon\rangle\) on a plane in a small subset of isotropic turbulence at \(R_{\lambda}=\)1,250. The left half portion shows \(\epsilon_{\ell}\) distribution obtained from spherical volume averaging. (b) Zoomed-in portion of panel (a) also showing a sphere with a diameter \(\ell=45\eta\) marked as the black circle. The black dash arrow represents the separation vector \(\mathbf{r}\). The center of the sphere is the middle point \(\mathbf{x}\) between two points \(+\) and \(-\).
Figure 2: Conditional averages of terms in the GKH equation (but not including the rate of change term) based on local dissipation \(\epsilon_{\ell}\), i.e., \(Z=\Phi_{\ell}\) (black symbols and lines), \(Z=P_{\ell}\) (yellow symbols and lines), \(Z=D_{\ell}\) (blue symbols and lines). The red line indicates the value of \(\epsilon_{\ell}\). Different symbols denote different scales \(\ell/L=0.012\) (triangles), 0.018 (circles, (45\(\eta\))) and 0.024 (squares). Solid symbols: Data from DNS of forced isotropic turbulence at \(R_{\lambda}=\)1,250. Open circles: Data from DNS at \(R_{\lambda}=\)430 at \(\ell/L=0.092\,(45\eta)\), for which time dependency can be evaluated and thus including the unsteadiness term \(Z=\widetilde{d}k_{\ell}/dt\) (purple diamond, near zero). All terms are normalized with \(\langle\epsilon\rangle\).
the outer velocity scale, i.e. the rms velocity \(u^{\prime}\sim(\langle\epsilon\rangle L)^{1/3}\). This leads to saturation of KRSH at values of \(\Phi_{\ell}\sim{u^{\prime}}^{3}/\ell\sim\langle\epsilon\rangle(L/\ell)\). Indeed, cases at lower \(L/\ell\) show earlier saturation. Only if both \(\ell\) and \(\delta u\) are well inside the inertial range is KRSH expected to hold.
A further implication of KRSH relates to higher order moments. It implies that \(\langle\Phi_{\ell}^{q}|\epsilon_{\ell}\rangle=\langle V_{\Phi}^{q}\rangle\, \epsilon_{\ell}^{q}\) (and \(\langle V_{\Phi}\rangle=1\) for the case \(q=1\) discussed before). In the inertial range, since \(\Phi_{\ell}=\epsilon_{\ell}+W_{\ell}\) locally and instantaneously, raising to the \(q\)-power, expanding, and taking the conditional average yields
\[\langle\Phi_{\ell}^{q}|\epsilon_{\ell}\rangle=\sum_{n=0}^{q}\binom{q}{n}\ \epsilon_{\ell}^{q-n}\ \langle W_{\ell}^{n}|\epsilon_{\ell}\rangle. \tag{8}\]
Thus, for KRSH to hold (i.e. for \(\langle\Phi_{\ell}^{q}|\epsilon_{\ell}\rangle\propto\epsilon_{\ell}^{q}\)) the conditional moments of \(W_{\ell}\) must follow the same behavior, i.e., \(\langle W_{\ell}^{n}|\epsilon_{\ell}\rangle\propto\epsilon_{\ell}^{n}\). Both the KRSH prediction for \(\langle\Phi_{\ell}^{q}|\epsilon_{\ell}\rangle\) and \(\langle W_{\ell}^{n}|\epsilon_{\ell}\rangle\) can be tested by measuring and plotting \(\langle\Phi_{\ell}^{q}|\epsilon_{\ell}\rangle^{1/q}\) and \(\langle W_{\ell}^{n}|\epsilon_{\ell}\rangle^{1/n}\) as function of \(\epsilon_{\ell}\) and testing for linear behavior. Results are shown in Fig. 3 for \(q=2,3\) and \(n=2,3\). Clearly, the proportionality holds, with linear trends visible for every moment order over the range of dissipation values.
We now turn to further consequences of the KRSH that are directly related to the direction of the energy cascade, i.e. we examine if KRSH may be applicable even to those regions of the flow where \(\Phi_{\ell}<0\), i.e., those displaying local inverse cascading. If any statistical property of velocity increments is determined only by the scale and \(\epsilon_{\ell}\), then a further implication of KRSH is that the conditional averages of only positive and only negative values of \(\Phi_{\ell}\) should also be proportional to \(\epsilon_{\ell}\) (with their weighted sum being equal to \(\epsilon_{\ell}\)). To investigate this prediction, we split the samples of \(\Phi_{\ell}\) by their sign and perform conditional averaging based on \(\epsilon_{\ell}\). We first observe that there are about twice as many samples with \(\Phi_{\ell}>0\) as with \(\Phi_{\ell}<0\); specifically, if \(Pr(\Phi_{\ell}>0|\epsilon_{\ell})\) and \(Pr(\Phi_{\ell}<0|\epsilon_{\ell})\) are the conditional total probabilities associated with the signs of \(\Phi_{\ell}\) in any \(\epsilon_{\ell}\) bin, we measure \(Pr(\Phi_{\ell}>0|\epsilon_{\ell})/Pr(\Phi_{\ell}<0|\epsilon_{\ell})\approx 2.0\) (see blue line in Fig. 4, where at large \(\epsilon_{\ell}\) there is evidence of saturation and the ratio decreases below 2). In the inertial range, the ratio is approximately 2.0 (consistent with results from Ref. [28], who defined the energy cascade rate through detailed evaluations of eddy intersections and lifetimes). From normalization we conclude that \(Pr(\Phi_{\ell}>0|\epsilon_{\ell})\approx 2/3\) and \(Pr(\Phi_{\ell}<0|\epsilon_{\ell})\approx 1/3\). And since \(\langle\Phi_{\ell}|\epsilon_{\ell}\rangle=\langle\Phi_{\ell}|\epsilon_{\ell},\Phi_{\ell}>0\rangle Pr(\Phi_{\ell}>0|\epsilon_{\ell})+\langle\Phi_{\ell}|\epsilon_{\ell},\Phi_{\ell}<0\rangle Pr(\Phi_{\ell}<0|\epsilon_{\ell})\) and the data already showed \(\langle\Phi_{\ell}|\epsilon_{\ell}\rangle\approx\epsilon_{\ell}\), the KRSH further implies that \(\langle\Phi_{\ell}|\epsilon_{\ell},\Phi_{\ell}<0\rangle\approx 3\,\epsilon_{\ell}-2\,\langle\Phi_{\ell}|\epsilon_{\ell},\Phi_{\ell}>0\rangle\). These predictions from KRSH are tested in Fig. 4, showing that \(\langle\Phi_{\ell}|\epsilon_{\ell},\Phi_{\ell}>0\rangle\approx 2\,\epsilon_{\ell}\) and \(\langle\Phi_{\ell}|\epsilon_{\ell},\Phi_{\ell}<0\rangle\approx-\epsilon_{\ell}\). Clearly KRSH holds even for the positive and negative regions separately. For completeness, we also show the conditional average of the traditional third-order longitudinal structure function, which under the assumption of isotropy is given by the 4/5-law, according to \(\langle\Phi_{\ell}^{(L)}|\epsilon_{\ell}\rangle=-(5/(4\ell))\langle[(\delta u_{j}\hat{n}_{j})^{3}]_{S_{\ell}}|\epsilon_{\ell}\rangle\).
Next, following Ref. [22] we examine the fluctuation relation from non-equilibrium thermodynamics, but now conditioned on various values of \(\epsilon_{\ell}\). Figure 5(a) shows the conditional PDF \(P(\Psi_{\ell}|\epsilon_{\ell})\) of the entropy generation rate \(\Psi_{\ell}=\Phi_{\ell}/k_{\ell}\) for \(\ell/\eta=45\), conditioned for various values of \(\epsilon_{\ell}\) ranging from \(\epsilon_{\ell}/\langle\epsilon\rangle=0.15\) to 4.2. As in [22], exponential tails are found, with steeper slopes on the negative side than on the positive one, and approximately twice as steep. Remarkably, when multiplying \(\Psi_{\ell}\) by the corresponding turn-over time-scale \(\tau_{\ell}=\epsilon_{\ell}^{-1/3}\ell^{2/3}\) where \(\epsilon_{\ell}\) is the value used to bin the data, excellent collapse is observed, see Fig. 5 (b). If the PDFs are approximated as pure exponentials, with slope \(\alpha_{-}\) for \(\Psi_{\ell}<0\) and \(\alpha_{+}\) for \(\Psi_{\ell}>0\), it is evident from Fig. 5(b) that \(\alpha_{+}\approx 1\) and \(\alpha_{-}\approx 2\). For such two-sided exponential PDFs, it is easy to show that the ratio of probabilities of negative over positive cascade events is simply \(Pr(\Psi_{\ell}<0)/Pr(\Psi_{\ell}>0)=\alpha_{+}/\alpha_{-}\), consistent with the 1:2 ratio discussed above, independent of \(\epsilon_{\ell}\). Finally, the FR can be tested by plotting \(\log[P(\Psi_{\ell}|\epsilon_{\ell})/P(-\Psi_{\ell}|\epsilon_{\ell})]\) versus \(\Psi_{\ell}\tau_{\ell}\) (see Fig. 6). The result shows good collapse and an approximately linear trend (especially at
Figure 3: Conditional averaged \(Z=\Phi_{\ell}^{q}\) (symbols) for the isotropic8192 (a) and isotropic1024 (b) datasets, and \(Z=W_{\ell}^{n}\) (dashed lines for the isotropic1024 dataset), for \(\ell=45\eta\), plotted as function of the conditioning variable \(\epsilon_{\ell}\). Results for \(q,n=1,2,3\) are shown in black, blue, and green respectively. All terms are normalized with \(\langle\epsilon\rangle\) and display linear trends with \(\epsilon_{\ell}\), consistent with the KRSH.
\(\Psi_{\ell}\tau_{\ell}>1\)), thus confirming the fluctuation relation for turbulence even when conditioning on different values of \(\epsilon_{\ell}\), and using \(\epsilon_{\ell}\) to establish the relevant turn-over time-scale. For the exponential approximation of the conditional PDFs, the slope in the FR plot is simply \(\alpha_{-}-\alpha_{+}\), which is nearly unity (as observed originally in [22]), and quite consistent with \(\alpha_{-}\approx 2\) while \(\alpha_{+}\approx 1\).
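The statements above about the two-sided exponential approximation can be checked in one line. Assuming the idealized form \(P(\Psi_{\ell}|\epsilon_{\ell})\propto e^{-\alpha_{+}\Psi_{\ell}\tau_{\ell}}\) for \(\Psi_{\ell}>0\) and \(P(\Psi_{\ell}|\epsilon_{\ell})\propto e^{\alpha_{-}\Psi_{\ell}\tau_{\ell}}\) for \(\Psi_{\ell}<0\), with a common prefactor, one finds
\[\frac{Pr(\Psi_{\ell}<0)}{Pr(\Psi_{\ell}>0)}=\frac{\int_{-\infty}^{0}e^{\alpha_{-}\Psi\tau_{\ell}}\,d\Psi}{\int_{0}^{\infty}e^{-\alpha_{+}\Psi\tau_{\ell}}\,d\Psi}=\frac{\alpha_{+}}{\alpha_{-}},\qquad\ln\frac{P(\Psi_{\ell}|\epsilon_{\ell})}{P(-\Psi_{\ell}|\epsilon_{\ell})}=(\alpha_{-}-\alpha_{+})\,\Psi_{\ell}\tau_{\ell}\quad(\Psi_{\ell}>0),\]
so that \(\alpha_{+}\approx 1\) and \(\alpha_{-}\approx 2\) reproduce both the measured 1:2 ratio of inverse to forward cascade events and the near-unit slope of the FR plot in Fig. 6.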
In summary, we examined the KRSH involving the averaged dissipation \(\epsilon_{\ell}\) in the context of an equation (GKHE) derived exactly from the Navier-Stokes equations, an equation in which \(\epsilon_{\ell}\) appears explicitly. Data from two DNS of forced isotropic turbulence, at intermediate and relatively high Reynolds numbers, were analyzed by explicitly evaluating local spherical volume and surface integrations at scales in the inertial range. Results provided strong support for the validity of KRSH for the cascade rate \(\Phi_{\ell}\), its moments, and also for moments of other terms appearing in the dynamical equation, i.e., the combined time rate of change and pressure work affecting local kinetic energy in the inertial range of turbulence. Furthermore, the data support a strong version of the KRSH when positive and negative cascade rates are considered separately, each of which scales in proportion to \(\epsilon_{\ell}\). Finally, the fluctuation relation from non-equilibrium thermodynamics is shown to hold to a good approximation, relating the probability of forward and backward cascade events using the local time-scale based on \(\epsilon_{\ell}\) and \(\ell\). Present results connecting KRSH directly to a dynamical equation derived from the Navier-Stokes equations (the GKHE) as well as to basic principles of non-equilibrium thermodynamics could help in developing improved theories and models of the turbulence cascade process. Additional questions arise, such as: what are the effects of time-averaging in defining the entropy generation rate \(\Psi_{\ell}\), what occurs when \(\ell\) approaches the limits of the inertial range, and do the results hold for flows with mean shear and walls?
**Acknowledgements:** We thank R. Hill for comments on an early draft of this work. The work is supported by NSF (CSSI-2103874) and the contributions from the JHTDB team are gratefully acknowledged.
|
2304.10674 | On properties and classification of a class of $4$-dimensional
$3$-Hom-Lie algebras with a nilpotent twisting map | The aim of this work is to investigate the properties and classification of
an interesting class of $4$-dimensional $3$-Hom-Lie algebras with a nilpotent
twisting map $\alpha$ and eight structure constants as parameters. Derived
series and central descending series are studied for all algebras in this class
and are used to divide it into five non-isomorphic subclasses. The levels of
solvability and nilpotency of the $3$-Hom-Lie algebras in these five classes
are obtained. Building up on that, all algebras of this class are classified up
to Hom-algebra isomorphism. Necessary and sufficient conditions for
multiplicativity of general $(n+1)$-dimensional $n$-Hom-Lie algebras as well as
for algebras in the considered class are obtained in terms of the structure
constants and the twisting map. Furthermore, for some algebras in this class,
it has been determined whether the terms of the derived and central descending
series are weak subalgebras, Hom-subalgebras, weak ideals or Hom-ideals. | Abdennour Kitouni, Sergei Silvestrov | 2023-04-20T23:28:56Z | http://arxiv.org/abs/2304.10674v1 | On properties and classification of a class of \(4\)-dimensional \(3\)-Hom-Lie algebras with a nilpotent twisting map
###### Abstract
The aim of this work is to investigate the properties and classification of an interesting class of \(4\)-dimensional \(3\)-Hom-Lie algebras with a nilpotent twisting map \(\alpha\) and eight structure constants as parameters. Derived series and central descending series are studied for all algebras in this class and are used to divide it into five non-isomorphic subclasses. The levels of solvability and nilpotency of the \(3\)-Hom-Lie algebras in these five classes are obtained. Building up on that, all algebras of this class are classified up to Hom-algebra isomorphism. Necessary and sufficient conditions for multiplicativity of general \((n+1)\)-dimensional \(n\)-Hom-Lie algebras as well as for algebras in the considered class are obtained in terms of the structure constants and the twisting map. Furthermore, for some algebras in this class, it has been determined whether the terms of the derived and central descending series are weak subalgebras, Hom-subalgebras, weak ideals or Hom-ideals.
Keywords:Hom-algebra, \(n\)-Hom-Lie algebra, classification
**2020 Mathematics Subject Classification:** 17B61,17A40,17A42,17B30
## 1 Introduction
Hom-Lie algebras and more general quasi-Hom-Lie algebras were introduced first by Hartwig, Larsson and Silvestrov in [51], where the general quasi-deformations
2307.09675 | Probing new physics through entanglement in diboson production | Pair production of heavy vector bosons is a key process at colliders: it
allows to test our understanding of the Standard Model and to explore the
existence of new physics through precision measurements of production rates and
differential distributions. New physics effects can be subtle and often require
observables specifically designed for their detection. In this study, we focus
on quantum information observables that characterise the spin states of the
final diboson system. We analyse concurrence bounds, purity, and Bell
inequalities for a bipartite qutrit system representing two massive gauge
bosons. Our findings show that quantum spin observables can serve as
complementary probes for heavy new physics as parametrised by higher
dimensional operators in the Standard Model effective field theory. In
particular, we find that these observables offer increased sensitivity to
operators whose contributions do not interfere with the Standard Model
amplitudes at the level of differential cross sections. | Rafael Aoude, Eric Madge, Fabio Maltoni, Luca Mantani | 2023-07-18T23:06:09Z | http://arxiv.org/abs/2307.09675v2 | # Probing new physics through entanglement in diboson production
###### Abstract
Pair production of heavy vector bosons is a key process at colliders: it allows to test our understanding of the Standard Model and to explore the existence of new physics through precision measurements of production rates and differential distributions. New physics effects can be subtle and often require observables specifically designed for their detection. In this study, we focus on quantum information observables that characterise the spin states of the final diboson system. We analyse concurrence bounds, purity, and Bell inequalities for a bipartite qutrit system representing two massive gauge bosons. Our findings show that quantum spin observables can serve as complementary probes for heavy new physics as parametrised by higher dimensional operators in the Standard Model effective field theory. In particular, we find that these observables offer increased sensitivity to operators whose contributions do not interfere with the Standard Model amplitudes at the level of differential cross sections.
## 1 Introduction
## 1 Introduction
Our current understanding of the fundamental interactions of elementary particles is based on relativistic quantum field theories (QFTs) built upon symmetry principles. Lorentz and Poincaré invariance, for example, not only dictate the possible particle content of the theory (in terms of irreducible group representations), but also strongly limit the possible form of their interactions. Charge conservation and, further, gauge invariance provide additional constraints and non-trivial connections between interactions among different particles. Finally, the requirement of renormalisability reduces the number of allowed interactions to a handful. The possible interactions in a renormalisable QFT are therefore very much constrained once the field content of the theory and its (gauge) symmetries are imposed. The most successful and famous example of a renormalisable QFT featuring a very limited number of interactions encoded in a simple Lagrangian, invariant with respect to the \(SU(3)\times SU(2)\times U(1)\) gauge symmetries, is the Standard Model (SM) of particle physics.
In addition to symmetries, other fundamental properties of QFTs, such as unitarity and positivity, have been shown to provide very important constraints on the form of scattering amplitudes, which can be obscure at the Lagrangian level. More recently, it has been suggested that, being at the core of quantum mechanics, the entanglement properties of a system could be used to provide constraints on the underlying dynamics. For example, it was observed that the requirement of maximal entanglement puts constraints on the form of the interactions in QED and the EW theory [1], while in Refs. [2; 3] a very interesting relation between minimisation of entanglement and enhanced symmetries was observed for low-energy QCD. These works focus on the entanglement generated by the evolution dictated by the \(S\)-matrix, which acts as a quantum logic gate between the initial and final states. A different approach consists in studying the pattern of entanglement of a given final state in a generic scattering amplitude. The simplest example is to consider the spin degrees of freedom, described by a correlation matrix (\(R\)-matrix) which is perturbatively computable in QFT, and to see how predictions change depending on the form of the interactions. Although different, the above two approaches allow one to explore the relation between symmetries and entanglement.
Recently, the authors of Ref. [4] have pointed out that the quantum information properties of the spin states of top-anti-top quark pairs at proton colliders should already be accessible in current data. Using two measures of entanglement, the concurrence and the Peres-Horodecki criterion, they identified two phase space regions featuring maximal entanglement: at threshold and at high \(p_{T}\). This has triggered a series of studies on Standard Model \(t\bar{t}\) production that have further elaborated on the experimental detection strategy [5; 6; 7; 8; 9]. Moreover, other observables have been explored, such as quantum steering and discord, which allow the top-anti-top spin correlations [10] to be organised in graded sets of quantum correlations characteristic of a two-qubit state:
\[\text{Spin correlations}\supseteq\text{Discord}\supseteq\text{Entanglement} \supseteq\text{Steering}\supseteq\text{Bell Inequalities}\,.\]
In Ref. [11], we have proposed to use quantum observables in \(t\bar{t}\) to search for physics beyond the Standard Model (BSM), _i.e._, to study the structure and properties of fundamental
interactions at very high scales. Working in the SM Effective Field Theory (SMEFT) framework, which allows one to "deform" the SM in a consistent way (_i.e._ compatible with the gauge symmetries and the particle content of the theory), we have calculated fully analytically at tree level the new-physics contributions to the concurrence proportional to the Wilson coefficients (with a linear and quadratic1 dependence). We have found that, in the SMEFT analysis of \(t\bar{t}\) production, higher-dimensional operators reduce the entanglement generated by the SM. The same conclusion was later corroborated at next-to-leading order (NLO) in \(\alpha_{S}\), confirming the expectation that loop effects do not drastically change the leading-order (LO) picture [12]. BSM effects in \(t\bar{t}\) have also been explored in Ref. [13].
Footnote 1: Strictly speaking, by quadratic we here mean effects from the square of linear EFT amplitudes. In this work, we do not consider double dimension-six insertions or linear dimension eight, which are formally at the same order in the EFT expansion, _i.e._\(\Lambda^{-4}\).
One of the reasons for interest in the top quark pair system is its simplicity: top quarks, being fermionic "bare" spin-1/2 states, form a bipartite qubit system. Beyond \(t\bar{t}\) production, other two-particle final states featuring spin correlations described by two qubits have been proposed, from \(\tau\tau\) to diphoton [13; 14].
Qubits can model most of the SM particles - fermions and massless bosons - with the exception of the Higgs scalar and the massive gauge bosons. Leaving out the Higgs boson, which has no spin, one notes that the \(W^{\pm}\) and \(Z\) bosons, being massive, are characterised by three polarisations and can therefore be described by qutrits. Barr et al. [15; 16; 17] initiated the quantum studies of final states involving massive vector bosons, first by introducing the qutrit formalism for entanglement at colliders and then exploring Higgs boson decays and diboson production, the latter mainly studied using a numerical approach. The same processes have been studied further in Refs. [18; 19; 20; 21]. Recently, entanglement in diboson production was also studied in the context of vector-boson fusion [22] and in decays of top pairs [23]. The latter reference also studies, for the first time, the detection of entanglement between a \(W\) boson and a top quark.
Quantifying entanglement through the concurrence \(\mathcal{C}\)[24; 25] is a challenging analytical task for qutrits, as it involves an optimisation procedure, and closed analytic expressions can only be obtained in special cases or configurations. It turns out, however, that lower and upper bounds for the concurrence can be obtained in closed form,
\[\mathcal{C}_{\rm LB}\leq\mathcal{C}(\rho)\leq\mathcal{C}_{\rm UB}, \tag{1}\]
where \(\mathcal{C}_{\rm LB}\) is the lower bound [26] and \(\mathcal{C}_{\rm UB}\) the upper bound [27], and that in some cases the bounds are so effective that they coincide with the actual values.
In this work we study the spin density matrix of diboson production, in the SM and in SMEFT, with the goal of understanding whether quantum observables may provide a better probe of new interactions than usual "classical" observables. This paper is organised as follows. In Section 2, we review the formalism used to study quantum information observables for spin-1 particles, which slightly differs from the one used for top quarks. This formalism is then applied to electroweak diboson production at present and future colliders. Their interactions in the SM and in the SMEFT are described in Section 3. We proceed by
studying perturbative unitarity and its relation with entanglement in Section 4. We finally present the results for diboson production at lepton and proton colliders in Section 5 and conclude in Section 6. Details on the density matrix coefficients are given in the ancillary Mathematica notebook accompanying the arXiv submission of this manuscript.
## 2 Qutrit formalism
The spin density matrix is a fundamental object in quantum mechanics, as its knowledge completely characterises the quantum system. In this context, quantum tomography is the idea of conducting experiments with the objective of determining the density matrix of the system under study. This objective was at the heart of top quark pair spin studies in Refs. [4; 5; 6; 7; 8; 10; 11; 12; 13]. In the aforementioned studies, several quantum information observables have been analysed, from spin correlations to quantum entanglement and tests of Bell inequalities, both in the SM and including effects from heavy new physics through SMEFT dimension-six operators.
### Spin density matrix and quantum observables
In the following we present the theoretical framework that is going to be used throughout the paper to describe the spin density matrix of a bipartite system consisting of two spin-1 massive particles, _i.e._, qutrits. The formalism employed closely follows that of Ref. [17]. While for spin-1/2 particles the measurement of the particle polarisation completely defines the spin density matrix, this is not the case anymore for spin-1 particles, as the polarisation vector determination is not enough to fully characterise the system. For a generic particle of spin \(s\), the Hilbert space has dimension \(d=2s+1\). The density matrix \(\rho\) is therefore a \(d\times d\) matrix with \(d^{2}-1\) free parameters (as \(\mathrm{Tr}[\rho]=1\)). This means that we can always decompose the generic one-particle spin density matrix with the generators of \(SU(d)\), _i.e._, the generalised Gell-Mann matrices:
\[\rho=\frac{1}{d}\mathbb{I}+\sum_{i=1}^{d^{2}-1}a_{i}\lambda_{i}\,. \tag{1}\]
As made explicit by the above decomposition, in order to fully characterise the quantum system one needs to determine the Bloch vector \(a_{i}\). In the case of spin-1/2 particles, the Bloch vector has dimension 3, the same as that of the spin operator \(\vec{S}\). This means that the density matrix can be recast in terms of the spatial components of the spin operator, and therefore its determination is in one-to-one correspondence with the quantum tomography of the system.
The situation is more complicated for particles of higher spins. For instance, in the case of massive spin-1 particles, the Bloch vector has dimension 8 and the measurement of the spin components of the particle are not sufficient anymore for the complete characterisation of the quantum state. It is however possible to express the spin density matrix in terms of a spin matrix representation. In particular, given the spin-1 matrices
\[S_{x}=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}0&1&0\\ 1&0&1\\ 0&1&0\end{array}\right),\quad S_{y}=\frac{1}{\sqrt{2}}\left(\begin{array}{ ccc}0&-\mathrm{i}&0\\ \mathrm{i}&0&-\mathrm{i}\\ 0&\mathrm{i}&0\end{array}\right),\quad S_{z}=\left(\begin{array}{ccc}1&0&0 \\ 0&0&0\\ 0&0&-1\end{array}\right)\,, \tag{2}\]
one can build a set of six operators
\[S_{\{ij\}}\equiv S_{i}S_{j}+S_{j}S_{i} \tag{3}\]
that together with the spin matrices allow us to decompose the spin density matrix in the form
\[\rho=\frac{1}{3}\mathbb{I}+\sum_{i=1}^{3}\alpha_{i}S_{i}+\sum_{i,j=1}^{3}\beta_{ij}S_{\{ij\}}\,. \tag{4}\]
Note that in this formalism not all of the coefficients are free parameters since some of these operators are not linearly independent from the identity matrix, _i.e._
\[S_{\{xx\}}+S_{\{yy\}}+S_{\{zz\}}=2\left(S_{x}^{2}+S_{y}^{2}+S_{z}^{2}\right)=4 \,\mathbb{I}\,. \tag{5}\]
In order for \(\rho\) to have unit trace we therefore have to impose the constraint
\[\sum_{i=1}^{3}\beta_{ii}=0\,. \tag{6}\]
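As a quick numerical cross-check of Eqs. (2)-(6), the short Python sketch below (our own illustration, not code from the original analysis) builds the spin-1 matrices and the symmetrised products \(S_{\{ij\}}\), and verifies the linear-dependence relation of Eq. (5) that leads to the trace constraint of Eq. (6).

```python
import numpy as np

# Spin-1 matrices of Eq. (2)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
spin = {"x": Sx, "y": Sy, "z": Sz}

def S_sym(i, j):
    """Symmetrised products S_{ij} = S_i S_j + S_j S_i of Eq. (3)."""
    return spin[i] @ spin[j] + spin[j] @ spin[i]

# Eq. (5): S_{xx} + S_{yy} + S_{zz} = 4 * identity, so the beta_{ii} in the
# decomposition of Eq. (4) are not all independent and must obey Eq. (6)
lhs = S_sym("x", "x") + S_sym("y", "y") + S_sym("z", "z")
print("Eq. (5) verified:", np.allclose(lhs, 4 * np.eye(3)))
```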
The generalisation to a bipartite system in this formalism is straightforward. The spin density matrix for a pair of qutrits, in the Gell-Mann decomposition, is given by
\[\rho=\frac{1}{9}\,\mathbb{I}\otimes\mathbb{I}+\frac{1}{3}\sum_{i=1}^{8}a_{i} \,\lambda_{i}\otimes\mathbb{I}+\frac{1}{3}\sum_{j=1}^{8}b_{j}\,\mathbb{I} \otimes\lambda_{j}+\sum_{i=1}^{8}\sum_{j=1}^{8}c_{ij}\,\lambda_{i}\otimes \lambda_{j}\,. \tag{7}\]
Similarly, one could express Eq. (7) in terms of the spin matrix representation, but we find the Gell-Mann decomposition more straightforward and neat to use.
The parameters \(a_{i}\), \(b_{j}\) and \(c_{ij}\) (called Fano coefficients) determine the angular distributions of the decay products, and characterise the interactions governing the decay. This feature allows us to perform the quantum tomography of the system experimentally by measuring the angular distributions of the decays and reconstructing the coefficients. The aim of this work is to first explore the properties of the density matrix based on observables2 and the conditions for entanglement, deferring the tomography studies to a later stage.
Footnote 2: By observables, we mean final states angular distributions after the gauge boson decays.
#### Concurrence
We state that a system is entangled if the concurrence \(\mathcal{C}(\rho)\) is non-zero. For bipartite qubit systems, this measure is easily evaluated from the eigenvalues of the matrix \(\omega=\sqrt{\sqrt{\rho}\,\tilde{\rho}\,\sqrt{\rho}}\), where \(\tilde{\rho}=(\sigma_{2}\otimes\sigma_{2})\rho^{*}(\sigma_{2}\otimes\sigma_{2})\). However, for more complicated bipartite systems, such as the two qutrits explored in this work, conditions for entanglement cannot be calculated analytically [16; 28; 29].
For higher-dimensionality mixed states, the concurrence is obtained by the method of convex roof extension [28]. For a given decomposition of the \(\rho\) matrix in pure states, _i.e._
\[\rho=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|\,,\qquad\sum_{i}p_{i}=1\,, \qquad p_{i}\geq 0\,, \tag{8}\]
the concurrence is defined as
\[\mathcal{C}(\rho)=\inf\left[\sum_{i}p_{i}c(|\psi_{i}\rangle)\right]\,, \tag{9}\]
where the infimum is taken over all possible ensembles \(\{p_{i},\psi_{i}\}\) for the decomposition in Eq. (8). The particular ensemble for which this infimum is reached is called optimal. The concurrence is then the average concurrence of the states in the optimal ensemble.3 The concurrence, however, cannot be calculated in closed form for systems higher than \(2\times 2\). In these cases, one can rely on lower and upper bounds, \(\mathcal{C}_{\rm LB}\) and \(\mathcal{C}_{\rm UB}\) respectively, to quantify the entanglement.
Footnote 3: It is clear from Eq. (9) that if we have a pure state, the concurrence will be calculable. This will be relevant later for \(WZ\) production.
A lower bound on the concurrence is given by [26]
\[(\mathcal{C}(\rho))^{2}\geq 2\max\left(0,\mathrm{Tr}\left[\rho^{2}\right]- \mathrm{Tr}\left[\rho_{A}^{2}\right],\mathrm{Tr}\left[\rho^{2}\right]-\mathrm{ Tr}\left[\rho_{B}^{2}\right]\right)\equiv\mathcal{C}_{\rm LB}^{2}\,, \tag{10}\]
where \(\rho_{A}=\mathrm{Tr}_{B}(\rho)\) and \(\rho_{B}=\mathrm{Tr}_{A}(\rho)\) are the reduced density matrices, obtained by tracing out one of the subsystems. The quantity \(\mathcal{C}_{\rm LB}(\rho)\) is a marker: a positive value tells us that the state is entangled, whereas a negative value leaves the test inconclusive. In particular, for the qutrit pair, we have
\[\mathcal{C}_{\rm LB}^{2}=-\frac{4}{9}+\max\left(-\frac{8}{3}\sum_{i=1}^{8}a_{i }^{2}+\frac{4}{3}\sum_{j=1}^{8}b_{j}^{2},\frac{4}{3}\sum_{i=1}^{8}a_{i}^{2}- \frac{8}{3}\sum_{j=1}^{8}b_{j}^{2}\right)+8\sum_{i,j=1}^{8}c_{ij}^{2}\,. \tag{11}\]
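To see how Eq. (11) arises from Eq. (10), note that the Fano decomposition of Eq. (7), together with \(\mathrm{Tr}[\lambda_{i}\lambda_{j}]=2\delta_{ij}\), gives \(\rho_{A}=\tfrac{1}{3}\mathbb{I}+\sum_{i}a_{i}\lambda_{i}\) and \(\rho_{B}=\tfrac{1}{3}\mathbb{I}+\sum_{j}b_{j}\lambda_{j}\), so that

\[\mathrm{Tr}\left[\rho_{A}^{2}\right]=\frac{1}{3}+2\sum_{i=1}^{8}a_{i}^{2}\,,\qquad\mathrm{Tr}\left[\rho_{B}^{2}\right]=\frac{1}{3}+2\sum_{j=1}^{8}b_{j}^{2}\,,\]

while \(\mathrm{Tr}[\rho^{2}]\) is the purity given in Eq. (16) below; inserting these expressions into Eq. (10) reproduces Eq. (11).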
One can also obtain an upper bound for the concurrence, given in Ref. [30] and recently explored also in Ref. [18],
\[(\mathcal{C}(\rho))^{2}\leq 2\min\,\left(1-\mathrm{Tr}[\rho_{A}^{2}],1-\mathrm{ Tr}[\rho_{B}^{2}]\right)\equiv\mathcal{C}_{\rm UB}^{2}\,, \tag{12}\]
which in terms of the Fano coefficients reads
\[\mathcal{C}_{\rm UB}^{2}=\frac{4}{3}-4\min\left(\sum_{i=1}^{8}a_{i}^{2},\sum_ {j=1}^{8}b_{j}^{2}\right)\,. \tag{13}\]
For a qutrit pair, the maximum value of the concurrence is obtained for a totally symmetric and entangled pure state,
\[|\Psi_{+}\rangle=\frac{1}{\sqrt{3}}\sum_{i=1}^{3}|i\rangle\otimes|i\rangle\,, \tag{14}\]
with \(\mathcal{C}(\rho)=2/\sqrt{3}\). This is different from a qubit pair, for which maximally entangled pure states have concurrence \(\mathcal{C}(\rho)=1\).
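To make these bounds concrete, the following minimal Python sketch (ours, purely illustrative) evaluates \(\mathcal{C}_{\rm LB}\) and \(\mathcal{C}_{\rm UB}\) of Eqs. (10) and (12) directly from a \(9\times 9\) two-qutrit density matrix via partial traces, and checks that both bounds indeed reach \(2/\sqrt{3}\) for the maximally entangled state \(|\Psi_{+}\rangle\) of Eq. (14).

```python
import numpy as np

def partial_traces(rho):
    """Reduced density matrices of a two-qutrit state (9x9 -> two 3x3)."""
    r = rho.reshape(3, 3, 3, 3)          # indices (i, j, i', j')
    rho_A = np.einsum("ijkj->ik", r)     # trace over the second qutrit
    rho_B = np.einsum("ijik->jk", r)     # trace over the first qutrit
    return rho_A, rho_B

def concurrence_bounds(rho):
    """Lower and upper bounds of Eqs. (10) and (12)."""
    rho_A, rho_B = partial_traces(rho)
    p, pA, pB = (np.trace(m @ m).real for m in (rho, rho_A, rho_B))
    c_lb = np.sqrt(max(0.0, 2 * max(p - pA, p - pB)))
    c_ub = np.sqrt(2 * min(1 - pA, 1 - pB))
    return c_lb, c_ub

# Maximally entangled two-qutrit state of Eq. (14): both bounds give 2/sqrt(3)
psi = sum(np.kron(np.eye(3)[i], np.eye(3)[i]) for i in range(3)) / np.sqrt(3)
rho_max = np.outer(psi, psi.conj())
print(concurrence_bounds(rho_max))       # -> (1.1547..., 1.1547...)
```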
#### Purity
When exploring the density matrices to assess entanglement, it is useful to know if the state is pure or mixed. This is quantified by the purity \(P\) given by
\[P(\rho)\equiv\text{tr}[\rho^{2}]\,, \tag{15}\]
which is one in the case of pure states and bounded from below by \(1/d\) for qudits. Given the Fano decomposition in Eq. (7), this means
\[P(\rho)=\frac{1}{9}+\frac{2}{3}\sum_{i=1}^{8}(a_{i}^{2}+b_{i}^{2})+4\sum_{i,j= 1}^{8}c_{ij}^{2}\,. \tag{16}\]
#### Bell inequalities
Another interesting aspect of quantum systems is the possibility of violating Bell inequalities [31]. This makes it possible to distinguish classical local realist theories from quantum mechanical ones. In particular, for a pair of qubits, the Clauser-Horne-Shimony-Holt (CHSH) [32] inequality holds
\[\mathcal{I}_{2}=E(a,b)-E\left(a,b^{\prime}\right)+E\left(a^{\prime},b\right)+ E\left(a^{\prime},b^{\prime}\right)\leq 2\,. \tag{17}\]
Quantum mechanics allows \(\mathcal{I}_{2}\) to have values higher than two.
Analogously, one can define an observable for pairs of qutrits, the Collins-Gisin-Linden-Massar-Popescu (CGLMP) inequality [33; 34]. By defining the quantum operator [35; 36]
\[\mathcal{B}=-\frac{2}{\sqrt{3}}\left(S_{x}\otimes S_{x}+S_{y}\otimes S_{y} \right)+\lambda_{4}\otimes\lambda_{4}+\lambda_{5}\otimes\lambda_{5}\,, \tag{18}\]
one finds the CGLMP inequality
\[\mathcal{I}_{3}=\text{Tr}[\rho\mathcal{B}]\leq 2\,. \tag{19}\]
The above equation is valid in the \(x-y\) plane, but it can be generalised to any direction in the 3-dimensional space and arbitrary bases in spin space. The generalised condition for violation of the Bell inequalities becomes
\[\left\langle\mathcal{B}\right\rangle_{\text{max}}=\max_{U,V}\left(\text{Tr} \left(\rho\left(U^{\dagger}\otimes V^{\dagger}\right)\mathcal{B}\left(U\otimes V \right)\right)\right)\geq 2\,, \tag{20}\]
where \(U,V\in U(3)\) are unitary matrices.
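The construction of Eqs. (18)-(20) can be made explicit with the following illustrative Python sketch (ours): it builds the Bell operator \(\mathcal{B}\), evaluates \(\mathcal{I}_{3}=\mathrm{Tr}[\rho\,\mathcal{B}]\) for the maximally entangled state of Eq. (14) in the computational basis, and performs a crude random scan over local unitaries as a stand-in for the maximisation in Eq. (20); the scan only provides a lower estimate of \(\langle\mathcal{B}\rangle_{\rm max}\).

```python
import numpy as np

sq2, sq3 = np.sqrt(2), np.sqrt(3)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / sq2
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex) / sq2
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)     # Gell-Mann lambda_4
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex)  # Gell-Mann lambda_5

# Bell operator of Eq. (18)
B = -(2 / sq3) * (np.kron(Sx, Sx) + np.kron(Sy, Sy)) + np.kron(l4, l4) + np.kron(l5, l5)
assert np.allclose(B, B.conj().T)   # B is Hermitian

# Maximally entangled two-qutrit state, Eq. (14)
psi = sum(np.kron(np.eye(3)[i], np.eye(3)[i]) for i in range(3)) / sq3
rho = np.outer(psi, psi.conj())
print("I3 in the computational basis:", np.trace(rho @ B).real)

# Largest eigenvalue of B = largest possible Tr[rho B] over all two-qutrit states
print("max eigenvalue of B:", np.linalg.eigvalsh(B).max())

# Crude lower estimate of <B>_max, Eq. (20), from random local unitaries
rng = np.random.default_rng(0)
best = -np.inf
for _ in range(5000):
    U = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]
    V = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]
    UV = np.kron(U, V)
    best = max(best, np.trace(rho @ UV.conj().T @ B @ UV).real)
print("random-scan estimate of <B>_max:", best, "(violation if > 2)")
```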
### EW boson production at colliders
We now turn our attention to electroweak production of spin-1 particles at colliders, _i.e._\(W\) and \(Z\) bosons. Following the approach of Ref. [11], we define the \(R\)-matrix from the matrix element amplitude:
\[R^{I}_{\alpha_{1}\alpha_{2},\beta_{1}\beta_{2}}\equiv\frac{1}{N_{a}N_{b}} \sum_{\begin{subarray}{c}\text{colors}\\ \text{a,b spins}\end{subarray}}\mathcal{M}^{*}_{\alpha_{2}\beta_{2}} \mathcal{M}_{\alpha_{1}\beta_{1}} \tag{21}\]
\[\text{with}\quad\mathcal{M}_{\alpha\beta}\equiv\left\langle V\left(k_{1},\alpha \right)\bar{V}\left(k_{2},\beta\right)|\mathcal{T}|a\left(p_{1}\right)b\left(p_ {2}\right)\right\rangle\,,\]
where \(I=ab=\bar{q}q^{\prime},\bar{e}e\) are the possible initial states in proton and lepton colliders at LO with \(N_{a,b}\) degrees of freedom, and \(V=W^{\pm},\,Z\) are the respective vector bosons. For all these diboson amplitudes, we can factor out the polarisation vectors, which carry the spin dependence of the \(R\)-matrix as
\[\mathcal{M}_{\alpha\beta}=\mathcal{M}_{\mu\nu}\;\varepsilon_{\alpha}^{\dagger \mu}(k_{1})\varepsilon_{\beta}^{\dagger\nu}(k_{2}) \tag{22}\]
where both polarisation tensors act as a map between the Lorentz tensor structures in \(\mathcal{M}_{\mu\nu}\) and the spin-space labelled by the index \(\{\alpha,\beta\}\) forming the \(9\times 9\) qutrit matrix. The \(R\)-matrix is in direct relation with the spin density matrix, _i.e._
\[R=\tilde{A}\,\mathbb{I}\otimes\mathbb{I}+\sum_{i=1}^{8}\tilde{a}_{i}\,\lambda _{i}\otimes\mathbb{I}+\sum_{j=1}^{8}\tilde{b}_{j}\,\mathbb{I}\otimes\lambda_ {j}+\sum_{i=1}^{8}\sum_{j=1}^{8}\tilde{c}_{ij}\,\lambda_{i}\otimes\lambda_{j}\,, \tag{23}\]
with the coefficient \(\tilde{A}\) encoding information on the differential cross section
\[\frac{\text{d}\sigma}{\text{d}\Omega}=\frac{9\beta}{64\pi^{2}\hat{s}}\tilde{ A}(\hat{s},\mathbf{k})\,, \tag{24}\]
where \(\mathbf{k}\) is the direction of the \(V\) boson, \(\hat{s}\) the squared invariant mass of the pair and \(\beta=\sqrt{1-4\frac{m_{V}^{2}}{\hat{s}}}\) the velocity of the \(V\) boson in the centre of mass frame. Each coefficient can be obtained by tracing with the corresponding element of the decomposition (up to a normalisation fixed by \(\mathrm{Tr}[\lambda_{i}\lambda_{j}]=2\delta_{ij}\)), e.g. \(\tilde{a}_{i}\propto\text{tr}[R\,\lambda_{i}\otimes\mathbb{I}]\) and similarly for \(\tilde{b}_{j}\) and \(\tilde{c}_{ij}\).
If we consider production at proton colliders, such as the LHC, the total \(R\)-matrix is given by a weighted sum of the various different partonic channels, _i.e._
\[R(\hat{s},\mathbf{k})=\sum_{I}L^{I}(\hat{s})R^{I}(\hat{s},\mathbf{k})\,, \tag{25}\]
with \(L_{I}\) the luminosity functions [37]. The relevant channels for diboson production at a proton collider are the quark annihilation ones. Note that, given that the initial state particles are not identical, both \(q\bar{q}\) and \(\bar{q}q\) channels are to be summed over. This can be also taken into account by a symmetrisation over the polar angle, since it can be shown that \(R^{\bar{q}q}(\hat{s},\theta)=R^{q\bar{q}}(\hat{s},\theta+\pi)\), _i.e._
\[R(\hat{s},\theta)=\sum_{q}L^{q\bar{q}}(\hat{s})(R^{q\bar{q}}(\hat{s},\theta)+ R^{q\bar{q}}(\hat{s},\theta+\pi))\,, \tag{26}\]
where we made explicit that there is no dependence on the azimuthal angle \(\phi\) of the vector \(\mathbf{k}\), given the cylindrical symmetry of the problem. It is clear from the expression that the \(R\)-matrix of a proton collider is by definition symmetric around \(\theta=\pi/2\). The \(R\)-matrix is then related to the spin density matrix simply by an overall normalisation factor, _i.e._\(\rho=R/\,\text{Tr}(R)\). In particular we obtain the decomposition of the spin density matrix in terms of the \(R\)-matrix Fano coefficients
\[a_{i}=\frac{\tilde{a}_{i}}{3\tilde{A}}\,,\qquad b_{i}=\frac{\tilde{b}_{i}}{3 \tilde{A}}\,,\qquad c_{ij}=\frac{\tilde{c}_{ij}}{9\tilde{A}}\,. \tag{27}\]
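For concreteness, the projections above and the normalisation in Eq. (27) can be implemented in a few lines; the sketch below is our own illustration (the factors \(1/6\) and \(1/4\) follow from \(\mathrm{Tr}[\lambda_{i}\lambda_{j}]=2\delta_{ij}\) and are not spelled out in the text).

```python
import numpy as np

def gell_mann():
    """The eight 3x3 Gell-Mann matrices."""
    l = [np.zeros((3, 3), dtype=complex) for _ in range(8)]
    l[0][0, 1] = l[0][1, 0] = 1                       # lambda_1
    l[1][0, 1], l[1][1, 0] = -1j, 1j                  # lambda_2
    l[2][0, 0], l[2][1, 1] = 1, -1                    # lambda_3
    l[3][0, 2] = l[3][2, 0] = 1                       # lambda_4
    l[4][0, 2], l[4][2, 0] = -1j, 1j                  # lambda_5
    l[5][1, 2] = l[5][2, 1] = 1                       # lambda_6
    l[6][1, 2], l[6][2, 1] = -1j, 1j                  # lambda_7
    l[7][:] = np.diag([1, 1, -2]) / np.sqrt(3)        # lambda_8
    return l

def fano_from_R(R):
    """Project a 9x9 R-matrix, Eq. (23), onto its Fano coefficients."""
    lam, I3 = gell_mann(), np.eye(3)
    A = np.trace(R).real / 9
    a = np.array([np.trace(R @ np.kron(l, I3)).real / 6 for l in lam])
    b = np.array([np.trace(R @ np.kron(I3, l)).real / 6 for l in lam])
    c = np.array([[np.trace(R @ np.kron(li, lj)).real / 4 for lj in lam] for li in lam])
    return A, a, b, c

def rho_coeffs(A, a, b, c):
    """Normalise to the density-matrix coefficients of Eq. (27)."""
    return a / (3 * A), b / (3 * A), c / (9 * A)
```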
In terms of these coefficients, the purity condition \(P(\rho)=1\) reads
\[36\tilde{A}^{2}=3\sum_{i=1}^{8}(\tilde{a}_{i}^{2}+\tilde{b}_{i}^{2})+2\sum_{i,j= 1}^{8}\tilde{c}_{i,j}^{2}\,. \tag{28}\]
We conclude this section by commenting on the possible effects of higher-order corrections, which in general will change the \(R\)-matrix and the expected entanglement. In specific cases, such as the \(ZZ\) and \(W^{+}W^{-}\) final states, IR/UV finite loop-induced processes could also contribute, e.g., \(gg\to ZZ/W^{+}W^{-}\). Even though suppressed, at the LHC gluon fusion production provides interesting information on Higgs properties. The framework presented here could be directly employed to perform a dedicated study. This is left for future investigations. More generally, for QCD or QED corrections, real and virtual contributions need to be considered together. In the case of inclusive predictions, the framework presented here can be straightforwardly applied by simply tracing out (including integration over the phase space) the unobserved degrees of freedom. Naively, one can expect some degree of decoherence, which should lower the entanglement compared to the leading-order \(R\)-matrix. On the other hand, final states characterised by resolvable emissions could be analysed on their own as three-body final states, where all vector bosons are measured and the \(R\)-matrix has dimensionality higher than \(9\times 9\). In this case, an extension of the framework presented here would be needed.
## 3 Diboson interactions
In this section, we discuss the relevant interactions for diboson production at colliders. In particular, we consider the case of the SM as well as its extension in the SMEFT, where higher dimensional operators will lead to the introduction of SM parameter shifts and new Lorentz structures. The objective is to present the structure of the EW couplings dictated by the SM symmetries and how heavy new physics could potentially alter it.
### SM couplings
As we are interested in both lepton and hadron colliders, we now discuss the couplings that EW bosons have with quarks and with electrons. In particular, we consider the processes \(e^{+}e^{-}\to W^{+}W^{-}\) and \(e^{+}e^{-}\to ZZ\) for a lepton collider, as well as \(pp\to W^{+}W^{-}\), \(pp\to ZZ\), and \(pp\to W^{+}Z\) for a hadron collider. In the latter case, the relevant partonic channels at LO are given by \(u\bar{u}\) and \(d\bar{d}\) for the neutral final states, while for \(W^{+}Z\) it is \(u\bar{d}\). We work in the 5-flavour scheme, so \(u\) comprises both the up and charm quarks, while \(d\) includes the down, strange and bottom quarks. We choose to discuss \(W^{+}Z\), in analogy with Ref. [18], as it is the dominant channel at a proton collider, but the kinematic features are similar for \(W^{-}Z\).
In Figs. 1 to 3 we show the topology of interest in terms of Feynman diagrams for all the processes considered. The properties of each process depend on the values and form of couplings, which, in the SM, are completely determined by gauge symmetry and the EWSB pattern. Specifically, the relevant couplings of the fermions are the coupling to the \(Z\) boson,
the coupling to the \(W\) boson and the coupling to the \(h\) boson. The latter is proportional to the mass of the fermions, so it will be highly suppressed and substantially irrelevant for the phenomenology at hadron or \(e^{+}e^{-}\) colliders. We will however discuss their role in the high energy limit in Section 4. Moreover, the processes are sensitive to the triple gauge coupling (TGC) between the \(W\) and the \(Z\) boson, as well as the coupling of the \(W\) with a photon. The coupling of EW bosons to the \(h\) boson is always accompanied by the Yukawa coupling of the fermions, therefore our sensitivity to that is negligible4. The couplings of the fermions to the EW bosons read
Footnote 4: However it could be interesting at a future Higgs factory at \(\sqrt{s}=m_{h}\)
\[\mathcal{L}_{\bar{f}fZ}\propto\left(g_{V}^{Z}\right)-\left(g_{A}^{Z}\right) \gamma_{5}\,,\qquad\text{with}\qquad g_{V}^{Z}=\frac{T_{3}}{2}-Q\sin^{2}\theta _{W}\,,\quad g_{A}^{Z}=\frac{T_{3}}{2}, \tag{3.1}\]
for the \(Z\) boson and
\[\mathcal{L}_{\bar{f}fW}\propto g_{W}\left(1-\gamma_{5}\right)\,, \tag{3.2}\]
for the \(W\) boson, with \(T_{3}=-1/2\) for down-type quarks and leptons, \(T_{3}=1/2\) for up-type quarks and the electric charge \(Q=-1,2/3,-1/3\) for \(e\), \(u\) and \(d\) respectively. In the SM, \(g_{V}^{Z}\approx-0.027,0.1,-0.17\) and \(g_{A}^{Z}\approx-0.25,0.25,-0.25\) for \(e\), \(u\) and \(d\) respectively and \(g_{W}=1/2\). Finally, the TGCs are specified by the gauge symmetries of the SM through the gauge Lagrangian
\[\mathcal{L}_{\text{gauge}}\;=-\frac{1}{4}W_{\mu\nu}^{a}W^{a\mu\nu}-\frac{1}{4} B_{\mu\nu}B^{\mu\nu}\,. \tag{3.3}\]
In the SM, the couplings are given by
\[g_{WW\gamma}=e\,,\qquad g_{WWZ}=e\cot\theta_{W}\,, \tag{3.4}\]
where the electric charge \(e\) and the trigonometric functions of the Weinberg angle can be determined in terms of the chosen EW input parameters.
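As a quick numerical check of the values quoted above, the snippet below (ours) evaluates these couplings using the \(m_W\)-scheme inputs listed later in Section 5; the tree-level relations \(\sin^{2}\theta_{W}=1-m_{W}^{2}/m_{Z}^{2}\) and \(e^{2}=4\sqrt{2}\,G_{F}m_{W}^{2}\sin^{2}\theta_{W}\) are our assumptions for the input-scheme conversion.

```python
import numpy as np

mW, mZ, GF = 80.377, 91.1876, 1.1663788e-5      # GeV, GeV, GeV^-2 (inputs of Section 5)

sw2 = 1 - mW**2 / mZ**2                         # assumed mW-scheme sin^2(theta_W)
e = np.sqrt(4 * np.sqrt(2) * GF * mW**2 * sw2)  # assumed tree-level electric charge

# Eq. (3.1): gV = T3/2 - Q sin^2(theta_W), gA = T3/2
fermions = {"e": (-0.5, -1.0), "u": (0.5, 2 / 3), "d": (-0.5, -1 / 3)}
for name, (T3, Q) in fermions.items():
    gV, gA = T3 / 2 - Q * sw2, T3 / 2
    print(f"{name}: gV = {gV:+.3f}, gA = {gA:+.3f}")
# prints gV ~ -0.027, +0.101, -0.176 and gA = -0.25, +0.25, -0.25,
# in line with the approximate values quoted in the text

# Eq. (3.4): triple gauge couplings g_WWgamma = e, g_WWZ = e * cot(theta_W)
print("g_WWgamma =", round(e, 3), " g_WWZ =", round(e * np.sqrt(1 - sw2) / np.sqrt(sw2), 3))
```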
### Modified interactions: SMEFT framework
We now discuss the effects of heavy new physics to the production of diboson at colliders. We do so within the framework of the SMEFT where the SM gauge symmetries are all preserved and the Higgs mechanism is realised linearly. The SMEFT is an extension of the SM in which higher order operators modify the SM interactions, characterised by a Lagrangian of the kind
\[\mathcal{L}_{\text{SMEFT}}=\mathcal{L}_{\text{SM}}+\sum_{n=1}^{N}\frac{c_{n}}{\Lambda^{2}}\mathcal{O}_{n}+\mathcal{O}\left(\frac{1}{\Lambda^{4}}\right)\,, \tag{3.5}\]

where \({\cal O}_{n}\) indicates a higher dimensional operator, \(c_{n}\) is the associated Wilson coefficient, a free parameter that in a bottom-up approach needs to be determined experimentally, and \(N\) is the number of operators at this order in the expansion.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Operator & Coefficient & Definition & \(95\,\%\) CL bounds \\ \hline \hline \multicolumn{4}{c}{two-fermion operators} \\ \hline \(\mathcal{O}_{\varphi u}\) & \(c_{\varphi u}\) & \(i\big{(}\varphi^{\dagger}\overset{\leftrightarrow}{D}_{\mu}\varphi\big{)} \big{(}\bar{u}\gamma^{\mu}u\big{)}\) & \([-0.17,0.14]\) \\ \hline \(\mathcal{O}_{\varphi d}\) & \(c_{\varphi d}\) & \(i\big{(}\varphi^{\dagger}\overset{\leftrightarrow}{D}_{\mu}\varphi\big{)} \big{(}\bar{d}\gamma^{\mu}d\big{)}\) & \([-0.07,0.09]\) \\ \hline \(\mathcal{O}_{\varphi q}^{(1)}\) & \(c_{\varphi q}^{(1)}\) & \(i\big{(}\varphi^{\dagger}\overset{\leftrightarrow}{D}_{\mu}\varphi\big{)} \big{(}\bar{q}\gamma^{\mu}\,q\big{)}\) & \([-0.06,0.22]\) \\ \hline \(\mathcal{O}_{\varphi q}^{(3)}\) & \(c_{\varphi q}^{(3)}\) & \(i\big{(}\varphi^{\dagger}\overset{\leftrightarrow}{D}_{\mu}\tau_{I}\varphi \big{)}\big{(}\bar{q}\gamma^{\mu}\,\tau^{I}q\big{)}\) & \([-0.21,0.05]\) \\ \hline \(\mathcal{O}_{\varphi e}\) & \(c_{\varphi e}\) & \(i\big{(}\varphi^{\dagger}\overset{\leftrightarrow}{D}_{\mu}\varphi\big{)} \big{(}\bar{e}\gamma^{\mu}e\big{)}\) & \([-0.21,0.26]\) \\ \hline \(\mathcal{O}_{\varphi l}^{(1)}\) & \(c_{\varphi l}^{(1)}\) & \(i\big{(}\varphi^{\dagger}\overset{\leftrightarrow}{D}_{\mu}\varphi\big{)} \big{(}\bar{l}\gamma^{\mu}l\big{)}\) & \([-0.11,0.13]\) \\ \hline \(\mathcal{O}_{\varphi l}^{(3)}\) & \(c_{\varphi l}^{(3)}\) & \(i\big{(}\varphi^{\dagger}\overset{\leftrightarrow}{D}_{\mu}\tau_{I}\varphi \big{)}\big{(}\bar{l}\gamma^{\mu}\tau^{I}l\big{)}\) & \([-0.21,0.05]\) \\ \hline \hline \multicolumn{4}{c}{bosonic operators} \\ \hline \(\mathcal{O}_{W}\) & \(c_{W}\) & \(\varepsilon_{IJK}W_{\mu\nu}^{I}W^{J,\nu\rho}W_{\rho}^{K,\mu}\), & \([-0.18,0.22]\) \\ \hline \(\mathcal{O}_{\varphi W}\) & \(c_{\varphi W}\) & \(\left(\varphi^{\dagger}\varphi-\frac{v^{2}}{2}\right)W_{I}^{\mu\nu}W_{\mu\nu}^ {I}\) & \([-0.15,0.30]\) \\ \hline \(\mathcal{O}_{\varphi B}\) & \(c_{\varphi B}\) & \(\left(\varphi^{\dagger}\varphi-\frac{v^{2}}{2}\right)B_{\mu\nu}B^{\mu\nu}\) & \([-0.11,0.11]\) \\ \hline \(\mathcal{O}_{\varphi WB}\) & \(c_{\varphi WB}\) & \((\varphi^{\dagger}\tau_{I}\varphi)B^{\mu\nu}W_{\mu\nu}^{I}\) & \([-0.17,0.27]\) \\ \hline \(\mathcal{O}_{\varphi D}\) & \(c_{\varphi D}\) & \((\varphi^{\dagger}D^{\mu}\varphi)^{\dagger}(\varphi^{\dagger}D_{\mu}\varphi)\) & \([-0.52,0.43]\) \\ \hline \hline \multicolumn{4}{c}{four-fermion operator} \\ \hline \(\mathcal{O}_{ll}\) & \(c_{ll}\) & \(\big{(}\bar{l}\gamma_{\mu}l\big{)}\big{(}\bar{l}\gamma^{\mu}l\big{)}\) & \([-0.16,0.02]\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Definition of the dimension-six SMEFT operators relevant for this analysis. The bounds assume a scale of \(\Lambda=1\,\text{TeV}\) and are taken from the global fit of Ref. [38] at order \(\mathcal{O}(\Lambda^{-4})\). The bound on \(c_{ll}\) comes from EWPO fits and here we quote the result from Ref. [39].
The operators are suppressed by the scale of new physics \(\Lambda\), assumed to be at least of the order of a TeV. This scale is what allows us to truncate the EFT series by means of its power counting. For this analysis, we focus on the SMEFT dim-6 operators [40; 41] at order \(1/\Lambda^{2}\) and neglect higher-order corrections. Note that we can only be sensitive to the ratio \(c_{n}/\Lambda^{2}\), and therefore we will absorb \(\Lambda^{2}\) into the definition of the Wilson coefficient for the rest of the paper, _i.e._
\[\frac{c_{n}}{\Lambda^{2}}\to c_{n}\,. \tag{10}\]
For the sake of simplicity, we assume flavour universality for the few operators that could, in principle, have a more involved structure. Results will be presented in the \(m_{W}\) input parameter scheme, where the experimentally determined EW parameters of choice are \(\{m_{W},m_{Z},m_{h},G_{F}\}\). In Table 1 we show the relevant CP-even operators for leading-order diboson production, both in lepton and proton colliders. We note that, when considering a massless initial state, \({\cal O}_{\varphi W}\) and \({\cal O}_{\varphi B}\), which modify the coupling of the EW bosons to the Higgs, do not enter the processes because of the aforementioned pairing with the Yukawa couplings. We will therefore not discuss these two operators further, but we list them for completeness.
The operators act in a multitude of ways, leading to different phenomenological consequences. In particular, some operators act by shifting the SM value of the coupling of the fermions to the EW bosons. Specifically we have
\[\delta g_{V}^{Z}(e)= \left(\sin^{2}\theta_{W}-\frac{1}{4}\right)\delta g_{Z}-\delta s _{\theta}^{2}-\frac{c_{\varphi l}^{(3)}+c_{\varphi e}+c_{\varphi l}^{(1)}}{4 \sqrt{2}G_{F}}\,,\] \[\delta g_{A}^{Z}(e)= \frac{\delta g_{Z}}{4}-\frac{c_{\varphi l}^{(3)}-c_{\varphi e}+c _{\varphi l}^{(1)}}{4\sqrt{2}G_{F}}\,,\] \[\delta g_{V}^{Z}(u)= \left(\frac{1}{4}-\frac{2\sin^{2}\theta_{W}}{3}\right)\delta g_{Z }+\frac{2}{3}\delta s_{\theta}^{2}-\frac{c_{\varphi q}^{(-)}+c_{\varphi u}}{4 \sqrt{2}G_{F}}\,, \tag{11}\] \[\delta g_{A}^{Z}(u)= \frac{\delta g_{Z}}{4}-\frac{c_{\varphi q}^{(-)}-c_{\varphi u}}{4 \sqrt{2}G_{F}}\,,\] \[\delta g_{V}^{Z}(d)= \left(\frac{\sin^{2}\theta_{W}}{3}-\frac{1}{4}\right)\delta g_{Z }-\frac{1}{3}\delta s_{\theta}^{2}-\frac{2c_{\varphi q}^{(3)}+c_{\varphi d}+c _{\varphi q}^{(-)}}{4\sqrt{2}G_{F}}\,,\] \[\delta g_{A}^{Z}(d)= \frac{\delta g_{Z}}{4}-\frac{2c_{\varphi q}^{(3)}-c_{\varphi d}+c _{\varphi q}^{(-)}}{4\sqrt{2}G_{F}}\,,\]
where we defined \(c_{\varphi q}^{(-)}=c_{\varphi q}^{(1)}-c_{\varphi q}^{(3)}\), which is the combination of Wilson coefficients that modifies the coupling of up-type quarks to the \(Z\) boson. The variations of the couplings
are written as a function of the quantities
\[\begin{split}\delta g_{Z}&=-\frac{4c_{\varphi l}^{(3)}- 2c_{ll}+c_{\varphi D}}{4\sqrt{2}G_{F}}\,,\\ \delta s_{\theta}^{2}&=\frac{c_{\varphi D}\,m_{W}^{2 }}{2\sqrt{2}G_{F}\,m_{Z}^{2}}+\frac{c_{\varphi WB}\,m_{W}\sqrt{1-\frac{m_{W}^{ 2}}{m_{Z}^{2}}}}{\sqrt{2}G_{F}\,m_{Z}}\,,\end{split} \tag{3.8}\]
which are SMEFT induced universal shifts specific to the EW input parameter scheme, see Ref. [39] for more details. For the coupling to the \(W\) boson, we have
\[\begin{split}\delta g_{W}(e)&=\frac{c_{\varphi l}^{(3 )}}{2\sqrt{2}G_{F}}-\frac{\delta G_{F}}{2\sqrt{2}}\,,\\ \delta g_{W}(q)&=\frac{c_{\varphi q}^{(3)}}{2\sqrt{2 }G_{F}}-\frac{\delta G_{F}}{2\sqrt{2}}\,,\end{split} \tag{3.9}\]
with the fractional shift to the Fermi constant originating from the muon decay measurement given by
\[\delta G_{F}=\frac{2c_{\varphi l}^{(3)}-c_{ll}}{2G_{F}}\,. \tag{3.10}\]
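The shifts above are straightforward to evaluate numerically; the following sketch (ours, with the Wilson coefficients understood in \(\mathrm{GeV}^{-2}\) after absorbing \(1/\Lambda^{2}\)) transcribes Eqs. (3.8)-(3.10) together with the \(W\)-coupling shifts of Eq. (3.9).

```python
import numpy as np

mW, mZ, GF = 80.377, 91.1876, 1.1663788e-5   # GeV, GeV, GeV^-2

def universal_shifts(c_phil3=0.0, c_ll=0.0, c_phiD=0.0, c_phiWB=0.0, c_phiq3=0.0):
    """Universal input-scheme shifts of Eqs. (3.8)-(3.10) and the W-coupling
    shifts of Eq. (3.9); Wilson coefficients in GeV^-2 (1/Lambda^2 absorbed)."""
    sqrt2 = np.sqrt(2)
    dgZ = -(4 * c_phil3 - 2 * c_ll + c_phiD) / (4 * sqrt2 * GF)
    dsw2 = (c_phiD * mW**2 / (2 * sqrt2 * GF * mZ**2)
            + c_phiWB * mW * np.sqrt(1 - mW**2 / mZ**2) / (sqrt2 * GF * mZ))
    dGF = (2 * c_phil3 - c_ll) / (2 * GF)
    dgW_e = c_phil3 / (2 * sqrt2 * GF) - dGF / (2 * sqrt2)
    dgW_q = c_phiq3 / (2 * sqrt2 * GF) - dGF / (2 * sqrt2)
    return {"dgZ": dgZ, "dsw2": dsw2, "dGF": dGF, "dgW_e": dgW_e, "dgW_q": dgW_q}

# Example: c_phiD = -0.5 TeV^-2, within the 95% CL bound quoted in Table 1
print(universal_shifts(c_phiD=-0.5e-6))
```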
Finally, the operators can also alter the TGCs. The modified Lagrangian reads
\[\frac{\mathcal{L}_{WWV}}{-ig_{WWV}}=g_{1}^{V}\left(W_{\mu\nu}^{+}W^{-\mu}V^{ \nu}-W_{\mu}^{+}V_{\nu}W^{-\mu\nu}\right)+\kappa_{V}W_{\mu}^{+}W_{\nu}^{-}V^{ \mu\nu}+\frac{i\delta\lambda_{V}}{m_{W}^{2}}V^{\mu\nu}W_{\nu}^{+\rho}W_{\rho \mu}^{-}\,, \tag{3.11}\]
where \(V=Z,\gamma\) and we have defined \(V_{\mu\nu}=\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu}\) and \(W_{\mu\nu}^{\pm}=\partial_{\mu}W_{\nu}^{\pm}-\partial_{\nu}W_{\mu}^{\pm}\). The dimension-6 SMEFT operators introduce a dependence on the couplings \(g_{1}^{V}=1+\delta g_{1}^{V}\) and \(\kappa_{V}=1+\delta\kappa_{V}\), _i.e._
\[\begin{split}\delta g_{1}^{\gamma}&=\frac{1}{4 \sqrt{2}G_{F}}\left(c_{\varphi D}\frac{m_{W}^{2}}{m_{W}^{2}-m_{Z}^{2}}-4c_{ \varphi l}^{(3)}+2c_{ll}-c_{\varphi WB}\frac{4m_{W}}{\sqrt{m_{Z}^{2}-m_{W}^{2} }}\right)\,,\\ \delta g_{1}^{Z}&=\frac{1}{4\sqrt{2}G_{F}}\left(c_ {\varphi D}-4c_{\varphi l}^{(3)}+2c_{ll}+4\frac{m_{Z}}{m_{W}}\sqrt{1-\frac{m_{ W}^{2}}{m_{Z}^{2}}}c_{\varphi WB}\right)\,,\\ \delta\kappa_{\gamma}&=\frac{1}{4\sqrt{2}G_{F}}\left( c_{\varphi D}\frac{m_{W}^{2}}{m_{W}^{2}-m_{Z}^{2}}-4c_{\varphi l}^{(3)}+2c_{ll} \right)\,,\\ \delta\kappa_{Z}&=\frac{1}{4\sqrt{2}G_{F}}\left(c_ {\varphi D}-4c_{\varphi l}^{(3)}+2c_{ll}\right)\,.\end{split} \tag{3.12}\]
Note that while \(c_{\varphi l}^{(3)}\), \(c_{ll}\) and \(c_{\varphi D}\) universally shift the TGC of the SM, \(c_{\varphi WB}\) does not contribute to \(\kappa_{V}\) but only to \(g_{1}^{V}\), changing the symmetrical structure of the SM interactions. Only one operator in Table 1 leads to a new Lorentz structure by generating a term proportional to \(\delta\lambda_{V}\), with a dependence given by
\[\begin{split}\delta\lambda_{\gamma}&=-6\sin\theta _{W}\frac{m_{W}^{2}}{g_{WW\gamma}}c_{W}\,,\\ \delta\lambda_{Z}&=-6\cos\theta_{W}\frac{m_{W}^{2}}{g _{WWZ}}c_{W}\,.\end{split} \tag{3.13}\]
\({\cal O}_{W}\) is therefore of particular interest, since it modifies the interactions among EW bosons in a way that could potentially induce different helicity structures. This is of relevance for this study, given that the density matrix of the diboson system could markedly change if the EW bosons are produced in configurations not present in the SM.
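To illustrate how the Wilson coefficients feed into the anomalous TGCs, the sketch below (ours) transcribes Eqs. (3.12) and (3.13); as before, the Wilson coefficients are understood in \(\mathrm{GeV}^{-2}\), and the tree-level \(m_{W}\)-scheme relations for \(e\) and \(\theta_{W}\) are our assumptions.

```python
import numpy as np

mW, mZ, GF = 80.377, 91.1876, 1.1663788e-5        # GeV, GeV, GeV^-2
sw2 = 1 - mW**2 / mZ**2                           # assumed mW-scheme sin^2(theta_W)
sw, cw = np.sqrt(sw2), np.sqrt(1 - sw2)
e = np.sqrt(4 * np.sqrt(2) * GF * mW**2 * sw2)    # assumed tree-level electric charge

def tgc_shifts(c_phiD=0.0, c_phil3=0.0, c_ll=0.0, c_phiWB=0.0, c_W=0.0):
    """Anomalous TGC shifts of Eqs. (3.12)-(3.13); Wilson coefficients in GeV^-2
    (1/Lambda^2 absorbed, e.g. 0.2 TeV^-2 = 0.2e-6 GeV^-2)."""
    pref = 1 / (4 * np.sqrt(2) * GF)
    dg1_gamma = pref * (c_phiD * mW**2 / (mW**2 - mZ**2) - 4 * c_phil3 + 2 * c_ll
                        - 4 * mW / np.sqrt(mZ**2 - mW**2) * c_phiWB)
    dg1_Z = pref * (c_phiD - 4 * c_phil3 + 2 * c_ll
                    + 4 * mZ / mW * np.sqrt(1 - mW**2 / mZ**2) * c_phiWB)
    dkappa_gamma = pref * (c_phiD * mW**2 / (mW**2 - mZ**2) - 4 * c_phil3 + 2 * c_ll)
    dkappa_Z = pref * (c_phiD - 4 * c_phil3 + 2 * c_ll)
    g_WWgamma, g_WWZ = e, e * cw / sw             # SM TGCs of Eq. (3.4)
    dlambda_gamma = -6 * sw * mW**2 / g_WWgamma * c_W
    dlambda_Z = -6 * cw * mW**2 / g_WWZ * c_W
    return {"dg1_gamma": dg1_gamma, "dg1_Z": dg1_Z, "dkappa_gamma": dkappa_gamma,
            "dkappa_Z": dkappa_Z, "dlambda_gamma": dlambda_gamma, "dlambda_Z": dlambda_Z}

# Example: c_W = 0.2 TeV^-2, within the 95% CL bound quoted in Table 1
print(tgc_shifts(c_W=0.2e-6))
```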
## 4 Perturbative unitarity and entanglement
Perturbative unitarity and the role of the Higgs boson in the SM can be effectively studied by considering multi-boson longitudinal scattering amplitudes and their cross-sections [42]. Conservation of probability in QFT requires the unitarity of the \(S\)-matrix, which imposes bounds on the energy dependence of the corresponding terms in the perturbative expansion. The scattering amplitudes and cross-sections involving weak bosons violate the perturbative bound if gauge invariance is not respected or Higgs-mediated interactions are not included. How these violations appear and then cancel among different contributions in usual observables has been studied extensively. However, to the best of our knowledge, the perturbative unitarity constraints on entanglement and quantum information observables have not been explored. Before proceeding, we note that for this study, and at variance with what was done in the previous sections, here we keep the fermion masses non-zero, to allow non-vanishing Higgs couplings and to investigate their role too.
It is possible to express 2\(\rightarrow\)2 scattering amplitudes in powers of the normalised energy \(\sqrt{s}/(2m_{V})\):
\[{\cal M}={\cal M}^{(2)}\left(\frac{\sqrt{s}}{2m_{V}}\right)^{2}+{\cal M}^{(1 )}\left(\frac{\sqrt{s}}{2m_{V}}\right)+{\cal M}^{(0)}+{\cal O}\left(\frac{2m_ {V}}{\sqrt{s}}\right)\,, \tag{4.1}\]
noting that this expansion is relevant also for the cross-section and the \(R\)-matrix [42]. Perturbative unitarity requires the bad high-energy behaviour to cancel and the \(2\to 2\) amplitude to behave at most as a constant in the high-energy limit (\(s\sim-t\sim-u\gg m_{V}^{2}\))
\[\lim_{\frac{\sqrt{s}}{2m_{V}}\rightarrow\infty}\hskip-10.0pt{\cal M}={\cal M} ^{(0)}, \tag{4.2}\]
_i.e._, any amplitude respecting perturbative unitarity should have vanishing \({\cal M}^{(2)}\) and \({\cal M}^{(1)}\) at high energies.
### \(WW\) final state
Let us consider \(e^{+}e^{-}\to W^{+}W^{-}\). We have four amplitudes contributing to the process, depicted in Fig. 1: \({\cal M}_{\gamma}\) and \({\cal M}_{Z}\) with an \(s\)-channel photon or \(Z\)-boson (left), \({\cal M}_{\nu}\) with a \(t\)-channel neutrino (centre), and \({\cal M}_{h}\) with a Higgs boson in the \(s\)-channel (right). The sum of the \({\cal M}_{\gamma}\), \({\cal M}_{Z}\), and \({\cal M}_{\nu}\) amplitudes grows with energy, displaying a unitarity violating behaviour. At high energies we find
\[{\cal M}_{\gamma+Z+\nu}(\pm\pm\to 00)\sim-{\cal M}_{h}(\pm\pm\to 0 0)\sim-\frac{e^{2}}{2\sin^{2}\theta_{W}}\frac{m_{e}}{m_{W}}\;\frac{\sqrt{s}}{2 m_{W}}\;. \tag{4.3}\]
However, the Higgs exchange \({\cal M}_{h}\) has the same behaviour in the high-energy limit and exactly cancels the growth of the amplitude, recovering perturbative unitarity. As one might expect, this cancellation occurs only for the longitudinal final states, since the other cases do not grow with energy. At the cross-section level, a similar cancellation occurs (see Ref. [42] for an easy-to-access review).
Let us now turn our attention to the Fano coefficients. As one might expect, the cancellation for the \(\tilde{A}\) is the same as that happening at the cross-section level. At high energy and keeping the electron mass finite, the squared contribution of the sum of the \(\gamma\), \(Z\) and \(\nu\) mediated diagrams to the Fano coefficient is
\[\tilde{A}_{|\gamma+Z+\nu|^{2}}=\frac{e^{4}}{288\sin^{4}\theta_{W}}\frac{m_{e}^ {2}}{m_{W}^{2}}\frac{s}{m_{W}^{2}}+{\cal O}(s^{0})\,. \tag{4.4}\]
Adding the contribution of the Higgs diagram, we obtain three more terms,
\[\tilde{A}_{|h|^{2}}\sim-\tilde{A}_{(\gamma+Z+\nu)^{*}h}\sim-\tilde{A}_{(\gamma +Z+\nu)h^{*}}\sim\tilde{A}_{|\gamma+Z+\nu|^{2}}\quad\Longrightarrow\quad \tilde{A}_{|\gamma+Z+\nu+h|^{2}}\sim{\cal O}(s^{0})\,, \tag{4.5}\]
where now, due to the sign of the amplitudes, the interference terms of \((\gamma+Z+\nu)\) with the Higgs cancel with the matrix-element-squared contributions, resulting in a \({\cal O}(s^{0})\) dependence. This cancellation happens similarly for the \(\tilde{a}_{i}\), \(\tilde{b}_{i}\) and \(\tilde{c}_{ij}\) coefficients. Specifically, we obtain the following non-zero results at order \(s/m_{W}^{2}\) for the \((\gamma+Z+\nu)\) diagrams
\[\tilde{a}_{3,|\gamma+Z+\nu|^{2}}\sim-\frac{1}{\sqrt{3}}\tilde{a}_{8,|\gamma+Z +\nu|^{2}}\sim\frac{e^{4}}{288\sin^{4}\theta_{W}}\frac{m_{e}^{2}}{m_{W}^{2}} \frac{s}{m_{W}^{2}}+{\cal O}(s^{0})\,, \tag{4.6}\]
which in the case of the third component cancels when summed with the Higgs-mediated-squared and interference contributions, \(\tilde{a}_{3,(\gamma+Z+\nu)^{*}h}+\tilde{a}_{3,(\gamma+Z+\nu)h^{*}}+\tilde{a}_ {3,|h|^{2}}\), and likewise for the eighth component. The same expressions hold when considering \(\tilde{b}_{i}\). With respect to the correlation matrix \(\tilde{c}_{ij}\), we have that for the \(\gamma+Z+\nu\) diagrams
\[\begin{split}\tilde{c}_{33,|\gamma+Z+\nu|^{2}}&=- \frac{1}{\sqrt{3}}\tilde{c}_{38,|\gamma+Z+\nu|^{2}}=-\frac{1}{\sqrt{3}}\tilde{ c}_{83,|\gamma+Z+\nu|^{2}}=\frac{1}{3}\tilde{c}_{88,|\gamma+Z+\nu|^{2}}\\ &=\frac{e^{4}}{288\sin^{4}\theta_{W}}\frac{m_{e}^{2}}{m_{W}^{2}} \frac{s}{m_{W}^{2}}+{\cal O}(s^{0})\,.\end{split} \tag{4.7}\]
Adding the Higgs contribution leads to the same cancellation of all the coefficients at \(s/m_{W}^{2}\) order, and perturbative unitarity is restored in the \(R\)-matrix. It is interesting to note that this cancellation occurs only for the coefficients of the third and eighth Gell-Mann matrix, while the others do not exhibit an energy growing behaviour. This occurs because in the high energy limit the \(R\)-matrix is dominated by the longitudinal polarisations and these are the Fano coefficients sensitive to those.
### \(ZZ\) final state
We now turn to \(ZZ\) production. As in the previous case, let us focus on the lepton-initiated channel. The story for \(ZZ\) is similar but now we only have two types of diagrams, \(t\) and \(u\)-channel mediated by electrons and the Higgs \(s\)-channel. The cancellation in the amplitude
occurs for the same helicities as before
\[\mathcal{M}_{e}(\pm\pm\to 00)\sim\mathcal{M}_{h}(\pm\pm\to 00)\sim-e^{2}\csc^{2}(2\theta_{W})\,\frac{m_{e}}{2m_{Z}}\,\frac{\sqrt{s}}{2m_{Z}}+\mathcal{O}(s^{0})\,, \tag{4.8}\]
where \(\mathcal{M}_{e}\) represents both \(t\) and \(u\) channels. At the \(R\)-matrix level, the cancellation happens in a similar fashion to the \(WW\) case. The \(\tilde{A}\) coefficient for the electron diagrams,
\[\tilde{A}_{e}=\frac{e^{4}}{288\cos^{4}\theta_{W}\sin^{4}\theta_{W}}\frac{m_{e} ^{2}}{m_{Z}^{2}}\frac{s}{m_{Z}^{2}}+\mathcal{O}(s^{0})\,, \tag{4.9}\]
cancels when adding the Higgs diagram. For the coefficients \(\tilde{b}_{i}\) and \(\tilde{c}_{ij}\), we again only have contributions to the third and eighth components, and the cancellations repeat the previous pattern.
### Non-interference EFT effects
The study of perturbative unitarity fits well in renormalisable theories, where one does not expect growing amplitudes in the high-energy limit. The discussion is different for EFTs, in which higher-dimensional operators are included and the amplitudes are allowed to grow with energy. This is actually a feature used in SMEFT analyses to probe deviations from the SM in the tails of distributions. The growth can happen due to the particular new Lorentz structure of the operators or because the SM cancellations are spoiled. To understand the high-energy limit of the \(R\)-matrix and spin-related observables, let us first look at the amplitudes.
At high energy, the SM and the SMEFT induce a specific helicity pattern for diboson production. In Table 2, as a representative example, we report the helicity amplitudes for \(e^{+}e^{-}\to W^{+}W^{-}\). The helicity states are specified by the notation \(\mathcal{M}(\lambda_{1}\lambda_{2}|\alpha\beta)\), where \(\lambda_{1},\lambda_{2}\) are the helicities of the initial-state electrons and \(\alpha,\beta\) are the helicities of the final-state EW bosons. We retain contributions up to order \(\mathcal{O}(x^{0})\) (see also Refs. [43] and [44]).
It is clear that the SM and the \(\mathcal{O}_{W}\) operator induce different helicity amplitudes. When computing a cross-section, where the on-shell final states are the gauge bosons, this leads to a vanishing EFT linear correction due to the non-interference of the amplitudes. The cancellation for massless particles can be proven by applying helicity selection rules, see Ref. [45]. However, one can show that the interference can be recovered by exploiting the angular distributions of the decay products, e.g. considering the full process \(e^{+}e^{-}\to VV\to 4f\), with \(f\) either a lepton or a quark. For in-depth phenomenological studies of this aspect we refer the reader to the literature [46; 47; 48; 49; 50; 51; 52].
In the \(R\)-matrix formulation, this translates into the fact that the diagonal terms at the linear EFT level vanish, while the off-diagonal terms allow for a resurrection of the interference between the SM and the operator \(\mathcal{O}_{W}\). In the high energy and massless limit, the Fano coefficient \(\tilde{A}\) has a vanishing linear EFT contribution
\[\tilde{A}(\mathcal{O}_{W})\sim 0\, \tag{4.10}\]
but the other Fano coefficients can be different from zero and potentially allow for increased sensitivity. Defining \(x=\sqrt{s}/(2m_{V})\) and in the limit \(m_{W}\sim m_{Z}\sim m_{V}\) we have
\[\tilde{a}_{1}(\mathcal{O}_{W})\simeq\tilde{b}_{1}(\mathcal{O}_{W})\simeq \bar{c}_{W}\,2^{5/4}\,x\;\cos^{4}(\theta/2)(\cos\theta+3)\csc\theta\,, \tag{4.11a}\]
\[\tilde{a}_{4}({\cal O}_{W})\simeq\tilde{b}_{4}({\cal O}_{W})\simeq- \bar{c}_{W}\,2^{3/4}(\cos\theta((4x^{2}-3)\cos\theta+4x^{2}+1)+2)\,, \tag{4.11b}\] \[\tilde{a}_{6}({\cal O}_{W})\simeq\tilde{b}_{6}({\cal O}_{W})\simeq \bar{c}_{W}\,2^{1/4}\,x\,\sin^{2}(\theta/2)\sin\theta\,, \tag{4.11c}\]
with \(\bar{c}_{W}=c_{W}\,G_{F}^{3/2}\,m_{V}^{5}\). The spin-spin Fano coefficients \(\tilde{c}_{ij}(=\tilde{c}_{ji})\) for the operator \({\cal O}_{W}\) are given by
\[\tilde{c}_{13}\simeq 3\,\bar{c}_{W}\cdot 2^{3/4}\cos^{2}(\theta/2) (3\cos\theta+1)\cot(\theta/2)\,x \tag{4.12a}\] \[\tilde{c}_{14}\simeq-\tilde{c}_{25}\simeq\tilde{c}_{46}\simeq- \tilde{c}_{57}\simeq-3\,\bar{c}_{W}\cdot 2^{3/4}\sin\theta(1+\cos\theta)\,x\,,\] (4.12b) \[\tilde{c}_{16}\simeq\tilde{c}_{27}\simeq 3\,\bar{c}_{W}\cdot 2^{- 5/4}\sin^{2}(\theta/2)((3-4x^{2})\cos\theta-4x^{2}+1)\,,\] (4.12c) \[\tilde{c}_{18}\simeq\bar{c}_{W}\,\sqrt{3}\,\cdot 2^{-3/4}\cos^{2 }(\theta/2)(\cos\theta+2)\cot(\theta/2)\,x\,,\] (4.12d) \[\tilde{c}_{35}\simeq\bar{c}_{W}\,\sqrt{3}\,\cdot 2^{-3/4}\sin^{2 }(\theta/2)\sin\theta\,x\,,\] (4.12e) \[\tilde{c}_{48}\simeq\sqrt{3}\,\bar{c}_{W}\cdot 2^{-9/4}(8(1-x^{2}) \cos\theta+(4x^{2}-3)\cos(2\theta)-5(4x^{2}-1))\,,\] (4.12f) \[\tilde{c}_{68}\simeq-5\sqrt{3}\,\bar{c}_{W}\cdot 2^{-3/4}\sin^{2 }(\theta/2)\sin\theta\,. \tag{4.12g}\]
The omitted ones vanish, showing that this matrix is rather sparse and that several of the energy-growing terms are closely related to each other. Here, we also note that the coefficients relevant for the perturbative unitarity cancellation vanish in the non-interference analysis, as they pertain to the longitudinal polarisations, while \({\cal O}_{W}\) induces an energy-growing behaviour in transverse helicity configurations.
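To get a feel for these expressions, the short sketch below (our own illustration) evaluates the linear \(\mathcal{O}_{W}\) contributions of Eqs. (4.11a)-(4.11c) in the high-energy, \(m_{W}\sim m_{Z}\sim m_{V}\) limit, exhibiting the growth with \(x=\sqrt{s}/(2m_{V})\) while the corresponding contribution to \(\tilde{A}\), Eq. (4.10), vanishes.

```python
import numpy as np

def a_tilde_OW(theta, x, cbar_W):
    """Linear O_W contributions to the single-qutrit Fano coefficients,
    Eqs. (4.11a)-(4.11c), with x = sqrt(s)/(2 m_V) and
    cbar_W = c_W * GF^(3/2) * m_V^5; here a_tilde_i = b_tilde_i."""
    c, s = np.cos(theta), np.sin(theta)
    ch2, sh2 = np.cos(theta / 2) ** 2, np.sin(theta / 2) ** 2
    a1 = cbar_W * 2 ** 1.25 * x * ch2 ** 2 * (c + 3) / s
    a4 = -cbar_W * 2 ** 0.75 * (c * ((4 * x**2 - 3) * c + 4 * x**2 + 1) + 2)
    a6 = cbar_W * 2 ** 0.25 * x * sh2 * s
    return a1, a4, a6

# a1 and a6 grow linearly with x, a4 contains terms growing like x^2,
# whereas the linear O_W piece of A_tilde vanishes, Eq. (4.10): the
# interference is "resurrected" only in the spin-dependent coefficients.
print(a_tilde_OW(theta=np.pi / 3, x=5.0, cbar_W=1.0))
```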
## 5 Entanglement at particle colliders
In the following we will study the spin density matrix and the presence of entanglement in diboson production for both lepton and proton colliders.
\begin{table}
\begin{tabular}{c c c} \hline \hline \((\lambda_{1}\lambda_{2}|\alpha\,\beta)\) & SM & EFT \(\Lambda^{-2}:c_{WWW}\) \\ \hline \(+-00\) & \(-2\sqrt{2}G_{F}m_{Z}^{2}\sin\theta\) & - \\ \(+--+\) & \(2\sqrt{2}G_{F}m_{W}^{2}\sin\theta\) & - \\ \(+-+-\) & \(-\frac{1}{\sqrt{2}}G_{F}m_{W}^{2}\sin^{3}\theta\csc^{4}(\theta/2)\) & - \\ \(+-\pm\pm\) & - & \(3\cdot 2^{1/4}\sqrt{G_{F}}m_{W}\sin\theta\,(4m_{W}^{2}x^{2}-m_{Z}^{2})\) \\ \(+-\,0\pm\) & - & \(-3\cdot 2^{3/4}\sqrt{G_{F}}m_{W}^{3}(\pm 1+\cos\theta)\,x\) \\ \(+-\pm 0\) & - & \(-3\cdot 2^{3/4}\sqrt{G_{F}}m_{W}^{3}(\mp 1+\cos\theta)\,x\) \\ \hline \(-+00\) & \(2\sqrt{2}G_{F}(m_{Z}^{2}-m_{W}^{2})\sin\theta\) & - \\ \(-++\pm\) & - & \(6\cdot 2^{1/4}\sqrt{G_{F}}m_{W}(m_{Z}^{2}-m_{W}^{2})\sin\theta\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Helicity pattern in the high-energy limit of electron-initiated diboson production amplitudes for the SM and the SMEFT amplitudes induced by the \({\cal O}_{W}\) operator, which has a distinguished Lorentz structure. The \(\pm\pm\alpha\beta\) case has all zero entries, regardless of \(\alpha\beta\). The short-hand notation \(x=\sqrt{s}/(2m_{W})\) is used. Contributions to the helicity amplitude that are sub-leading and energy suppressed are indicated with -.
In the case of the lepton collider, we consider centre of mass energies up to \(1\,\mathrm{TeV}\), while in the case of the proton collider we focus on the LHC setup with a centre of mass energy of \(13\,\mathrm{TeV}\). In the latter we work in the 5-flavour scheme, _i.e._, all quarks are massless aside from the top quark, and we use the latest NNPDF4.0 NNLO PDF set [52]. The input parameters used for the calculations are the following
\[m_{W} =80.377\,\mathrm{GeV}\,, m_{Z} =91.1876\,\mathrm{GeV}\,,\] \[m_{h} =125.35\,\mathrm{GeV}\,, G_{F} =1.166\,378\,8\times 10^{-5}\,\mathrm{GeV}^{-2}\,.\]
Calculations are performed analytically, taking advantage of the FeynRules [53; 54], FeynArts [55] and FeynCalc [56] tool chain. Amplitude computations are also validated numerically with MadGraph5_aMC@NLO [57] and SMEFT@NLO [58]. The results are presented in the centre of mass frame of the diboson pair, as a function of the kinematical variables \(m_{VV}\), the invariant mass of the system, and \(\cos\theta\), the cosine of the angle between the initial state anti-particle and the \(W^{+}\) or \(Z\) boson. In the case of the SM calculations, we have validated our results against the ones obtained in Ref. [18], finding excellent agreement.
For each process considered, we computed the density matrix including EFT corrections (both linear and quadratic) and consequently the various Fano coefficients, from which we determine analytically the entanglement related markers \(\mathcal{C}_{\mathrm{LB}}\) and \(\mathcal{C}_{\mathrm{UB}}\), the purity \(P\) and the indicator \(\langle\mathcal{B}\rangle_{\mathrm{max}}\) for Bell inequality violation.5
Footnote 5: The minimisation in the calculation of the latter is, however, done numerically.
### Lepton collider
In this section we discuss the entanglement pattern in diboson production at a lepton collider, focusing in particular on its dependence on the couplings in the context of the SMEFT. At a lepton collider, the collision energy is fixed. This means that at LO, without considering initial-state QED (or, more generally, EW) radiation, the diboson pair is produced with an invariant mass \(m_{VV}\) identical to the initial-state energy. In this setup, we can therefore study the behaviour of the entanglement by fixing the collider energy and looking at its angular dependence. This remains true if we consider lepton colliders with energies up to a few \(\mathrm{TeV}\). In the scenario of a multi-TeV lepton collider, such as a circular muon collider, the machine would effectively behave as an EW boson collider and the main mode of diboson pair production would be vector boson scattering [59]; see also Refs. [60; 61; 62; 63; 64] for in-depth studies of muon collider physics. The study of this kind of process, which would be phenomenologically quite different and would add considerable sensitivity to dimension-8 operators, is left for future work.
#### \(WW\) production
The production of a pair of \(W\) bosons is of particular interest at a lepton collider, as it is characterised by a considerably high cross section even at high energies. For instance, at \(1\,\mathrm{TeV}\), the total cross section is still of the order of a few pb. This means that the process is going to be particularly advantageous in terms of statistics and could prospectively allow entanglement to be detected with strong significance [18].
The relevant couplings for the process are the coupling of the lepton to the \(W\) boson \(g_{W}\), the coupling to the \(Z\) boson \(g_{V}^{Z}\) and \(g_{A}^{Z}\), and the TGCs. The process is therefore sensitive to several of the EW interactions. Furthermore, in the SMEFT framework strong correlations are present among the parameters.
In Fig. 4 we show the entanglement pattern we can expect from the production of a pair of \(W\) bosons in the SM. Results are shown as a function of the kinematical variables \(m_{WW}\) and \(\cos\theta\). Note that there is no symmetry around \(\theta=\pi/2\), as the EW interactions are not parity invariant. This is not the case when considering, for instance, \(t\bar{t}\) production, where the process is dominated by QCD [11]. In the upper left panel we show the values of the purity indicator \(P\), which depicts a scenario where the majority of the phase space is characterised by a density matrix close to maximal purity.
Figure 4: Entanglement in the \(e^{+}e^{-}\to W^{+}W^{-}\) channel in the SM. We show the lower bound \(\mathcal{C}_{\rm LB}\) (bottom left) and upper bound \(\mathcal{C}_{\rm UB}\) (bottom right) on the concurrence \(\mathcal{C}\), the purity \(P\) (top left) as well as the indicator \(\langle\mathcal{B}\rangle_{\rm max}\) for Bell inequality violation (top right) as a function of the invariant mass \(m_{WW}\) (or equivalently the collider energy) and the cosine of the angle between the positron and the \(W^{+}\) in the centre of mass frame.
right figure, we show the value of the lower and upper bound for the concurrence. The plots demonstrate that the diboson pair has in general a very high value of the concurrence across the phase space, indicating that entanglement is present almost everywhere, with the exception of the collinear region with \(\theta=0\), where low values of entanglement are expected.
This can be seen more explicitly in the left plot of Fig. 5, where four collider energies, 170, 250, 500 and 1000 GeV, are chosen as benchmarks. The plot displays bands for the concurrence, where the lower and the upper bound are given by \(\mathcal{C}_{\rm LB}\) and \(\mathcal{C}_{\rm UB}\) respectively, as a function of \(\cos\theta\). Entanglement is high and stable for most of the angles, but decreases sharply as we approach the forward collinear limit at high energies. Also, the closer we are to threshold the tighter the bands get, indicating that the quantum state of the process goes towards maximal purity, as confirmed from the purity plot in Fig. 4. In fact, as previously discussed, in this limit \(\mathcal{C}_{\rm LB}\) and \(\mathcal{C}_{\rm UB}\) coincide.
Finally, in the upper right plot of Fig. 4 we display the Bell inequality violation indicator \(\langle\mathcal{B}\rangle_{\rm max}\). The pattern in the figure closely resembles that shown for the concurrence marker \(\mathcal{C}_{\rm LB}\) and as expected, the violation is ubiquitous. In particular, Bell inequalities are severely violated when high values of entanglement are present, _i.e._, at high energy in the central region.6
Footnote 6: Note that we validated our results against Ref. [18] finding excellent agreement for most of phase space but some discrepancy in the forward region \(\cos\theta=1\), where the authors find a slight violation of Bell inequalities, _i.e._\(\langle\mathcal{B}\rangle_{\rm max}>2\). However, as explicitly discussed above, we find that in the phase space region in question the density matrix of the system is described by a pure separable state. Therefore, no Bell inequality violation is expected, in agreement with Fig. 4. This property has been also verified with a numerical simulation in MadGraph5_aMC@NLO, finding good agreement with the analytical calculation.
In order to gain a better insight, it is useful to see how the density matrix decomposes in terms of quantum states at particular phase space points. For instance, we find that the diboson pair is produced at threshold in a pure and entangled quantum state, _i.e._,
\[\left|\Psi(m_{WW}=2\,m_{W})\right\rangle=\frac{1}{\sqrt{2}}\left(\left|+0 \right\rangle_{\mathbf{p}}+\left|0+\right\rangle_{\mathbf{p}}\right)=\left|\Psi_{0+} \right\rangle_{\mathbf{p}}\,, \tag{10}\]
Figure 5: For benchmark fixed collider energies, we show the expected value for the concurrence as a function of \(\cos\theta\) in the SM. The bands are determined by the lower and upper bounds of the concurrence, _i.e._, \(\mathcal{C}_{\rm LB}\) and \(\mathcal{C}_{\rm UB}\). Left: \(e^{+}e^{-}\to W^{+}W^{-}\). Right: \(e^{+}e^{-}\to ZZ\).
where \(\left|s_{1}s_{2}\right\rangle_{\mathbf{p}}=\left|s_{1}\right\rangle_{\mathbf{p}}\otimes \left|s_{2}\right\rangle_{\mathbf{p}}\) and \(\left|s\right\rangle_{\mathbf{p}}\) is the eigenstate of the spin operator in the direction of the beam line \(\mathbf{p}\) with eigenvalue \(s\). This means that at threshold, the diboson pair is produced in an entangled state, characterised by total spin 2 and spin component along \(\mathbf{p}\) equal to 1. However, despite the fact that this is an entangled state, the value of the concurrence at threshold is 1, not reaching the maximal value of \(2/\sqrt{3}\). We find that this quantum state is unaffected by the presence of new physics. Even when EFT effects are taken into account, the quantum state of the system is still expressed by \(\left|\Psi_{0+}\right\rangle_{\mathbf{p}}\). The reason is that most of the contributions are identically zero at threshold, and those that are not simply shift the absolute value of \(g_{W}\), resulting in an increased total cross section but not affecting the spin correlation patterns. Indeed, it turns out that at threshold the only relevant coupling for the process is \(g_{W}\). Equally interesting is the situation at high energy. In particular, in the central region, we find that the density matrix is characterised by a mixed quantum state but dominated by the presence of a pure entangled quantum state, which explains the high concurrence. Specifically, the density matrix can be defined with respect to the \(\mathbf{k}\) direction, the momentum of the \(W^{+}\) boson, in the following way
\[\rho(m_{WW}\rightarrow\infty,\cos\theta=0)=p_{1}\,\left|1\right\rangle\left\langle 1\right|+p_{2}\,\left|2\right\rangle\left\langle 2\right|\,, \tag{100}\]
with
\[\left|1\right\rangle =0.64\left|++\right\rangle_{\mathbf{k}}-0.64\left|--\right\rangle_{ \mathbf{k}}+0.43\left|00\right\rangle_{\mathbf{k}}\,, \tag{101}\] \[\left|2\right\rangle =0.3\left|++\right\rangle_{\mathbf{k}}-0.3\left|--\right\rangle_{ \mathbf{k}}-0.9\left|00\right\rangle_{\mathbf{k}}\,,\]
and \(p_{1}\approx 0.97\) and \(p_{2}\approx 0.03\).
On the other hand, in the collinear region, \(\theta=0\), the concurrence goes to zero (see Fig. 4) and we find that the diboson pair is produced in a separable state, _i.e._
\[\left|\Psi(m_{WW}\rightarrow\infty,\cos\theta=1)\right\rangle=\left|++\right\rangle_{\mathbf{k}}\,. \tag{102}\]
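These benchmark decompositions can be checked explicitly with a few lines of linear algebra. The sketch below is our own illustrative cross-check (not part of the analytical machinery of the paper): for a pure two-qutrit state the concurrence reduces to \(\mathcal{C}(|\Psi\rangle)=\sqrt{2\,(1-\mathrm{Tr}\,\rho_{A}^{2})}\), with maximal value \(2/\sqrt{3}\), while for the mixed state above we simply evaluate the purity \(\mathrm{Tr}\,\rho^{2}\).

```python
import numpy as np

# Single-boson helicity basis |+>, |0>, |-> (a qutrit)
plus, zero, minus = np.eye(3)

def ket(a, b):
    """Two-boson product state |a b> as a 9-dimensional vector."""
    return np.kron(a, b)

def concurrence_pure(psi):
    """Concurrence of a pure two-qutrit state, C = sqrt(2 (1 - Tr rho_A^2))."""
    psi = psi / np.linalg.norm(psi)
    M = psi.reshape(3, 3)                 # psi = sum_ij M[i,j] |i>|j>
    rho_A = M @ M.conj().T                # reduced density matrix of boson A
    return np.sqrt(2.0 * (1.0 - np.trace(rho_A @ rho_A).real))

# Threshold state: (|+0> + |0+>)/sqrt(2)
psi_thr = (ket(plus, zero) + ket(zero, plus)) / np.sqrt(2)
print(concurrence_pure(psi_thr))          # -> 1.0, below the maximum 2/sqrt(3) ~ 1.155

# High-energy central region: mixture p1 |1><1| + p2 |2><2| quoted above
v1 = 0.64 * ket(plus, plus) - 0.64 * ket(minus, minus) + 0.43 * ket(zero, zero)
v2 = 0.30 * ket(plus, plus) - 0.30 * ket(minus, minus) - 0.90 * ket(zero, zero)
v1, v2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)
rho = 0.97 * np.outer(v1, v1) + 0.03 * np.outer(v2, v2)
print(np.trace(rho @ rho).real)           # purity ~ 0.94: close to a pure state
print(concurrence_pure(v1))               # ~1.1: the dominant component is strongly entangled

# Collinear limit: |++> is a product state, hence zero concurrence
print(concurrence_pure(ket(plus, plus)))  # -> 0.0
```

The numbers reproduce the statements above: the threshold state has concurrence 1, the high-energy central mixture is close to pure and dominated by a strongly entangled component, and the collinear product state carries no entanglement.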
We now move to discuss the effects of dimension-6 operators on the spin density matrix. In Fig. 6, we show the effects of a selected number of operators on the marker \(\mathcal{C}_{\text{LB}}\). For each operator we choose values that are currently allowed or at the boundary of the most up-to-date global fit studies (see Table 1 and for example Refs. [38; 65]). In the context of the EFT, here and in the following sections, we mostly limit ourselves to showing the effects for the \(\mathcal{C}_{\text{LB}}\) indicator, as a good representative metric for the entanglement pattern across phase space. In some cases, we also show the purity for comparison. Note however that the calculation performed and the expressions provided in the ancillary files allow for a complete determination of the density matrix in the SMEFT and consequently for the calculation of every quantum observable derived from it. We show the effects of the operators by turning on only one of them at a time, in order to display how different modifications of the couplings alter the entanglement pattern. It is particularly interesting to see that not only are the effects of the operators substantial for the chosen values of the Wilson coefficients, but also that the pattern of modification changes markedly from operator to operator. The ultimate reason for that has to be traced back to the way they induce shifts to the EW couplings defined in Section 3. For instance, the operator \(\mathcal{O}_{ll}\) (not displayed)
is affecting the SM couplings in a universal way, inducing an overall rescaling factor, and therefore does not affect the density matrix in any way, leaving the \(\mathcal{C}_{\rm LB}\) marker unchanged. On the other hand, the four Wilson coefficients shown in Fig. 6, \(c_{\varphi e}\), \(c_{\varphi l}^{(1)}\), \(c_{\varphi WB}\) and \(c_{W}\), alter the EW interactions in such a way that the density matrix of the final state spins is noticeably affected. We can see for instance that a positive value of \(c_{\varphi e}\), which shifts the value of the right-handed coupling to the \(Z\) boson, induces an augmented level of entanglement at high energy both in the central region and in the backward one. Very different behaviour is instead produced by a positive \(c_{\varphi l}^{(1)}\), which by shifting the left-handed coupling to the \(Z\) boson decreases the value of the \(\mathcal{C}_{\rm LB}\) marker in the same region. The effects of the \(c_{\varphi WB}\) and \(c_{W}\) operators are instead milder. In particular, one could have potentially expected a big impact from the \(\mathcal{O}_{W}\) operator given that it induces the presence of the non-SM Lorentz structure \(\delta\lambda_{V}\), but that does not seem to be the case. Note that, in general, an opposite-sign value of the Wilson coefficient produces the opposite effect, _i.e._, if for \(c_{W}=0.25\,{\rm TeV}^{-2}\) the high entanglement region increases, for \(c_{W}=-0.25\,{\rm TeV}^{-2}\) we would
Figure 6: The change in the marker \(\mathcal{C}_{\rm LB}\) is shown for a selection of operators and benchmark Wilson coefficient values for the production of \(W^{+}W^{-}\) at a lepton collider. Only one operator at a time is switched on. Top left: \(c_{\varphi e}=0.1\,{\rm TeV}^{-2}\), top right: \(c_{\varphi l}^{(1)}=0.1\,{\rm TeV}^{-2}\), bottom left: \(c_{\varphi WB}=0.25\,{\rm TeV}^{-2}\), bottom right: \(c_{W}=0.25\,{\rm TeV}^{-2}\).
see a decrease. Finally, it is worth noting that all of the operators leave the entanglement pattern in the forward region unchanged and mostly affect the high-energy region, as one would have naively expected.
Finally, in Fig. 7, we depict the relative change of \(\mathcal{C}_{\rm LB}\) (lower triangle) and the purity (upper triangle) as a function of the Wilson coefficients, varying two coefficients at a time and considering the fixed phase space point \(m_{WW}=500\,\)GeV and \(\theta=\pi/2\). Here, \(\Delta\mathcal{C}_{\rm LB}\) and \(\Delta P\) denote the differences between the SMEFT values of the marker \(\mathcal{C}_{\rm LB}\) and of the purity, respectively, and the SM values, \(\mathcal{C}_{\rm LB}^{\rm SM}=1.0\) and \(P^{\rm SM}=0.94\). The SMEFT values are calculated including dimension-6 and dimension-6 squared contributions. In addition, the contours depict the relative change in \(\tilde{A}\), _i.e._, the relative change of the differential
cross-section with respect to the SM. Notably, we see that the spin-related observables generally probe different parameter directions than the cross-section, potentially offering complementary probes of NP. This could be of fundamental importance both for discovery, enhancing the sensitivity to EFT corrections, and for characterisation in the event of a clear deviation from the SM. Additionally, one can clearly see from the \(c_{W}\) plots that the spin-related observables display a resurrection of the interference, as expected, while the differential cross-section contour lines are mostly dominated by quadratic corrections.
#### \(ZZ\) production
One key difference between \(ZZ\) and \(W^{+}W^{-}\) production is the fact that they probe complementary couplings of the fermions to the diboson system. In particular, in the case of \(ZZ\),
Figure 8: Entanglement in the \(e^{+}e^{-}\to ZZ\) channel in the SM. We show the lower bound \(\mathcal{C}_{\rm LB}\) (bottom left) and upper bound \(\mathcal{C}_{\rm UB}\) (bottom right) on the concurrence \(\mathcal{C}\), the purity \(P\) (top left) as well as the indicator \(\langle\mathcal{B}\rangle_{\rm max}\) for Bell inequality violation (top right) as a function of the invariant mass \(m_{ZZ}\) (or equivalently the collider energy) and the cosine of the angle between the positron and the \(Z\), in the centre of mass frame.
only the couplings of the fermions to the \(Z\) boson are relevant, while in the case of \(W^{+}W^{-}\) a more intricate coupling dependence is present, including the triple gauge coupling. For \(Z\) pair production, the scattering amplitudes are completely determined by the values of the vectorial and axial couplings to the \(Z\) boson in Eq. (10).
In Fig. 8 we show the entanglement pattern in the SM. Contrary to the \(W^{+}W^{-}\) case, we see a symmetry with respect to \(\cos\theta=0\), given that this time the final state has identical particles and consequently the system exhibits a symmetry under parity transformations. The plot in the upper left corner further indicates that, in contrast to \(W^{+}W^{-}\) production, the majority of the phase space for \(ZZ\) production is characterised by a mixed quantum state. High purity \(P\) is reached only in the high energy central region. The plot on the lower left depicts the entanglement pattern in terms of the lower bound marker \(\mathcal{C}_{\rm LB}\). According to that, the entanglement is expected to be high at high energy in the central region, but quite low in the forward region. However, the picture gets more complicated if we look at the right panel of Fig. 5, which displays bands for the concurrence, making use of both the lower and the upper bound, for benchmark collider energies as a function of \(\cos\theta\). In the figure, we see that the lower bound goes towards zero in the collinear regime, but the upper bound does not, giving us enormous uncertainty on the determination of the entanglement with this approach. This is also confirmed by the plot in the lower right panel of Fig. 8 displaying the upper bound marker \(\mathcal{C}_{\rm UB}\) across phase space. Almost all of the phase space is characterised by \(\mathcal{C}_{\rm UB}\) close to maximal. Finally, in the upper right plot of Fig. 8 we report on the expected value for the Bell inequality violation marker \(\left\langle\mathcal{B}\right\rangle_{\rm max}\). In contrast to the \(W^{+}W^{-}\) final state, the region of phase space with \(\left\langle\mathcal{B}\right\rangle_{\rm max}>2\) is rather limited and slight violations are only present in the high energy central region.
More information on the entanglement can be gathered by directly inspecting the density matrix and its decomposition in terms of quantum states. In particular, we find that at threshold the \(Z\) boson pair is produced in a mixed state
\[\rho(m_{ZZ}=2\,m_{Z})=p_{1}\,\left|\Psi_{0+}\right\rangle_{\boldsymbol{p}} \left\langle\Psi_{0+}\right|_{\boldsymbol{p}}+p_{2}\,\left|\Psi_{0-}\right\rangle _{\boldsymbol{p}}\left\langle\Psi_{0-}\right|_{\boldsymbol{p}}\,, \tag{12}\]
with \(p_{1}=0.7\) and \(p_{2}=0.3\) and
\[\begin{split}\left|\Psi_{0+}\right\rangle_{\boldsymbol{p}}& =\frac{1}{\sqrt{2}}\left(\left|+0\right\rangle_{\boldsymbol{p}}+ \left|0+\right\rangle_{\boldsymbol{p}}\right)\,,\\ \left|\Psi_{0-}\right\rangle_{\boldsymbol{p}}&= \frac{1}{\sqrt{2}}\left(\left|-0\right\rangle_{\boldsymbol{p}}+\left|0- \right\rangle_{\boldsymbol{p}}\right)\,.\end{split} \tag{13}\]
The two states are both fully entangled, but, given the fact that the density matrix is in a mixed state, the value of the concurrence is not maximal.
On the other hand, at high energy the picture is different. In the central region, \(\theta=\pi/2\), the diboson pair is produced in a pure spin-2 maximally entangled state
\[\left|\Psi_{+-}\right\rangle_{\boldsymbol{k}}=\frac{1}{\sqrt{2}}\left(\left| ++\right\rangle_{\boldsymbol{k}}-\left|--\right\rangle_{\boldsymbol{k}}\right)\,. \tag{14}\]
This is fully consistent with what we observe in Fig. 8. The concurrence study based on the lower and upper bounds is inconclusive in the forward region. On the other hand, by
inspecting the density matrix directly at \(\theta=0\) in the high energy limit, we find that the \(Z\) pair is produced in a mixed ensemble of separable quantum states, _i.e._
\[\rho(m_{ZZ}\rightarrow\infty,\cos\theta=1)=p_{1}\,\left|++\right\rangle_{\mathbf{p} }\left\langle++\right|_{\mathbf{p}}+p_{2}\,\left|--\right\rangle_{\mathbf{p}}\left\langle-- \right|_{\mathbf{p}}\,, \tag{100}\]
with \(p_{1}=0.7\) and \(p_{2}=0.3\). We are therefore able to conclude that in the forward region, the diboson pair is indeed not entangled as suggested by the \(\mathcal{C}_{\rm LB}\) marker behaviour.
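For illustration, the vanishing of the lower bound in the forward region and its survival at threshold can be reproduced with the short sketch below. We stress that the snippet assumes a Mintert–Buchleitner-type bound, \(\mathcal{C}_{\rm LB}^{2}=2\,\max\!\left[0,\,\mathrm{Tr}\,\rho^{2}-\mathrm{Tr}\,\rho_{A}^{2},\,\mathrm{Tr}\,\rho^{2}-\mathrm{Tr}\,\rho_{B}^{2}\right]\), a form commonly used for qutrit pairs; the exact definition of \(\mathcal{C}_{\rm LB}\) adopted in this work is given in an earlier section and may differ in detail, so this should be read as a qualitative check only.

```python
import numpy as np

plus, zero, minus = np.eye(3)
kron = np.kron

def partial_trace(rho, keep):
    """Reduced density matrix of a qutrit-qutrit system (keep = 0 for A, 1 for B)."""
    r = rho.reshape(3, 3, 3, 3)
    return np.einsum('ijkj->ik', r) if keep == 0 else np.einsum('ijil->jl', r)

def clb_like(rho):
    """Assumed Mintert-Buchleitner-type lower bound on the concurrence (illustrative)."""
    p  = np.trace(rho @ rho).real
    pa = np.trace(partial_trace(rho, 0) @ partial_trace(rho, 0)).real
    pb = np.trace(partial_trace(rho, 1) @ partial_trace(rho, 1)).real
    return np.sqrt(2.0 * max(0.0, p - pa, p - pb))

# Threshold: mixture of the two fully entangled states defined above
psi_p = (kron(plus, zero) + kron(zero, plus)) / np.sqrt(2)
psi_m = (kron(minus, zero) + kron(zero, minus)) / np.sqrt(2)
rho_thr = 0.7 * np.outer(psi_p, psi_p) + 0.3 * np.outer(psi_m, psi_m)
print(clb_like(rho_thr))   # > 0: some entanglement survives the mixing

# Forward region: mixture of product states -> separable, the bound vanishes
rho_fwd = 0.7 * np.outer(kron(plus, plus), kron(plus, plus)) \
        + 0.3 * np.outer(kron(minus, minus), kron(minus, minus))
print(clb_like(rho_fwd))   # -> 0.0
```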
As for the case of \(W^{+}W^{-}\) production, in Fig. 9 we show the effects of a selected number of operators on the marker \(\mathcal{C}_{\rm LB}\). Since for \(ZZ\) production the dependence on the couplings is considerably simpler, the only defining parameter of the entanglement pattern is the balance between the vector and the axial coupling, or, to be more precise, the ratio between the two. We observe that, aside from the operators that universally rescale both couplings leaving the density matrix unchanged, the behaviour of all the operators is phenomenologically identical. In the figure, we show the two possible deviating patterns from the SM, choosing as benchmark Wilson coefficients \(c_{\varphi D}=0.5\,\mathrm{TeV}^{-2}\) and \(c_{\varphi l}^{(3)}=-0.25\,\mathrm{TeV}^{-2}\). In the case of the former, the entanglement marker is augmented almost everywhere, indicating that the Wilson coefficient shifts the couplings in such a way that the dominant coupling prevails a bit more and the density matrix is slightly "less mixed". On the other hand, for \(c_{\varphi l}^{(3)}=-0.25\,\mathrm{TeV}^{-2}\) we observe a decrease of the entanglement across the board. Note that, in contrast to \(W^{+}W^{-}\) production, the effects close to threshold are a bit more pronounced. We stress again that none of these behaviours is specific to a particular operator, but both the increase and decrease of entanglement can be produced by any of the operators by switching on the corresponding Wilson coefficient (with negative and positive values for opposite effects). Finally, in Fig. 10 we show the relative change of \(\mathcal{C}_{\rm LB}\) and the purity as a function of a selection of Wilson coefficients, considering the fixed phase space point \(m_{ZZ}=500\,\mathrm{GeV}\) and \(\theta=\pi/2\). As for the case of \(W\) pair production, the complementarity
Figure 9: The change in the marker \(\mathcal{C}_{\rm LB}\) is shown for a selection of operators and benchmark Wilson coefficient values for the production of \(ZZ\) at a lepton collider. Only one operator at a time is switched on. Left: \(c_{\varphi D}=0.5\,\mathrm{TeV}^{-2}\), right: \(c_{\varphi l}^{(3)}=-0.25\,\mathrm{TeV}^{-2}\).
between differential cross section and spin-related observables is self-evident. Additionally, it is interesting to see that the purity and \(\mathcal{C}_{\rm LB}\) probe the same direction in the Wilson coefficient parameter space, indicating that there is a strong correlation between the two observables at the spin density matrix level.
### Hadron collider
We now present results for a hadron collider, specifically for a proton collider corresponding to the LHC in Run 2. Studying the processes at LO, the relevant channels are the quark annihilation ones. At the level of the hard scattering, the kinematic dependence and therefore the density matrices are fairly similar to the corresponding ones at a lepton collider. The main differences are given by the different charges, which induce different couplings to the EW bosons. We find these effects to be relevant but not particularly disruptive of
Figure 10: The relative changes in the marker \(\mathcal{C}_{\rm LB}\) (lower triangle) and purity \(P\) (upper triangle) compared to the SM values, as a function of the Wilson coefficients, for \(ZZ\) production at a lepton collider, at \(m_{ZZ}=500\,\mathrm{GeV}\) and \(\theta=\pi/2\). The lines indicate the relative change of the cross section.
the entanglement pattern. In particular, one can see that the explicit expressions for the density matrix in the benchmark regions analysed in the previous section are fairly similar.
What really changes the picture in a proton collider are the contributions from different partonic channels, weighted by the corresponding parton luminosity. Also, since we are now colliding identical particles, the system will be invariant under the transformation \(\theta\to\pi+\theta\), showing therefore a symmetry with respect to \(\cos\theta=0\). This can also be understood at the level of the \(R\)-matrix from the symmetrisation in Eq. (26). As a general remark, the most evident effect of having to sum over the different partonic channels is that the presence of entanglement is considerably diluted.
#### \(WW\) production
As in the case of a lepton collider, the production of a pair of \(W\) bosons is the dominant diboson production channel. The high cross section makes it an ideal candidate to probe the EW interactions and potentially uncover signs of NP in the tails of the distributions. With this aim, the study of the spin density matrix can offer a complementary approach which could be helpful to disentangle degeneracies and characterise potential signals.
In the bottom panel of Fig. 11, we show the value of the marker \(\mathcal{C}_{\rm LB}\) across phase space for the two independent channels in proton collisions, \(u\,\bar{u}\to W^{+}\,W^{-}\) and \(d\,\bar{d}\to W^{+}\,W^{-}\). As opposed to the lepton collider, we decide to show results up to \(2\,\mathrm{TeV}\) in invariant mass. The physics regulating the production of \(W\) bosons is fairly similar for all initial states and the only differences are given by the different couplings the particles have with the \(Z\) boson, _i.e._ the \(s\)-channel diagram. As a consequence, the balance between the right-handed and left-handed couplings of the various initial states is slightly different and this translates into a slight difference for the density matrix. Note that the mirroring of the \(u\bar{u}\) channel is fictitious and simply given by the conventional choice of the angle between the anti-up quark and the \(W^{+}\). In fact, in the case of the positron and the \(d\) quark, that angle is the one between the initial and final state particles that share the same-sign charge, while in the \(u\bar{u}\) case it is the angle between the opposite charge particles.
In the lower panel of Fig. 12, we plot the lower limit of the concurrence \(\mathcal{C}_{\rm LB}\) for the parton luminosity weighted combination of the channels, as described in Eq. (26). As expected, the result is symmetric with respect to \(\cos\theta=0\), given the symmetrisation over the polar angle. It is interesting to observe the strong dilution of the entanglement pattern, which is caused precisely by summing over the initial state and considering both \(q\bar{q}\) and \(\bar{q}q\) channels. This can be intuitively understood by looking at the plots of the individual channels, where we can observe that the two collinear regions, \(\cos\theta=1\) and \(\cos\theta=-1\) are characterised by opposite behaviours and therefore the high entanglement of one region is washed out by the other when summing over the two different polar angles in Eq. (26). Because of this, the diboson pair produced will mostly be in a mixed state, and a high level of entanglement will only be found in the central region and at high energy, around \(\theta=\pi/2\).
As for the lepton collider case, in Fig. 12 we also report on the value of the quantum observable indicators for the purity \(P\), the Bell inequality violation marker \(\langle\mathcal{B}\rangle_{\rm max}\) and the upper bound on the concurrence \(\mathcal{C}_{\rm UB}\). The latter is not bringing much information to the
Figure 11: The lower bound for the concurrence \({\cal C}_{\rm LB}\) in the partonic channels as a function of the invariant mass \(m_{VV}\) of the diboson pair and the cosine of the angle between the \(\bar{q}\) and the \(W^{+}\) (or \(Z\) for \(ZZ\)), in the centre of mass frame, in the SM. Bottom: \(W^{+}W^{-}\) production in the \(u\bar{u}\) (left) and \(d\bar{d}\) (right) channel. Middle: \(ZZ\) production in the \(u\bar{u}\) (left) and \(d\bar{d}\) (right) channel. Top: \(u\bar{d}\to ZW^{+}\) production.
table, since it shows values of order 1 across phase space. On the other hand, from the purity plot we learn that the density matrix is characterised by a highly mixed state and it goes towards a purer state in the high-energy central region. We find that this is a common feature across all proton collider processes, mostly an effect of the state mixing dictated by Eq. (26). Finally, in the upper right plot of Fig. 12 we display the marker for Bell inequality violation, which follows closely the pattern indicated by the marker \(\mathcal{C}_{\rm LB}\), a sign once again that states violating Bell inequalities are a subset of entangled states.
We now move on to the study of the EFT effects to the density matrix of the \(W\) pair produced in a proton collider. In Fig. 13 we display the changes in the \(\mathcal{C}_{\rm LB}\) marker for some benchmark Wilson coefficients. As can be seen in the two top figures, the effects of \(c_{\varphi u}\) and \(c_{\varphi d}\) are very similar, enhancing the right-handed coupling of the \(Z\) boson with the
Figure 12: Entanglement in the \(pp\to W^{+}W^{-}\) channel in the SM. We show the lower bound \(\mathcal{C}_{\rm LB}\) (bottom left) and upper bound \(\mathcal{C}_{\rm UB}\) (bottom right) on the concurrence \(\mathcal{C}\), the purity \(P\) (top left) as well as the indicator \(\langle\mathcal{B}\rangle_{\rm max}\) for Bell inequality violation (top right) as a function of the invariant mass \(m_{WW}\) and the cosine of the angle between the proton and the \(W^{+}\), in the centre of mass frame.
quarks and consequently decreasing the level of entanglement at high energy. The different intensity of the two operators has to be traced back to the different weight at the level of the PDFs, _i.e._, the \(u\bar{u}\) luminosity is higher than the \(d\bar{d}\) one. As expected, switching on the \(c_{\varphi q}^{(3)}\), which instead modifies the left-handed coupling to the \(Z\) of the \(d\)-quarks and the coupling to the \(W\) boson of both \(u\) and \(d\), has the opposite effect, increasing the value of the concurrence in the central region where we see the emergence of maximal level of entanglement. A similar effect is found for the \(c_{\varphi q}^{-}\) Wilson coefficient (not displayed) which modifies the left-handed coupling of the \(u\) quarks with the \(Z\) boson. Finally, in the lower-right plot in Fig. 13 we show the effects of the \(\mathcal{O}_{W}\) operator. Interestingly, we find that the presence of the pure BSM coupling \(\delta\lambda_{V}\) can be quite disruptive, especially at high energy, inducing a decrease of the level of entanglement. The effects of the \(c_{\varphi D}\) and \(c_{\varphi WB}\), which modify the SM TGCs, are found to be sensibly smaller in this case. We observe that the EFT effects are mostly in the central region while the collinear regions keep being characterised by a substantial absence of entanglement.
Figure 13: The change in the marker \(\mathcal{C}_{\rm LB}\) is shown for a selection of operators and benchmark Wilson coefficient values for \(W^{+}W^{-}\) production at a proton collider. Only one operator at a time is switched on. Top left: \(c_{\varphi u}=0.05\,\mathrm{TeV}^{-2}\), top right: \(c_{\varphi d}=0.05\,\mathrm{TeV}^{-2}\), bottom left: \(c_{\varphi q}^{(3)}=0.05\,\mathrm{TeV}^{-2}\), bottom right: \(c_{W}=0.03\,\mathrm{TeV}^{-2}\).
Finally, in Fig. 14, we depict the relative change of \({\cal C}_{\rm LB}\) (lower triangle) and the purity (upper triangle) as a function of the Wilson coefficients, varying two coefficients at a time and considering the fixed phase space point \(m_{WW}=500\,{\rm GeV}\) and \(\theta=\pi/2\). Contrary to the case of a lepton collider, we do not find that we gain much from the chosen spin-related observables, _i.e._, the probed directions in parameter space are very similar to the ones probed by the differential cross section. This has to be traced back to the fact that in a proton collider we sum over the different partonic channels and in doing so we lose sensitivity to the spin-related Fano coefficients. The quantum spin observables are indeed highly dependent on the Fano coefficient \(\tilde{A}\), which controls the abundance of one channel over the other, and therefore affects the spin density matrix of the total system. We verified
Figure 14: The relative changes in the marker \({\cal C}_{\rm LB}\) (lower triangle) and purity \(P\) (upper triangle) compared to the SM values \({\cal C}_{\rm LB}^{\rm SM}=0.73\) and \(P^{\rm SM}=0.64\) as a function of the Wilson coefficients \(c_{\varphi d}\), \(c_{\varphi q}^{(3)}\), \(c_{\varphi u}\) and \(c_{W}\) for \(W^{+}W^{-}\) production at a proton collider, at \(m_{WW}=500\,{\rm GeV}\) and \(\cos\theta=0\). The lines indicate the relative change of the cross section.
indeed that if one were to single out a specific channel (for example only \(u\bar{u}\) and its \(\bar{u}u\) counterpart), one would find results much more similar to those observed in Fig. 7 for the case of a lepton collider.
#### \(ZZ\) production
We now discuss \(ZZ\) production at a proton collider. In the middle panel of Fig. 11, we show the entanglement pattern for the case of quark annihilation and in Fig. 15 their combination in a proton collider (lower left plot). It is interesting to notice that the quark channels present a different pattern of entanglement with respect to the lepton collider, which is caused by the simple fact that the value of the \(\bar{g}_{V}\) coupling depends on the charge of the particle and is sensibly different in the three cases, _i.e._\(\bar{g}_{V}\approx-0.027,0.1,-0.17\) for
Figure 15: Entanglement in the \(pp\to ZZ\) channel in the SM. We show the lower bound \(\mathcal{C}_{\rm LB}\) (bottom left) and upper bound \(\mathcal{C}_{\rm UB}\) (bottom right) on the concurrence \(\mathcal{C}\), the purity \(P\) (top left) as well as the indicator \(\langle\mathcal{B}\rangle_{\rm max}\) for Bell inequality violation (top right) as a function of the invariant mass \(m_{ZZ}\) and the cosine of the angle between the proton and the \(Z\), in the centre of mass frame.
\(e\), \(u\) and \(d\) respectively. This is non-trivial as one could have naively expected the patterns to be the same given that the EW couplings involved are the same. Also in the case of a proton collider, the general feature remains that high entanglement in \(ZZ\) production can be found at high invariant mass and in the central region. Once again the plot of the upper bound marker \(\mathcal{C}_{\rm UB}\) does not deliver any information, as the indicator reaches close to maximal values everywhere in phase space. The plot of the Bell violating marker \(\langle\mathcal{B}\rangle_{\rm max}\) in the upper right corner of Fig. 15 confirms that the density matrix in the central high-energy region is characterised by highly entangled quantum states.
Moving on to the EFT effects, as we already discussed in the corresponding section on \(ZZ\) production at a lepton collider, the only possible modifications are given by the shift of the vectorial and axial couplings to the \(Z\) boson, in particular from operators that spoil the balance between the two. We already saw that the effects are more subtle compared to the case of \(W^{+}W^{-}\) production. However, rather surprisingly, we observe that in the case of proton collisions the entanglement pattern is even more robust against EFT effects. For values of the Wilson coefficients within the current bounds coming from global fits, no visible effect on the pattern of entanglement is present. For this reason we believe that \(Z\) pair production is the least promising process to probe dimension-6 effects at the level of the spin density matrix.
### \(WZ\) production
Contrary to the previous diboson production modes, \(WZ\) cannot be produced at an \(e^{+}e^{-}\) collider without the emission of additional charged particles. At a proton collider, only one relevant partonic channel exists, _i.e._, \(u\bar{d}\to W^{+}Z\) and the charge conjugated one. In the following we will focus on \(W^{+}Z\) production as representative of the two processes, which can be distinguished in experiments. In this process, the relevant couplings are those of the \(W\) and \(Z\) to the fermions, as well as the triple gauge coupling. Note that in the case of \(WZ\), at the partonic level depicted in the top panel of Fig. 11, the expressions for \(\mathcal{C}_{\rm LB}\) and \(\mathcal{C}_{\rm UB}\) are identical (in the SM), indicating that \(\mathcal{C}_{\rm LB}\) is precisely the value of the concurrence and not just a lower bound. This is ultimately due to the fact that the trace of \(\rho^{2}\) is equal to 1 (the system is a pure state everywhere in phase space) and therefore the expressions in Eqs. (11) and (13) coincide. The statement does not necessarily hold true anymore in the presence of modified interactions, as we verified for the dimension-6 operators considered in this work. However, this property is not maintained once the symmetrisation of the \(\theta\) angle in Eq. (26) is performed and consequently the density matrix for proton collisions is ultimately highly mixed, as can be seen in the upper left plot in Fig. 16. To improve the purity, one could consider events boosted in the forward/backward regions and try to infer on a statistical basis the directions of the quark and anti-quark in the initial states.
In Fig. 11 (top plot) we show the \(\mathcal{C}_{\rm LB}\) pattern for the individual channel, while the proton collider one is displayed in the bottom left plot in Fig. 16, where the main difference is due to the fact that we have to take into account both \(q\bar{q}\) and \(\bar{q}q\) initial states, cf. Eq. (26). We notice once again, that the entanglement is much lower in the proton collider with respect to the individual channel. Surprisingly, we find high entanglement at threshold as well as for \(\cos\theta\approx\pm 0.5\), while in the central region, for \(\cos\theta=0\), the
value of the concurrence is low. The overall pattern differs substantially with respect to the previously analysed diboson processes. As a matter of fact, as can be seen from the marker \(\langle\mathcal{B}\rangle_{\text{max}}\), the violation of Bell inequalities is very weak even at high energy, mostly a consequence of the fact that the density matrix corresponds to a highly mixed state (see upper left plot in Fig. 16).
With respect to the quantum state produced by the \(u\bar{d}\) channel, we find that at threshold, the density matrix is described by a pure state
\[\left|\Psi(m_{WZ}=m_{W}+m_{Z})\right\rangle\approx 0.75\left|0+\right\rangle_{ \boldsymbol{p}}+0.66\left|+0\right\rangle_{\boldsymbol{p}}\,. \tag{110}\]
Notably, the quantum state is not symmetric under label exchange, as a consequence of the fact that we are not dealing any more with pairs of particles sharing the same mass and
Figure 16: Entanglement in the \(pp\to W^{+}Z\) channel in the SM. We show the lower bound \(\mathcal{C}_{\text{LB}}\) (bottom left) and upper bound \(\mathcal{C}_{\text{UB}}\) (bottom right) on the concurrence \(\mathcal{C}\), the purity \(P\) (top left) as well as the indicator \(\langle\mathcal{B}\rangle_{\text{max}}\) for Bell inequality violation (top right) as a function of the invariant mass \(m_{WZ}\) and the cosine of the angle between the proton and the \(W^{+}\), in the centre of mass frame.
interactions. At high energy, in the collinear limit \(\theta=0\), the density matrix is described by a pure separable state, _i.e._, \(\left|++\right\rangle_{\mathbf{p}}\). On the other hand, in the central region, we still have a pure but partially entangled state
\[\left|\Psi(m_{WZ}\rightarrow\infty,\cos\theta=0)\right\rangle\approx 0.164 \left|++\right\rangle_{\mathbf{k}}-0.973\left|00\right\rangle_{\mathbf{k}}-0.164\left| --\right\rangle_{\mathbf{k}}\,. \tag{111}\]
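As a quick numerical check (our own illustration, with the same pure-state formula used earlier, \(\mathcal{C}=\sqrt{2(1-\mathrm{Tr}\,\rho_{A}^{2})}\)), the state quoted above gives \(\mathcal{C}\approx 0.45\), well below the maximal value \(2/\sqrt{3}\), i.e., a pure but only partially entangled configuration, while the collinear \(\left|++\right\rangle_{\mathbf{p}}\) state gives zero:

```python
import numpy as np

plus, zero, minus = np.eye(3)
kron = np.kron

def concurrence_pure(psi):
    """C = sqrt(2 (1 - Tr rho_A^2)) for a pure two-qutrit state."""
    psi = psi / np.linalg.norm(psi)
    M = psi.reshape(3, 3)
    rho_A = M @ M.conj().T
    return np.sqrt(2.0 * (1.0 - np.trace(rho_A @ rho_A).real))

# High-energy central-region state quoted above
psi = 0.164 * kron(plus, plus) - 0.973 * kron(zero, zero) - 0.164 * kron(minus, minus)
print(concurrence_pure(psi))               # ~0.45: pure but only partially entangled

# Collinear limit: product state, no entanglement
print(concurrence_pure(kron(plus, plus)))  # -> 0.0
```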
Finally, we discuss the effects of the dimension-6 EFT operators. We point out that, for this specific process, we set \(m_{W}=m_{Z}\) when computing the EFT corrections. This approximation considerably speeds up the computation, and given that higher dimensional operator corrections are mostly relevant at high energy, we expect the difference \(m_{Z}-m_{W}\) to be negligible. We have verified the former statement for a subset of the operators, finding that the naive expectation holds true.
In Fig. 17 we display the changes in the \(\mathcal{C}_{\rm LB}\) marker for some benchmark Wilson coefficients. We find that the density matrix is particularly sensitive to higher dimensional operators for this process, even within the current bounds set by global EFT fits. In particular in the plot on the right, we observe a strong effect coming from the presence of the Wilson coefficient \(c_{W}\), which is unique in that it induces the BSM coupling \(\delta\lambda_{V}\). The effect of this operator is to increase the entanglement in the central region at high energy, indicating that the dominant configuration in that phase space region is enhanced by the presence of the operator. As previously discussed, the \(\mathcal{O}_{W}\) operator is of particular interest from an EFT perspective and we find that the \(W^{+}Z\) final state seems to be the best probe among the diboson final states with respect to the spin density matrix. In the plot on the left in Fig. 17 we instead show the changes coming from a modification of the left-handed coupling to the EW bosons by the \(\mathcal{O}_{\varphi q}^{(3)}\) operator. The effect in this case is opposite and
Figure 17: The change in the marker \(\mathcal{C}_{\rm LB}\) is shown for a selection of operators and benchmark Wilson coefficient values for \(W^{+}Z\) production at a proton collider. Only one operator at a time is switched on. Left: \(c_{\varphi q}^{(3)}=0.03\,\mathrm{TeV}^{-2}\), Right: \(c_{W}=0.01\,\mathrm{TeV}^{-2}\).
the value of the \(\mathcal{C}_{\rm LB}\) marker is generally decreased in the high energy phase space region, indicating that the two operators enhance different spin configurations. We find that other operators affecting the process (\(\mathcal{O}^{(1)}_{\varphi q}\), \(\mathcal{O}_{\varphi D}\) and \(\mathcal{O}_{\varphi WB}\)) have close to negligible effects on the entanglement pattern when considering non-excluded values of the Wilson coefficients. We therefore do not show the corresponding plots.
To conclude the section, in Fig. 18 we display the relative change of \(\mathcal{C}_{\rm LB}\) (left) and the purity (right) as a function of the Wilson coefficients in two-dimensional planes. Note that, contrary to the previously discussed processes, the phase space point chosen here is different: in the central region \(\theta=\pi/2\) low values of entanglement are found, and specifically for \(m_{WZ}=500\,\mathrm{GeV}\) the value of \(\mathcal{C}_{\rm LB}\) is equal to 0. We therefore show plots for \(\cos\theta=1/2\). Once again, the added value of the spin observables in searches for new heavy physics is evident and the plot displays a significant sensitivity both at the cross section level and at the level of the spin density matrix, with deviations from the SM predictions as big as \(30\,\%\) for values of the Wilson coefficients well within the current limits from global fits.
## 6 Conclusions
In this work, we have explored the sensitivity of quantum observables to the strength and structure of couplings entering diboson production, both in the context of the SM and of the SMEFT. Our main objective was to gauge the power of the quantum spin density matrix and related observables to probe the existence of NP.
Figure 18: The relative changes in the marker \(\mathcal{C}_{\rm LB}\) (lower triangle) and purity \(P\) (upper triangle) compared to the SM values \(\mathcal{C}_{\rm LB}^{\rm SM}=0.38\) and \(P^{\rm SM}=0.55\) as a function of the Wilson coefficients \(c_{W}\) and \(c_{\varphi q}^{(3)}\) for \(WZ\) production at a proton collider, at \(M_{WZ}=500\,\mathrm{GeV}\) and \(\cos\theta=1/2\). The lines indicate the relative change of the cross section.
After setting up the formalism, we studied the behaviour of scattering amplitudes in the high energy regime, where longitudinal polarisations dominate the production mode, assessing the consequences for the spin density matrix. In particular, we exploited the known fact that spin observables allow one to study effects that are subdominant at the level of single-particle distributions, and depend on the interference between higher-dimensional operators and the SM. These observables display a sensitivity to deviations from the SM predictions in the high-energy tails of the distributions, which will be further explored with Run-3 and the HL-LHC in the coming decade.
The main results of our study are reported in Section 5, where different processes have been analysed. We have considered both lepton and proton colliders, finding that the former offer a much cleaner setup for spin density matrix probes. This is mostly due to the fact that in a proton collider the quantum state of the system is the incoherent sum of different partonic channels and therefore tends to be mixed. Nonetheless, considerable sensitivity to NP is also found at proton colliders, which, featuring higher centre of mass energies, can take full advantage of the energy growth of the dimension-6 amplitudes.
In general, we find that the \(ZZ\) production is the least interesting process when it comes to NP sensitivity, as the phenomenology is completely determined by only two possibly anomalous couplings (the right-handed and the left-handed coupling to the \(Z\) boson) and the dimension-6 operators do not introduce new Lorentz structures. We note, however, the potential interest in studying the effects of the neutral TGC which arise at dimension-8. In this case, spin-observables could help in gaining sensitivity, especially because of the possibility to fully reconstruct the final state, something which is experimentally more challenging for final states involving \(W\) bosons. On the other hand, we find that \(WW\) and \(WZ\) production show a rather large sensitivity to heavy NP effects in the spin density matrix already at dimension-6 with significant changes expected in the entanglement pattern across phase space. For example, interference effects due to the triple gauge operator \(\mathcal{O}_{W}\) are clearly identified by quantum observables.
Our results motivate an experimental feasibility study for performing the detailed quantum tomography of the four-fermion final states arising from \(VV\) production in the SMEFT framework at the LHC and at future lepton colliders.
## Acknowledgements
RA's research was supported by the F.R.S.-FNRS project no. 40005600 and the FSR Program of UCLouvain. FM is partially supported by the F.R.S.- FNRS under the "Excellence of Science" EOS be.h project no. 30820817. LM is supported by the European Research Council under the European Union's Horizon 2020 research and innovation Programme (grant agreement n.950246).
|
2303.15973 | Do Neural Topic Models Really Need Dropout? Analysis of the Effect of
Dropout in Topic Modeling | Dropout is a widely used regularization trick to resolve the overfitting
issue in large feedforward neural networks trained on a small dataset, which
performs poorly on the held-out test subset. Although the effectiveness of this
regularization trick has been extensively studied for convolutional neural
networks, there is a lack of analysis of it for unsupervised models and in
particular, VAE-based neural topic models. In this paper, we have analyzed the
consequences of dropout in the encoder as well as in the decoder of the VAE
architecture in three widely used neural topic models, namely, contextualized
topic model (CTM), ProdLDA, and embedded topic model (ETM) using four publicly
available datasets. We characterize the dropout effect on these models in terms
of the quality and predictive performance of the generated topics. | Suman Adhya, Avishek Lahiri, Debarshi Kumar Sanyal | 2023-03-28T13:45:39Z | http://arxiv.org/abs/2303.15973v1 | # Do Neural Topic Models Really Need Dropout?
###### Abstract
Dropout is a widely used regularization trick to resolve the overfitting issue in large feedforward neural networks trained on a small dataset, which performs poorly on the held-out test subset. Although the effectiveness of this regularization trick has been extensively studied for convolutional neural networks, there is a lack of analysis of it for unsupervised models and in particular, VAE-based neural topic models. In this paper, we have analyzed the consequences of dropout in the encoder as well as in the decoder of the VAE architecture in three widely used neural topic models, namely, contextualized topic model (CTM), ProdLDA, and embedded topic model (ETM) using four publicly available datasets. We characterize the dropout effect on these models in terms of the quality and predictive performance of the generated topics.
## 1 Introduction
Dropout Hinton et al. (2012) is used while training neural networks, by stochastically dropping out the activation of neurons to prevent complex co-adaptations of feature vectors Baldi and Sadowski (2013). The working of dropout is attributed to the implicit averaging over an ensemble of neural networks Labach et al. (2019); Warde-Farley et al. (2014). It has been shown to be effective on supervised learning tasks to prevent overfitting Srivastava et al. (2014).
As the volume of digital documents increases significantly with time, organizing them manually is becoming an increasingly inconvenient task. Topic models can learn a thematic structure from a set of documents in an unsupervised manner and label the documents with their dominant topics, which makes them highly valuable in this area Hall et al. (2008); Adhya and Sanyal (2022). However, in traditional topic models, not only is the computational cost of the approximate posterior very high, but even a small change in the modeling assumptions requires re-deriving the inference method. With greater flexibility and scalability than traditional topic models, the class of Neural Topic Models (NTMs) aims to leverage the potential of neural networks using the AEVB Kingma and Welling (2014) based inference technique. Following Zhao et al. (2021), we refer to this class of models as VAE-NTMs, where the training objective is to maximize the log-likelihood of the reconstruction of the input document while minimizing the KL-divergence of the learned posterior distribution over the latent space from a known prior distribution.
An earlier study by Ha et al. (2019) of the dropout effect on two traditional topic models LDA Blei et al. (2003) and BTM Yan et al. (2013) shows that the correct choice of the dropout rate not only decreases the learning time of the models but also significantly improves the predictive performance and generalization for short texts. However, the study does not consider neural topic models.
In this work, we propose the use of dropout on VAE-NTMs as a hyperparameter in order to achieve much better performance in terms of topic coherence, topic diversity, and topic quality. We test this proposition on a range of standard VAE-NTM architectures. To the best of our knowledge, there has been no other study focusing specifically on the use of dropout in neural topic models. We have made our analysis publicly available1.
Footnote 1: [https://github.com/AdhyaSuman/NTMs_Dropout_Analysis](https://github.com/AdhyaSuman/NTMs_Dropout_Analysis)
In summary, our contributions are as follows:
1. We comprehensively show both quantitatively and qualitatively that topic quality undergoes a massive improvement with either very low or zero dropout settings in both the encoder and the decoder of a VAE-NTM.
2. We show that for VAE-NTMs the systematic choice of low dropout rates can lead to a
significant improvement in downstream tasks like document classification.
3. We study the dependence of dropout on the length of the input documents.
4. We present an empirical analysis for the increase in performance of VAE-NTMs with a decrease in dropout.
## 2 Task Formulation
Given a corpus \(\{D_{1},D_{2},\dots,D_{N}\}\) of \(N\) documents with vocabulary \(\{w_{1},w_{2},...,w_{V}\}\) of \(V\) words, topic models describe a document \(D_{i}\) as a distribution over \(K\) topics \(\{\mathbf{\beta}_{1},\mathbf{\beta}_{2},...,\mathbf{\beta}_{K}\}\), where an individual topic \(\mathbf{\beta}_{k}\) is a distribution over \(V\)-words.
### VAE Framework in Neural Topic Models
Given an input sample \(\mathbf{x}\), a VAE encoder learns the approximate posterior distribution \(q_{W}(\mathbf{z}|\mathbf{x})\) where \(W\) is the encoder's weights that are to be learned and \(\mathbf{z}\) is a latent variable. Given a sample \(\mathbf{z}\thicksim q_{W}(\mathbf{z}|\mathbf{x})\), the VAE decoder learns the likelihood \(p_{W^{\prime}}(\mathbf{x}|\mathbf{z})\) where \(W^{\prime}\) is the learnable decoder's weights.
In VAE-NTMs the input to the encoder is a document representation (e.g., bag-of-words) \(\mathbf{x}_{V\times 1}\). The encoder then returns the Gaussian parameters \(\big{(}\mathbf{\mu}_{K\times 1},\mathbf{\Sigma}_{K\times 1}\big{)}\) that approximate the true posterior where \(K\) is the dimension of latent (topic) space, \(\mathbf{\mu}_{K\times 1}\) is the mean, and \(\mathbf{\Sigma}_{K\times 1}\) is the diagonal covariance matrix. Upon taking these Gaussian parameters as input, the decoder samples a latent representation \(\mathbf{z}_{K\times 1}\) from \(\mathcal{N}(\mathbf{\mu}_{K\times 1},\mathbf{\Sigma}_{K\times 1})\) using the reparametrization trick as follows:
\[\mathbf{z}_{K\times 1}=\mathbf{\mu}_{K\times 1}+\mathbf{\Sigma}_{K\times 1}^{\frac{1} {2}}\odot\mathbf{\epsilon}_{K\times 1}\]
where \(\mathbf{\epsilon}_{K\times 1}\thicksim\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(\odot\) represents the element-wise product. Then the document-topic distribution vector (\(\mathbf{\theta}_{K\times 1}\)) is generated such that \(\mathbf{\theta}_{K\times 1}=\sigma(\mathbf{z}_{K\times 1})\) where \(\sigma(\cdot)\) is a softmax function. The input document-term distribution vector is reconstructed with the product of \(\mathbf{\theta}_{K\times 1}\) and \(\mathbf{\beta}_{K\times V}\), the topic-word matrix, in the following manner:
\[\tilde{\mathbf{x}}_{V\times 1}=\begin{cases}\mathbf{\beta}^{T}\mathbf{\theta}&\text{if $ \mathbf{\beta}$ is normalized.}\\ \sigma\left(\mathbf{\beta}^{T}\mathbf{\theta}\right)&\text{if $\mathbf{\beta}$ is unnormalized.}\end{cases}\]
As shown in Figure 1, in the encoder, dropout is applied with probability \(E_{p}\) on the output of the hidden layer(s) of the multi-layer feed-forward neural network (FFNN). This output is then fed to two separate layers to get the approximate posterior \(q_{W}(z|x)\). In the decoder, dropout is applied with probability \(D_{p}\) on the document-topic distribution vector (\(\mathbf{\theta}_{K\times 1}\)), just before the reconstruction process.
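To make the two dropout sites concrete, a minimal PyTorch-style sketch of a VAE-NTM is given below. It is our own schematic illustration, not the exact OCTIS implementation used in the experiments: layer sizes and activation choices are placeholders, and the prior is taken to be standard normal for brevity (ProdLDA and CTM use a Laplace-approximated Dirichlet prior). The sketch only marks where \(E_{p}\) (hidden-layer output of the encoder) and \(D_{p}\) (topic proportions \(\mathbf{\theta}\)) enter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAENTM(nn.Module):
    """Minimal VAE neural topic model with encoder dropout E_p and decoder dropout D_p."""

    def __init__(self, vocab_size, num_topics, hidden=100, enc_drop=0.0, dec_drop=0.0):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
        )
        self.enc_dropout = nn.Dropout(enc_drop)   # E_p: on the hidden-layer output
        self.fc_mu = nn.Linear(hidden, num_topics)
        self.fc_logvar = nn.Linear(hidden, num_topics)
        self.dec_dropout = nn.Dropout(dec_drop)   # D_p: on the topic proportions theta
        self.beta = nn.Parameter(torch.randn(num_topics, vocab_size))  # topic-word matrix

    def forward(self, bow):
        h = self.enc_dropout(self.encoder(bow))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparametrization trick
        theta = self.dec_dropout(F.softmax(z, dim=-1))            # document-topic vector
        x_recon = F.softmax(theta @ self.beta, dim=-1)            # unnormalized beta case
        return x_recon, mu, logvar

def elbo_loss(bow, x_recon, mu, logvar):
    """Negative ELBO: reconstruction term plus KL divergence to a N(0, I) prior."""
    recon = -(bow * torch.log(x_recon + 1e-10)).sum(dim=-1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1)
    return (recon + kl).mean()
```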
### Task Description
The goal is to measure the effect that dropout has on the _performance_ of VAE-NTMs by varying the dropout rates from \(0.0\) to \(0.6\) in steps of \(0.1\), in both the encoder and the decoder. We have chosen \(0.6\) as the upper bound of the dropout rates for our experiments because it is the highest dropout rate used in any VAE-NTMs that we have considered as a baseline in this work. We measure performance using: _topic coherence_, _topic diversity_, and _topic quality_. We use NPMI Lau et al. (2014); Roder et al. (2015) to measure topic coherence. Topic diversity Dieng et al. (2020) shows the uniqueness of topics. Topic quality is the product of coherence and diversity Dieng et al. (2020). As the automated topic model measures do not always accurately capture the quality of the topics Hoyle et al. (2021), we also perform a manual evaluation of the topics and study their predictive performance on the document classification task.
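For concreteness, the two auxiliary measures can be computed as in the short sketch below; it is an illustrative snippet following the definitions of Dieng et al. (2020), not the OCTIS metric classes used in our experiments. Diversity is the fraction of unique words among the top words of all topics, and quality multiplies an externally computed NPMI coherence by this diversity.

```python
def topic_diversity(topics, topk=25):
    """Fraction of unique words among the top-k words of all topics (Dieng et al., 2020)."""
    top_words = [w for topic in topics for w in topic[:topk]]
    return len(set(top_words)) / len(top_words)

def topic_quality(npmi, topics, topk=25):
    """Topic quality = coherence (NPMI) x diversity."""
    return npmi * topic_diversity(topics, topk)

# Toy example with two partially overlapping topics
topics = [["game", "team", "player", "season", "coach"],
          ["player", "league", "score", "goal", "team"]]
print(topic_diversity(topics, topk=5))    # 0.8 -> 8 unique words out of 10
print(topic_quality(0.12, topics, topk=5))
```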
## 3 Empirical Study
We perform all experiments in OCTIS Terragni et al. (2021), which is an integrated framework for topic modeling.
### Datasets
We have used four publicly available datasets in our experiments. Among them, **20NG2** and **BBC
Figure 1: VAE framework in neural topic models.
(Greene and Cunningham, 2006) are already available in OCTIS in pre-processed format, while we further added the **Wiki40B** Guo et al. (2020) and **AllNews** Zhu et al. (2018) datasets. Statistics of these datasets are given in Table 1. Each corpus is split into train/valid/test sets in the ratio \(70\colon 15\colon 15\). The validation set is used for early stopping.
### Models
We use the following three VAE-NTMs: **CTM** Bianchi et al. (2021), which incorporates contextualized document embeddings into the neural topic model; **ProdLDA** Srivastava and Sutton (2017), which, unlike LDA, relaxes the simplex constraint on the topic-word matrix; and **ETM** Dieng et al. (2020), which incorporates word embeddings in topic modeling to increase robustness in the presence of stopwords.
For each of the four datasets, we compute the dropout rate that optimizes the topic quality of each model on that dataset. We train all three topic models for topic-count \(K\in\{20,50,100\}\) with 30 epochs while keeping all hyperparameter values, except dropout, the same as in their original implementations. To ensure robustness, we average scores over 10 independent runs of each model. For comparison, we use the default dropout rates for each model as mentioned in the original papers that proposed the corresponding model. In Table 2, we show the default and the optimal dropout rates.
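The sweep itself can be organised as in the sketch below, where `train_and_evaluate` is a hypothetical helper standing in for the actual OCTIS training and evaluation calls: it trains one model with the given encoder and decoder dropout rates and returns its topic quality.

```python
import itertools
import numpy as np

DROPOUT_RATES = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
TOPIC_COUNTS = [20, 50, 100]
N_RUNS = 10

def grid_search(train_and_evaluate, dataset):
    """Return the (E_p, D_p) pair maximizing topic quality, averaged over runs and K."""
    best_pair, best_score = None, -np.inf
    for e_p, d_p in itertools.product(DROPOUT_RATES, DROPOUT_RATES):
        scores = [train_and_evaluate(dataset, num_topics=k, enc_drop=e_p,
                                     dec_drop=d_p, seed=run)
                  for k in TOPIC_COUNTS for run in range(N_RUNS)]
        score = float(np.mean(scores))
        if score > best_score:
            best_pair, best_score = (e_p, d_p), score
    return best_pair, best_score
```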
### Results and Analysis
#### 3.3.1 Quantitative Evaluation of Topic Quality
In Figure 2, we compare, for each dataset and each model, the topic quality and the NPMI respectively between the dropout-optimized model that gives the highest topic quality and the model with default dropout rates as mentioned in Table 2.
On **20NG**, the topic quality score for (CTM, ProdLDA, ETM) is improved from \((0.056,-0.051,0.004)\) to \((0.065,0.039,0.009)\) by optimizing the dropout rate. For CTM, the increase in performance is around \(16.07\%\) whereas for the other two models it is over \(100\%\). This is because the original implementation of CTM already uses a relatively low dropout rate, i.e.,
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Dataset** & **\#Docs** & **Avg. \#words** & **Vocab** \\ \hline \hline
**20NG** & \(16309\) & \(48.02\) & \(1612\) \\
**BBC** & \(2225\) & \(120.12\) & \(2949\) \\
**Wiki40B** & \(24774\) & \(541.08\) & \(2000\) \\
**AllNews** & \(49754\) & \(229.53\) & \(2000\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of the used datasets.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
 **Model** & **Default** & **20NG** & **BBC** & **Wiki40B** & **AllNews** \\ \hline \hline
 **CTM** & \((0.2,\ 0.2)\) & \((0.0,\ 0.0)\) & \((0.0,\ 0.0)\) & \((0.2,\ 0.1)\) & \((0.0,\ 0.1)\) \\ \hline
 **ProdLDA** & \((0.6,\ 0.6)\) & \((0.1,\ 0.1)\) & \((0.0,\ 0.0)\) & \((0.1,\ 0.1)\) & \((0.1,\ 0.1)\) \\ \hline
 **ETM** & \((0.5,\ 0.0)\) & \((0.0,\ 0.0)\) & \((0.1,\ 0.0)\) & \((0.0,\ 0.0)\) & \((0.1,\ 0.0)\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: For each dataset, the optimal dropout rates of all the models, chosen for the highest topic quality, are given in the \((E_{p},D_{p})\) format in the third through last columns. The default dropout rate of each model is given in the second column.
Figure 2: Topic quality and NPMI for different topic models with optimal dropout rate and default dropout rate.
\(0.2\), for both the encoder and the decoder. The other two models show a significant increase in performance due to their large dropout in the baseline models.
Figure 3 shows that, on the **20NG** dataset, the VAE-NTMs generally achieve better topic quality when the dropout rates of both the encoder and the decoder are kept at zero or close to it, i.e., values such as \(\{0.0,0.1\}\). Similar results are found for the other datasets. Based on these observations, topic quality is found to decrease as the dropout rates in the encoder and decoder increase.
#### 3.3.2 Qualitative Evaluation of Topic Quality
To qualitatively evaluate the models, we trained all of them for a topic count of \(100\) on the **20NG** dataset. We then aligned the topics for each pair of _(optimal-dropout model, default-dropout model)_ for all three different models. We followed a two-step strategy for topic alignment. For a given pair of models, namely, one with optimal dropout and another with default dropout, with topic lists \(P\) and \(Q\), respectively, we first construct a similarity matrix of the topic lists using Rank-biased Overlap (RBO) (Webber et al., 2010), which computes the similarity between two ordered lists by taking into consideration the rank of the individual elements. For example, for \(100\) topics, we get a matrix \(\mathbf{A}=(a_{ij})_{1\leq i,j\leq 100}\) such that \(a_{i,j}=\mathrm{RBO}\left(P[i],Q[j]\right)\). The RBO score lies in \([0,1]\), where \(0\) represents no overlap and \(1\) implies exact overlap. In the final step, we iteratively select the pair of topics for which the similarity score is maximum and simultaneously exclude these two topics from further consideration, i.e., if \(\left(P[i_{1}],Q[j_{1}]\right)\) and \(\left(P[i_{2}],Q[j_{2}]\right)\) are two selected pairs then \(\left(i_{1}\neq i_{2}\wedge j_{1}\neq j_{2}\right)\).
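A compact version of this two-step alignment is sketched below; `rbo_similarity` is a placeholder for any rank-biased overlap implementation, and the greedy matching follows the procedure just described.

```python
import numpy as np

def align_topics(P, Q, rbo_similarity):
    """Greedily pair topics of P and Q by decreasing RBO similarity, without reuse."""
    n = len(P)
    A = np.array([[rbo_similarity(P[i], Q[j]) for j in range(n)] for i in range(n)])
    pairs = []
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(A), A.shape)
        pairs.append((i, j, A[i, j]))
        A[i, :] = -1.0   # exclude topic i of P from further consideration
        A[:, j] = -1.0   # exclude topic j of Q from further consideration
    return pairs
```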
In Table 3 we show the top words from _aligned_ topics of all the models. '\(\ast\)' marked models have dropout optimized to give the highest topic quality while others use the default dropout rates as mentioned in Table 2. We see that dropout-optimized models output more interpretable topics.
#### 3.3.3 Effect of Dataset Length
Among the input datasets on which we have experimented, the **20NG** dataset contains relatively short texts, while the others contain longer texts. Ha et al. (2019) find that their dropout methods are not effective on long texts. In contrast, we see that the performance of all VAE-NTMs decreases uniformly with the increase in the dropout rate, irrespective of the text length of the dataset.
#### 3.3.4 Document Classification
We test the predictive performance of the topics produced by the models on a document classification task. We train the models on **20NG** and **BBC** corpora for \(K\) topics using the training subset. We represent each document as a \(K\)-dimensional document-topic vector and train an SVM, which is then tested on the test subset. We average the
Figure 4: Accuracy for different topic models with optimal dropout and default dropout from Table 2.
accuracy scores over \(K\in\{20,50,100\}\). Figure 4 shows that accuracy increases when we use the optimized dropout rates.
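A minimal sketch of this evaluation is given below. It assumes that the trained topic model already provides the \(K\)-dimensional document-topic vectors and class labels as arrays (`theta_train`, `y_train`, `theta_test`, `y_test`); these names and the linear-kernel setting are illustrative assumptions, not necessarily the exact configuration used in our runs.

```python
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def topic_vector_accuracy(theta_train, y_train, theta_test, y_test):
    """Train an SVM on document-topic vectors and report test accuracy."""
    clf = SVC(kernel="linear")
    clf.fit(theta_train, y_train)
    return accuracy_score(y_test, clf.predict(theta_test))
```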
## 4 Theoretical Understanding of Results
Our experiments show that by tuning the dropout carefully, we can achieve a significant improvement in the performance of VAE-NTMs. Therefore, we argue that the dropout rate should be treated as an important hyperparameter and carefully selected based on the choice of the model as well as the dataset, especially in the case of VAE-NTMs. More precisely, in most cases, low dropout rates in the encoder and the decoder lead to higher performance than that achieved for higher dropout rates.
Standard dropout and other types of dropout have been extensively used in supervised learning techniques (Srivastava et al., 2014; Wu and Gu, 2015; Tompson et al., 2015; Devries and Taylor, 2017; Cai et al., 2019). The main purpose of using dropout in the supervised scenario is to introduce noise during training so that the model can cope with outliers in the testing phase. The drop in performance with high dropout that we see in our experiments is perhaps due to the fact that we are trying to learn a generative model of the data. Dropout makes the model robust against perturbations in the input data and thereby also prevents it from learning the characteristics of the input distribution accurately. This is probably why we see a drop in topic coherence and quality. In the case of document classification, if the topic model is trained with a high dropout, the document-topic vectors are of poor quality and the classifier gets trained on these vectors; this results in poor accuracy on the test documents. This setting is different from the usual supervised learning of neural classifiers where dropout is introduced directly in the classifier to prevent overfitting. We intend to analyze these aspects in more depth in the future.
## 5 Conclusion
We present a detailed study of the effect of the dropout rate on VAE-NTMs. We find that the model performance generally reduces with the increase in dropout rate in the encoder as well as the decoder.
## Limitations
The following limitations are known and should be considered when applying the results of this work or relying on them in future studies: (1) Other variants of dropout can be applied to the VAE-NTMs. (2) Analysis of the dropout effect may be done for other VAE-NTMs as well. (3) Other downstream tasks may be formulated for further analysis.
\begin{table}
\begin{tabular}{c p{341.4pt}} \hline \hline
**Model** & **Topics** \\ \hline \hline
**CTM* (\(0.0,0.0\))** & **monitor, card, video, port, vga, apple, connector, serial, slot, output** \\
**TTAM, weapon, dangerous, military, license**, _file, state_, **gun, police**, _issue_ \\
**CRHSISI, truth, scripture**, _exist_, **belief, accept**, _understand_, _word_, **human, doctrine** \\ \hline
**CTM** (\(0.2,0.2\)) & **card, monitor, video,**_offer, sale, upgrade_, **mouse, vga, port, parallel** \\
**TTAM, dangerous, license, weapon, section, file, division, device, manufacture, carry interpretation, truth, scripture, christian, agree, moral, understand, human, faith, claim_ \\ \hline \hline
**ProdLDA* (\(0.1,0.1\))** & **window, driver**, _mode_, **run, mouse, session, server, program, manager, install** \\
**CRHSI, online, buy, company, vehicle,** _make_, **brake, tire, dealer, road** \\
**CRHSI, online, output, circuit, noise, power, switch, wire, connector**, _degree_ \\ \hline
**ProdLDA** (\(0.6,0.6\)) & _line, window, gun_, **read, space, run**, _statement_, **datum**, **drive**, _make_ \\
**CRHSI, online, battery, engine**, _homosexual, assault, reason, place, single, large, attempt_ \\
**CRHSI, online, dynamic, signal**, _usual, label, hour, bio, leg, bullet, hundred_ \\ \hline \hline
**ETM* (\(0.0,0.0\))** & **version, software, program, file**, _include_, **image, application**, _set_, **server**, _support_ \\
**TTAM** (\(0.0,0.0\)) & **armenian, turkish, village, people, israeli, population, muslim, kill, russian, genocide** \\
**CRHSI, online, online, system, run, work, window, problem, include, set, good, support, information** \\ \hline
**ETM** (\(0.5,0.0\)) & **file, application**, _set_, **program**, _support_, **image, display**, _list_, **version, bit** \\
**CRHSI, online, system, run, work, window, problem, include, set, good, support, information** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Some selected topics among 100 topics from **20NG**. ‘*’ indicates models with optimal dropout. The dropout rate is mentioned in the \((E_{p},D_{p})\) format. The more related words in a topic are highlighted in bold while less related ones are italicized.
## Acknowledgments
This work is partially supported by the SERB-DST Project CRG/2021/000803 sponsored by the Department of Science and Technology, Government of India at Indian Association for the Cultivation of Science, Kolkata.
|
2305.18101 | Non-disjoint strong external difference families can have any number of
sets | Strong external difference families (SEDFs) are much-studied combinatorial
objects motivated by an information security application. A well-known
conjecture states that only one abelian SEDF with more than 2 sets exists. We
show that if the disjointness condition is replaced by non-disjointness, then
abelian SEDFs can be constructed with more than 2 sets (indeed any number of
sets). We demonstrate that the non-disjoint analogue has striking differences
to, and connections with, the classical SEDF and arises naturally via another
coding application. | Sophie Huczynska, Siaw-Lynn Ng | 2023-05-29T14:13:12Z | http://arxiv.org/abs/2305.18101v1 | # Non-disjoint strong external difference families can have any number of sets
###### Abstract
Strong external difference families (SEDFs) are much-studied combinatorial objects motivated by an information security application. A well-known conjecture states that only one abelian SEDF with more than 2 sets exists. We show that if the disjointness condition is replaced by non-disjointness, then abelian SEDFs can be constructed with more than 2 sets (indeed any number of sets). We demonstrate that the non-disjoint analogue has striking differences to, and connections with, the classical SEDF and arises naturally via another coding application.
## 1 Introduction
Difference families are long-studied combinatorial objects, with many applications. A family of (not necessarily disjoint) \(k\)-sets in a group \(G\) is a difference family if the multiset of pairwise internal differences (between distinct elements of each set) comprises each non-identity element of \(G\)\(\lambda\) times. External difference families (EDFs) were introduced in [10], motivated by the information security problem of constructing optimal secret sharing schemes; the mathematical link between EDFs and algebraic manipulation detection (AMD) codes was formalized in [11]. EDFs are a generalisation of difference families, in which the pairwise external differences (between distinct sets) are considered; the inter-set differences correspond to possible manipulations of an encoded message. The sets are disjoint (to ensure unique decoding) and the multiset of external differences comprises each non-identity element of \(G\)\(\lambda\) times (the identity is ignored as it would correspond to "no manipulation").
An EDF is an optimal example of a weak AMD code ([11]). There is a stronger security model (strong AMD code) which motivates the definition of a strong external difference family (SEDF) ([11]). An SEDF is an EDF such that, for any set in the family, the pairwise differences between elements of this set and those of all other sets in the family comprise each non-identity element \(\lambda\) times. An SEDF is an EDF, but not every EDF is an SEDF.
Many constructions for SEDFs exist in the combinatorial literature (see [1, 3, 5, 13]). Surprisingly, all known infinite families have two sets, and just one example is known with more than two sets (having 11 sets in a group of size 243, obtained independently in [5] and [13]). It is conjectured in [6] that no other abelian SEDF exists with more than two sets; many theoretical results and computational searches support this (eg [5, 6, 7, 8]). It is also notable that the only known infinite families with fixed \(\lambda\) have \(\lambda=1\).
In this paper we introduce the non-disjoint analogue of the SEDF (the sets are no longer disjoint and the multiset condition requires \(\lambda\) occurrences of every group element), and demonstrate that an abelian infinite family exists whose members can have any number of sets. We also obtain infinite families of two-set non-disjoint SEDFs with any fixed frequency value \(\lambda\in\mathbb{N}\). For \(\lambda=1\), we show that our two-set construction corresponds to a known two-set construction for classical SEDFs, and indicate why our non-disjoint constructions do not yield classical SEDFs with \(\lambda>1\) or more than two sets.
The type of non-disjoint SEDFs which we construct satisfies a stronger condition on their external difference properties, namely that the multiset of external differences between any pair of distinct sets in the family comprises each element of \(G\) precisely \(\lambda\) times. We call these _pairwise strong external difference families_ (PSEDFs). Each PSEDF is a non-disjoint SEDF but not vice versa. We demonstrate that PSEDFs are useful and indeed optimal for a different application in communications theory, that of optical orthogonal codes and conflict-avoiding codes ([2], [12]). Here, the external differences between sets correspond to cross-correlation of binary sequences; there is no requirement for disjointness and the identity is treated in the same way as any other group element (since collision with the zero-shift of another sequence is just as significant as collision with any other shift).
## 2 Background
All groups are written additively. For two sets \(A,B\) in a group \(G\), define the multisets \(\Delta(A,B)=\{a-b:a\in A,b\in B\}\) and \(\Delta(A)=\{a_{1}-a_{2}:a_{1},a_{2}\in A,a_{1}\neq a_{2}\}\). The notation \(\lambda A\) denotes the multiset consisting of \(\lambda\) copies of a set \(A\). A translate of a set \(A\) is denoted by \(t+A=\{t+a:a\in A\}\).
The existing definition of an SEDF is as follows.
**Definition 2.1**.: _Let \(G\) be a group of order \(v\) and let \(m>1\). A family of disjoint \(k\)-sets \(\{A_{1},\ldots,A_{m}\}\) in \(G\) is a \((v,m,k,\lambda)\)-SEDF if, for each \(1\leq i\leq m\), the multiset equation \(\bigcup_{j\neq i}\Delta(A_{i},A_{j})=\lambda(G\setminus\{0\})\) holds._
**Example 1**.: \((\{0,1,\ldots,k-1\},\{k,2k,\ldots,k^{2}\})\) _is a \((k^{2}+1,2,k,1)\)-SEDF in \(\mathbb{Z}_{k^{2}+1}\) ([11])._
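The multiset conditions in these definitions are easy to verify by brute force for small parameters. The following sketch (our own illustration, not part of the constructions in the literature) checks Definition 2.1 and confirms Example 1 for a chosen \(k\):

```python
from collections import Counter

def external_differences(A, B, v):
    """Multiset Delta(A, B) of differences a - b mod v."""
    return Counter((a - b) % v for a in A for b in B)

def is_classical_sedf(sets, v):
    """Check Definition 2.1: for each i, the union over j != i of Delta(A_i, A_j)
    avoids 0 and hits every nonzero element of Z_v the same number of times."""
    for i, Ai in enumerate(sets):
        diffs = Counter()
        for j, Aj in enumerate(sets):
            if j != i:
                diffs += external_differences(Ai, Aj, v)
        if diffs[0] != 0 or len({diffs[x] for x in range(1, v)}) != 1:
            return False
    return True

k = 4
A1 = list(range(k))                      # {0, 1, ..., k-1}
A2 = [k * t for t in range(1, k + 1)]    # {k, 2k, ..., k^2}
print(is_classical_sedf([A1, A2], k * k + 1))   # True
```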
The following is a summary of known existence results for abelian SEDFs.
**Proposition 2.2**.: _A \((v,m,k,\lambda)\)-SEDF exists in \(G\) in the following cases:_
1. \((v,m,k,\lambda)=(k^{2}+1,2,k,1)\)_,_ \(G=\mathbb{Z}_{k^{2}+1}\) _([11]);_
2. \((v,m,k,\lambda)=(v,2,\frac{v-1}{2},\frac{v-1}{4})\)_,_ \(v\equiv 1\pmod{4}\)_,_ \(G\) _contains a_ \((v,\frac{v-1}{2},\frac{v-5}{4},\frac{v-1}{4})\) _partial difference set (__[_4_]__);_
3. \((v,m,k,\lambda)=(q,2,\frac{q-1}{4},\frac{q-1}{16})\)_,_ \(q=16t^{2}+1\) _a prime power,_ \(t\in\mathbb{Z}\)_,_ \(G=(GF(q),+)\) _(__[_1_]__);_
4. \((v,m,k,\lambda)=(p,2,\frac{p-1}{6},\frac{p-1}{36})\)_,_ \(p=108t^{2}+1\) _is prime,_ \(t\in\mathbb{Z}\)_,_ \(G=(GF(p),+)\) _(__[_1_]__)._
There are very many non-existence results for SEDFs in abelian groups with more than two sets ([6]); we summarize some key ones:
**Proposition 2.3**.: _Let \(G\) be abelian. No \((v,m,k,\lambda)\)-SEDF with \(m>2\) exists if:_
1. \(m\in\{3,4\}\) _(__[_8_]__); or_
2. \(v\) _is a product of distinct primes and_ \(\gcd(mk,v)=1\) _(__[_1_]__); or_
3. \(G\) _is cyclic of prime power order_ _[_6_]__; or_
4. \(v\) _is a product of at most three (not necessarily distinct) primes, except possibly when_ \(G=C_{p}^{3}\) _and_ \(p\) _is a prime greater than_ \(3\times 10^{12}\)___[_6_]__; or_
5. \(G\) _has order_ \(p^{2}\) _(__[_7_]__)._
Other non-existence conditions for \(m>2\) include: \(\lambda\geq k\) ([4]); \(\lambda>1\) and \(\frac{\lambda(k-1)(m-2)}{(\lambda-1)k(m-1)}>1\) ([4]); \(k|v\) ([8]); \(\gcd(k,v-1)=1\) ([5]) and \(v-1\) is squarefree ([3]). In [7], a result is given for groups of order \(pq\) for \(p\) sufficiently large.
We adapt the classical definition of SEDF in Definition 2.1 by removing the disjointness condition, so the identity becomes a valid external difference. For consistency we require that the identity occurs at the same frequency as the other group elements. This in fact means the sets _must_ be non-disjoint, so this structure is genuinely distinct from the classical SEDF, i.e. it is the non-disjoint analogue rather than a generalisation.
**Definition 2.4**.: _Let \(G\) be a group of order \(v\) and let \(m>1\). We say that a family of \(k\)-sets \(\{A_{1},\ldots,A_{m}\}\) in \(G\) is a non-disjoint \((v,m,k,\lambda)\)-SEDF if, for each \(1\leq i\leq m\), the multiset equation \(\bigcup_{j\neq i}\Delta(A_{i},A_{j})=\lambda G\) holds._
This may be viewed as mathematically more "natural" than the existing SEDF definition, as it treats every element of the group equally.
A non-disjoint SEDF consisting of two sets \(\{A,B\}\) satisfies the condition that \(\Delta(A,B)=\Delta(B,A)=\lambda G\). This motivates the following definition of a non-disjoint structure with a stronger condition on its external differences.
**Definition 2.5**.: _Let \(G\) be a group of order \(v\) and let \(m>1\). We say a family of \(k\)-sets \(\{A_{1},\ldots,A_{m}\}\) in \(G\) is a \((v,m,k,\lambda)\)-PSEDF if, for each \(1\leq i\neq j\leq m\), the multiset \(\Delta(A_{i},A_{j})\) comprises \(\lambda\) copies of each element of \(G\)._
**Theorem 2.6**.:
1. \(A\) \((v,m,k,\lambda)\)_-PSEDF is a non-disjoint_ \((v,m,k,(m-1)\lambda)\)_-SEDF._
2. _A non-disjoint_ \((v,2,k,\lambda)\)_-SEDF is a_ \((v,2,k,\lambda)\)_-PSEDF._
**Lemma 2.7**.:
1. _For a non-disjoint_ \((v,m,k,\lambda)\)_-SEDF,_ \(\lambda v=k^{2}(m-1)\)_._
2. _For a_ \((v,m,k,\lambda)\)_-PSEDF,_ \(\lambda v=k^{2}\)_._
**Example 2**.: _In \(\mathbb{Z}_{18}\), the sets \(\{0,1,2,3,4,5\}\) and \(\{0,1,6,7,12,13\}\) form an \((18,2,6,2)\)-PSEDF which is a non-disjoint \((18,2,6,2)\)-SEDF._
A classical \((v,m,k,1)\)-SEDF exists in an abelian group if and only if either \(m=2\) and \(v=k^{2}+1\), or \(k=1\) and \(m=v\) [11]. An analogous result holds for non-disjoint SEDFs (for these, by Lemma 2.7(i), \(k=1\) cannot occur).
**Theorem 2.8**.: _In an abelian group \(G\), a non-disjoint \((v,m,k,1)\)-SEDF exists if and only if \(m=2\) and \(v=k^{2}\)._
Proof.: Suppose \(A_{1},\ldots,A_{m}\) is a non-disjoint \((v,m,k,1)\)-SEDF with \(m\geq 3\). We have \(k\geq 2\). Then \(\cup_{i\neq j}\Delta(A_{i},A_{j})=mG\); removing all differences to/from \(A_{1}\), \(\cup_{1\neq i\neq j\neq 1}\Delta(A_{i},A_{j})=(m-2)G\). Let \(x,y\in A_{1}\), \(x\neq y\). Then \(x-y\) must occur in \(\Delta(A_{i},A_{j})\) for some \(1\neq i\neq j\neq 1\), i.e. \(x-y=u-v(\neq 0)\) for some \(u\in A_{i}\), \(v\in A_{j}\), \(i\neq j\). But then \(x-u=y-v\) occurs twice in \(\cup_{k\neq 1}\Delta(A_{1},A_{k})\), a contradiction. Hence \(m=2\). Theorem 3.1 establishes the reverse.
When working in cyclic groups, we will use the following well-known correspondence with binary sequences (for background on sequences, see [9, Section 5.4]). A _binary sequence_ of length \(v\) is a sequence \(X=x_{0}\ldots x_{v-1}\) where each \(x_{i}\in\{0,1\}\). We also denote this by \(X=(x_{i})_{i=0}^{v-1}\). We call a contiguous subsequence \(x_{\delta}x_{\delta+1}\ldots x_{\delta+r-1}\) of \(X\) a _substring_ of \(X\) of length \(r\). We take indices modulo \(v\) unless otherwise stated. The _weight_ of a binary sequence is the number of occurrences of the symbol \(1\) in the sequence. We call \(X+s=(x_{i+s})_{i=0}^{v-1}\) a (cyclic) shift of \(X\) by \(s\) places.
**Definition 2.9**.:
1. _For a_ \(k\)_-subset_ \(A\) _of_ \(\mathbb{Z}_{v}=\{0\,\ldots,v-1\}\)_, we associate a binary sequence_ \(X_{A}=(x_{i})_{i=0}^{v-1}\) _of weight_ \(k\) _whose_ \(i\)_th entry_ \(x_{i}\) _is_ \(1\) _if_ \(i\in A\) _and_ \(0\) _if_ \(i\not\in A\)_._
2. _For a binary sequence_ \(X_{A}=(x_{i})_{i=0}^{v-1}\) _of weight_ \(k\)_, we associate a_ \(k\)_-subset_ \(A\) _of_ \(\mathbb{Z}_{v}\)_, comprising all elements_ \(i\in\{0,\ldots,v-1\}\) _such that_ \(x_{i}=1\)_._
In \(\mathbb{Z}_{7}\), the set \(A=\{1,2,4\}\) corresponds to the sequence \(0110100\).
Using this correspondence, we have the following useful relationship.
**Proposition 2.10**.: _Let \(X_{A}=(x_{i})_{i=0}^{v-1}\), \(X_{B}=(y_{i})_{i=0}^{v-1}\) (\(x_{i},y_{i}\in\{0,1\}\)), with indices taken modulo \(v\), be the sequences corresponding to \(k\)-subsets \(A\) and \(B\) in \(\mathbb{Z}_{v}\). Then:_
1. _For_ \(\delta\in\{0,\ldots,v-1\}\)_,_ \(\sum_{t=0}^{v-1}x_{t}y_{t+\delta}\) _equals the number of occurrences of_ \(\delta\) _in_ \(\Delta(B,A)\)_._
2. \(\sum_{t=0}^{v-1}\!\!x_{t}y_{t+\delta}=\lambda\) _for all_ \(0\leq\delta\leq v-1\) _if and only if_ \(\Delta(B,A)=\lambda\mathbb{Z}_{v}\)_._
Proof.: For fixed \(\delta\in\{0\,\ldots,v-1\}\) the sum \(\sum_{t=0}^{v-1}x_{t}y_{t+\delta}\) counts the number of positions \(t\) such that \(x_{t}=1=y_{t+\delta}\). This is the number of \(t\in\mathbb{Z}_{v}\) such that \(t\in A\) and \(t+\delta\in B\), i.e. the number of times \(\delta\) occurs in \(\Delta(B,A)\).
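A small numerical check of this correspondence (again only an illustrative sketch with names of our own choosing):

```python
def to_sequence(A, v):
    """Characteristic 0/1 sequence X_A of a subset A of Z_v."""
    members = set(A)
    return [1 if i in members else 0 for i in range(v)]

def correlation(X, Y, delta):
    """sum_t x_t * y_{t+delta} with indices taken modulo v."""
    v = len(X)
    return sum(X[t] * Y[(t + delta) % v] for t in range(v))

A, B, v = [1, 2, 4], [0, 3], 7
X, Y = to_sequence(A, v), to_sequence(B, v)
for delta in range(v):
    occurrences = sum(1 for b in B for a in A if (b - a) % v == delta)
    assert correlation(X, Y, delta) == occurrences   # Proposition 2.10(i)
```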
We end the section with some further sequence terminology. Let \(X\) be a sequence. Following [9], we define a _run_ of \(X\) to be a substring of \(X\) consisting of consecutive \(0\)s or consecutive \(1\)s which is neither preceded nor succeeded by the same symbol. We call a run of \(0\)s a _gap_ and a run of \(1\)s a _block_. For example in the length-\(9\) sequence \(111100010\), \(1111\) is a substring which is a block of length \(4\), and \(000\) is a substring which is a gap of length \(3\).
## 3 Constructions for PSEDFs
In this section, we present results for PSEDFs and non-disjoint SEDFs which demonstrate their differences and similarities with classical SEDFs. We use binary sequences. For a sequence \(X=(x_{i})_{i=0}^{v-1}\), indices are taken modulo \(v\).
We first construct infinite two-set families of non-disjoint SEDFs for any \(\lambda\)-value. For classical SEDFs, all known families with fixed \(\lambda\) have \(\lambda=1\).
**Theorem 3.1**.: _If \(v|k^{2}\) and \(k|v\) then_
1. _there exists an infinite family of_ \((v,2,k,\frac{k^{2}}{v})\)_-PSEDFs in_ \(\mathbb{Z}_{v}\)_._
2. _there exists an infinite family of non-disjoint_ \((v,2,k,\frac{k^{2}}{v})\)_-SEDFs in_ \(\mathbb{Z}_{v}\)_._ _Specifically, the sets of the PSEDF (and SEDF) are_ \(A_{X}=\{0,1,2,\ldots,k-1\}\) _and_ \(A_{Y}=\big{\{}ak,ak+1,\ldots,ak+\lambda-1\;:\;a=0,1,\ldots,\big{(}\frac{v}{k}-1 \big{)}\big{\}}\)_._
Proof.: Let \(\lambda=\frac{k^{2}}{v}\). As \(k\leq v\), we have \(\lambda\leq k\). The sequences corresponding to the sets \(A_{X}\), \(A_{Y}\) are:
\[X=\overbrace{11\ldots 1}^{k}\,\overbrace{00\ldots 0}^{v-k},\qquad Y=\overbrace{\underbrace{1\ldots 1}_{\lambda}\underbrace{0\ldots 0}_{k-\lambda}}^{k}\;\overbrace{\underbrace{1\ldots 1}_{\lambda}\underbrace{0\ldots 0}_{k-\lambda}}^{k}\;\ldots\;\overbrace{\underbrace{1\ldots 1}_{\lambda}\underbrace{0\ldots 0}_{k-\lambda}}^{k}.\]
Here \(X\) is a block of length \(k\) then a gap of length \(v-k\), while \(Y\) comprises a block of length \(\lambda\) then a gap of length \(k-\lambda\), repeated \(\frac{v}{k}\) times.
Write \(X=(x_{t})\), \(Y=(y_{t})\). Let \(\delta\in\{0,\ldots,v-1\}\). Consider \(\Sigma_{t=0}^{v-1}x_{t}y_{t+\delta}\). We see that \(\Sigma_{t=0}^{v-1}x_{t}y_{t+\delta}=\Sigma_{t=0}^{k-1}x_{t}y_{t+\delta}\), since \(x_{t}=0\) for \(t=k,\ldots,v-1\), so we need only consider the length-\(k\) substring \(Y_{\delta}=y_{\delta}y_{1+\delta}\ldots y_{k-1+\delta}\) of \(Y\). The value of \(\Sigma_{t=0}^{k-1}x_{t}y_{t+\delta}\) is exactly the number of \(1\)s in \(Y_{\delta}\). By construction, if any length-\(k\) substring \(W\) of \(Y\) starts with some \(s\leq\lambda\)\(1\)s, it is followed by a gap of length \(k-\lambda\), which is then followed by a block of length \(\lambda-s\). If it starts with some \(s\leq k-\lambda\)\(0\)s, it is followed by a block of length \(\lambda\), which is then followed by a gap of length \(k-\lambda-s\). In either case there are always \(\lambda\)\(1\)s in \(W\). Hence \(\Sigma_{t=0}^{v-1}x_{t}y_{t+\delta}=\Sigma_{t=0}^{k-1}x_{t}y_{t+\delta}=\lambda\). This applies to any \(\delta\), and hence by Proposition 2.10, \(\{A_{X},A_{Y}\}\) is a non-disjoint \((v,2,k,\frac{k^{2}}{v})\)-PSEDF.
**Corollary 3.2**.:
1. _For any_ \(a,r\in\mathbb{N}\)_, there exists a_ \((ra^{2},2,ra,r)\)_-PSEDF in_ \(\mathbb{Z}_{ra^{2}}\)_._
2. _Let_ \(\lambda\in\mathbb{N}\)_. Then in_ \(\mathbb{Z}_{\lambda a^{2}}\)_, there exists a non-disjoint_ \((\lambda a^{2},2,\lambda a,\lambda)\)_-SEDF for all_ \(a\in\mathbb{N}\)_._
3. _When_ \(\lambda=1\)_, the sets_ \(\{0,1,\ldots,k-1\},\{0,k,2k,\ldots,(k-1)k\}\) _form a non-disjoint_ \((k^{2},2,k,1)\)_-SEDF in_ \(\mathbb{Z}_{k^{2}}\)_._
Note the similarity between the non-disjoint SEDFs of Corollary 3.2(iii) and the SEDFs of Example 1; this will be explored subsequently.
**Example 3**.:
1. _In_ \(\mathbb{Z}_{9}\)_, the sets_ \(\{0,1,2\}\) _and_ \(\{0,3,6\}\) _form a_ \((9,2,3,1)\)_-PSEDF corresponding to sequences_ \(\{111000000,100100100\}\)_._
2. _In_ \(\mathbb{Z}_{8}\)_, the sets_ \(\{0,1,2,3\}\) _and_ \(\{0,1,4,5\}\) _form an_ \((8,2,4,2)\)_-PSEDF corresponding to sequences_ \(\{11110000,11001100\}\)_._
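The constructions above can also be checked directly; the following sketch (illustrative only, with our own function names) builds the sets of Theorem 3.1 and verifies Definition 2.5 by brute force, reproducing Example 3:

```python
from collections import Counter

def theorem_3_1_sets(v, k):
    """Sets A_X, A_Y of Theorem 3.1 (assumes k | v and v | k^2)."""
    lam = k * k // v
    A_X = list(range(k))
    A_Y = [a * k + i for a in range(v // k) for i in range(lam)]
    return [A_X, A_Y]

def is_psedf(sets, v):
    """Definition 2.5: Delta(A_i, A_j) covers every element of Z_v equally often."""
    for i, Ai in enumerate(sets):
        for j, Aj in enumerate(sets):
            if i != j:
                diffs = Counter((a - b) % v for a in Ai for b in Aj)
                if len({diffs[x] for x in range(v)}) != 1:
                    return False
    return True

print(theorem_3_1_sets(9, 3), is_psedf(theorem_3_1_sets(9, 3), 9))   # Example 3(i)
print(theorem_3_1_sets(8, 4), is_psedf(theorem_3_1_sets(8, 4), 8))   # Example 3(ii)
```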
We have the following generalisation of Theorem 3.1.
**Theorem 3.3**.: _Suppose \(v|k^{2}\) and \(k|v\). Let \(X=(x_{i})_{i=0}^{v-1}\) be defined by \(x_{i}=1\) for \(0\leq i\leq k-1\) and \(x_{i}=0\) for \(k\leq i\leq v-1\). If \(Y=(y_{t})_{t=0}^{v-1}\) is any sequence such that \((y_{t+k})=(y_{t})\) and \(y_{0}\ldots y_{k-1}\) has weight \(\lambda=\frac{k^{2}}{v}\), then \(\{X,Y\}\) corresponds to a \((v,2,k,\frac{k^{2}}{v})\)-PSEDF in \(\mathbb{Z}_{v}\) and a non-disjoint \((v,2,k,\frac{k^{2}}{v})\)-SEDF in \(\mathbb{Z}_{v}\)._
We next show non-disjoint SEDFs exist with any number of sets.
**Theorem 3.4**.: _Let \(N>1\). There exists a \((2^{N},N,2^{N-1},2^{N-2})\)-PSEDF in \(\mathbb{Z}_{2^{N}}\)._
Proof.: For \(1\leq i\leq N\), define the binary sequence \(X_{i}=(x_{t})_{t=0}^{v-1}\) as follows:
\[x_{0}=\cdots=x_{2^{i-1}-1}=1,\] \[x_{2^{i-1}}=\cdots=x_{2^{i}-1}=0,\] \[x_{t}=x_{t+2^{i}},t\geq 2^{i}.\]
So for each \(X_{i}\) we have:
\[X_{i}=\underbrace{1\ldots 1}_{2^{i-1}}\underbrace{0\ldots 0}_{2^{i-1}}\underbrace{ 11\ldots 100\ldots 0}_{2^{i}}\cdots\underbrace{11\ldots 100 \ldots 0}_{2^{i}}.\]
\(X_{i}\) consists of a block of length \(2^{i-1}\), followed by a gap of length \(2^{i-1}\), and this length-\(2^{i}\) substring is repeated \(2^{N-i}\) times. By construction, since \(x_{t}=x_{t+2^{i}}\) for \(t\geq 2^{i}\), every substring of length \(2^{i}\) has an equal number of \(1\)s and \(0\)s, i.e. has weight \(2^{i-1}\). Therefore any substring of length \(r\) where \(2^{i}|r\) has weight \(\frac{r}{2}\). In particular, a substring of length \(2^{j}\), \(j\geq i\), has weight \(2^{j-1}\).
We claim that these sequences \(X_{i}\)\((1\leq i\leq N)\) correspond to sets \(A_{i}\)\((1\leq i\leq N)\) which form a PSEDF in \(\mathbb{Z}_{2^{N}}\) with the given parameters. We determine \(\Delta(A_{i},A_{j})\) for \(1\leq i\neq j\leq N\); by symmetry we may assume \(i<j\).
Let \(X_{i}=(z_{t})\), \(X_{j}=(y_{t})\), with \(i<j\). Let \(\delta\in\{0,1,\ldots,v-1\}\). We determine \(S=\Sigma_{t=0}^{v-1}y_{t}z_{t+\delta}\). Observe that \(S=2^{N-j}\times\Sigma_{t=0}^{2^{j}-1}y_{t}z_{t+\delta}\) since \(z_{t}=z_{t+2^{j}}\) and \(y_{t}=y_{t+2^{j}}\) (\(t\geq 2^{j}\)). Moreover, since \(y_{2^{j-1}}=\cdots=y_{2^{j}-1}=0\), \(S=2^{N-j}\times\Sigma_{t=0}^{2^{j-1}-1}y_{t}z_{t+\delta}\). Since, from above, any substring of \(X_{i}\) of length \(2^{j-1}\) has weight equal to half its length, we have \(S=2^{N-j}\times\frac{2^{j-1}}{2}=2^{N-2}\). Since \(X_{i}\) and \(X_{j}\) (\(i<j\)) were arbitrary (and using symmetry), we have that \(\Delta(A_{j},A_{i})=2^{N-2}\,\mathbb{Z}_{2^{N}}\) for all \(i\neq j\). Hence by Proposition 2.10, the sequences correspond to a \((2^{N},N,2^{N-1},2^{N-2})\)-PSEDF in \(\mathbb{Z}_{2^{N}}\) as required.
The above theorem demonstrates a significant difference between the classical SEDF and its non-disjoint analogue:
**Corollary 3.5**.: _For any \(N>1\), there exists a non-disjoint \((2^{N},N,2^{N-1},(N-1)2^{N-2})\)-SEDF, i.e. a non-disjoint SEDF with \(N\) sets._
**Example 4**.:
1. _In_ \(\mathbb{Z}_{8}\)_, the sets_ \(\{0,2,4,6\}\)_,_ \(\{0,1,4,5\}\) _and_ \(\{0,1,2,3\}\) _form a_ \((8,3,4,2)\)_-PSEDF and non-disjoint_ \((8,3,4,4)\)_-SEDF, corresponding to sequences_ \(10101010\)_,_ \(11001100\) _and_ \(11110000\)_._
2. _In_ \(\mathbb{Z}_{16}\)_, the sets_ \(\{0,2,4,6,8,10,12,14\}\)_,_ \(\{0,1,4,5,8,9,12,13\}\)_,_ \(\{0,1,2,3,8,9,10,11\}\) _and_ \(\{0,1,2,3,4,5,6,7\}\) _form a_ \((16,4,8,4)\)_-PSEDF and non-disjoint_ \((16,4,8,12)\)_-SEDF._
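The sequences of Theorem 3.4 can be generated and checked in the same way; this sketch reuses `is_psedf` from the previous code block and reproduces Example 4(i) for \(N=3\):

```python
def theorem_3_4_sets(N):
    """Sets A_1, ..., A_N of Theorem 3.4 in Z_{2^N}."""
    v = 2 ** N
    sets = []
    for i in range(1, N + 1):
        period = 2 ** i
        # block of 2^{i-1} ones followed by 2^{i-1} zeros, repeated 2^{N-i} times
        sets.append([t for t in range(v) if (t % period) < period // 2])
    return sets, v

sets, v = theorem_3_4_sets(3)
print(sets)               # [[0, 2, 4, 6], [0, 1, 4, 5], [0, 1, 2, 3]]
print(is_psedf(sets, v))  # True: every difference occurs 2^{N-2} = 2 times
```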
## 4 Relationship to classical SEDFs
We next explain the similarity between the family of non-disjoint SEDFs in Corollary 3.2(iii) and the family of SEDFs in Example 1.
**Proposition 4.1**.: _Let \(v|k^{2}\) and \(k|v\). As subsets of \(\mathbb{Z}_{v+1}\), the sets of the non-disjoint \((v,2,k,\frac{k^{2}}{v})\)-SEDF in \(\mathbb{Z}_{v}\) of Theorem 3.1 given by_
\[A_{X^{\prime}} =\{0,1,2,\ldots,k-1\}\text{; and}\] \[A_{Y^{\prime}} =\big{\{}ak,ak+1,\ldots,ak+\lambda-1\;:\;a=0,1,\ldots,\big{(} \tfrac{v}{k}-1\big{)}\big{\}}\]
_satisfy_
\[\Delta(A_{X^{\prime}},A_{Y^{\prime}})=\lambda(G\setminus\{v-k+1,\ldots,v-k+ \lambda\})+(\lambda-1)\{v-k+1,\ldots,v-k+\lambda\}.\]
Proof.: Take the two length-\(v\) sequences \(X,Y\) which correspond to the sets \(A_{X}\) and \(A_{Y}\) in Theorem 3.1. By appending an additional \(0\) at the end of each sequence, the new length-\((v+1)\) sequences \(X^{\prime},Y^{\prime}\) correspond to the sets \(A_{X^{\prime}},A_{Y^{\prime}}\) in \(\mathbb{Z}_{v+1}\). \(X^{\prime}\) is a block of length \(k\) followed by a gap of length \(v-k+1\), while \(Y^{\prime}\) is a block of length \(\lambda=k^{2}/v\) followed by a gap of length \(k-\lambda\), which is repeated \(v/k\) times, except that the final gap now has length \(k-\lambda+1\). Write \(X^{\prime}=(x^{\prime}_{t})\), \(Y^{\prime}=(y^{\prime}_{t})\).
Let \(\delta\in\{0,\ldots,v\}\). Consider \(S=\Sigma_{t=0}^{v}x_{t}^{\prime}y_{t+\delta}^{\prime}\). As before, \(S=\Sigma_{t=0}^{k-1}x_{t}^{\prime}y_{t+\delta}^{\prime}\), since \(x_{t}^{\prime}=0\) for \(t=k,\ldots,v\), so we need only consider the length-\(k\) substring \(Y_{\delta}^{\prime}=y_{\delta}^{\prime}y_{1+\delta}^{\prime}\ldots y_{k-1+\delta} ^{\prime}\) of \(Y^{\prime}\). The value of \(\Sigma_{t=0}^{k-1}x_{t}^{\prime}y_{t+\delta}^{\prime}\) is exactly the number of \(1\)s in \(Y_{\delta}^{\prime}\), i.e. its weight.
We determine \(Y_{0}^{\prime},\ldots,Y_{v}^{\prime}\). For \(0\leq\delta\leq v-1\), let \(Y_{\delta}=y_{\delta}y_{1+\delta}\ldots y_{k-1+\delta}\), the substring of the original sequence \(Y\). We have \(Y_{v}^{\prime}=0y_{0}\ldots y_{k-2}=Y_{v-1}\), and for \(0\leq\delta\leq v-k\), \(Y_{\delta}^{\prime}=Y_{\delta}\).
For the remaining \(v-k+1\leq\delta\leq v-1\), write \(\delta=v-k+i\), \(1\leq i\leq k-1\). In \(Y\), the substring \(Y_{v-k+i}=y_{v-k+i}y_{v-k+i+1}\ldots y_{i-2}y_{i-1}\). The substring \(Y_{v-k+i}^{\prime}\) is obtained from \(Y_{v-k+i}\) by inserting a \(0\) between its \((k-1-i)\)th and \((k-i)\)th entries (\(y_{v-1}\) and \(y_{0}\)), then deleting its final entry \(y_{i-1}\). Overall
\[Y_{v-k+i}^{\prime}=y_{v-k+i}y_{v-k+i+1}\ldots y_{v-1}0y_{0}\ldots y_{i-2}\]
with the entries after the zero being present only for \(2\leq i\leq k-1\). Hence the overall change in symbols, going from \(Y_{v-k+i}\) to \(Y_{v-k+i}^{\prime}\) (\(1\leq i\leq k-1\)), is to replace \(y_{i-1}\) by \(0\). Now, by construction \(y_{0}=\cdots=y_{\lambda-1}=1\) and \(y_{\lambda}=\cdots y_{k-1}=0\). So, from the substrings \(Y_{v-k+i}^{\prime}\) with \(1\leq i\leq k-1\), only those with \(i-1\in\{0\ldots,\lambda-1\}\) undergo a change in weight (a reduction by \(1\) to weight \(\lambda-1\)); the rest have weight \(\lambda\).
Hence by Proposition 2.10, \(\Delta(A_{Y^{\prime}},A_{X^{\prime}})\) (and by symmetry \(\Delta(A_{X^{\prime}},A_{Y^{\prime}})\)) comprises \(\lambda\) copies of \(\mathbb{Z}_{v+1}\setminus\{v-k+1,\ldots,v-k+\lambda\}\) and \(\lambda-1\) copies of \(\{v-k+1,\ldots,v-k+\lambda\}\).
**Theorem 4.2**.:
* _The non-disjoint_ \((k^{2},2,k,1)\)_-SEDF in_ \(\mathbb{Z}_{k^{2}}\) _of Theorem_ 3.1 _may be converted by set-translation to a classical_ \((k^{2}+1,2,k,1)\)_-SEDF in_ \(\mathbb{Z}_{k^{2}+1}\)_._
* _Non-disjoint SEDFs of Theorem_ 3.1 _with_ \(\lambda>1\) _cannot be so converted._
Proof.: By Proposition 4.1, in \(\mathbb{Z}_{k^{2}+1}\) the sets \(A_{X^{\prime}}=\{0,1,\ldots,k-1\},A_{Y^{\prime}}=\{0,k,2k,\ldots,(k-1)k\}\) satisfy \(\Delta(A_{X^{\prime}},A_{Y^{\prime}})=G\setminus\{v-k+1\}\). Take the cyclic shift \(Y^{\prime\prime}=Y^{\prime}+(v-k+1)\) of \(Y^{\prime}\). The set in \(\mathbb{Z}_{k^{2}+1}\) corresponding to this new sequence is \(A_{Y^{\prime\prime}}=\{k,2k,\ldots,k^{2}\}\), the translate \(k+A_{Y^{\prime}}\) of \(A_{Y^{\prime}}\). The pair \(\{X^{\prime},Y^{\prime\prime}\}\) corresponds to sets \(A_{X^{\prime}},A_{Y^{\prime\prime}}\) which are disjoint and satisfy \(\Delta(A_{X^{\prime}},A_{Y^{\prime\prime}})=\lambda(G\setminus\{0\})\) with \(\lambda=1\). This is the \((k^{2}+1,2,k,1)\)-SEDF in \(\mathbb{Z}_{k^{2}+1}\) of Example 1. For (ii), observe that the sets of Theorem 3.1 can be made disjoint by translation in \(\mathbb{Z}_{v+1}\) only if the sequence \(Y^{\prime}\) has a gap of at least the size of the block in \(X^{\prime}\). This is possible only if \(k-\lambda+1\geq k\), i.e. \(\lambda\leq 1\).
Similarly, it is not possible to convert the non-disjoint SEDFs of Theorem 3.4 to classical SEDFs in \(\mathbb{Z}_{2^{N}+1}\), except when \(N=2\) (giving the \((5,2,2,1)\)-SEDF \(\{0,1\},\{2,4\}\) in \(\mathbb{Z}_{5}\)). For \(N>2\), appending \(0\) gives a structure with more than two frequencies and disjointness is impossible.
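The conversion in part (i) can also be verified numerically; the sketch below reuses `is_classical_sedf` from the earlier code block (illustrative only):

```python
k = 5
v = k * k + 1
A_X = list(range(k))                     # {0, 1, ..., k-1}, unchanged by the embedding
A_Y = [a * k for a in range(1, k + 1)]   # translate {0, k, ..., (k-1)k} by k
print(sorted(set(A_X) & set(A_Y)))       # []: the translated sets are disjoint
print(is_classical_sedf([A_X, A_Y], v))  # True: the (k^2+1, 2, k, 1)-SEDF of Example 1
```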
## 5 Motivation from communications systems
While classical SEDFs arise from AMD codes, non-disjoint SEDFs and PSEDFs have a different communications motivation. _Optical orthogonal codes_ (OOCs) are sets of binary sequences with good auto- and cross-correlation properties for use in optical multi-access
communication. The _auto-correlation_ of a sequence \(X\) measures how much it collides with its shifts; its _cross-correlation_ with sequence \(Y\) measures how much \(X\) collides with the shifts of \(Y\) (two sequences _collide_ in position \(i\) if both have \(1\)'s in the \(i\)th position).
**Definition 5.1**.: _Let \(v,w,\lambda_{a},\lambda_{c}\) be non-negative integers with \(v\geq 2\), \(w\geq 1\). Let \(\mathcal{C}=\{X_{0},\ldots,X_{N-1}\}\) be a family of \(N\) binary sequences of length \(v\) and weight \(w\). Then \(\mathcal{C}\) is a \((v,w,\lambda_{a},\lambda_{c})\)-OOC of size \(N\geq 1\) if, writing \(X=(x_{i})_{i=0}^{v-1}\), \(Y=(y_{i})_{i=0}^{v-1}\) (indices modulo \(v\)):_
1. \(\sum_{t=0}^{v-1}\!x_{t}x_{t+\delta}\leq\lambda_{a}\) _for any_ \(X\in\mathcal{C},0<\delta\leq v-1\)_, and_
2. \(\sum_{t=0}^{v-1}\!x_{t}y_{t+\delta}\leq\lambda_{c}\) _for any_ \(X,Y\in\mathcal{C},0\leq\delta\leq v-1\)_,_
_i.e. if auto-correlation values are at most \(\lambda_{a}\) and cross-correlation values are at most \(\lambda_{c}\)._
Although called "codes", OOCs are used as sets of periodic sequences, with \(X_{i}\) being repeated. A correlation value gets a contribution of \(1\) precisely if both sequences have a \(1\) in the same position. In using OOCs for communication, information can be sent only when there is a \(1\) in the sequence; if two sequences are used and there is a \(1\) in both sequences then interference occurs, which can result in errors in both received signals. So a key design principle is to have low cross-correlation values. For more on OOCs see [2].
By Definition 2.9, OOCs can be reformulated as subsets of \(\mathbb{Z}_{v}\). Let \(\{X_{0},\ldots,X_{N-1}\}\) be a \((v,w,\lambda_{a},\lambda_{c})\)-OOC. For each sequence \(X_{i}\), let \(A_{i}\) be the set of integers modulo \(v\) denoting the positions of the \(1\)s. Then \(A_{i}\subseteq\mathbb{Z}_{v}\), \(|A_{i}|=w\) for all \(0\leq i\leq N-1\), and we have the conditions:
1. \(|A_{i}\cap(A_{i}+\delta)|\leq\lambda_{a}\) for all \(\delta\in\mathbb{Z}_{v}\setminus\{0\}\), i.e. any non-zero \(\delta\) occurs in \(\Delta(A_{i})\) at most \(\lambda_{a}\) times.
2. \(|A_{i}\cap(A_{j}+\delta)|\leq\lambda_{c}\) for all \(\delta\in\mathbb{Z}_{v}\), i.e. any \(\delta\) occurs in \(\Delta(A_{i},A_{j})\) at most \(\lambda_{c}\) times.
An OOC with \(\lambda_{c}=1\) and no auto-correlation requirement is a _conflict-avoiding code_ (CAC); see [12]. CACs are equivalently defined by the condition that \(\{\Delta(A_{1}),\ldots,\Delta(A_{n})\}\) are pairwise disjoint (distinct \(x_{1},x_{2}\in A_{i}\) and \(y_{1},y_{2}\in A_{j}\) (\(i\neq j\)) with \(x_{1}-x_{2}=y_{1}-y_{2}\) implies two distinct expressions for \(x_{1}-y_{1}\) in \(\Delta(A_{i},A_{j})\), and conversely).
**Proposition 5.2**.: _If \(\mathcal{C}\) is a \((v,w,\lambda_{a},\lambda_{c})\)-OOC with \(|\mathcal{C}|\geq 2\), then \(\lambda_{c}\geq\frac{w^{2}}{v}\)._
Proof.: Let \(\mathcal{C}=\{A_{0},\ldots,A_{N-1}\}\) as subsets of \(\mathbb{Z}_{v}\). Let \(F=\{((x,y),\delta)\ :\ x-y=\delta,x\in A_{i},y\in A_{j}\}\) for some \(A_{i}\), \(A_{j}\), \(A_{i}\neq A_{j}\). There are \(w\) values of \(x\) and \(w\) values of \(y\), and for each pair of \((x,y)\) there is a unique \(\delta=x-y\), so \(|F|=w^{2}\). On the other hand there are \(v\) possible values of \(\delta\), and at most \(\lambda_{c}\) pairs of \((x,y)\) such that \(x-y=\delta\). So \(|F|\leq v\lambda_{c}\), hence \(w^{2}\leq v\lambda_{c}\) and \(\lambda_{c}\geq\frac{w^{2}}{v}\).
The lower bound is met when every \(\delta\) occurs exactly \(w^{2}/v\) times as an external difference. If \(\lambda_{c}=w^{2}/v\) for an OOC, then, since the cross-correlation values of any pair of sequences sum to \(w^{2}\) over the \(v\) shifts and each value is at most \(\lambda_{c}=w^{2}/v\), every cross-correlation value must equal \(w^{2}/v\), i.e. \(\Delta(A_{i},A_{j})=\lambda_{c}\mathbb{Z}_{v}\) for all \(A_{i}\neq A_{j}\). Hence OOCs with cross-correlation values meeting the lower bound are in fact PSEDFs in \(\mathbb{Z}_{v}\), and the PSEDFs in Section 3 give examples of these OOCs. In general, \((v,m,k,\lambda)\)-PSEDFs in \(\mathbb{Z}_{v}\) are \((v,k,\lambda_{a},\lambda)\)-OOCs for some \(\lambda_{a}\), and \((v,k,\lambda_{a},\lambda_{c})\)-OOCs are \((v,m,k,\lambda_{c})\)-PSEDFs if all cross-correlation values of the OOCs equal \(\lambda_{c}\). The extensions of PSEDFs from Proposition 4.1 will give \((v+1,k,\lambda_{a},\lambda_{c})\)-OOCs with \(\lambda_{c}=\lceil\frac{k^{2}}{v+1}\rceil\), also best-possible.
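As an illustration (our own sketch, with hypothetical function names), the OOC parameters of a family of subsets can be computed directly from the difference reformulation above; applied to the \((9,2,3,1)\)-PSEDF of Example 3(i), the cross-correlation meets the bound \(w^{2}/v=1\) of Proposition 5.2:

```python
def ooc_parameters(sets, v):
    """Maximal auto-correlation (lambda_a) and cross-correlation (lambda_c)."""
    lam_a = max(
        sum(1 for x in A for y in A if (x - y) % v == d)
        for A in sets for d in range(1, v))
    lam_c = max(
        sum(1 for x in A for y in B if (x - y) % v == d)
        for i, A in enumerate(sets) for j, B in enumerate(sets) if i != j
        for d in range(v))
    return lam_a, lam_c

print(ooc_parameters([[0, 1, 2], [0, 3, 6]], 9))   # (3, 1)
```

The value \(\lambda_{a}=3=w\) in this example arises because the second sequence, \(100100100\), is invariant under a shift by \(3\) places, which connects to the remark on auto-correlation below.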
In OOC applications, auto-correlation aids synchronisation; minimising \(\lambda_{a}\) is not a goal ([2]). It is upper-bounded by the weight \(w\) of the sequences; this bound is attained if a sequence \((x_{i})_{i=0}^{v-1}\) satisfies \((x_{i+r})_{i=0}^{v-1}=(x_{i})_{i=0}^{v-1}\) for some \(0<r<v\). In Theorem 3.1, \(\lambda_{a}=k\) since \(Y\) satisfies \((y_{t+k})=(y_{t})\) where \(0<k<v\); in Proposition 4.1, \(Y^{\prime}\) has auto-correlation values strictly less than \(k\), while \(X^{\prime}\) has auto-correlation exactly \(k-1\), so \(\lambda_{a}=k-1\).
|
2305.17232 | Quantum evolution with random phase scattering | We consider the quantum evolution of a fermion-hole pair in a d-dimensional
gas of non-interacting fermions in the presence of random phase scattering.
This system is mapped onto an effective Ising model, which enables us to show
rigorously that the probability of recombining the fermion and the hole decays
exponentially with the distance of their initial spatial separation. In the
absence of random phase scattering the recombination probability decays like a
power law, which is reflected by an infinite mean square displacement. The
effective Ising model is studied within a saddle point approximation and yields
a finite mean square displacement that depends on the evolution time and on the
spectral properties of the deterministic part of the evolution operator. | Klaus Ziegler | 2023-05-26T19:44:17Z | http://arxiv.org/abs/2305.17232v1 | # Quantum evolution with random phase scattering
###### Abstract
We consider the quantum evolution of a fermion-hole pair in a d-dimensional gas of non-interacting fermions in the presence of random phase scattering. This system is mapped onto an effective Ising model, which enables us to show rigorously that the probability of recombining the fermion and the hole decays exponentially with the distance of their initial spatial separation. In the absence of random phase scattering the recombination probability decays like a power law, which is reflected by an infinite mean square displacement. The effective Ising model is studied within a saddle point approximation and yields a finite mean square displacement that depends on the evolution time and on the spectral properties of the deterministic part of the evolution operator.
###### Contents
* I Introduction
* II Summary of the main results
* III Quantum evolution with random phase scattering
* IV Functional integral representation
* V Discussion
* A Hopping expansion
* B Saddle point approximation
* C Mean square displacement
* VI Conclusions and outlook
* A Saddle point integration
Introduction
The creation of an electron-hole pair and its subsequent recombination is a fundamental process in quantum physics with many applications in different fields. Although there exist phenomenological descriptions of this process by classical decay models [1; 2], for a deeper understanding a quantum approach is required. We will focus here on a fermion-hole pair in a \(d\)-dimensional system of non-interacting fermions. The pair can be created either by photons and phonons in a real material or by injection into the system. Then the question is, whether this pair recombines after some evolution by emitting a photon/phonon or the fermion and the hole remain localized near the place where they were created initially (cf. Fig. 1). Both possibilities can be studied by measuring the return probability to the initial quantum state. This probability depends on the spatial separation of the fermion and the hole. Assuming that the hole is created at the site \({\bf R}\) and the fermion at the site \({\bf R}^{\prime}\), we can define the probability \(P_{{\bf R}{\bf R}^{\prime}}\) that the system returns to the initial state over the finite time interval \(\tau\). Although it is plausible that this probability decreases with increasing distance \(|{\bf R}-{\bf R}^{\prime}|\), the law of change with the distance depends on the interaction with the environment. For instance, on a periodic lattice this probability has a long range behavior, which depends on the dimensionality of the underlying space. In the following we will focus on the effect of random phase scattering on the spatial decay of this probability. In other words, is it possible to control the spatial fermion-hole separation to avoid their recombination?
To analyze the evolution and calculate physical quantities, the standard procedure would be to diagonalize the Hamiltonian \(H\) of the evolution operator \(e^{-iH\tau/\hbar}\). For a translational invariant system this can be achieved through a Fourier transformation. However, in a realistic system the Hamiltonian \(H\) is not translational invariant but subject to some disorder. In this case the corresponding random Hamiltonian cannot be diagonalized by a Fourier transformation. To mimic the effect of disorder in the evolution of a quantum state we "scramble" a translational invariant \(e^{-iH\tau/\hbar}\) with a random phase factor \(e^{i\alpha}\) by using the evolution operator \(U=e^{i\alpha}e^{-iH\tau/\hbar}\). This choice was inspired by the random unitary gate models that have been discussed in the context of quantum circuits [3; 4; 5]. The following analysis is also inspired by previous studies of the invariant measure of transport in systems with random chiral Hamiltonians [6]. Although this seems to be an entirely different problem, there are some striking similarities that are reflected by their graphical representations.
## II Summary of the main results
The central result of this work is that the random phase scattering of non-interacting fermions is equivalent to scattering on discrete Ising spins or on a continuous (real) Ising field. This is a consequence of a geometric restriction in the graphical representation due to the Fermi statistics. It enables us to deform the Ising field integration such that the poles of the integrand of the return probability \(P_{{\bf R}^{\prime}{\bf R}}\) are avoided. This implies an exponential decay with respect to \(|{\bf R}-{\bf R}^{\prime}|\). For an explicit evaluation of the decay we employ a saddle point approximation of the Ising field. This provides for \(h=e^{-iH\tau}\)
\[P_{{\bf R}^{\prime}{\bf R}}=\langle|(\phi+h)^{-1}_{{\bf R}{\bf R}^{\prime}}|^{2 }\rangle_{\phi}\approx|(\phi_{0}+h)^{-1}_{{\bf R}-{\bf R}^{\prime}}|^{2},\]
where \(\phi_{0}\) is determined by a saddle point equation. The corresponding mean square displacement reads
\[[R_{\nu}^{2}]=\frac{\tau^{2}}{2}\int_{\bf k}\frac{(\partial_{k_{\nu}}\epsilon _{\bf k})^{2}}{[1+\phi_{0}^{2}+2\phi_{0}\cos(E_{0}+\epsilon_{\bf k}\tau)]^{2}} \Big{/}\int_{\bf k}\frac{1}{1+\phi_{0}^{2}+2\phi_{0}\cos(E_{0}+\epsilon_{\bf k }\tau)},\]
where \(\epsilon_{\bf k}\) is the dispersion of the Hamiltonian \(H\) and \(E_{0}\) is related to the Fermi energy. In the absence of random phase scattering we have \(\phi_{0}=1\), which implies for \(E_{0}+\epsilon_{\bf k}\tau\leq\bar{E}\)
\[[R_{\nu}^{2}]\sim\frac{2\pi\tau}{3d\mu}(\pi-\bar{E})^{-2}\]
when \(\bar{E}<\pi\), and \([R_{\nu}^{2}]\) is infinite for \(\bar{E}\geq\pi\).
## III Quantum evolution with random phase scattering
A system of non-interacting fermions with the Hamiltonian \(H\) evolves during a fixed time step \(\tau\) with the unitary evolution operator \(U_{\tau}=e^{i\alpha}e^{-iH\tau}\). Here and subsequently we have chosen the scale of
physical quantities such that \(\hbar=1\). The phases \(\{\alpha_{\mathbf{r}}\}\) are randomly distributed on \([-\pi,\pi)\), independently on different lattice sites \(\mathbf{r}\). \(U_{\tau}\) acts on the \(2^{|\Lambda|}\) dimensional Hilbert space, spanned by the fermionic states \(|\{n_{\mathbf{r}}\}\rangle\) with occupation numbers \(n_{\mathbf{r}}=0\) or \(n_{\mathbf{r}}=1\) on a lattice site \(\mathbf{r}\in\Lambda\). \(|\Lambda|\) is the number of lattice sites and \(|\{n(0)\}\rangle\equiv|\{n_{\mathbf{r}}(0)\}\rangle\) is the initial state, in which the fermionic system is prepared at the beginning.
Then we consider the situation in which a fermion-hole pair is created by the operator \(c_{\mathbf{R}^{\prime}}^{\dagger}c_{\mathbf{R}}\) at time \(t=0\) at different sites \(\mathbf{R}\), \(\mathbf{R}^{\prime}\). To determine the spatial correlation of the fermion-hole pair after the time \(\tau\), where the quantum state is the initial state again, we write
\[\frac{\langle|\langle\{n(0)\}|U_{\tau}c_{\mathbf{R}^{\prime}}^{\dagger}c_{\mathbf{R}}|\{n(0)\}\rangle|^{2}\rangle_{\alpha}}{\langle|\langle\{n(0)\}|U_{\tau}|\{n(0)\}\rangle|^{2}\rangle_{\alpha}},\]
which is the return probability for the initial state \(|\{n(0)\}\rangle\). To avoid the specific definition of the initial state, we sum over the return probabilities of all basis states to obtain
\[P_{\mathbf{R}\mathbf{R}^{\prime}}:=\frac{\langle|Tr(U_{\tau}c_{\mathbf{R}^{ \prime}}^{\dagger}c_{\mathbf{R}})|^{2}\rangle_{\alpha}}{\langle|TrU_{\tau}|^{ 2}\rangle_{\alpha}}, \tag{1}\]
where \(Tr\) is the trace of \(2^{|\Lambda|}\times 2^{|\Lambda|}\) matrices. For \(\tau=0\) we have
\[\langle\{n\}|c_{\mathbf{R}^{\prime}}^{\dagger}c_{\mathbf{R}}|\{n\}\rangle= \delta_{\mathbf{R}\mathbf{R}^{\prime}}\delta_{n_{\mathbf{R}},1},\]
such that only the particle number operator \(c_{\mathbf{R}}^{\dagger}c_{\mathbf{R}}\) contributes to the trace, while the spatially separated fermion-hole pair does not. For \(\tau>0\), though, the evolution \(U_{\tau}c_{\mathbf{R}^{\prime}}^{\dagger}c_{\mathbf{R}}|\{n\}\rangle\) can create some overlap with \(|\{n\}\rangle\), which contributes to the trace. Thus, the return probability \(P_{\mathbf{R}\mathbf{R}^{\prime}}\) (\(\mathbf{R}^{\prime}\neq\mathbf{R}\)) is a measure of how effectively the evolution with \(U_{\tau}\) can move the fermion-hole pair to the same site. It is plausible that this becomes less likely the larger the distance \(|\mathbf{R}-\mathbf{R}^{\prime}|\) is, and more likely with increasing time \(\tau\). Therefore, \(P_{\mathbf{R}\mathbf{R}^{\prime}}\) decays with this distance and may increase with \(\tau\).
Besides creating a fermion and a hole simultaneously, we can also create a hole at site \(\mathbf{R}\) and time \(t=0\), let this hole evolve for the time \(\tau\) and then annihilate it. The probability for this annihilation process reads
\[P^{\prime}_{\mathbf{R}\mathbf{R}^{\prime}}:=\frac{\langle|Tr(c_{\mathbf{R}^{ \prime}}^{\dagger}U_{\tau}c_{\mathbf{R}})|^{2}\rangle_{\alpha}}{\langle|TrU_{ \tau}|^{2}\rangle_{\alpha}}. \tag{2}\]
In the following we drop the index \(\tau\) for simplicity and use \(U_{\tau}\equiv U\). Assuming that \(|\{\tilde{n}_{\mathbf{q}}\}\rangle\equiv|\{\tilde{n}\}\rangle\) are eigenstates of \(U\) for a special realization of the phases \(\{\alpha_{\mathbf{r}}\}\), we obtain
\[\tilde{C}_{\mathbf{Q},t}:=Tr(U^{(1-t)}c_{\mathbf{Q}}^{\dagger}U^{t}c_{\mathbf{Q}})=\sum_{\{\tilde{n}_{\mathbf{q}}\},\{\tilde{n}_{\mathbf{q}}^{\prime}\}}\langle\{\tilde{n}\}|U^{(1-t)}|\{\tilde{n}\}\rangle\langle\{\tilde{n}\}|c_{\mathbf{Q}}^{\dagger}|\{\tilde{n}^{\prime}\}\rangle\langle\{\tilde{n}^{\prime}\}|U^{t}|\{\tilde{n}^{\prime}\}\rangle\langle\{\tilde{n}^{\prime}\}|c_{\mathbf{Q}}|\{\tilde{n}\}\rangle \tag{3}\]
for \(t=0,1\). With \(\langle\{\tilde{n}\}|c_{\mathbf{Q}}^{\dagger}|\{\tilde{n}^{\prime}\}\rangle\langle\{\tilde{n}^{\prime}\}|c_{\mathbf{Q}}|\{\tilde{n}\}\rangle=\delta_{\tilde{n}_{\mathbf{Q}}^{\prime},0}\delta_{\tilde{n}_{\mathbf{Q}},1}\prod_{\mathbf{q}\neq\mathbf{Q}}\delta_{\tilde{n}_{\mathbf{q}}^{\prime},\tilde{n}_{\mathbf{q}}}\) we get
\[\tilde{C}_{\mathbf{Q},t}=\sum_{\{\tilde{n}_{\mathbf{q}}\},\{\tilde{n}_{ \mathbf{q}}^{\prime}\}}\langle\{\tilde{n}\}|U^{(1-t)}|\{\tilde{n}\}\rangle \langle\{\tilde{n}^{\prime}\}|U^{t}|\{\tilde{n}^{\prime}\}\rangle\delta_{ \tilde{n}_{\mathbf{Q}}^{\prime},0}\delta_{\tilde{n}_{\mathbf{Q}},1}\prod_{ \mathbf{q}\neq\mathbf{Q}}\delta_{\tilde{n}_{\mathbf{q}}^{\prime},\tilde{n}_{ \mathbf{q}}}.\]
Since \(U\) is diagonal in this basis with \(\langle\{\tilde{n}\}|U|\{\tilde{n}\}\rangle=\prod_{\mathbf{q}}\langle\tilde{n }_{\mathbf{q}}|e^{-iE_{\mathbf{q}}\tilde{n}_{\mathbf{q}}}|\tilde{n}_{\mathbf{q}} \rangle=\prod_{\mathbf{q}}e^{-iE_{\mathbf{q}}\tilde{n}_{\mathbf{q}}}\), we obtain a product of diagonal matrix elements
\[\langle\{\tilde{n}\}|U^{(1-t)}|\{\tilde{n}\}\rangle\langle\{\tilde{n}^{\prime} \}|U^{t}|\{\tilde{n}^{\prime}\}\rangle=\prod_{\mathbf{q}}e^{-iE_{\mathbf{q}} \tilde{n}_{\mathbf{q}}(1-t)}e^{-iE_{\mathbf{q}}\tilde{n}_{\mathbf{q}}^{\prime}t}.\]
Figure 1: a) There is a recombination of a fermion created at \(\mathbf{R}\) and a hole created at \(\mathbf{R}^{\prime}\) at the blue dot. b) The localization of a fermion and a hole near their points of creation implies an exponentially small probability for recombination. The localization radius (or decay length) is indicated by the blue circles. In this sketch we consider \(|\mathbf{R}-\mathbf{R}^{\prime}|\) much larger than the localization radius.
Inserting this into Eq. (3) and carrying out the sum with the help of the Kronecker deltas, we get
\[\tilde{C}_{{\bf Q},t}=Tr(U^{(1-t)}c^{\dagger}_{{\bf Q}}U^{t}c_{{\bf Q}})=e^{-iE_{ {\bf Q}}(1-t)}\prod_{{\bf q}\neq{\bf Q}}(1+e^{-iE_{{\bf q}}})=\frac{e^{iE_{{\bf Q }}t}}{1+e^{iE_{{\bf Q}}}}\prod_{{\bf q}}(1+e^{-iE_{{\bf q}}}). \tag{4}\]
Finally, we return to the real-space representation to obtain \(e^{-iE_{{\bf q}}}\rightarrow\hat{U}_{{\bf r}{\bf r}^{\prime}}\), where \(\hat{U}\) is a \(|\Lambda|\times|\Lambda|\) matrix on the lattice, and
\[Tr(Uc^{\dagger}_{{\bf R}^{\prime}}c_{{\bf R}})=({\bf 1}+\hat{U}^{\dagger})^{-1} _{{\bf R}{\bf R}^{\prime}}\det({\bf 1}+\hat{U})\,\ \ Tr(c^{\dagger}_{{\bf R}^{\prime}}Uc_{{\bf R}})=({\bf 1 }+\hat{U})^{-1}_{{\bf R}{\bf R}^{\prime}}\det({\bf 1}+\hat{U}), \tag{5}\]
where \(det\) is the corresponding determinant. Hence the return probabilities become
\[P_{{\bf R}{\bf R}^{\prime}}=\frac{\langle|({\bf 1}+\hat{U}^{\dagger})^{-1}_{{ \bf R}{\bf R}^{\prime}}\det({\bf 1}+\hat{U})|^{2}\rangle_{\alpha}}{\langle|\det({ \bf 1}+\hat{U})|^{2}\rangle_{\alpha}}\,\ \ P^{\prime}_{{\bf R}{\bf R}^{\prime}}=\frac{ \langle|({\bf 1}+\hat{U})^{-1}_{{\bf R}{\bf R}^{\prime}}\det({\bf 1}+\hat{U})|^{2} \rangle_{\alpha}}{\langle|\det({\bf 1}+\hat{U})|^{2}\rangle_{\alpha}} \tag{6}\]
due to Eqs. (1), (2). The identity \(|({\bf 1}+\hat{U}^{\dagger})^{-1}_{{\bf R}{\bf R}^{\prime}}|^{2}=|({\bf 1}+\hat{U})^{ -1}_{{\bf R}{\bf R}}|^{2}\) implies that \(P^{\prime}_{{\bf R}{\bf R}^{\prime}}=P_{{\bf R}{\bf R}}\).
## IV Functional integral representation
For the further treatment of the return probability \(P^{\prime}_{{\bf R}{\bf R}^{\prime}}=P_{{\bf R}^{\prime}{\bf R}}\) in Eq. (6) it is convenient to separate the random phase factor and the deterministic evolution of \(\hat{U}\) as \(\hat{U}_{{\bf r}{\bf r}^{\prime}}=e^{i\alpha}\hbar_{{\bf r}{\bf r}^{\prime}}\). Then we employ a Grassmann functional integral to write
\[P_{{\bf R}^{\prime}{\bf R}}=\frac{1}{{\cal N}}\langle\int_{\varphi}\exp\left[ \left(\begin{array}{c}\varphi_{1}\\ \varphi_{2}\end{array}\right)\cdot\left(\begin{array}{cc}{\bf 1}+e^{i \alpha}h&0\\ 0&{\bf 1}+h^{\dagger}e^{-i\alpha}\end{array}\right)\left(\begin{array}{c} \varphi^{\prime}_{1}\\ \varphi^{\prime}_{2}\end{array}\right)\right]\varphi_{1{\bf R}}\varphi^{\prime}_ {1{\bf R}^{\prime}}\varphi_{2{\bf R}^{\prime}}\varphi^{\prime}_{2{\bf R}} \rangle_{\alpha}\]
\[=\frac{1}{{\cal N}}\langle adj_{{\bf R}{\bf R}^{\prime}}({\bf 1}+e^{i\alpha}h)adj_{{ \bf R}^{\prime}{\bf R}}({\bf 1}+h^{\dagger}e^{-i\alpha})\rangle_{\alpha}=\frac{1}{{\cal N }}\langle|\det({\bf 1}+e^{i\alpha}h)|^{2}|({\bf 1}+e^{i\alpha}h)^{-1}_{{\bf R}{\bf R}^{ \prime}}|^{2}\rangle_{\alpha} \tag{7}\]
with the normalization
\[{\cal N}=\langle\int_{\varphi}\exp\left[\left(\begin{array}{c}\varphi_{1}\\ \varphi_{2}\end{array}\right)\cdot\left(\begin{array}{cc}{\bf 1}+e^{i \alpha}h&0\\ 0&{\bf 1}+h^{\dagger}e^{-i\alpha}\end{array}\right)\left(\begin{array}{c} \varphi^{\prime}_{1}\\ \varphi^{\prime}_{2}\end{array}\right)\right]\rangle_{\alpha}=\langle|\det({ \bf 1}+e^{i\alpha}h)|^{2}\rangle_{\alpha}.\]
We note that the kernel of the quadratic form has zero modes (i.e., eigenmodes of \({\bf 1}+e^{i\alpha}h\) with vanishing eigenvalue) because the eigenvalues of the random unitary matrices \(e^{i\alpha}h\) and \(h^{\dagger}e^{-i\alpha}\) are randomly distributed on the unit circle in the complex plane. These zero modes depend on the realization of the random phase.
In the integral (7) we pull out the phase factors by rescaling the Grassmann fields to obtain
\[P_{{\bf R}^{\prime}{\bf R}}=\frac{1}{{\cal N}}\langle\int_{\varphi}\exp\left[ \left(\begin{array}{c}\varphi_{1}\\ \varphi_{2}\end{array}\right)\cdot\left(\begin{array}{cc}e^{-i\alpha}+h&0\\ 0&e^{i\alpha}+h^{\dagger}\end{array}\right)\left(\begin{array}{c}\varphi^{ \prime}_{1}\\ \varphi^{\prime}_{2}\end{array}\right)\right]\varphi_{1{\bf R}}\varphi^{\prime}_{1{ \bf R}^{\prime}}\varphi_{2{\bf R}^{\prime}}\varphi^{\prime}_{2{\bf R}} \rangle_{\alpha}\]
\[=\frac{1}{{\cal N}}\int_{\varphi}\prod_{{\bf r}}\langle(1+e^{-i\alpha_{\bf r }}\varphi_{1{\bf r}}\varphi^{\prime}_{1{\bf r}})(1+e^{i\alpha_{\bf r}}\varphi_{2{ \bf r}}\varphi^{\prime}_{2{\bf r}})\rangle_{\alpha}\exp\left[\left(\begin{array} []{c}\varphi_{1}\\ \varphi_{2}\end{array}\right)\cdot\left(\begin{array}{cc}h&0\\ 0&h^{\dagger}\end{array}\right)\left(\begin{array}{c}\varphi^{\prime}_{1}\\ \varphi^{\prime}_{2}\end{array}\right)\right]\varphi_{1{\bf R}}\varphi^{\prime}_{1{ \bf R}^{\prime}}\varphi_{2{\bf R}^{\prime}}\varphi^{\prime}_{2{\bf R}}\rangle_{ \alpha},\]
which gives after phase averaging
\[=\frac{1}{{\cal N}}\int_{\varphi}\prod_{{\bf r}}(1+\varphi_{1{\bf r}}\varphi^{ \prime}_{1{\bf r}}\varphi_{2{\bf r}}\varphi^{\prime}_{2{\bf r}})\exp\left[ \left(\begin{array}{c}\varphi_{1}\\ \varphi_{2}\end{array}\right)\cdot\left(\begin{array}{cc}h&0\\ 0&h^{\dagger}\end{array}\right)\left(\begin{array}{c}\varphi^{\prime}_{1}\\ \varphi^{\prime}_{2}\end{array}\right)\right]\varphi_{1{\bf R}}\varphi^{\prime}_{1{ \bf R}^{\prime}}\varphi_{2{\bf R}^{\prime}}\varphi^{\prime}_{2{\bf R}}. \tag{8}\]
We get the same result when we replace the phase factor by an Ising spin \(\{S_{{\bf r}}=\pm 1\}\) or by a real Gaussian field \(\phi_{{\bf r}}\) which will be called Ising field in the following. For the latter we write
\[P_{{\bf R}^{\prime}{\bf R}}=\frac{1}{{\cal N}_{\phi}}\int e^{-\frac{1}{2}\sum_{{ \bf r}}\phi^{2}_{{\bf r}}}\int_{\varphi}\exp\left[\left(\begin{array}{c} \varphi_{1}\\ \varphi_{2}\end{array}\right)\cdot\left(\begin{array}{cc}\phi+h&0\\ 0&\phi+h^{\dagger}\end{array}\right)\left(\begin{array}{c}\varphi^{\prime}_{1} \\ \varphi^{\prime}_{2}\end{array}\right)\right]\varphi_{1{\bf R}}\varphi^{\prime}_{1{ \bf R}^{\prime}}\varphi_{2{\bf R}^{\prime}}\varphi^{\prime}_{2{\bf R}}\prod_{{ \bf r}}d\phi_{{\bf r}}\]
\[=\frac{1}{\mathcal{N}_{\phi}}\int e^{-\frac{1}{2}\sum_{\mathbf{r}}\phi_{\mathbf{r}}^{2}}adj_{\mathbf{R}\mathbf{R}^{\prime}}(\phi+h)adj_{\mathbf{R}^{\prime}\mathbf{R}}(\phi+h^{\dagger})\prod_{\mathbf{r}}d\phi_{\mathbf{r}}=\frac{1}{\mathcal{N}_{\phi}}\int e^{-\frac{1}{2}\sum_{\mathbf{r}}\phi_{\mathbf{r}}^{2}}|adj_{\mathbf{R}\mathbf{R}^{\prime}}(\phi+h)|^{2}\prod_{\mathbf{r}}d\phi_{\mathbf{r}}\]
\[=\langle|(\phi+h)^{-1}_{\mathbf{R}\mathbf{R}^{\prime}}|^{2}\rangle_{\phi}\ \ \text{with}\ \ \ \langle\dots\rangle_{\phi}=\frac{1}{\mathcal{N}_{\phi}}\int e^{-\frac{1}{2}\sum_{\mathbf{r}}\phi_{\mathbf{r}}^{2}}|\det(\phi+h)|^{2}\dots\prod_{\mathbf{r}}d\phi_{\mathbf{r}}. \tag{9}\]
This result is reminiscent of the average two-particle Green's function with respect to a Gaussian distribution of \(\phi_{\mathbf{r}}\), multiplied by the determinant term \(|\det(\phi+h)|^{2}\). There are two important differences in comparison to the average two-particle Green's function \(\langle|(V+H_{0})_{\mathbf{R}\mathbf{R}^{\prime}}|^{2}\rangle\) of Anderson localization, though. The first is that the determinant can be written as a product of the eigenvalues of \(\phi+h\). This cancels poles of \(|(\phi+h)_{\mathbf{R}\mathbf{R}^{\prime}}^{-1}|^{2}\), implying that the poles of the Green's functions are not relevant for the \(\phi_{\mathbf{r}}\) integration. In other words, the adjugate matrix \(adj_{\mathbf{R}\mathbf{R}^{\prime}}(\phi+h)=\det(\phi+h)(\phi+h)_{\mathbf{R} \mathbf{R}^{\prime}}^{-1}\) does not have any pole for \(|\phi_{\mathbf{r}}|<\infty\), and the integration with respect to \(\phi_{\mathbf{r}}\) can be deformed in any finite area of the complex plane. This reflects an exponential decay with a finite decay length of the return probability \(P_{\mathbf{R}^{\prime}\mathbf{R}}\). The second difference is that \(h\) is unitary and its eigenvalues are located on the unit circle of the complex plane, while the Hamiltonian \(H_{0}\) in the Anderson localization problem is a Hermitian matrix with eigenvalues on the real axis.
The deformation of the \(\phi_{\mathbf{r}}\) integration provides a rigorous but only qualitative result regarding the decay of \(P_{\mathbf{R}^{\prime}\mathbf{R}}\). For a quantitative result of the decay we must perform the integration explicitly. We will do that approximately within a saddle-point integration in Sect. V.2 and App. A.
## V Discussion
First, we note that the expansion of the integrand of Eq. (7) and a subsequent Grassmann and phase integration yield graphs with 4-vertices, where two edges \(h_{\mathbf{r}\mathbf{r}^{\prime}}\) from \(\varphi_{1}\) and two Hermitian conjugate edges \(h_{\mathbf{r}\mathbf{r}^{\prime}}^{\dagger}\) from \(\varphi_{2}\) are connected. This condition is enforced by the Grassmann property, which requires a product of \(\varphi_{1\mathbf{r}}\varphi_{1\mathbf{r}}^{\prime}\varphi_{2\mathbf{r}}\varphi_{2\mathbf{r}}^{\prime}\) at each site \(\mathbf{r}\). Moreover, the random phase factors glue \(h\) and \(h^{\dagger}\) at these products to form a 4-vertex and to prevent a 2-vertex. The geometric property of the 4-vertex enables us either to form loops of edges or to connect the sites \(\mathbf{R}\) and \(\mathbf{R}^{\prime}\) by a string of both types of edges. Both the \(h\) edges and the \(h^{\dagger}\) edges separately form loops and an \(\mathbf{R}\)-\(\mathbf{R}^{\prime}\) string. This is a consequence of the diagonal kernel of the quadratic form in Eq. (7). Moreover, each loop carries a factor \(-1\) from the Grassmann field. Two typical examples are depicted in Fig. 2 with the same formation of the nine black edges but with different formations of the nine red edges. In the left example a loop and a double string are separated by a special choice of red edges, while in the right example there is only one connected graph.
This type of graph is known from the invariant measure of chiral random Hamiltonians [6]. There is a crucial difference though that is related to the zero mode: In contrast to the random phase scattering \(e^{i\alpha_{\mathbf{r}}}h_{\mathbf{r}\mathbf{r}^{\prime}}\), the scattering of the chiral model is \(e^{i\alpha_{\mathbf{r}}}h_{\mathbf{r}\mathbf{r}^{\prime}}\sum_{\mathbf{r}^{\prime\prime}}h_{\mathbf{r}^{\prime}\mathbf{r}^{\prime\prime}}^{\dagger}e^{-i\alpha_{\mathbf{r}^{\prime\prime}}}\). For the latter we have a uniform zero mode
\[\sum_{\mathbf{r}^{\prime}}[\delta_{\mathbf{r}\mathbf{r}^{\prime}}-e^{i\alpha_{ \mathbf{r}}}h_{\mathbf{r}\mathbf{r}^{\prime}}\sum_{\mathbf{r}^{\prime\prime}}h _{\mathbf{r}^{\prime}\mathbf{r}^{\prime\prime}}^{\dagger}e^{-i\alpha_{\mathbf{ r}^{\prime\prime}}}]=0 \tag{10}\]
for any realization of the random phase.
### Hopping expansion
In order to get a better understanding of the behavior of the return probability \(P_{\mathbf{R}^{\prime}\mathbf{R}}\) we return to the expression of Eq. (6) with random phases and simplify it by neglecting the determinants. This leads to the product of the conjugate one-particle Green's functions of only two individual particles:
\[\langle(\mathbf{1}+e^{i\alpha}h)_{\mathbf{R}\mathbf{R}^{\prime}}^{-1}(\mathbf{1}+e^{-i\alpha}h^{\dagger})_{\mathbf{R}^{\prime}\mathbf{R}}^{-1}\rangle_{\alpha}=\langle(e^{-i\alpha}+h)_{\mathbf{R}\mathbf{R}^{\prime}}^{-1}(e^{i\alpha}+h^{\dagger})_{\mathbf{R}^{\prime}\mathbf{R}}^{-1}\rangle_{\alpha}.\]
A hopping expansion of the inverse matrices in powers of the evolution operator \(e^{i\alpha}h\) and its Hermitian conjugate can be written as a truncated geometric series
\[({\bf 1}+e^{i\alpha}h)^{-1}_{{\bf RR^{\prime}}}({\bf 1}+h^{\dagger}e^{-i\alpha})^ {-1}_{{\bf R^{\prime}R}}=\sum_{l,m=0}^{N-1}(e^{i\alpha}h)^{l}_{{\bf RR^{\prime}} }(h^{\dagger}e^{-i\alpha})^{m}_{{\bf R^{\prime}R}},\]
where the truncation with \(N<\infty\) is necessary because it is not clear whether the series converges. Since after phase averaging only \(l=m\) survives, we can ignore terms with \(l\neq m\) here. This gives
\[\sum_{l=0}^{N-1}(he^{i\alpha})^{l}_{{\bf RR^{\prime}}}(e^{-i\alpha}h^{\dagger})^{l}_{{\bf R^{\prime}R}}=\delta_{{\bf RR^{\prime}}}+h_{{\bf RR^{\prime}}}h^{\dagger}_{{\bf R^{\prime}R}}+\sum_{{\bf r}_{1},{\bf r}_{1}^{\prime}}h_{{\bf R}{\bf r}_{1}}h_{{\bf r}_{1}{\bf R^{\prime}}}h^{\dagger}_{{\bf R^{\prime}}{\bf r}_{1}^{\prime}}h^{\dagger}_{{\bf r}^{\prime}_{1}{\bf R}}e^{i\alpha_{{\bf r}_{1}}-i\alpha_{{\bf r}^{\prime}_{1}}}\]
\[+\ldots+\sum_{{\bf r}_{1},{\bf r}_{1}^{\prime},{\bf r}_{2},{\bf r}_{2}^{\prime},\ldots,{\bf r}_{N-1},{\bf r}_{N-1}^{\prime}}h_{{\bf R}{\bf r}_{1}}h_{{\bf r}_{1}{\bf r}_{2}}\cdots h_{{\bf r}_{N-1}{\bf R^{\prime}}}h^{\dagger}_{{\bf R^{\prime}}{\bf r}_{N-1}^{\prime}}h^{\dagger}_{{\bf r}_{N-1}^{\prime}{\bf r}_{N-2}^{\prime}}\cdots h^{\dagger}_{{\bf r}^{\prime}_{1}{\bf R}}\prod_{j=1}^{N-1}e^{i\alpha_{{\bf r}_{j}}-i\alpha_{{\bf r}^{\prime}_{j}}}. \tag{11}\]
Now we can average over the random phases to obtain
\[\langle({\bf 1}+he^{i\alpha})^{-1}_{{\bf RR^{\prime}}}({\bf 1}+e^{-i \alpha}h^{\dagger})^{-1}_{{\bf R^{\prime}R}}\rangle_{\alpha}=\delta_{{\bf RR^{ \prime}}}+h_{{\bf RR^{\prime}}}h^{\dagger}_{{\bf R^{\prime}R}}+\sum_{{\bf r}_{ 1}}h_{{\bf R}{\bf r}_{1}}h_{{\bf r}_{1}{\bf R^{\prime}}}h^{\dagger}_{{\bf R^{ \prime}r}_{1}}h^{\dagger}_{{\bf r}_{1}{\bf R}}\]
\[+\ldots+\sum_{{\bf r}_{1},{\bf r}_{2},\ldots,{\bf r}_{N-1}}\sum_{\pi_{N-1}}h_{{\bf R}{\bf r}_{1}}h_{{\bf r}_{1}{\bf r}_{2}}\cdots h_{{\bf r}_{N-1}{\bf R^{\prime}}}h^{\dagger}_{{\bf R^{\prime}}\pi({\bf r}_{N-1})}h^{\dagger}_{\pi({\bf r}_{N-1})\pi({\bf r}_{N-2})}\cdots h^{\dagger}_{\pi({\bf r}_{1}){\bf R}}, \tag{12}\]
where we sum with respect to all permutations \(\pi_{N-1}\) of all non-degenerate sites of \(\{{\bf r}_{1},{\bf r}_{2},\ldots,{\bf r}_{N-1}\}\). Although this is a compact expression, it is difficult to perform the sum over the permutations and to calculate the corresponding values. Nevertheless, as an important special case the identity \(\pi_{N-1}=id\) can be calculated. It is a contribution of an unrestricted random walk on the lattice. This represents a long range correlation in the form of diffusion. However, it will be destroyed by the determinant factor in Eq. (7), as mentioned in the previous section, where the Ising field representation leads to an exponential decay. In other words, the coupling of many fermions to the random phase scattering supports localization by avoiding singularities that appear in the case of two particles. For a quantitative result of the exponential decay we study the mean square displacement within a saddle point approximation in the next section.
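The statement that only the \(l=m\) terms survive the phase average can be checked numerically on a small lattice. The following sketch is our own illustration and not part of the original derivation: it builds a one-dimensional ring with evolution operator \(h=e^{-iH\tau}\), samples independent uniform phases, and compares the averaged \(l\neq m\) and \(l=m\) contributions. The lattice size, hopping amplitude, evolution time, and the sites \({\bf R},{\bf R}^{\prime}\) are arbitrary example choices.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Small 1D ring: H is a nearest-neighbor hopping matrix, h = exp(-i H tau) is unitary.
L_sites, tau, R, Rp = 8, 0.7, 0, 3
H = np.zeros((L_sites, L_sites))
for r in range(L_sites):
    H[r, (r + 1) % L_sites] = H[(r + 1) % L_sites, r] = 1.0
h = expm(-1j * tau * H)

def term(l, m, alpha):
    """(h e^{i alpha})^l_{R R'} (e^{-i alpha} h^dagger)^m_{R' R} for one phase sample."""
    A = h @ np.diag(np.exp(1j * alpha))             # h e^{i alpha}
    B = np.diag(np.exp(-1j * alpha)) @ h.conj().T   # e^{-i alpha} h^dagger
    return np.linalg.matrix_power(A, l)[R, Rp] * np.linalg.matrix_power(B, m)[Rp, R]

phase_samples = rng.uniform(0.0, 2.0 * np.pi, size=(5000, L_sites))
for (l, m) in [(2, 2), (3, 3), (2, 3), (1, 4)]:
    avg = np.mean([term(l, m, a) for a in phase_samples])
    print(f"l={l}, m={m}: |<term>_alpha| = {abs(avg):.4f}")
# The diagonal (l = m) averages stay at a finite value, while the off-diagonal
# (l != m) averages are consistent with zero within the Monte Carlo error.
```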
### Saddle point approximation
The return probability in Eq. (9) is treated within the saddle point integration of the Ising field \(\phi\) (cf. App. A). This yields
\[P_{{\bf R^{\prime}R}}=\langle|(\phi+h)^{-1}_{{\bf RR^{\prime}}}|^{2}\rangle_{\phi}\approx|(\phi_{0}+h)^{-1}_{{\bf R}-{\bf R}^{\prime}}|^{2}, \tag{13}\]
Figure 2: Two typical graphs representing contributions to the functional integral of the return probability \(P_{{\bf R^{\prime}R}}\) in Eq. (7). Black (red) edges represent \(h\) (\(h^{\dagger}\)). Both types of edges form (i) a loop and (ii) a string connecting the sites \({\bf R}\) and \({\bf R^{\prime}}\) of the fermion and the hole. The strings are contributions to the inverse matrix elements \(({\bf 1}+e^{i\alpha}h)^{-1}_{{\bf RR^{\prime}}}\) and \(({\bf 1}+h^{\dagger}e^{-i\alpha})^{-1}_{{\bf R^{\prime}R}}\), respectively, while loops are contributions to the determinants. There are only 4-vertices, except for the endpoints \({\bf R}\) and \({\bf R^{\prime}}\), which connect to two black and two red edges. Other edge crossings are not connected by vertices.
where we have neglected the fluctuations \(\delta\phi_{\bf r}\) around the saddle point \(\phi_{0}\). This approximation enables us to factorize the return probability as
\[P_{{\bf R}^{\prime}{\bf R}}\approx|C_{{\bf R}-{\bf R}^{\prime}}|^{2}\,\ \ C_{{\bf R}-{\bf R}^{ \prime}}=(\phi_{0}+h)^{-1}_{{\bf R}-{\bf R}^{\prime}},\]
where \(C_{{\bf R}-{\bf R}^{\prime}}\) can be represented by its Fourier transform
\[\tilde{C}_{\bf k}=\frac{1}{\phi_{0}+e^{-iE_{\bf k}}} \tag{14}\]
with the eigenvalue \(E_{\bf k}\) of the translation-invariant matrix \(H\tau\). Thus, the effect of the random phase scattering is associated only with the value of \(\phi_{0}\), where the latter is determined by \(E_{\bf k}\) via the saddle point equation of App. A. Moreover, \(\phi_{0}=1\) represents the absence of random phase scattering.
The results for the Ising field \(\phi_{0}\) of App. A can be interpreted in terms of the magnetic properties of the classical Ising model [7]. The asymmetric shift \(\cos E_{0}\) plays the role of an external magnetic field and \(\phi_{0}\) corresponds to the magnetization [8]. Thus, the effective Ising model has a unique Ising field \(\phi_{0}>0\) or \(\phi_{0}<0\) when \(\cos E_{0}\neq 0\), while for \(\cos E_{0}=0\) there are either two degenerate solutions with opposite signs of \(\phi_{0}\) (ferromagnetic phase) or a single solution with \(\phi_{0}=0\) (paramagnetic phase). In contrast to the classical Ising model with a continuous transition though, Fig. 3 indicates a jump of \(\phi_{0}\) for our effective Ising model.
### Mean square displacement
The mean square displacement provides a measure for the localization length. It is defined as
\[[R_{\nu}^{2}]:=\frac{\sum_{{\bf R}^{\prime}}({\bf R}_{\nu}-{\bf R}_{\nu}^{\prime})^{2}P_{{\bf R}^{\prime}{\bf R}}}{\sum_{{\bf R}^{\prime}}P_{{\bf R}^{\prime}{\bf R}}}=\frac{-\partial_{q_{\nu}}^{2}\tilde{P}_{\bf q}\Big{|}_{{\bf q}=0}}{\tilde{P}_{0}}, \tag{15}\]
where \(\tilde{P}_{\bf q}\) is the Fourier transform of the translational-invariant \(P_{{\bf R}^{\prime}{\bf R}}\equiv P_{{\bf R}^{\prime}-{\bf R}}\). Now we study \(P_{{\bf R}{\bf R}^{\prime}}=|C_{{\bf R}{\bf R}^{\prime}}|^{2}\) with the help of the saddle point integration. In this case the mean square displacement reads
\[[R_{\nu}^{2}]=\frac{\sum_{{\bf R}}R_{\nu}^{2}|C_{{\bf R}}|^{2}}{\sum_{{\bf R}} |C_{{\bf R}}|^{2}}, \tag{16}\]
where the numerator is
\[\sum_{{\bf R}}R_{\nu}^{2}e^{i{\bf q}\cdot{\bf R}}|C_{{\bf R}}|^{2}\Big{|}_{{ \bf q}=0}=-\partial_{q_{\nu}}^{2}\sum_{{\bf R}}e^{i{\bf q}\cdot{\bf R}}|C_{{ \bf R}}|^{2}\Big{|}_{{\bf q}=0}=-\partial_{q_{\nu}}^{2}\sum_{{\bf R}}\int_{{ \bf k}}\int_{{\bf k}^{\prime}}e^{i({\bf q}-{\bf k}-{\bf k}^{\prime})\cdot{\bf R }}\tilde{C}_{{\bf k}}\tilde{C}_{{\bf k}^{\prime}}^{*}\Big{|}_{{\bf q}=0}\]
with the Fourier transform \(\tilde{C}_{\bf k}\) of Eq. (14). Then the \({\bf R}\) summation can be performed and leads to a Kronecker delta, which gives for Eq. (16)
\[[R_{\nu}^{2}]=-\partial_{q_{\nu}}^{2}\int_{{\bf k}}\tilde{C}_{{\bf k}}\tilde{C} _{{\bf q}-{\bf k}}^{*}\Big{|}_{{\bf q}=0}\Big{/}\int_{{\bf k}}\tilde{C}_{{\bf k }}\tilde{C}_{-{\bf k}}^{*}.\]
Now we assume that \(\tilde{C}_{-{\bf k}}=\tilde{C}_{\bf k}\) to obtain eventually
\[[R_{\nu}^{2}]=\int_{{\bf k}}|\partial_{k_{\nu}}\tilde{C}_{{\bf k}}|^{2}\Big{/} \int_{{\bf k}}|\tilde{C}_{{\bf k}}|^{2}, \tag{17}\]
which becomes with Eq. (14)
\[[R_{\nu}^{2}]=\int_{{\bf k}}\frac{(\partial_{k_{\nu}}E_{{\bf k}})^{2}}{(1+ \phi_{0}^{2}+2\phi_{0}\cos E_{{\bf k}})^{2}}\Big{/}\int_{{\bf k}}\frac{1}{1+ \phi_{0}^{2}+2\phi_{0}\cos E_{{\bf k}}}. \tag{18}\]
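For completeness, the partial integration behind Eq. (17) can be spelled out; the following intermediate step is our own addition and assumes the symmetry \(\tilde{C}_{-{\bf k}}=\tilde{C}_{\bf k}\) together with a periodic Brillouin zone, so that boundary terms vanish:

\[-\partial_{q_{\nu}}^{2}\int_{{\bf k}}\tilde{C}_{{\bf k}}\tilde{C}_{{\bf q}-{\bf k}}^{*}\Big{|}_{{\bf q}=0}=-\int_{{\bf k}}\tilde{C}_{{\bf k}}\,\partial_{k_{\nu}}^{2}\tilde{C}_{{\bf k}}^{*}=\int_{{\bf k}}\partial_{k_{\nu}}\tilde{C}_{{\bf k}}\,\partial_{k_{\nu}}\tilde{C}_{{\bf k}}^{*}=\int_{{\bf k}}|\partial_{k_{\nu}}\tilde{C}_{{\bf k}}|^{2},\]

while the same symmetry turns the denominator \(\int_{{\bf k}}\tilde{C}_{{\bf k}}\tilde{C}_{-{\bf k}}^{*}\) into \(\int_{{\bf k}}|\tilde{C}_{{\bf k}}|^{2}\).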
A special case is the one in which the energy in Eq. (10) is symmetric with respect to \(\phi_{0}\rightarrow-\phi_{0}\). Then there exists a critical value \(\tau_{c}\) of the evolution time \(\tau\): If \(\tau\) exceeds \(\tau_{c}\) the saddle point is always
\(\phi_{0;min}=0\), as indicated in Fig. 3a). This implies for Eq. (14) that \(\tilde{C}_{\bf k}=e^{iE_{\bf k}}\), which yields for the corresponding mean square displacement
\[[R_{\nu}^{2}]=\int_{\bf k}(\partial_{k_{\nu}}E_{\bf k})^{2}=\tau^{2}\int_{\bf k} (\partial_{k_{\nu}}\epsilon_{\bf k})^{2}. \tag{19}\]
\(\epsilon_{\bf k}\) is the dispersion of the Hamiltonian \(H\). Thus, the mean square displacement increases with the squared evolution time \(\tau\). For \(\tau<\tau_{c}\), on the other hand, or when the energy is asymmetric with respect to \(\phi_{0}\rightarrow-\phi_{0}\), we have \(\phi_{0}\neq 0\).
In the absence of random phase scattering we have \(\phi_{0}=1\) directly from Eq. (7). Then for the special case \(E_{\bf k}=k^{2}\tau/2\mu\) (\(0\leq k\leq\lambda\)) the mean square displacement on a \(d\)-dimensional lattice reads
\[[R_{\nu}^{2}]=\frac{\tau}{d\mu}\int_{0}^{\bar{E}}\frac{E^{d/2}}{(1+\cos E)^{ 2}}dE\Big{/}\int_{0}^{\bar{E}}\frac{E^{d/2-1}}{1+\cos E}dE \tag{20}\]
with the integration cut-off \(\bar{E}=\lambda^{2}\tau/2\mu\). This is a finite expression for \(\bar{E}<\pi\), which diverges with a power law as
\[[R_{\nu}^{2}]\sim\frac{2\pi\tau}{3d\mu}(\pi-\bar{E})^{-2} \tag{21}\]
when we approach \(\bar{E}=\pi\) from below. For \(\bar{E}\geq\pi\) the mean square displacement is always infinite without random phase scattering. This result has the form of a diffusion relation with time \(\tau\) and a divergent diffusion coefficient for \(\bar{E}\rightarrow\pi\), provided we ignore the fact that \(\bar{E}\) also depends on \(\tau\). A possible interpretation is that the fermion-hole pair is subject to diffusion due to its interaction with the other fermions of the system. The divergence, on the other hand, signals a long-range correlation between the fermion and the hole, which originates from the pole of \(\tilde{C}_{\bf k}=1/(1+e^{-iE_{\bf k}})\).
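The divergence can be illustrated numerically. The sketch below is our own addition: it evaluates the two integrals of Eq. (20) with arbitrarily chosen values \(\tau/\mu=1\) and \(d=1\) for cut-offs approaching \(\bar{E}=\pi\), and compares the result with the power law of Eq. (21).

```python
import numpy as np
from scipy.integrate import quad

d, tau_over_mu = 1, 1.0  # arbitrary example values

def mean_square_displacement(E_bar):
    """Evaluate Eq. (20) for a cut-off E_bar < pi."""
    num, _ = quad(lambda E: E**(d / 2) / (1.0 + np.cos(E))**2, 0.0, E_bar, limit=200)
    den, _ = quad(lambda E: E**(d / 2 - 1) / (1.0 + np.cos(E)), 0.0, E_bar, limit=200)
    return tau_over_mu / d * num / den

for E_bar in (2.0, 2.6, 3.0, 3.1):
    asymptote = 2.0 * np.pi * tau_over_mu / (3.0 * d) * (np.pi - E_bar) ** -2
    print(f"E_bar = {E_bar:4.2f}:  [R^2] = {mean_square_displacement(E_bar):9.2f}"
          f"   power law of Eq. (21) ~ {asymptote:9.2f}")
# [R^2] grows roughly like (pi - E_bar)^(-2) as the cut-off approaches pi from below.
```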
A more detailed analysis, especially for the evaluation of \(\tau_{c}\), requires a specific expression for the dispersion \(\epsilon_{\bf k}\). This would exceed the goal of this work, which is to present a generic approach to the effect of disorder on the recombination of fermion-hole pairs.
## VI Conclusions and outlook
The return probability \(P_{{\bf R}^{\prime}{\bf R}}\), i.e., the probability to return to the initial quantum state after the creation of a fermion at site \({\bf R}\) and a hole at site \({\bf R}^{\prime}\) and their subsequent evolution, always decays exponentially
Figure 3: The Ising energy \({\cal E}(\phi)\) as defined in App. A for a constant density of states. a) \({\cal E}(\phi)\) is plotted at the symmetry point \(E_{0}=\pi/2\) for the band width \(a=1.9\) (blue curve) and \(a=2.1\) (red curve). It indicates a jump of the Ising field from two degenerate nonzero values to \(\phi_{0;min}=0\). b) \({\cal E}(\phi)\) is plotted with band width \(a=7.5\) for a symmetric band \(E_{0}=0\) (red curve) and a band that is shifted by \(E_{0}=\pi/2\) (blue curve).
with the distance \(|{\bf R}-{\bf R}^{\prime}|\) in the presence of random phase scattering. To obtain this rigorous result, a mapping of the random phase model onto an Ising-like model was essential. It was supplemented by an approximate calculation, based on a saddle point integration of the effective Ising model, to gain some quantitative insight into the decay. The latter calculation is instructive, since it demonstrates how the solution of the saddle point equation avoids the singularities of the underlying fermion model. In the absence of random phase scattering, one of these singularities leads to a non-exponential decay for a sufficiently long evolution of the state with the fermion-hole pair. This is reflected by an infinite mean square displacement of the fermion and the hole.
In our approximation we have not included the Gaussian fluctuations around the saddle point solution. It would be interesting to include them and to determine their effect on the decay of the return probability. In this context it would also be useful to understand the effect of these fluctuations on the transition from \(\phi_{0}\neq 0\) to \(\phi_{0}=0\) at the symmetry point under an increasing evolution time. Another extension of our approach is the application to the return probability of a system under periodically repeated projective measurements [9; 10] or under randomly repeated projective measurements [11]. Then the effect of random phase scattering on the resulting monitored evolution could also be described by the effective Ising field model. Even more interesting but also more challenging would be the extension of the approach to the transition probability for the monitored evolution under randomly repeated projective measurements [12].
## Appendix A Saddle point integration
We approximate the integral
\[\langle\ldots\rangle_{\phi}=\frac{1}{{\cal N}_{\phi}}\int e^{-\frac{1}{2} \sum_{\bf r}\phi_{\bf r}^{2}}|\det(\phi+h)|^{2}\ldots\prod_{\bf r}d\phi_{\bf r}\]
by using a saddle-point integration. Then we determine the maximal contribution to the integrand by assuming a uniform \(\phi\) and writing \(\phi_{\bf r}=\phi+\delta\phi_{\bf r}\). This enables us to approximate the integral in terms of Gaussian fluctuations in \(\delta\phi_{\bf r}\) around the uniform \(\phi\), where \(\phi\) must be fixed as \(\phi_{0}\) at the minimum of the Ising energy
\[{\cal E}(\phi)=\frac{1}{2}\phi^{2}-\int_{-\infty}^{\infty}\log(1+\phi^{2}+2 \phi\cos E)\rho(E)dE \tag{10}\]
with the density of states \(\rho(E)\). The integrand is singular for \(\phi=1\), \(E=\pi\) and for \(\phi=-1\), \(E=0\). These singularities yield large values for the energy. Therefore, they do not represent lowest energy contributions of the saddle point. This is also reflected in the curves of Fig. 3b). Moreover, the saddle-point solution \(\phi_{0}\) must satisfy
\[{\cal E}^{\prime}(\phi)=\phi-2\int_{-\infty}^{\infty}\frac{\phi+\cos E}{1+ \phi^{2}+2\phi\cos E}\rho(E)dE=0. \tag{11}\]
For a constant density of states \(\rho(E)\) on the interval \([-a/2+E_{0},a/2+E_{0}]\) we get
\[{\cal E}(\phi)=\frac{1}{2}\phi^{2}-\frac{1}{a}\int_{-a/2+E_{0}}^{a/2+E_{0}} \log(1+\phi^{2}+2\phi\cos E)dE. \tag{12}\]
A special case is \(E_{0}=\pi/2\), where we have
\[{\cal E}(\phi)=\frac{1}{2}\phi^{2}-\frac{1}{a}\int_{-a/2}^{a/2}\log(1+\phi^{2} -2\phi\sin E)dE\]
with the symmetry relation \({\cal E}(\phi)={\cal E}(-\phi)\). This would also hold when the density of states is symmetric with respect to \(E=\pi/2\) in Eq. (10). The Ising energy is plotted for several values of \(E_{0}\) and the band width \(a\) in Fig. 3.
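As a purely illustrative cross-check of the behavior shown in Fig. 3a) (our own addition, using the band widths quoted in the caption), the constant-density-of-states Ising energy can be evaluated numerically and its minimizer located:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def ising_energy(phi, a, E0):
    """Ising energy for a constant density of states on [E0 - a/2, E0 + a/2]."""
    integral, _ = quad(lambda E: np.log(1.0 + phi**2 + 2.0 * phi * np.cos(E)),
                       E0 - a / 2.0, E0 + a / 2.0)
    return 0.5 * phi**2 - integral / a

E0 = np.pi / 2.0  # symmetry point of Fig. 3a)
for a in (1.9, 2.1):
    candidates = [minimize_scalar(ising_energy, bounds=b, args=(a, E0), method="bounded")
                  for b in [(-3.0, -1e-6), (-1e-3, 1e-3), (1e-6, 3.0)]]
    best = min(candidates, key=lambda r: r.fun)
    print(f"a = {a}: phi_0 = {best.x:+.3f},  E(phi_0) = {best.fun:+.4f}")
# For a = 1.9 the minimum sits at two degenerate nonzero values +/- phi_0, while for
# a = 2.1 it jumps to phi_0 = 0, consistent with the discontinuity indicated in Fig. 3a).
```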
|
2304.00467 | Robust Multiview Point Cloud Registration with Reliable Pose Graph
Initialization and History Reweighting | In this paper, we present a new method for the multiview registration of
point cloud. Previous multiview registration methods rely on exhaustive
pairwise registration to construct a densely-connected pose graph and apply
Iteratively Reweighted Least Square (IRLS) on the pose graph to compute the
scan poses. However, constructing a densely-connected graph is time-consuming
and contains lots of outlier edges, which makes the subsequent IRLS struggle to
find correct poses. To address the above problems, we first propose to use a
neural network to estimate the overlap between scan pairs, which enables us to
construct a sparse but reliable pose graph. Then, we design a novel history
reweighting function in the IRLS scheme, which has strong robustness to outlier
edges on the graph. In comparison with existing multiview registration methods,
our method achieves 11% higher registration recall on the 3DMatch dataset and
~13% lower registration errors on the ScanNet dataset while reducing ~70%
required pairwise registrations. Comprehensive ablation studies are conducted
to demonstrate the effectiveness of our designs. | Haiping Wang, Yuan Liu, Zhen Dong, Yulan Guo, Yu-Shen Liu, Wenping Wang, Bisheng Yang | 2023-04-02T06:43:40Z | http://arxiv.org/abs/2304.00467v1 | # Robust Multiview Point Cloud Registration with Reliable
###### Abstract
In this paper, we present a new method for the multiview registration of point cloud. Previous multiview registration methods rely on exhaustive pairwise registration to construct a densely-connected pose graph and apply Iteratively Reweighted Least Square (IRLS) on the pose graph to compute the scan poses. However, constructing a densely-connected graph is time-consuming and contains lots of outlier edges, which makes the subsequent IRLS struggle to find correct poses. To address the above problems, we first propose to use a neural network to estimate the overlap between scan pairs, which enables us to construct a sparse but reliable pose graph. Then, we design a novel history reweighting function in the IRLS scheme, which has strong robustness to outlier edges on the graph. In comparison with existing multiview registration methods, our method achieves \(11\%\) higher registration recall on the 3DMatch dataset and \(\sim 13\%\) lower registration errors on the ScanNet dataset while reducing \(\sim 70\%\) required pairwise registrations. Comprehensive ablation studies are conducted to demonstrate the effectiveness of our designs. The source code is available at [https://github.com/WHU-USI3DV/SGR](https://github.com/WHU-USI3DV/SGR).
## 1 Introduction
Point cloud registration is a prerequisite for many tasks such as 3D reconstruction [17, 25, 32] and 3D segmentation [27, 35]. Most recent registration methods [1, 7, 22, 28, 39, 46, 57] mainly focus on pairwise registration of two partial point clouds (scans), which can only reconstruct a part of the scene. In order to get a completed scene reconstruction, all partial point clouds should be simultaneously aligned, which is called _multiview registration_. Due to its complexity, multiview point cloud registration receives less attention recently and only few recent studies propose multiview registration methods [54, 55, 18, 30, 21].
Given \(N\) unaligned partial point clouds, multiview registration aims to find a globally-consistent pose for every partial point cloud. A commonly-adopted pipeline of multiview registration consists of two phases [55]. First, a pairwise registration algorithm [49, 28, 46] is applied to
Figure 1: **Overview**. (1) Given \(N\) unaligned partial scans, our target is to register all these scans into (4) a completed point cloud. Our method has two contributions. (2) We learn a global feature vector to initialize a sparse pose graph which contains much less outliers and reduces the required number of pairwise registrations. (3) We propose a novel IRLS scheme. In our IRLS scheme, we initialize weights from both global features and pairwise registrations. Then, we design a history reweighting function to iteratively refine poses, which improves the robustness to outliers.
exhaustively estimate the relative poses of all \(\binom{N}{2}\) scan pairs, which forms a fully-connected pose graph. The edges of the graph stand for the relative poses of scan pairs while nodes represent scans. Since the dense pose graph may include inaccurate or even incorrect relative poses (outliers) between two irrelevant scans, in the second phase, these pairwise poses are jointly optimized by enforcing the cycle consistency [30] to reject outlier edges and improve accuracy. For the second phase, most recent methods, including handcrafted [5, 13, 29] or learning-based [21, 30, 55] methods, follow a scheme of Iteratively Reweighted Least Squares (IRLS). In the IRLS, initial weights are assigned to edges to indicate whether these edges are reliable or not. Then, based on the weights, a synchronization algorithm is applied to compute a new relative pose on every edge. After that, the weights on edges are updated according to the difference between the old relative poses and the new ones. IRLS thus iteratively synchronizes poses from edge weights and updates weights with the synchronized poses.
In an ideal case, an IRLS scheme will gradually lower the weights of the outlier edges and only consider the inlier edges for pose synchronization. However, the initial densely-connected graph contains lots of outliers, which often prevents the iterative reweighting mechanism of IRLS from finding correct edges. To improve the robustness to outliers, many researches focus on applying advanced handcrafted reweighting functions [11, 29] or designing graph network to learn reweighting functions [30, 55]. However, the handcrafted reweighting functions usually require a good initialization to converge to the correct poses while learning-based reweighting methods may not generalize to unseen settings. Designing a robust IRLS algorithm still remains an open problem.
In this paper, we show that multiview registration can be improved from two aspects, as shown in Fig. 1. First, we learn a good initialization of the input pose graph which avoids exhaustive pairwise registrations and reduces the outlier ratio. Second, we propose a novel history reweighting function which enables a stable convergence to correct poses in the IRLS scheme.
In the pose graph construction, we learn a global feature on each point cloud, and the correlation of two global features indicates the overlap ratio between two point clouds. Such global features enable us to generate a sparse pose graph with fewer but more reliable edges instead of a densely-connected graph. After that, we only need to apply the pairwise registration algorithm and IRLS on these sparse edges, which greatly reduces the computation complexity of pairwise registration from \(O(N^{2})\) to \(O(N)\). Meanwhile, these reliable edges contain far fewer outliers than the fully-connected graph, which makes it possible to find more accurate and consistent global poses in IRLS.
Though the initial graph contains far fewer outliers, existing IRLS algorithms are still sensitive to these outliers and can be totally biased towards them in the first few iterations. An example is shown in Fig. 2: the initial graph only contains two outlier edges. However, the two scans of the outlier pair "#0-#4" look very similar, so this pair is initialized with a large weight. Such an incorrect large weight interferes with the subsequent pose synchronization and brings systematic errors to the synchronized poses. The vanilla IRLS trusts all synchronized poses and is easily dominated by these erroneous poses, which leads to incorrect convergence as shown in Fig. 2(c). To address this problem, we propose a simple yet effective reweighting function called the _history reweighting function_. In the history reweighting function, the edge weights at a specific iteration depend not only on the synchronized poses at the current iteration but also on the historical synchronized poses of previous iterations, which acts like a regularizer that prevents the IRLS from being dominated by outliers in the early unstable iterations, as shown in Fig. 2(d). Then, the edge weights in our graph gradually stabilize in the subsequent iterative refinements, leading to convergence to the correct poses.
Figure 2: An example on the 3DMatch dataset. (a) The input scans under the ground truth poses. (b) The constructed sparse pose graph with two incorrect relative poses (#0-#2 and #0-#4), where #0 and #4 look very similar to each other so that the pose graph incorrectly includes this scan pair. (c) and (d) show the normalized weights on the graph edges over different iterations of the vanilla IRLS and our method, respectively. Our method is able to find the outlier edges and gradually reduce their weights, while the vanilla IRLS is biased towards the outlier edge (#0-#4) after a few iterations. (e) and (f) are the multiview registration results of the vanilla IRLS and our method, respectively.
We evaluate our method on three widely-used benchmarks: the 3DMatch/3DLoMatch dataset [28, 59], the ScanNet dataset [16], and the ETH dataset [44]. With the help of the proposed sparse graph construction and IRLS with history reweighting, our method surpasses the current multiview registration baselines by \(11.0\%\) and \(6.2\%\) in registration recall on 3DMatch and 3DLoMatch, and reduces the mean rotation and translation errors on ScanNet by \(12.8\%\) and \(13.8\%\). Meanwhile, our method shows strong generalization ability. Trained only on the indoor dataset, our method achieves a \(99.8\%\) registration recall on the outdoor ETH dataset. Moreover, all the above state-of-the-art results only require \(20\%\sim 40\%\) of the pairwise registrations used by existing multiview point cloud registration methods, which demonstrates our computational efficiency.
## 2 Related work
### Pairwise registration
There are mainly two kinds of pairwise point cloud registration methods. Feature-based methods extract a set of local descriptors [1, 7, 50, 22, 49, 15] on detected keypoints [7, 28]. Then, local descriptors are matched to build correspondences [39, 46, 57, 56]. Finally, correspondences are filtered [12, 14, 6, 36, 43] and used in transformation estimation [46, 56, 49] to find rigid transformations. Other works, known as direct registration methods, either directly regress the transformations [2, 31, 38, 58] or refine correspondences [51, 20, 40, 52] by considering the information from both point clouds with attention layers. Our multiview registration method builds on pairwise registration and is compatible with all the above methods.
### Multiview registration
Most multiview point cloud registration methods [5, 10, 13, 29, 21, 23, 55, 10] aim at recovering the absolute scan poses from exhaustive pairwise registrations. However, exhaustive pairwise registration is time-consuming [18] and may contain lots of outliers [55]. To reduce the computational burden, some traditional works [18, 24, 33, 42, 53] resort to a growing-based strategy that merges selected scans iteratively, which requires fewer pairwise registrations but may fail due to the accumulated errors in the growing process. In contrast, we avoid complex growing strategies to find inlier pairs and instead incorporate learning-based techniques to select reliable scan pairs, which enables more accurate subsequent synchronization. Other works [5, 9, 13, 21, 30, 33, 47, 48, 55, 60] focus on pruning outliers on the constructed graph. The IRLS-based scheme is one of the most prevalent techniques [5, 55, 21, 26, 29, 30]. However, the iterative refinement of IRLS can easily get trapped in a local minimum and fail to prune out outlier edges [5, 55]. The reweighting function has proved to be the most important design choice in a reliable IRLS [5, 26, 30]. Thus, recent learning-based advances [21, 30, 55] adopt a data-driven strategy to learn robust reweighting functions, which achieve impressive performances but cannot generalize well to unfamiliar graphs. We design a history reweighting function with strong generalization ability and robustness to outliers.
## 3 Method
### Overview
Consider a set of unaligned scans \(\mathcal{P}=\{P_{i}|i=1,...,N\}\) in the same 3D scene. The target of multiview registration is to recover the underlying global scan poses \(\{T_{i}=(R_{i},t_{i})\in SE(3)|i=1,...,N\}\). In the following, we first introduce how to initialize a pose graph with reliable edges in Sec. 3.2. Then, we propose a novel history reweighting function within a IRLS scheme in Sec. 3.3 to solve for the poses of every scan. The pipeline is illustrated in Fig. 3.
### Learn to construct a sparse graph
In this section, we aim to construct a pose graph for the multiview registration. Specifically, the graph is denoted by \(\mathcal{G}(\mathcal{V},\mathcal{E})\), where each vertex \(v_{i}\in\mathcal{V}\) represents each scan \(P_{i}\) while edge \((i,j)\in\mathcal{E}\) encodes the relative poses between scan \(P_{j}\) and scan \(P_{i}\). We will first estimate an overlap score \(s_{ij}\) for each scan pair \((P_{i},P_{j})\). Then, given the overlap scores, we construct a sparse graph by selecting a set of scan pairs with large estimated overlaps and apply pairwise transformations on them only.
**Global feature extraction**. To extract the global feature \(F\) for a point cloud \(P\), we first downsample \(P\) by voxels and extract a local feature \(f_{p}\in\mathbb{R}^{d}\) on every sampled point \(p\in P\) from its local 3D patch \(N_{p}=\{p^{\prime}|\|p-p^{\prime}\|_{2}<r,p^{\prime}\in P\}\) within a radius of \(r\) by
\[f_{p}=\varphi(N_{p}), \tag{1}\]
where \(\varphi\) is a neural network for extracting local descriptors, such as PointNet [45], FCGF [15], and YOHO [49]. By default, we adopt YOHO as the local descriptor [49] due to its superior performance. Then, we apply a NetVLAD [3] layer on the local features to extract a global feature \(F\)
\[F=NetVLAD(\{f_{p}\}). \tag{2}\]
Note \(F\in\mathbb{R}^{n}\) is normalized such that \(\|F\|_{2}=1\).
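To make the pooling step concrete, the following NumPy sketch shows a generic NetVLAD-style soft-assignment aggregation of per-point local features into a single normalized global vector. This is our own schematic illustration: the cluster centers and the assignment sharpness `alpha` would be learned end-to-end in practice, and the sketch is not the trained network used in the paper.

```python
import numpy as np

def netvlad_pool(local_feats, centers, alpha=10.0):
    """Pool per-point features (m, d) into one global vector via soft-assignment
    to K cluster centers (K, d), followed by intra- and L2-normalization."""
    # Soft-assignment of every local feature to every center.
    sq_dists = ((local_feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (m, K)
    logits = -alpha * sq_dists
    logits -= logits.max(axis=1, keepdims=True)
    assign = np.exp(logits)
    assign /= assign.sum(axis=1, keepdims=True)

    # Aggregate residuals to each center, weighted by the soft assignments.
    residuals = local_feats[:, None, :] - centers[None, :, :]                   # (m, K, d)
    vlad = (assign[:, :, None] * residuals).sum(axis=0)                         # (K, d)

    # Intra-normalize per center, flatten, and L2-normalize the whole vector.
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12
    F = vlad.reshape(-1)
    return F / (np.linalg.norm(F) + 1e-12)

# Toy usage: 500 local descriptors of dimension 32 pooled with 8 centers.
rng = np.random.default_rng(0)
F = netvlad_pool(rng.normal(size=(500, 32)), rng.normal(size=(8, 32)))
print(F.shape, round(float(np.linalg.norm(F)), 3))   # (256,) 1.0
```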
**Sparse graph construction**. For a scan pair \((P_{i},P_{j})\), we estimate their overlap score by
\[s_{ij}=(F_{i}^{T}F_{j}+1)/2, \tag{3}\]
where \(s_{ij}\in[0,1]\) indicates the overlap between \(P_{i}\) and \(P_{j}\). We train the \(NetVLAD\) with a L1 loss between the predicted overlap score and the ground-truth overlap ratio.
For each scan, we select other \(k\) scan pairs with the largest overlap scores to connect with the scan. This leads to a sparse graph with edges
\[\mathcal{E}=\{(i,j:\operatorname*{arg-topk}_{P_{j}\in\mathcal{P},j\neq i}s_{ij}), \forall P_{i}\in\mathcal{P}\}. \tag{4}\]
On each edge \((i,j)\in\mathcal{E}\) of the constructed graph, we estimate a relative pose \(T_{ij}\) on the scan pair from their extracted local descriptors. By default, we follow [49] to apply nearest neighborhood matcher on the local descriptors and estimate the relative pose from the RANSAC variant.
**Discussion**. Recent multiview registration methods [21, 30, 55, 60] usually exhaustively estimate all \(\binom{N}{2}\) relative poses, although many of these scan pairs have no overlap at all. In our method, we extract global features to determine the overlap scores and select \(N\times k\) scan pairs. In fact, we only need to conduct fewer than \(N\times k\) pairwise registrations because the graph is undirected and each edge only needs to be counted once. Our global feature extraction is much more efficient than matching descriptors and running RANSAC in the pairwise registration. The subsequent pose synchronization only needs to operate on these sparse edges, which also improves the efficiency. Moreover, the retained pose graph contains far fewer outliers than the fully-connected graph, which makes it possible to find more accurate and consistent global poses in IRLS.
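A compact NumPy sketch of the selection rule in Eq. (3) and Eq. (4) — our own illustration, assuming `global_feats` is the \(N\times n\) matrix of L2-normalized global features and `k` the per-scan budget — reads as follows; pairwise registration is then run only on the returned edges.

```python
import numpy as np

def build_sparse_graph(global_feats, k):
    """Select, for every scan, the k partners with the largest predicted overlap
    scores s_ij = (F_i^T F_j + 1) / 2 and return the undirected edge set."""
    N = global_feats.shape[0]
    scores = (global_feats @ global_feats.T + 1.0) / 2.0   # all pairwise overlap scores
    np.fill_diagonal(scores, -np.inf)                       # exclude self-pairs
    edges = set()
    for i in range(N):
        for j in np.argsort(-scores[i])[:k]:                # arg-topk of Eq. (4)
            edges.add((min(i, int(j)), max(i, int(j))))     # count each edge once
    return sorted(edges), scores

# At most N * k pairwise registrations are needed, instead of all N * (N - 1) / 2.
```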
### IRLS with history reweighting
In this section, we apply the Iteratively Reweighted Least Squares (IRLS) scheme to estimate the consistent global poses on all scans. The key idea of IRLS is to associate a weight on each edge to indicate the reliability of each scan pair. These weights are iteratively refined such that outlier edges will have small weights so these outlier relative poses will not affect the final global poses. In the following, we first initialize edge weights, and iteratively estimate poses based on edge weights and update edge weights with the proposed history reweighting function.
#### 3.3.1 Weight initialization
The weight \(w_{ij}^{(0)}\) is initialized from both the estimated overlap score \(s_{ij}\) and the quality of the pairwise registration by
\[w_{ij}^{(0)}=s_{ij}*r_{ij}, \tag{5}\]
where \(r_{ij}\) reveals the quality of pairwise registration. In the pairwise registration, a set of correspondences \(C=\{(p,q)|p\in P_{i},q\in P_{j}\}\) are established by matching local descriptors. Thus, \(r_{ij}\) is defined as the number of inlier correspondences in \(C\) conforming with \(T_{ij}=(R_{ij},t_{ij})\), which is
\[r_{ij}=\sum_{(p,q)\in C}[\![\|p-R_{ij}q-t_{ij}\|^{2}<\tau]\!], \tag{6}\]
where \([\![\cdot]\!]\) is the Iverson bracket, \(\tau\) is a pre-defined inlier threshold.
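For concreteness, a NumPy version of this initialization (our own sketch; `corr_p` and `corr_q` are the matched points from the two scans, and the inlier threshold below is an arbitrary example value):

```python
import numpy as np

def initial_weight(s_ij, corr_p, corr_q, R_ij, t_ij, tau=0.05):
    """w_ij^(0) = s_ij * r_ij with r_ij the inlier count of Eq. (6)."""
    residuals = corr_p - (corr_q @ R_ij.T + t_ij)          # p - (R_ij q + t_ij)
    r_ij = int(np.sum((residuals ** 2).sum(axis=1) < tau))
    return s_ij * r_ij
```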
#### 3.3.2 Pose synchronization
Given the edge weights and input relative poses \(\{w_{ij},T_{ij}=(R_{ij},t_{ij})|(i,j)\in\mathcal{E}\}\), we solve for the global scan poses \(\{T_{i}=(R_{i},t_{i})\}\). We adopt the closed-form synchronization algorithm proposed in [4, 30]. We first compute the rotations by rotation synchronization [21, 4], and then compute the translations by translation synchronization [30].
**Rotation synchronization**. The goal of rotation synchronization is to solve
\[\{R_{1},...R_{N}\}=\operatorname*{arg\,min}_{R_{1},...R_{N}\in SO(3)}\sum_{(i,j)\in\mathcal{E}}w_{ij}\|R_{ij}-R_{i}^{T}R_{j}\|_{F}^{2}, \tag{7}\]
where \(\|\cdot\|_{F}\) means the Frobenius norm of the matrix. The problem has a closed-form solution, which can be derived from the eigenvectors of a symmetric matrix \(L\in\mathbb{R}^{3N*3N}\)
\[L=\left(\begin{array}{cccc}\sum\limits_{(1,j)\in\mathcal{E}}w_{1j}\mathbf{ I}_{3}&-w_{12}R_{12}&\cdots&-w_{1N}R_{1N}\\ -w_{21}R_{21}&\sum\limits_{(2,j)\in\mathcal{E}}w_{2j}\mathbf{I}_{3}&\cdots&-w_ {2N}R_{2N}\\ \vdots&\vdots&\ddots&\vdots\\ -w_{N1}R_{N1}&-w_{N2}R_{N2}&\cdots&\sum\limits_{(N,j)\in\mathcal{E}}w_{Nj} \mathbf{I}_{3}\end{array}\right) \tag{8}\]
\(L\) is a sparse matrix since the constructed graph is sparse. Given three eigenvectors \(\tau_{1},\tau_{2},\tau_{3}\in\mathbb{R}^{3N}\) corresponding to the three smallest eigenvalues \(\lambda_{1}<\lambda_{2}<\lambda_{3}\) of \(L\), we stack these three eigenvectors to construct a matrix \(V=[\tau_{1},\tau_{2},\tau_{3}]\in\mathbb{R}^{3N*3}\). Then, \(R_{i}\) can be derived by projecting
Figure 3: The pipeline of the proposed method.
\(v_{i}=V[3i-3:3i]\in\mathbb{R}^{3*3}\) to \(SO(3)\). More details can be found in the supplementary material.
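The closed-form step can be paraphrased in a few lines of NumPy. This is our own sketch of the standard spectral construction: the returned rotations follow the convention \(R_{i}^{T}R_{j}\approx R_{ij}\) of Eq. (7) and are defined only up to one common global rotation.

```python
import numpy as np

def rotation_synchronization(N, edges, rel_rots, weights):
    """Spectral rotation synchronization from weighted relative rotations
    rel_rots[(i, j)] = R_ij with weights[(i, j)] = w_ij."""
    L = np.zeros((3 * N, 3 * N))
    for (i, j) in edges:
        w, R_ij = weights[(i, j)], rel_rots[(i, j)]
        L[3*i:3*i+3, 3*i:3*i+3] += w * np.eye(3)
        L[3*j:3*j+3, 3*j:3*j+3] += w * np.eye(3)
        L[3*i:3*i+3, 3*j:3*j+3] -= w * R_ij
        L[3*j:3*j+3, 3*i:3*i+3] -= w * R_ij.T
    _, vecs = np.linalg.eigh(L)
    V = vecs[:, :3]                                   # eigenvectors of the 3 smallest eigenvalues
    rotations = []
    for i in range(N):
        U, _, Vt = np.linalg.svd(V[3*i:3*i+3])        # project each 3x3 block onto SO(3)
        B = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
        rotations.append(B.T)                         # blocks satisfy B_i B_j^T ~ R_ij,
    return rotations                                  # so R_i = B_i^T gives R_i^T R_j ~ R_ij
```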
**Translation synchronization**. Similarly, translation synchronization retrieves the translation vectors \(\{t_{i}\}\) that minimize the problem:
\[\{t_{1},...,t_{N}\}=\operatorname*{arg\,min}_{t_{1},...,t_{N}\in\mathbb{R}^{3 }}\sum_{(i,j)\in\mathcal{E}}w_{ij}\|R_{i}t_{ij}+t_{i}-t_{j}\|^{2} \tag{9}\]
We solve it by the standard least square method [30].
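A least-squares implementation of Eq. (9) — our own sketch, fixing the gauge by setting the translation of the first scan to zero — can be written as:

```python
import numpy as np

def translation_synchronization(N, edges, rel_trans, weights, rotations):
    """Solve min sum_ij w_ij || R_i t_ij + t_i - t_j ||^2 for the translations."""
    rows, rhs = [], []
    for (i, j) in edges:
        w = np.sqrt(weights[(i, j)])
        block = np.zeros((3, 3 * N))
        block[:, 3*i:3*i+3] = np.eye(3)                  # + t_i
        block[:, 3*j:3*j+3] = -np.eye(3)                 # - t_j
        rows.append(w * block)
        rhs.append(-w * (rotations[i] @ rel_trans[(i, j)]))
    A = np.vstack(rows)[:, 3:]                           # gauge: fix t_0 = 0
    b = np.concatenate(rhs)
    t_rest, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.vstack([np.zeros(3), t_rest.reshape(N - 1, 3)])
```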
#### 3.3.3 History reweighting function
Given the synchronized poses, we re-compute weights on edges such that outlier edges will have smaller weights than the inlier edges. Assume the synchronized poses at the \(n\)-th iteration are \(\{T_{i}^{(n)}=(R_{i}^{(n)},t_{i}^{(n)})\}\). We first compute the rotation residual \(\delta_{ij}^{(n)}\) by
\[\delta_{ij}^{(n)}=\Delta(R_{ij},R_{i}^{(n)T}R_{j}^{(n)}), \tag{10}\]
where \(\Delta(R_{1},R_{2})\) denotes the angular difference between the rotations \(R_{1}\) and \(R_{2}\). \(\Delta(R_{1},R_{2})\) is implemented by transforming \(R_{1}^{T}R_{2}\) into axis-angle form and outputting the rotation angle. Then, the updated weights are computed from the rotation residuals of all previous iterations by
\[w_{ij}^{(n)}=w_{ij}^{(0)}\exp\left(-\sum_{m=1}^{n}g(m)\delta_{i,j}^{(m)}\right), \tag{11}\]
where \(g(m)\) is a predefined coefficient function of the iteration number with \(g(m)>0\). We will elaborate the design of \(g(m)\) later. Instead, we first discuss the intuition behind the Eq. (11).
**Intuition of Eq. (11)**. Similar to previous reweighting functions [5, 21, 55], a larger rotation residual \(\delta\) will lead to a smaller weight because large residuals are often caused by outliers. Meanwhile, there are two differences from previous reweighting functions. First, we multiply by the initial weight \(w_{ij}^{(0)}\) so that the recomputed weights always retain information from the warm-start initialization in Sec. 3.3.1, and these initialized weights are then adjusted by the residuals in the iterative refinement. Second, the weight at a specific iteration \(n\) considers the residuals of all previous iterations \(m\leq n\). This design is inspired by momentum-based optimization methods such as RMSProp and Adam [34], which utilize historical gradients to stabilize the optimization process. Here, we adopt a similar strategy and consider all residuals in the history to determine a robust weight for the current iteration, which is less sensitive to outliers.
**Design of the coefficient function \(g(m)\)**. \(g(m)\) can be regarded as a weight function. A small value of \(g(m)\) means that we do not trust the residual at iteration \(m\), since this residual may not correctly identify inliers and outliers. In our observation, the residuals estimated in the first few iterations are not very stable, so we want \(g(m)\) to increase with the iteration number \(m\). Meanwhile, if we conduct \(M\) IRLS iterations in total, we want the sum of coefficients at the final iteration \(M\) to be 1, i.e. \(\sum_{m=1}^{M}g(m)=1\). Thus, in our design, we have
\[g(m)=\frac{2m}{M(M+1)}. \tag{12}\]
After computing the updated weights, we iteratively synchronize the poses with these updated weights as stated in Sec. 3.3.2 and compute new weights from these new poses as stated in Sec. 3.3.3. The IRLS runs \(M\) iterations in total, and the synchronized poses at the final iteration are regarded as the output poses for all scans.
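Putting the pieces together, the whole scheme of Eqs. (10)-(12) fits in a short loop. The sketch below is our own illustration: `sync_fn` stands for the pose synchronization of Sec. 3.3.2 (e.g., the routines sketched above) and is assumed to return absolute rotations in the convention \(R_{i}^{T}R_{j}\approx R_{ij}\).

```python
import numpy as np

def rotation_residual(R_ij, R_i, R_j):
    """Angular difference (radians) between R_ij and R_i^T R_j, Eq. (10)."""
    cos_angle = (np.trace(R_ij.T @ (R_i.T @ R_j)) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def irls_history_reweighting(edges, rel_rots, w0, sync_fn, M=20):
    """IRLS with the history reweighting of Eq. (11) and g(m) of Eq. (12)."""
    weights = dict(w0)                              # w^(0) from Sec. 3.3.1
    history = {e: 0.0 for e in edges}               # running sum_m g(m) * delta^(m)
    for n in range(1, M + 1):
        R = sync_fn(weights)                        # pose synchronization (Sec. 3.3.2)
        g_n = 2.0 * n / (M * (M + 1.0))             # increasing coefficients, summing to 1
        for (i, j) in edges:
            delta = rotation_residual(rel_rots[(i, j)], R[i], R[j])
            history[(i, j)] += g_n * delta
            weights[(i, j)] = w0[(i, j)] * np.exp(-history[(i, j)])   # Eq. (11)
    return sync_fn(weights)                         # final synchronized rotations
```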
## 4 Experiments
### Experimental protocol
#### 4.1.1 Datasets
We evaluate the proposed method on three widely used datasets: 3D(Lo)Match [59, 28], ScanNet [16], and ETH [44] as follows.
**3DMatch** contains scans collected from 62 indoor scenes among which 46 are split for training, 8 for validation, and 8 for testing. Each test scene contains 54 scans on average. We follow previous works [21, 28] to use 1623 scan pairs with \(>30\%\) overlap ratio and 1781 scan pairs with \(10\%\sim 30\%\) overlap as two test sets, denoted as 3DMatch and 3DLoMatch, respectively.
**ScanNet** contains RGBD sequences of 1513 indoor scenes. We follow [21] to use the same 32 test scenes and convert 30 RGBD images that are 20 frames apart to 30 scans on each scene. There are 960 scans in total and we exhaustively select all 13920 scan pairs for evaluation.
**ETH** has 4 outdoor scenes with large domain gaps to the 3DMatch dataset and each scene contains 33 scans on average. 713 scan pairs are officially selected for evaluation.
Our model is only trained on the training split of 3DMatch and evaluated on 3D(Lo)Match, ScanNet, and ETH. More training details can be found in supplementary material. For evaluation, we first perform multiview registration to recover the global scan poses. Then, we follow [21] to evaluate the multiview registration quality on pairwise relative poses computed from the recovered global poses. By default, we set \(k\) in sparse graph construction to 10 for two indoor datasets and 6 for the ETH dataset.
#### 4.1.2 Metrics
We follow [1, 46, 15, 49] to adopt Registration Recall (RR) for evaluation on 3D(Lo)Match and ETH. RR reports the
ratio of correctly aligned scan pairs. A scan pair is regarded correctly-aligned if the average distance between the points under the estimated transformation \((R_{pre},t_{pre})\) and these points under the ground truth transformation \((R_{gt},t_{gt})\) is less than 0.2m for the 3D(Lo)Match dataset and 0.5m for the ETH dataset. RR of all methods is calculated on the same official evaluation scan pairs mentioned in Sec. 4.1.1.
For the evaluation on ScanNet, we follow [21, 30, 55] to report Empirical Cumulative Distribution Functions (ECDF) of the rotation error \(re\) and translation error \(te\):
\[re=arccos\left(\frac{tr(R_{pre}^{T}R_{gt})-1}{2}\right)\ \ te=\|t_{pre}-t_{gt}\|^{2}. \tag{13}\]
We also report the number of required pairwise registrations to initialize the pose graphs, denoted as "#Pair".
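A direct transcription of these error measures (our own snippet; we read the norm in Eq. (13) as the Euclidean distance, which is the quantity compared against the distance thresholds used in the evaluation):

```python
import numpy as np

def rotation_error_deg(R_pre, R_gt):
    cos_angle = (np.trace(R_pre.T @ R_gt) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

def translation_error(t_pre, t_gt):
    return float(np.linalg.norm(t_pre - t_gt))
```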
#### 4.1.3 Baselines
We compare the proposed method against several multi-view registration baselines: EIGSE3 [5], L1-IRLS [11], RotAvg [11], LMVR [21], LITS [55], and HARA [37]. Specifically, LMVR is an end-to-end method, which performs pairwise registration and transformation synchronization in a single deep neural network. EIGSE3 proposes a spectral approach to solve transformation synchronization and further applies IRLS with Cauchy [26] reweighting function to improve robustness. L1-IRLS and RotAvg are two robust algorithms, which perform IRLS-based rotation synchronization using \(l_{1}\) and \(l_{1/2}\) reweighting functions to resist outliers. HARA is a state-of-the-art hand-crafted synchronization method, which conducts growing-based edge pruning by checking cycle consistency and performs IRLS-based synchronization using \(l_{1/2}\) reweighting functions on the retained edges. LITS is a state-of-the-art learning-based transformation synchronization method. All baseline methods except LMVR are compatible with any pairwise registration methods [1, 46, 49, 1]. Thus, we compare our method with these baselines using different pairwise registration algorithms, including FCGF [15], SpinNet [1], YOHO [49], GeoTransformer [46].
#### 4.1.4 Pose graph construction
For a fair comparison with baseline multiview registration methods, we report the performances produced on three different types of input pose graphs. The first type "Full" does not prune any edge so the pose graph is fully-connected. The second type "Pruned" prunes edges according to the quality of pairwise registration, which is adopted by previous methods LITS [55] and LMVR [21] (called "Good" in their papers). "Pruned" first applies pairwise registration algorithms (FCGF [15], YOHO [49], SpinNet [1] or GeoTransformer [46]) to exhaustively register all scan pairs and then only retain scan pairs whose median point distance in the registered overlapping region is less than 0.05m [21, 55] (0.15m for ETH). The final type "Ours" applies the proposed global feature for the overlap score estimation and constructs a sparse graph according to scores.
### Results on three benchmarks
Qualitative results are shown in Fig. 4. Quantitative results on the 3DMatch, the ScanNet and the ETH datasets
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline \multirow{2}{*}{_Prun_} & \multirow{2}{*}{_Method_} & \multirow{2}{*}{_fitur_} & \multicolumn{2}{c|}{SpinaNet [1]} & YOHO [49] & GeoTrans [46] \\ & & & _3D/SDL-RR_ (\%) & _3D/SDL-RR_ (\%) & _3D/SDL-RR_ (\%) \\ \hline \multirow{4}{*}{Full} & EIGSE3 [5] & 11905 & 20.8 / 13.6 & 232.7 / 6.6 & 170.91 \\ & L1-IRLS [11] & 11905 & 49.8 / 28.2 & 527.2 / 23.2 & 557.3 / 7.33 \\ & RotAvg [11] & 11905 & 59.3 / 38.9 & 61.8 / 44.1 & 68.6 / 56.5 \\ & LITS [55] & 11905 & 68.1 / 47.9 & 70.5 / 9.0 & 84.2 / 73.0 \\ & HARA [37] & 11905 & 58.7 / 63.6 & 83.1 / 68.7 & 83.4 / 68.5 \\ & Ours & 11905 & 93.3 / 77.2 & 93.2 / 76.8 & 91.5 / 82.4 \\ & L1-IRLS [11] & 11905 & 66.9 / 46.2 & 68.6 / 49.0 & 77.4 / 58.3 \\ & Produce [11] & 11905 & 72.8 / 55.3 & 77.2 / 60.3 & 81.6 / 68.5 \\ & L1-IRLS [11] & 11905 & 73.1 / 55.5 & 80.8 / 65.2 & 84.6 / 76.8 \\ & HARA [37] & 11905 & 84.0 / 62.5 & 83.7 / 71.9 & 84.9 / 73.7 \\ & Ours & 11905 & **93.1 / 88.6** & 95.2 / 82.3 & 95.2 / 82.8 \\ & Ours & **2988** & **94.5 / 86.0** & 96.2 / 90.5 & 95.2 / 83.0 \\ \hline \end{tabular}
\end{table}
Table 1: Registration recall on the 3DMatch (“3D”) and 3DL-Match (“3DL”) datasets. We report results with different pairwise registration algorithms (SpinNet [1], YOHO [49], GeoTrans [46]).
Figure 4: Qualitative results on the 3DMatch, ScanNet, and ETH datasets.
are shown in Table 1, Table 2 and Table 3, respectively.
First, the results show that our method achieves significantly better performance than all baseline methods, with \(\sim\)5%-10% improvements on the 3DMatch and 3DLoMatch datasets, which demonstrates that our method is able to accurately align low-overlap scan pairs via pose synchronization. Meanwhile, our method only requires \(\sim 30\%\) of the pairwise registrations thanks to our sparse graph construction, which greatly improves the efficiency.
Second, when using the same pose graphs as previous methods, our method already achieves better performance on all datasets, which benefits from our history reweighting function in the IRLS. Meanwhile, applying our global features for the graph construction further improves the results, which demonstrates that the predicted overlap score is more robust than simply pruning edges according to the pairwise registration.
Finally, the results on the outdoor ETH dataset demonstrate the generalization ability of the proposed method. Both our method and the learning-based method LITS [55] are trained on the indoor 3DMatch dataset. However, LITS does not generalize well to the outdoor dataset (only \(\sim\)45% recall) even though it shows strong performance on both indoor datasets. In comparison, our method still achieves strong performance (almost 100% registration recall) on the outdoor dataset.
### Analysis
In this section, we conduct thorough analyses of the proposed pose graph construction and history-reweighting IRLS modules. By default, all analyses are conducted on the 3D(Lo)Match dataset with YOHO [49] as the pairwise registration method.
#### 4.3.1 Sparse Graph Construction
**Are predicted overlap scores well-calibrated**? Well-calibrated overlap scores should assign higher scores to scan pairs with larger overlap regions. Meanwhile, we want the scan pairs with higher overlap scores to be easily aligned by the pairwise registration algorithms. In Fig. 5, we report the averaged ground truth overlap ratios and the correct ratio of pairwise registration for scan pairs with top-30 predicted overlap scores. It can be seen that the estimated overlap
\begin{table}
\begin{tabular}{l|l|c c c c c c|c c c c c} \hline \hline \multirow{2}{*}{_Pose Graph_} & \multirow{2}{*}{_Method_} & \multirow{2}{*}{_\#Pair_} & \multicolumn{8}{c}{_Rotation Error_} & \multicolumn{8}{c}{_Translation Error (m)_} \\ & & & 3\({}^{\circ}\) & 5\({}^{\circ}\) & 10\({}^{\circ}\) & 30\({}^{\circ}\) & 45\({}^{\circ}\) & Mean/Med & 0.05 & 0.1 & 0.25 & 0.5 & 0.75 & Mean/Med \\ \hline \hline \multirow{10}{*}{Full} & LMVR [21] & 13920 & 48.3 & 53.6 & 58.9 & 63.2 & 64.0 & 48.1\({}^{\circ}\)/33.7\({}^{\circ}\) & 34.5 & 49.1 & 58.5 & 61.6 & 63.9 & 0.83/0.55 \\ & LITS [55] & 13920 & 47.4 & 58.4 & 70.5 & 78.3 & 79.7 & 27.6\({}^{\circ}\)/- & 29.6 & 47.5 & 66.7 & 73.3 & 77.6 & 0.56/- \\ & EIGGS [5]* & 13920 & 19.7 & 24.4 & 32.3 & 49.3 & 56.9 & 53.6\({}^{\circ}\)/48.0\({}^{\circ}\) & 11.2 & 19.7 & 30.5 & 45.7 & 56.7 & 1.03/0.94 \\ & L-IRLS [11]* & 13920 & 38.1 & 44.2 & 48.8 & 55.7 & 56.5 & 53.9\({}^{\circ}\)/47.1\({}^{\circ}\) & 18.5 & 30.4 & 40.7 & 47.8 & 54.4 & 1.14/1.07 \\ & RotAvg [11]* & 13920 & 44.1 & 49.8 & 52.8 & 56.5 & 57.3 & 53.1\({}^{\circ}\)/44.0\({}^{\circ}\) & 28.2 & 40.8 & 48.6 & 51.9 & 56.1 & 1.31/10.5 \\ & LITS [55]* & 13920 & 52.8 & 67.1 & 74.9 & 77.9 & 79.5 & 26.8\({}^{\circ}\)/27.9\({}^{\circ}\) & 29.4 & 51.1 & 68.9 & 75.0 & 77.0 & 0.68/0.66 \\ & HARA [37]* & 13920 & 54.9 & 64.3 & 71.3 & 74.1 & 74.2 & 32.1\({}^{\circ}\)/29.2\({}^{\circ}\) & 35.8 & 54.4 & 66.3 & 69.7 & 72.9 & 0.87/0.75 \\ & Ours & 13920 & 57.2 & 68.5 & 75.1 & 78.1 & 78.8 & 26.4\({}^{\circ}\)/19.5\({}^{\circ}\) & 39.4 & 61.5 & 72.0 & 75.2 & 77.6 & 0.70/0.59 \\ & EIGGS [5]* & 13920 & 40.8 & 46.3 & 51.9 & 61.7 & 65.7 & 40.6\({}^{\circ}\)/37.1\({}^{\circ}\) & 25.3\({}^{\circ}\) & 38.5 & 51.0 & 59.3 & 66.1 & 0.88/0.84 \\ & L1-IRLS [11]* & 13920 & 46.3 & 54.2 & 61.6 & 64.3 & 66.8 & 41.8\({}^{\circ}\)/34.0\({}^{\circ}\) & 24.1 & 38.5 & 48.3 & 55.6 & 60.9 & 1.05/1.01 \\ & RotAvg [11]* & 13920 & 50.2 & 60.1 & 65.3 & 66.8 & 68.8 & 38.5\({}^{\circ}\)/31.6\({}^{\circ}\) & 31.8 & 49.0 & 58.8 & 63.3 & 65.6 & 0.96/0.83 \\ & LITS [55]* & 13920 & 54.3 & 69.4 & 75.6 & 78.5 & 80.3 & 24.9\({}^{\circ}\)/19.9\({}^{\circ}\) & 31.4 & 54.4 & 72.3 & 76.7 & 79.6 & 0.65/0.56 \\ & HARA [37]* & 13920 & 55.7 & 63.7 & 69.0 & 70.8 & 72.1 & 34.7\({}^{\circ}\)/31.3\({}^{\circ}\) & 35.2 & 53.6 & 65.4 & 68.6 & 71.7 & 0.86/0.71 \\ & Ours & 13920 & **59.4** & 71.9 & 80.0 & 82.1 & 82.6 & **21.7\({}^{\circ}\)/19.1\({}^{\circ}\)** & **39.9** & 63.0 & 74.3 & 77.6 & 80.2 & 0.64/**0.47** \\ \hline \multirow{10}{*}{Curs} & Ours & **6004** & 39.1 & **73.1** & **80.8** & **82.5** & **83.0** & **21.7\({}^{\circ}\)/19.0\({}^{\circ}\)** & **39.9** & 63.1** & **76.7** & **79.0** & **81.5** & **6.60**/0.49 \\ \hline \hline \end{tabular}
* means using the same selected frames and pairwise transformations as ours.
\end{table}
Table 2: Registration performance on the ScanNet dataset. The pairwise registration algorithm for all methods is YOHO [49] except for LMVR [21] which includes pairwise registration in its pipeline.
Figure 5: Ground truth overlap ratios and correct ratios of pairwise registration with Top-k overlap scores.
scores are able to identify the reliable scan pairs with high overlap ratios. A visualization of retrieved scans using the global feature is given in Fig. 6.
**Can our sparse graphs improve other multiview registration methods?** We compare the performance of EIGSE3 [5], RotAvg [11], and LITS [55] using the fully-connected pose graph ("Full"), outliers pruned by pairwise registration results [21, 30, 55] ("Pruned") and the proposed sparse graph ("Ours") in Table 4. It can be seen that our sparse graph construction boosts the performance of baseline methods by a larger margin than "Pruned" graphs. Note "Pruned" requires exhaustive pairwise registration while we only need to conduct pairwise registration on the retained edges. Thus, our method is more efficient. Detailed running times are provided in the supplementary material.
#### 4.3.2 Ablation studies on history reweighting
We conduct ablation studies on our designs in the proposed IRLS algorithm. The results are shown in Table 5 and the convergence curves are shown in Fig. 7. We consider the following three designs. 1) _Weight initialization_ (_WI_). We initialize the weight to be the product of the inlier correspondence number \(r_{ij}\) and the predicted overlap score \(s_{ij}\). Alternatively, we may initialize the weight with \(r_{ij}\) or \(s_{ij}\) only. Results show that the proposed initialization is better. 2) _History reweighting_ (_HR_). In our reweighting function, the recomputed weight is determined by the rotation residuals of all previous iterations. Alternatively, we may compute the weight only from the rotation residual of the current iteration. History reweighting stabilizes the iterative refinement and makes IRLS more robust to outliers. 3) _Designing_ \(g(m)\) _to be increasing with_ \(m\) (_INC_). In our design, we set \(g(m)\) to be increasing with \(m\) so that the residuals of early iterations have a smaller impact on the results. Alternatively, we may set \(g(m)=1/M\) so that all residuals contribute equally to the weights. However, the rotations estimated in the early stage are not very stable, so reducing their impact improves the results.
## 5 Conclusion
In this paper, we propose a novel multiview point cloud registration method. The key of the proposed method is a learning-based sparse pose graph construction which can estimate an overlap ratio between two scans, enabling us to select high-overlap scan pairs to construct a sparse but reliable graph. Then, we propose a novel history reweighting function in the IRLS scheme, which improves robustness to outliers and achieves better convergence to correct poses. The proposed method demonstrates state-of-the-art performance on both indoor and outdoor datasets with far fewer pairwise registrations.
## 6 Acknowledgement
This research is jointly sponsored by the National Key Research and Development Program of China (No.2022YFB3904102), the National Natural Science Foundation of China Projects (No.42171431, U20A20185, 61972435), the Open Fund of Hubei Luojia Laboratory (No.2201000054) and the Guangdong Basic and Applied Basic Research Foundation (2022B1515020103).
\begin{table}
\begin{tabular}{l|l|c|c c} \hline _Pose Graph_ & _Method_ & _\#Pair_ & _3D-RR (\%)_ & _3D-RR (\%)_ \\ \hline \hline Full & EIGSE3 [5] & 11905 & 23.2 & 6.6 \\ Pruned [21] & EIGSE3 [5] & 11905 & 40.1 & 26.5 \\ Ours & EIGSE3 [5] & **2798** & **60.4** & **44.6** \\ Full & RotAvg [11] & 11905 & 61.8 & 44.1 \\ Pruned [21] & RotAvg [11] & 11905 & 77.2 & 60.3 \\ Ours & RotAvg [11] & **2798** & **81.7** & **63.9** \\ Full & LITS [55] & 11905 & 77.0 & 59.0 \\ Pruned [21] & LITS [55] & 11905 & 80.8 & 65.2 \\ Ours & LITS [55] & **2798** & **84.6** & **68.6** \\ \hline \end{tabular}
\end{table}
Table 4: Performances of applying different multiview registration methods on different input pose graphs.
Figure 6: An example of retrieving scans using the global feature. The predicted top-3 scans with largest overlap scores indeed have large overlaps with the query scan while the 3 scans with smallest overlap scores are far away from the query.
Figure 7: Curves of rotation error w.r.t. iteration number with ablation on specific components of our IRLS scheme on the 3DMatch (left) and the ScanNet (right). “\(\backslash\)” means “without”.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline & \multicolumn{2}{c|}{_Initialization_} & \multicolumn{2}{c|}{_Reweighting_} & \multirow{2}{*}{_Full_} \\ & _w/o \(s_{ij}\)_ & _w/o \(r_{ij}\)_ & _w/o IR_ & _w/o INC_ \\ \hline \hline _3D-RR(\%)_ & 95.5 (-0.7) & 76.9 (-19.3) & 83.1 (-13.1) & 94.1 (-2.1) & 96.2 \\ _3D-RR(\%)_ & 79.9 (-1.7) & 63.4 (-18.2) & 68.9 (-12.7) & 79.8 (-1.8) & 81.6 \\ \hline \end{tabular}
\end{table}
Table 5: Ablation studies on the proposed IRLS scheme. |
2305.00028 | SMT Solving over Finite Field Arithmetic | Non-linear polynomial systems over finite fields are used to model functional
behavior of cryptosystems, with applications in system security, computer
cryptography, and post-quantum cryptography. Solving polynomial systems is also
one of the most difficult problems in mathematics. In this paper, we propose an
automated reasoning procedure for deciding the satisfiability of a system of
non-linear equations over finite fields. We introduce zero decomposition
techniques to prove that polynomial constraints over finite fields yield finite
basis explanation functions. We use these explanation functions in model
constructing satisfiability solving, allowing us to equip a CDCL-style search
procedure with tailored theory reasoning in SMT solving over finite fields. We
implemented our approach and provide a novel and effective reasoning prototype
for non-linear arithmetic over finite fields. | Thomas Hader, Daniela Kaufmann, Laura Kovács | 2023-04-24T19:52:09Z | http://arxiv.org/abs/2305.00028v2 | # SMT Solving over Finite Field Arithmetic
###### Abstract
Non-linear polynomial systems over finite fields are used to model functional behavior of cryptosystems, with applications in system security, computer cryptography, and post-quantum cryptography. Solving polynomial systems is also one of the most difficult problems in mathematics. In this paper, we propose an automated reasoning procedure for deciding the satisfiability of a system of non-linear equations over finite fields. We introduce zero decomposition techniques to prove that polynomial constraints over finite fields yield finite basis explanation functions. We use these explanation functions in model constructing satisfiability solving, allowing us to equip a CDCL-style search procedure with tailored theory reasoning in SMT solving over finite fields. We implemented our approach and provide a novel and effective reasoning prototype for non-linear arithmetic over finite fields.
## 1 Introduction
Solving a system of polynomial equations is one of the hardest problems in mathematics, with emerging applications in cryptography, software security, code optimizations, control theory, and many other areas of computer science. Computing solutions to polynomial equations is known to be decidable and algorithmically solvable over algebraically closed fields thanks to the fundamental theorem of algebra and Buchberger's algorithm for Gröbner basis computation [8, 37]. Yet, when restricting the problem to solving polynomial equations over the integers, it becomes undecidable [29].
Until recently, the algorithmic study of solving polynomial constraints, and hence automated reasoning in polynomial arithmetic, was the sole domain of computer algebra systems [1, 28, 36, 43]. These systems are very powerful in computing the set of all solutions of polynomial constraints, but generally suffer from high computational overhead, such as doubly exponential computation complexities in terms of number of variables [11].
With the purpose of scaling non-linear reasoning, especially for solving satisfiability instances of polynomial arithmetic, exciting new developments in boolean satisfiability (SAT)/satisfiability modulo theory (SMT) reasoning arose by combining a Conflict-Driven Clause Learning (CDCL)-style search for a feasible assignment, called Model Constructing Satisfiability (MCSat), with algebraic decompositions and projections over the solution space of polynomial inequalities [25, 12]. Unlike the classic CDCL(T) approach of SMT-solvers, MCSat [25, 12, 24] combines the capabilities of a SAT solver and a theory solver into a single procedure while keeping the search principles theory independent. To the best of our knowledge, SMT solving lacks a dedicated approach for reasoning over finite fields, and encoding the problem in existing theories (e.g. NIA) is inefficient [33].
_In this paper we address this challenge and introduce a CDCL-style search procedure extended with zero decomposition techniques for explaining and resolving (variable) conflicts while solving polynomial constraints over finite fields._
Need for Finite Fields.Finite fields provide a natural ground to model bounded machine arithmetic, for example when considering modern cryptosystems with applications in system security and post-quantum cryptography. Existing approaches build, for example, private and secure systems from Zero-Knowledge Proofs [18] or verify blockchain technologies, such as smart contracts [38], with all these efforts implementing finite field arithmetic. Elliptic curve cryptography [21] exploits polynomials over finite fields, with further use in TLS encryption [30], SSH [35] and digital signatures [23]. Polynomial equations over finite fields are also used in coding theory [27; 31], for decoding error-correcting codes with large error rates. In addition, solving polynomials over finite fields has applications in finite biological models, such as modeling cycles of biological networks as continuous dynamical systems [31; 32].
SMT Solving over Finite Fields.In this paper we introduce an MCSat-based decision procedure for solving polynomial constraints over finite fields, extending thus the landscape of SMT solving with finite field arithmetic. We formalize _SMT solving over finite fields_ as follows (see Section 2 for relevant notation).
Given a finite field \(\mathbb{F}_{q}\) with order \(q=p^{k}\), where \(p\) is a prime number and \(k\geq 1\), let \(F\) be a set of polynomial constraints in \(\mathbb{F}_{q}[X]\) and \(\mathcal{F}\) a formula following the logical structure:
\[\mathcal{F}\quad=\quad\bigwedge_{C\subseteq F}\bigvee_{f\in C}f\quad=\quad \bigwedge_{C\subseteq F}\bigvee_{f\in C}\mathsf{poly}(f)\rhd 0\quad\text{with} \rhd\in\{=,\neq\}.\]
SMT solving over finite fields:Does an assignment \(\nu:\{x_{1},\ldots,x_{n}\}\to\mathbb{F}_{q}\) exist that satisfies \(\mathcal{F}\)?
**Example 1**.: _We show an instance of the SMT solving problem over finite fields, by considering the finite field \(\mathbb{F}_{5}\) whose elements are \(\{0,1,2,3,4\}\). Note that \(-1\) is \(4\) in \(\mathbb{F}_{5}\). Let \(\mathcal{F}\) be the formula representing the conjunction of the polynomial constraints \(\{x_{1}^{2}-1=0,x_{1}x_{2}-x_{2}-1=0\}\) over \(\mathbb{F}_{5}[x_{1},x_{2}]\). In our work we address SMT solving of \(\mathcal{F}\) over \(\mathbb{F}_{5}[x_{1},x_{2}]\), deriving that \(\mathcal{F}\) is satisfiable using the variable assignment \(\{x_{1}\mapsto 4,x_{2}\mapsto 2\}\)._
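As a concrete illustration of this decision problem (and of the satisfying assignment claimed in Example 1), the following self-contained Python sketch simply enumerates all assignments over \(\mathbb{F}_{5}\). The encoding of constraints as (polynomial, relation) pairs is our own choice for this illustration and is not part of the prototype described in Section 6.

```python
from itertools import product

q = 5  # order of the prime field F_5

# Each constraint is a pair (polynomial as a Python function, relation).
constraints = [
    (lambda x1, x2: (x1**2 - 1) % q, "="),         # x1^2 - 1 = 0
    (lambda x1, x2: (x1 * x2 - x2 - 1) % q, "="),  # x1*x2 - x2 - 1 = 0
]

def satisfies(assignment):
    """Check whether a full assignment satisfies every constraint."""
    for poly, rel in constraints:
        value = poly(*assignment)
        if (rel == "=" and value != 0) or (rel == "!=" and value == 0):
            return False
    return True

models = [a for a in product(range(q), repeat=2) if satisfies(a)]
print(models)  # [(4, 2)] -- the satisfying assignment of Example 1
```

Such exhaustive enumeration is of course only feasible for tiny instances; the MCSat procedure developed below avoids it.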
To the best of our knowledge, existing SMT-based approaches lack the necessary theory for reasoning over finite fields, and therefore assertions that model the behavior of finite fields must be included in the input problem formalization (i.e. \(F\)). As a workaround, one may use so-called _field polynomials_ (\(\{x_{k}^{q}-x_{k}\mid 1\leq k\leq n\}\) for a ring \(\mathbb{F}_{q}[X]\)) to characterize finite fields and thus restrict the solution space of \(\mathbb{F}_{q}[X]\) to the finite domain of the field \(\mathbb{F}_{q}\). Unfortunately, using field polynomials is practically inefficient, as already witnessed in our initial attempts from [19; 20]: when used during variable elimination, field polynomials yield new polynomials as logical consequences of the initial set \(F\) of polynomials and at the same time hugely increase the degree and size of the newly derived polynomials in the search space.
Our contributions.In this paper we do not rely on field polynomials but extend the theory-dependent rules of MCSat to natively support finite field arithmetic. The main difficulty in MCSat-based reasoning comes with generating so-called _explanation clauses_ for resolving conflicting variable assignments during SMT solving. We therefore develop a novel _theory propagation_ rule for finite fields that admits propagation of theory literals (Section 4). Our method exploits zero decomposition techniques [40] to prove that polynomial constraints over finite fields yield finite basis explanation functions (Theorem 2), implying computability of such functions. We use single polynomial projections and adjust subresultant regular subchains [42]
to calculate greatest common divisors with regard to partial variable assignments (Section 5), allowing us to avoid the use of field polynomials when deriving explanation clauses during solving polynomial constraints (Theorems 3-4). Our explanation clauses are integrated within MCSat, restricting the search space of SMT solving over \(\mathbb{F}_{q}[X]\). We implement our approach in a new prototype for SMT solving over finite fields (Section 6) and experimentally demonstrate the applicability of SMT solving over finite fields (Section 7).
## 2 Preliminaries
We provide a brief summary of the relevant algebraic concepts of finite fields [15].
Fields and Polynomials.A _field_\(\mathbb{F}\) consists of a set \(S\) on which two binary operators addition "\(+\)" and multiplication "\(\cdot\)" are defined. Both operators are commutative, associative, have a neutral element in \(S\) (denoted as _zero_\((0)\) and _one_\((1)\), respectively), and each element in \(S\) has additive and multiplicative inverses. Furthermore, distributivity holds. Informally speaking, a _field_ is a set \(S\) with well-defined operations addition, subtraction, multiplication, and division (with the exception of division by zero). Field examples include \(\mathbb{Q}\) and \(\mathbb{R}\).
Let \(X\) be the set of variables \(\{x_{1},\ldots,x_{n}\}\). We sort the variables in \(X\) according to their index \(x_{1}<x_{2}<\cdots<x_{n}\). Since \(x_{i}\) is the \(i\)-th variable in the order, we say it is of _class_ \(i\), denoted by \(\mathsf{cls}(x_{i})=i\). We have \(X_{k}=\{x_{i}\in X\mid i\leq k\}\).
By \(\mathbb{F}[X]\) we denote the ring of polynomials in variables \(X\) with coefficients in \(\mathbb{F}\). A _term_\(\tau=x_{1}^{d_{1}}\cdots x_{n}^{d_{n}}\) is a product of powers of variables for \(d_{i}\in\mathbb{N}\). If all \(d_{i}=0\), we have \(\tau=1\). A multiple of a term \(c\tau\) with \(c\in\mathbb{F}\setminus\{0\}\) is a _monomial_. A _polynomial_ is a finite sum of monomials with pairwise distinct terms.
The _degree of a term_ \(\tau\) is the sum of its exponents \(\sum_{i=1}^{n}d_{i}\). The _degree of a polynomial_ \(p\) is the highest degree of its terms. We write \(\mathsf{deg}(p,x_{i})\) to denote the highest _degree of_ \(x_{i}\) in \(p\).
For a polynomial \(p\), the set of variables of \(p\) is denoted by \(\mathsf{vars}(p)\). If \(\mathsf{vars}(p)=\emptyset\), then \(p\) is _constant_. If \(|\mathsf{vars}(p)|=1\), \(p\) is _univariate_, and otherwise it is _multivariate_. For a set of polynomials \(P\), we define \(\mathsf{vars}(P)=\bigcup_{p\in P}\mathsf{vars}(p)\).
An order \(\leq\) is fixed on the set of terms such that for all terms \(\tau,\sigma_{1},\sigma_{2}\) it holds that \(1\leq\tau\) and further \(\sigma_{1}\leq\sigma_{2}\Rightarrow\tau\sigma_{1}\leq\tau\sigma_{2}\). One such order is the _lexicographic term order_: If \(x_{1}<x_{2}<\cdots<x_{n}\), then for two terms \(\sigma_{1}=x_{1}^{d_{1}}\cdots x_{n}^{d_{n}}\), \(\sigma_{2}=x_{1}^{e_{1}}\cdots x_{n}^{e_{n}}\) it holds \(\sigma_{1}<\sigma_{2}\) iff there exists an index \(i\) with \(d_{j}=e_{j}\) for all \(j>i\), and \(d_{i}<e_{i}\).
For a polynomial \(p\), the _leading variable_\(\mathsf{lv}(p)\) is the variable \(x_{i}\) of \(\mathsf{vars}(p)\) with the highest class. Let \(\mathsf{cls}(p)=\mathsf{cls}(\mathsf{lv}(p))\). We define the coefficient of \(x_{i}^{\mathsf{deg}(p,x_{i})}\) as the _leading coefficient_ of \(p\) with respect to \(x_{i}\) and write it as \(\mathsf{lc}(p,x_{i})\). We denote \(\mathsf{red}(p,x_{i})=p-\mathsf{lc}(p,x_{i})x_{i}^{\mathsf{deg}(p,x_{i})}\) as the _reductum_ of \(p\) with respect to \(x_{i}\).
**Example 2**.: _Given the polynomial \(p=2x_{3}^{2}x_{1}+4x_{3}x_{2}^{4}+x_{3}x_{2}+7x_{1}\in\mathbb{Q}[x_{1},x_{2},x_{3}]\), we have \(\mathsf{vars}(p)=\{x_{1},x_{2},x_{3}\}\), \(\mathsf{lv}(p)=x_{3}\) and \(\mathsf{red}(p,x_{3})=4x_{3}x_{2}^{4}+x_{3}x_{2}+7x_{1}\). Furthermore, \(\mathsf{lc}(p,x_{1})=2x_{3}^{2}+7\), \(\mathsf{lc}(p,x_{2})=4x_{3}\), and \(\mathsf{deg}(p,x_{3})=2\), \(\mathsf{deg}(p,x_{2})=4\)._
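The quantities of Example 2 can be checked with an off-the-shelf computer algebra package; the short sympy-based sketch below is only an illustration and is independent of the prototype of Section 6.

```python
from sympy import symbols, Poly, expand

x1, x2, x3 = symbols("x1 x2 x3")
p = 2*x3**2*x1 + 4*x3*x2**4 + x3*x2 + 7*x1

print(Poly(p, x1).LC())      # 2*x3**2 + 7   = lc(p, x1)
print(Poly(p, x2).LC())      # 4*x3          = lc(p, x2)
print(Poly(p, x3).degree())  # 2             = deg(p, x3)
print(Poly(p, x2).degree())  # 4             = deg(p, x2)

# reductum with respect to x3: remove the leading monomial in x3
lead = Poly(p, x3)
print(expand(p - lead.LC() * x3**lead.degree()))
# 4*x2**4*x3 + x2*x3 + 7*x1  (term order in the printout may differ)
```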
A polynomial \(p\in\mathbb{F}[X]\) is _irreducible_ if it cannot be represented as the product of two non-constant polynomials, i.e. there exist no \(q,r\in\mathbb{F}[X]\) such that \(p=q\cdot r\). A polynomial \(g\in\mathbb{F}[X]\) is called a _greatest common divisor (gcd)_ of polynomials \(p_{1},\ldots,p_{s}\) if \(g\) divides \(p_{1},\ldots,p_{s}\) and every common divisor of \(p_{1},\ldots,p_{s}\) divides \(g\).
A tuple of values \(\alpha\in\mathbb{F}^{n}\) is a _root_ or _zero_ of a polynomial \(p\in\mathbb{F}[X]\) if \(p(\alpha)=0\). A field \(\mathbb{F}\) is _algebraically closed_ if every non-constant univariate polynomial in \(\mathbb{F}[x]\) has a root in \(\mathbb{F}\).
Let \(\mathbb{K}\subseteq\mathbb{F}\) be a field with respect to the field operations inherited from \(\mathbb{F}\). We call \(\mathbb{F}\) a _field extension_ of \(\mathbb{K}\) and write \(\mathbb{F}/\mathbb{K}\). An _algebraic extension_ of \(\mathbb{F}\) is a field extension \(\mathbb{G}/\mathbb{F}\) such that every element of \(\mathbb{G}\) is a root of a non-zero polynomial with coefficients in \(\mathbb{F}\). An _algebraic closure_ of a field \(\mathbb{F}\) is an algebraic extension \(\mathbb{G}\) that is algebraically closed; we call \(\mathbb{F}\) the _base field_.
Finite Fields.In a _finite field_\(\mathbb{F}_{q}\) the set \(S\) has only finitely many elements. The number of elements is denoted by \(q\) and is called the _order_ of the finite field. We denote the algebraic closure of \(\mathbb{F}_{q}\) as \(\overline{\mathbb{F}}_{q}\). A finite field \(\mathbb{F}_{q}\) exists iff \(q\) is the \(k\)-th power of a prime \(p\), i.e. \(q=p^{k}\). All finite fields with the same order are isomorphic, i.e. there exists a structure-preserving mapping between them. In case \(k=1\), \(\mathbb{F}_{q}\) can be represented by the integers modulo \(p\) and we have \(S=\{0,1,\ldots,p-1\}\) with the standard integer addition and multiplication operation performed modulo \(p\). For example for \(\mathbb{F}_{5}\) we have \(S=\{0,1,2,3,4\}\) and \(2+3=0\) and \(3\cdot 4=2\).
The elements of \(\mathbb{F}_{q}=\mathbb{F}_{p^{k}}\) with \(k>1\) are polynomials of degree at most \(k-1\) with coefficients in \(\mathbb{F}_{p}\). Addition and multiplication of the polynomials are performed modulo a univariate irreducible polynomial \(g\in\mathbb{F}_{p}[a]\) of degree \(k\). For example, \(\mathbb{F}_{4}=\mathbb{F}_{2^{2}}\) is generated using the irreducible polynomial \(g=a^{2}+a+1\). The elements are \(\{0,1,a,1+a\}\) and \(((a)+(1))\cdot(a+1)\) evaluates to \(a\).
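A minimal Python sketch of these arithmetic rules (our own illustration, not taken from the paper's implementation) reproduces the \(\mathbb{F}_{5}\) computations above and the \(\mathbb{F}_{4}\) multiplication modulo \(g=a^{2}+a+1\).

```python
# Arithmetic in the prime field F_5: integer arithmetic modulo 5.
p = 5
print((2 + 3) % p, (3 * 4) % p)  # 0 2

# Arithmetic in F_4 = F_{2^2}: elements are polynomials over F_2 of degree < 2,
# represented as coefficient lists [c0, c1] meaning c0 + c1*a.  Products are
# reduced modulo the irreducible polynomial g = a^2 + a + 1, i.e. a^2 = a + 1.
def add_f4(u, v):
    return [(u[0] + v[0]) % 2, (u[1] + v[1]) % 2]

def mul_f4(u, v):
    c0 = u[0] * v[0]
    c1 = u[0] * v[1] + u[1] * v[0]
    c2 = u[1] * v[1]
    return [(c0 + c2) % 2, (c1 + c2) % 2]  # substitute a^2 by a + 1 over F_2

x = add_f4([0, 1], [1, 0])   # (a) + (1) = 1 + a
print(mul_f4(x, [1, 1]))     # (1 + a) * (1 + a) = a, printed as [0, 1]
```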
Polynomial Constraints and Formulas.A _polynomial constraint \(f\) over \(p\)_ in the ring \(\mathbb{F}_{q}[X]\) is of the form \(p\vartriangleright 0\) where \(p\in\mathbb{F}_{q}[X]\) and \(\vartriangleright\in\{=,\neq\}\). Since a total ordering with respect to the field operations on elements of a finite field \(\mathbb{F}_{q}\) does not exist, we only consider inequality constraints of the form \(p\neq 0\) and do not consider \(<\) and \(>\). We define \(\mathsf{poly}(f)=p\), and extend \(\mathsf{vars}(f)=\mathsf{vars}(\mathsf{poly}(f))\) and \(\mathsf{cls}(f)=\mathsf{cls}(\mathsf{poly}(f))\). For a set of constraints \(F\) we define \(\mathsf{vars}(F)=\bigcup_{f\in F}\mathsf{vars}(f)\). A polynomial constraint \(f\) is negated by substituting \(\vartriangleright\) in \(f\) with the other element, i.e. \(\neg(\mathsf{poly}(f)=0)\) is equivalent to \(\mathsf{poly}(f)\neq 0\).
Let \(\nu:X\to\mathbb{F}_{q}\) denote a (partial) _assignment_ of variables \(X\). We extend \(\nu\) to an evaluation of polynomials in the natural way, i.e. \(\nu:\mathbb{F}_{q}[X]\to\mathbb{F}_{q}\). Given an assignment \(\nu\) and a polynomial constraint \(f=p\vartriangleright 0\), we say \(\nu\) _satisfies_ \(f\) iff \(\nu(p)\vartriangleright 0\) holds. The function \(\nu\) is also used to evaluate a constraint \(f\). If \(\nu\) does not assign all variables of \(\mathsf{poly}(f)\), we define \(\nu(f)=\mathsf{undef}\). If \(\nu\) assigns all variables of \(\mathsf{poly}(f)\), then \(\nu(f)=\mathsf{true}\) if \(\nu\) satisfies \(f\), and \(\nu(f)=\mathsf{false}\) otherwise. Given a set of polynomial constraints \(F\), we have \(\nu(F)=\mathsf{true}\) iff \(\nu\) satisfies all elements in \(F\). If such a \(\nu\) exists, we say that \(F\) is _satisfiable_ and \(\nu\) _satisfies_ \(F\).
We refer to a single constraint as an _atom_. A _literal_ is an atom or its negated form. A _clause_\(C\) is a disjunction of literals. If \(C\) contains only one literal it is a _unit clause_. A _formula_\(\mathcal{F}\) is a set of clauses \(\mathcal{C}\). Logically a formula represents a conjunction of disjunctions of literals. An _assignment_\(\nu\) satisfies a clause \(C\) if at least one literal in \(C\) is satisfied by \(\nu\). Finally, \(\nu\) satisfies a set of clauses if every clause is satisfied by \(\nu\).
## 3 Model Constructing Satisfiability (MCSat)
In this section, we summarize the MCSat approach [12, 24, 25] as presented in [25]. Our MCSat adjustments for finite fields are given in Sections 4 and 5.
MCSat Terminology.The MCSat procedure is a transition system with each state denoted by an indexed pair \(\langle M,\mathcal{C}\rangle_{k}\) of a _trail_\(M\) and a set of clauses \(\mathcal{C}\). The index \(k\) specifies the _level_ of
the state. In our case, a clause \(C\in\mathcal{C}\) is a set of polynomial constraints over \(\mathbb{F}_{q}[X]\). We require the following terminology:
* Each _trail element_ of \(M\) is either a _decided_ or _propagated literal_, or a _variable assignment_.
* A decided literal \(f\) is considered to be true. A propagated literal, indicated \(C\to f\), denotes that the status of clause \(C\) implies that \(f\) is true. A variable assignment \(x_{i}\mapsto\alpha\) maps a theory variable \(x_{i}\in X\) to some \(\alpha\in\mathbb{F}_{q}\).
* We say \(f\in M\), if a constraint \(f\) is a trail element. Let \(\mathsf{constr}(M)=\{f\in M\}\).
* A trail is _non-redundant_ if it contains each constraint at most once.
* For a constraint \(f\) let \(\mathsf{level}(f)=i\Leftrightarrow x_{i}\in\mathsf{vars}(f)\land\forall j>i:x_ {j}\notin\mathsf{vars}(f)\) and define the level of a clause \(\mathsf{level}(C)=\max_{f\in C}\mathsf{level}(f)\). Let \(\mathcal{C}_{i}=\{C\in\mathcal{C}\mid\mathsf{level}(C)\leq i\}\).
* We have \(\mathsf{level}(M)=k\), if \(x_{k-1}\mapsto\alpha\) is the highest variable assignment in \(M\), i.e. no variable assignment for \(x_{k},\ldots,x_{n}\) exists.
* A trail is _increasing in level_, if all variables but the highest level variable of a constraint \(f\) are assigned before \(f\) appears on the trail.
* Taking all theory variable assignments \(x_{1}\mapsto\alpha_{1},\ldots,x_{k}\mapsto\alpha_{k}\) of a trail \(M\) with \(\mathsf{level}(M)=k+1\), we define \(\mathbf{\alpha}_{M}=\alpha_{1},\ldots,\alpha_{k}\) and generate a (partial) assignment function \(\nu_{M}:X\mapsto\mathbb{F}_{q}\). We overload \(\nu_{M}\) to evaluate constraints and sets of constraints as discussed in Section 2.
* We further say that \(M\) is _feasible_ if \(\nu_{M}(\mathsf{constr}(M))\) has a solution for \(x_{k}\). The set of possible values for \(x_{k}\) is denoted by \(\mathsf{fspl}(M)\).
* Given an additional constraint \(f\), with \(\mathsf{poly}(f)\in\mathbb{F}_{q}[X]\), we extend \(\mathsf{fspl}(f,M)=\mathsf{fspl}([\![M,f]\!])\). If \(\mathsf{fspl}(f,M)\neq\emptyset\) we say that \(f\) is _compatible_ with \(M\), denoted by \(\mathsf{comp}(f,M)\).
* A state \(\langle M,\mathcal{C}\rangle_{k}\) is _well-formed_ when \(M\) is non-redundant, increasing in level, \(\mathsf{level}(M)=k\), \(\mathsf{fspl}(M)\neq\emptyset\), \(\nu_{M}\) satisfies \(\mathcal{C}_{k-1}\), \(\forall f\in\mathsf{constr}(M):\nu_{M}(f)=\mathsf{true}\), and all propagated literals \(E\!\rightarrow\!f\) are implied, i.e. \(f\in E\) and for all literals \(f^{\prime}\neq f\) in \(E\), \(\nu_{M}(f^{\prime})=\mathsf{false}\) or \(\neg f^{\prime}\in\mathsf{constr}(M)\).
* Given a well-formed state with trail \(M\), assume constraint \(f\) with \(\mathsf{poly}(f)\in\mathbb{F}_{q}[X_{k}]\). Let: \[\mathsf{val}(f,M)=\begin{cases}\nu_{M}(f)&x_{k}\notin\mathsf{vars}(f),\mathsf{level}(M)=k\\ \mathsf{true}&f\in\mathsf{constr}(M)\\ \mathsf{false}&\neg f\in\mathsf{constr}(M)\\ \mathsf{undef}&\text{otherwise}\end{cases}\] We overload this function to handle clauses. As such, we define \(\mathsf{val}(C,M)=\mathsf{true}\) if there exists \(f\in C\) such that \(\mathsf{val}(f,M)=\mathsf{true}\); \(\mathsf{val}(C,M)=\mathsf{false}\) if \(\mathsf{val}(f,M)=\mathsf{false}\) for all \(f\in C\); and \(\mathsf{val}(C,M)=\mathsf{undef}\) in all other cases.
MCSat Calculus.The MCSat calculus is given in Figure 1 and detailed next. Given a set of clauses \(\mathcal{C}\), in our case clauses of polynomial constraints over \(\mathbb{F}_{q}[X]\), the goal is to move from an initial state \(\langle[\![]\!],\mathcal{C}\rangle_{1}\) to one of the two termination states, namely \(\langle\mathsf{sat},\nu\rangle\) or \(\mathsf{unsat}\), by continuously applying transition rules. A termination and correctness proof that is independent of the underlying theory is given in [25, Thm. 1].
The search rules either select a clause for further processing (Sel-Clause), detect a conflict (Conflict), detect satisfiability (Sat), or assign a variable while increasing the level before performing another search step (Lift-Level).
Clause satisfaction rules determine how a clause \(C\) is absorbed into the trail \(M\) given a state \(\langle M,\mathcal{C}\rangle_{k}\vDash C\) through semantic reasoning on the theory. The first two rules are similar to classical DPLL-style propagation and differ in whether we meet a single compatible literal (B-Prop) or can choose between multiple yet undetermined compatible literals (Decide-Lit).
The T-Prop rule is the core component of any MCSat procedure. It utilizes theory knowledge to propagate literals during the search. The explanation function \(\mathsf{exp}\) generates a valid lemma \(E\) that justifies the propagation. This rule was dubbed "R-Propagation" in [25] since the focus there was solely on real arithmetic. However, the rule itself does not rely on reals, only the explanation function does. For our purpose, we refer to this rule as "Theory-Propagation", in short T-Prop. In Section 4 we prove that explanation functions for polynomials over finite fields always exist. Moreover, in Section 5 we show that explanation functions are also computable using zero decomposition procedures, avoiding the application of field polynomials within MCSat.
The conflict resolution rules of Figure 1 rely on standard boolean conflict analysis [34], using the standard boolean resolution function \(\mathsf{resolve}\). We either resolve propagation or decision steps (\(\mathsf{Resolve-Prop}\), \(\mathsf{Resolve-Dec}\)) or backtrack if there is no conflicting literal in the trail's top literal (\(\mathsf{Consume-Prop}\), \(\mathsf{Consume-Dec}\)). The only theory-specific aspects of the conflict resolution are the rules \(\mathsf{Drop-Level-1}\) and \(\mathsf{Drop-Level-2}\), where we undo theory variable assignments. We add the conflict clause \(C\) to the clause set to avoid assignment repetition.
Figure 1: Transition Rules of MCSat
## 4 Theory Propagation for Polynomials over Finite Fields
The key challenge in designing an MCSat-based decision procedure for a particular theory is developing theory propagation in the respective theory to be used within the T-Prop rule of the MCSat calculus of Figure 1. In this section, we introduce zero decomposition procedures over polynomial constraints (Section 4.1) in support of theory propagation over finite fields, allowing us to prove the existence of explanation clauses within MCSat over finite fields (Section 4.2).
Upon application of the T-Prop rule, a literal is selected for propagation and justified by a newly generated explanation clause \(E\). In our work we focus on propagating polynomial constraint literals \(f\) and define their respective explanations using so-called theory lemmas.
**Definition 1** (Polynomial Explanation).: _Let \(f\) be a constraint, \(M\) a trail, and \(E\) a clause of constraints. \(E\) is a valid (theory) lemma if for any arbitrary assignment \(\nu\), \(\nu(E)\neq\mathsf{false}\). The clause \(E\) justifies \(f\) in \(M\) iff \(f\in E\) and \(\forall f^{\prime}\in E:f\neq f^{\prime}\Rightarrow\mathsf{val}(f^{\prime},M)= \mathsf{false}\). \(E\) is an explanation clause for \(f\) in \(M\) if \(E\) is a valid theory lemma and justifies \(f\) in \(M\)._
Note that the explanation clauses \(E\) for \(f\) are generated using an explanation function \(\mathsf{exp}\) during the applications of the T-Prop rule of Figure 1. We define the \(\mathsf{exp}\) function as follows.
**Definition 2** (Polynomial Explanation Function \(\mathsf{exp}\)).: _A function \(\mathsf{exp}:\{\textit{constraint}\}\times\{\textit{trail}\}\to\{\textit{clause}\}\) is an explanation function \(\mathsf{exp}(f,M)=E\) iff \(f\notin M\), \(\neg\mathsf{comp}(\neg f,M)\), and \(E\) is an explanation clause for \(f\) in \(M\)._
**Example 3**.: _A most trivial explanation function propagates \(f\) by excluding the current trail \(M\) of level \(k\) via \(E=\{f\}\cup\{\neg f^{\prime}\mid f^{\prime}\in M\textit{ and }x_{k-1}\in\mathsf{vars}(f^{\prime})\}\cup\{x\neq\alpha\mid(x\mapsto\alpha) \in M\}\)._
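The trivial explanation of Example 3 is easy to write down explicitly. The sketch below is our own illustration: it uses a simplified trail representation (literals as strings with their variable sets, assignments as triples) rather than the prototype's data structures.

```python
def trivial_explanation(f, trail, k):
    """Build the trivial explanation clause of Example 3 for literal f:
    f itself, the negation of every trail literal mentioning x_{k-1},
    and a disequality for every theory-variable assignment on the trail."""
    clause = [f]
    for entry in trail:
        if entry[0] == "lit" and f"x{k - 1}" in entry[2]:
            clause.append(("not", entry[1]))            # not f' for f' in M
        elif entry[0] == "assign":
            clause.append((entry[1], "!=", entry[2]))   # x != alpha for (x -> alpha) in M
    return clause

trail = [("lit", "x1^2 - 1 = 0", {"x1"}), ("assign", "x1", 4)]
print(trivial_explanation("x1*x2 - x2 - 1 = 0", trail, k=2))
# ['x1*x2 - x2 - 1 = 0', ('not', 'x1^2 - 1 = 0'), ('x1', '!=', 4)]
```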
As any (non-trivial) explanation function may introduce new literals, the termination of a general MCSat procedure requires that all newly introduced literals are taken from a finite basis.
**Definition 3** (Finite-basis Polynomial Explanation).: _The function \(\mathsf{exp}(f,M)\) is a finite basis explanation function if it returns an explanation clause \(E\) for \(f\) in \(M\) and all new literals in \(E\) are taken from a finite basis._
In conclusion, if a theory admits a finite basis explanation function, then MCSat-based reasoning in that respective theory is terminating. Yet, providing a finite basis explanation function is not trivial. A key piece is an efficient procedure to decompose polynomial sets. We present our tailored procedures in Section 5.
### 4.1 Zeros in Polynomials over Finite Fields
Let us arbitrarily fix the sets of polynomials \(P,Q\subset\mathbb{F}_{q}[X_{k}]\). We assume that \(P\) contains polynomials from equality constraints, and \(Q\) consists of inequality constraints.
We define the following sets of solutions: \(\mathsf{zero}(P)=\{\boldsymbol{\alpha}\in\overline{\mathbb{F}}^{k}_{q}\mid p( \boldsymbol{\alpha})=0\text{ for all }p\in P\}\) and \(\mathsf{zero}_{q}(P)=\{\boldsymbol{\alpha}\in\mathbb{F}_{q}^{k}\mid p( \boldsymbol{\alpha})=0\text{ for all }p\in P\}\). Clearly, \(\mathsf{zero}_{q}(P)\subseteq\mathsf{zero}(P)\). For simplicity, we use set subtraction to define \(\mathsf{zero}(P/Q)=\mathsf{zero}(P)\setminus\mathsf{zero}(Q)\) and \(\mathsf{zero}_{q}(P/Q)=\mathsf{zero}_{q}(P)\setminus\mathsf{zero}_{q}(Q)\). We further write \(\hat{P}\) for \(P\setminus\mathbb{F}_{q}[X_{k-1}]\). We use the tuple \(\mathcal{S}=(P,Q)\) to mean a _(polynomial) system_ and write \(\mathsf{zero}(\mathcal{S})=\mathsf{zero}(P/Q)\). We finally define the projection set \(\mathsf{proj}_{k}\mathsf{zero}_{q}(P/Q)=\{\boldsymbol{\alpha}\in\mathbb{F}_{q }^{k-1}\mid\exists\beta\in\mathbb{F}_{q}\text{ such that }(\boldsymbol{\alpha},\beta)\in\mathsf{ zero}_{q}(P/Q)\}\). Intuitively, projection sets are used to reduce the problem of solving polynomials over \(k\) variables into the smaller problem of solving polynomials over \(k-1\) variables, providing thus means for eliminating the variable \(x_{k}\).
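For small fields, the sets defined above can be computed by brute force, which is convenient for testing. The snippet below is only an illustration (the constraint data is borrowed from Example 1, and the helper names are our own).

```python
from itertools import product

q, k = 5, 2
F_q_k = list(product(range(q), repeat=k))  # all points of F_q^k

def zero_q(polys):
    """Common zeros over F_q^k of all polynomials in 'polys' (brute force)."""
    return {a for a in F_q_k if all(p(*a) == 0 for p in polys)}

P = [lambda x1, x2: (x1**2 - 1) % q]   # x1^2 - 1
Q = [lambda x1, x2: (x1 - 1) % q]      # x1 - 1

zeros = zero_q(P) - zero_q(Q)          # zero_q(P/Q) as a set difference
proj = {a[:-1] for a in zeros}         # proj_2 zero_q(P/Q): drop the x_2 coordinate
print(sorted(proj))                    # [(4,)] -- only x1 = 4 can be extended
```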
**Definition 4** (Zero Decomposition).: _A zero decomposition procedure is an algorithm that given \(P,Q\subset\mathbb{F}_{q}[X_{k}]\) generates a set of systems \(\Delta=\{(P_{1},Q_{1}),\ldots,(P_{m},Q_{m})\}\) such that_
\[\mathsf{zero}_{q}(P/Q)=\bigcup_{(P^{\prime},Q^{\prime})\in\Delta}\mathsf{zero}_{ q}(P^{\prime}/Q^{\prime}). \tag{1}\]
_The zero decomposition procedure is projecting in case \(P^{\prime},Q^{\prime}\in\mathbb{F}_{q}[X_{k-1}]\) for all \((P^{\prime},Q^{\prime})\in\Delta\) and_
\[\mathsf{proj}_{k}\mathsf{zero}_{q}(P/Q)=\bigcup_{(P^{\prime},Q^{\prime})\in \Delta}\mathsf{zero}_{q}(P^{\prime}/Q^{\prime}).\]
_Given additionally \(\boldsymbol{\alpha}\in\mathbb{F}_{q}^{k-1}\) which cannot be extended to a zero, i.e. there is no \(\beta\in\mathbb{F}_{q}\) such that \((\boldsymbol{\alpha},\beta)\in\mathsf{zero}_{q}(P/Q)\), we say that an algorithm is a weak projecting zero decomposition procedure for \(\boldsymbol{\alpha}\) if_
\[\mathsf{proj}_{k}\mathsf{zero}_{q}(P/Q)\subseteq\bigcup_{(P^{\prime},Q^{ \prime})\in\Delta}\mathsf{zero}_{q}(P^{\prime}/Q^{\prime}) \tag{2}\]
_with \(P^{\prime},Q^{\prime}\in\mathbb{F}_{q}[X_{k-1}]\) and \(\boldsymbol{\alpha}\notin\mathsf{zero}_{q}(P^{\prime}/Q^{\prime})\) for all \((P^{\prime},Q^{\prime})\in\Delta\)._
For many zero decomposition procedures [42], the _pseudo division_ operation plays an important role, as follows. Consider polynomials \(f,g\in\mathbb{F}_{q}[X_{k}]\), with \(f\neq 0\). Let \(r,o\in\mathbb{F}_{q}[X_{k}]\) denote polynomials. We define the _pseudo-remainder formula_ (in \(x_{k}\)) as
\[l^{d}\cdot g=o\cdot f+r\]
where \(l=\mathsf{lc}(f,x_{k})\), \(d=\max(\mathsf{deg}(g,x_{k})-\mathsf{deg}(f,x_{k})+1,0)\), and \(\mathsf{deg}(r,x_{k})<\mathsf{deg}(f,x_{k})\). The _pseudo-remainder_\(r\) and _pseudo-quotient_\(o\) of \(g\) with respect to \(f\) in \(x_{k}\) are denoted as \(\mathsf{prem}(g,f,x_{k})\) and \(\mathsf{pquo}(g,f,x_{k})\), respectively. The polynomials \(o\) and \(r\) are uniquely determined by \(f\) and \(g\) and are computable [42].
**Example 4**.: _Let \(f=x_{2}+x_{1}\) and \(g=3x_{2}x_{1}^{2}+x_{1}\) in \(\mathbb{F}_{5}[x_{1},x_{2}]\). Noting that \(-2=3\) in \(\mathbb{F}_{5}\), we have eliminated \(x_{2}\) in both the pseudo remainder and pseudo quotient by_
\[\underbrace{(3x_{2}x_{1}^{2}+x_{1})}_{g}\ =\ \underbrace{(-2x_{1}^{2})}_{ \mathsf{pquo}(g,f,x_{2})}\ \cdot\ \underbrace{(x_{2}+x_{1})}_{f}\ +\ \underbrace{(2x_{1}^{3}+x_{1})}_{ \mathsf{prem}(g,f,x_{2})}.\]
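Since \(l=\mathsf{lc}(f,x_{2})=1\) and \(d=1\) in Example 4, the pseudo-remainder formula reduces to \(g=\mathsf{pquo}\cdot f+\mathsf{prem}\). The following standalone check (an illustration only, not part of the prototype) verifies this identity exhaustively over \(\mathbb{F}_{5}^{2}\).

```python
from itertools import product

q = 5

def f(x1, x2):  return (x2 + x1) % q
def g(x1, x2):  return (3 * x2 * x1**2 + x1) % q
def pquo(x1):   return (-2 * x1**2) % q       # pquo(g, f, x2)
def prem(x1):   return (2 * x1**3 + x1) % q   # prem(g, f, x2)

# With l = lc(f, x2) = 1 and d = 1 the formula l^d * g = pquo * f + prem
# becomes g = pquo * f + prem (all arithmetic modulo 5).
assert all(
    g(x1, x2) == (pquo(x1) * f(x1, x2) + prem(x1)) % q
    for x1, x2 in product(range(q), repeat=2)
)
print("pseudo-division identity of Example 4 holds on all of F_5^2")
```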
Calculating \(\gcd\)s is another method for reducing the degree of \(x_{k}\) and thereby eliminating it. We employ subresultant regular subchains (SRSs) to calculate \(\gcd\)s with respect to a partial assignment, as shown in Lemma 2.4.2 of [42]. Given two polynomials \(f,g\in\mathbb{F}_{q}[X_{k}]\) with \(\mathsf{deg}(f,x_{k})\geq\mathsf{deg}(g,x_{k})>0\), we denote by \(\mathsf{srs}(f,g,x_{k})=h_{2},\ldots,h_{r}\) the SRS of \(f\) and \(g\) with regard to \(x_{k}\). Let \(l=\mathsf{lc}(g,x_{k})\) and \(l_{\ell}=\mathsf{lc}(h_{\ell},x_{k})\). Then for \(2\leq\ell\leq r\), we have
\[\gcd(f(\boldsymbol{\alpha},x_{k}),g(\boldsymbol{\alpha},x_{k}))=h_{\ell}( \boldsymbol{\alpha},x_{k})\]
if \(\boldsymbol{\alpha}\in\mathsf{zero}(\{l_{\ell+1},\ldots,l_{r}\}/\{l,l_{\ell}\})\). When \(f\), \(g\), and \(h_{\ell}\) are partially evaluated w.r.t. \(\boldsymbol{\alpha}\), the above \(\gcd\) is equivalent to computing a univariate \(\gcd\).
**Example 5**.: _Let \(f=x_{3}^{2}+x_{3}x_{2}+4\) and \(g=x_{3}x_{2}+x_{1}\) in \(\mathbb{F}_{5}[x_{1},x_{2},x_{3}]\). Then \(\mathsf{srs}(f,g,x_{3})=[h_{2},h_{3}]=[x_{3}x_{2}+x_{1},-x_{2}^{2}x_{1}-x_{2}^{ 2}+x_{1}^{2}]\). Using the assignment function \(\nu=\{x_{2}\mapsto 1,x_{1}\mapsto 3\}\) we have \(\nu(f)=x_{3}^{2}+x_{3}-1\) and \(\nu(g)=\nu(h_{2})=x_{3}-2\) which is indeed the gcd of \(\nu(f)\) and \(\nu(g)\)._
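After partially evaluating \(f\) and \(g\) under \(\nu\), the gcd of Example 5 is an ordinary univariate gcd over \(\mathbb{F}_{5}\), which the textbook Euclidean algorithm computes. The sketch below is illustrative only; it represents univariate polynomials as coefficient lists \([c_{0},c_{1},\ldots]\) and assumes the first argument of the gcd has the larger degree.

```python
p = 5

def poly_divmod(a, b):
    """Divide a by b over F_p; polynomials are coefficient lists [c0, c1, ...].
    Assumes deg(a) >= deg(b) and lc(b) != 0."""
    a = a[:]
    inv_lead = pow(b[-1], p - 2, p)            # inverse of lc(b) via Fermat
    quot = [0] * (len(a) - len(b) + 1)
    for i in range(len(a) - len(b), -1, -1):
        quot[i] = (a[i + len(b) - 1] * inv_lead) % p
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - quot[i] * bj) % p
    while len(a) > 1 and a[-1] == 0:           # trim leading zeros of the remainder
        a.pop()
    return quot, a

def poly_gcd(a, b):
    while b != [0]:
        _, r = poly_divmod(a, b)
        a, b = b, r
    return a

nu_f = [4, 1, 1]   # x3^2 + x3 - 1  =  x3^2 + x3 + 4 over F_5
nu_g = [3, 1]      # x3 - 2         =  x3 + 3 over F_5
print(poly_gcd(nu_f, nu_g))   # [3, 1], i.e. x3 + 3 = x3 - 2
```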
### 4.2 Explaining Propagated Literals
We now show that polynomial constraints over finite fields have finite basis explanation functions.
Our MCSat-based theory propagation works as follows. Let \(M\) be a level \(k\) trail, implying that variable \(x_{k}\) is not yet assigned. Suppose \(f\) is a constraint such that \(f\notin\mathsf{constr}(M)\) and \(\neg\mathsf{comp}(\neg f,M)\). We derive polynomial constraints that are \(\mathsf{false}\) for \(\llbracket M,\neg f\rrbracket\) to generate an explanation clause \(E\) in order to justify propagating \(f\). To enable an instant application of T-Prop, we ensure that for all \(e\in E\) either \(e=f\), \(\mathsf{level}(e)<k\), or \(\neg e\in\mathsf{constr}(M)\) must hold.
Finite Basis Explanations.Towards generating explanation clauses \(E\), we consider the polynomial constraint systems
\[A=\{f^{\prime}\in\mathsf{constr}(M)\mid\mathsf{level}(f^{\prime})=k\}\cup\{ \neg f\}\text{ and }A_{\rhd}=\{p\mid(p\rhd 0)\in A\}\text{ for }\rhd\in\{=,\neq\}. \tag{3}\]
Further, we fix the system \(\mathcal{A}=(A_{=},A_{\neq})\). From \(\neg\mathsf{comp}(\neg f,M)\) follows that \((\boldsymbol{\alpha}_{M},\beta)\notin\mathsf{zero}_{q}(\mathcal{A})\) for all \(\beta\in\mathbb{F}_{q}\). Based on Definition 4, we use a weak projecting zero decomposition procedure for \(\boldsymbol{\alpha}_{M}\) and decompose \(\mathcal{A}\) into multiple systems \(\mathcal{A}_{1},\ldots,\mathcal{A}_{r}\) such that for every \(1\leq\ell\leq r\) we have that \(\boldsymbol{\alpha}_{M}\notin\mathsf{zero}(\mathcal{A}_{\ell})\). Then each \(\mathcal{A}_{\ell}=(P_{\ell},Q_{\ell})\) contains (at least) one polynomial \(u\) in \(\mathbb{F}_{q}[X_{k-1}]\) that excludes \(\boldsymbol{\alpha}_{M}\). Depending on whether \(u\in P_{\ell}\) or \(u\in Q_{\ell}\), we generate an appropriate constraint \(c_{\ell}\) as \(u=0\) or \(u\neq 0\), respectively, to ensure that \(c_{\ell}(\boldsymbol{\alpha}_{M})=\mathsf{false}\). As a result, we set \(C=\{c_{1},\ldots,c_{r}\}\).
For any \((\boldsymbol{\alpha},\beta)\in\mathsf{zero}_{q}(\mathcal{A})\), by Definition 4 we have that \(\boldsymbol{\alpha}\in\mathsf{zero}_{q}(\mathcal{A}_{\ell})\) for some \(1\leq\ell\leq r\) and thus \(c_{\ell}(\boldsymbol{\alpha})=\mathsf{true}\). Hence, anytime an assignment function fulfills all constraints from \(A\), it also fulfills at least one constraint of \(C\). We generate the explanation clause \(E=\{\neg a\mid a\in A\}\cup C\).
**Theorem 1** (Explanation Clause \(E\)).: _Given a trail \(M\) of level \(k\). Let \(f\) be a constraint such that \(f\notin\mathsf{constr}(M)\) and \(\neg\mathsf{comp}(\neg f,M)\). Further let \(A=\{f^{\prime}\in\mathsf{constr}(M)\mid\mathsf{level}(f^{\prime})=k\}\cup\{ \neg f\}\) and \(C=\{c_{1},\ldots,c_{r}\}\) constructed as defined above. Then \(E=\{\neg a\mid a\in A\}\cup C\) is an explanation clause for \(f\) in \(M\)._
Proof.: By Definition 1 we show that \(E\) is a valid theory lemma and justifies \(f\) in \(M\).
By construction we have that \(\neg f\in A\) and thus \(f\in E\). Let \(a\in A\) be a constraint such that \(a\neq\neg f\). Then \(\neg a\in\mathsf{constr}(M)\), therefore, \(\mathsf{val}(a,M)=\mathsf{false}\). Let \(c\in C\), from the construction of \(C\) it immediately follows that \(\mathsf{level}(c)<k\) and \(c(\boldsymbol{\alpha}_{M})=\mathsf{false}\), thus \(\nu_{M}(c)=\mathsf{false}\). Since all constraints in \(E\) but \(f\) evaluate to \(\mathsf{false}\) under \(M\), we derive that \(E\)_justifies \(f\) in \(M\)_.
Let \(\nu\) be an arbitrary assignment. We distinguish the following two cases:
_Case 1:_ Assume \(\nu(\neg a)=\mathsf{false}\) for all \(a\in A\). Let \(\mathcal{A}=(A_{=},A_{\neq})\) as defined in (3) and \(\boldsymbol{\alpha}\) be \(\nu\) represented as a \(k\)-tuple. Since \(\nu(a)=\mathsf{true}\) for all \(a\in A\), we have \(\boldsymbol{\alpha}\in\mathsf{zero}(\mathcal{A})\). Since \(\mathcal{A}\) was zero decomposed into systems \(\mathcal{A}_{1},\ldots,\mathcal{A}_{r}\), there exists \(1\leq i\leq r\) such that \(\boldsymbol{\alpha}\in\mathsf{zero}(\mathcal{A}_{i})\). Thus \(c_{i}(\alpha)=\mathsf{true}\) for \(c_{i}\in C\). As \(C\subseteq E\), it follows that \(\nu(E)=\mathsf{true}\).
_Case 2:_ Assume \(\nu(\neg a)\neq\mathsf{false}\) for some \(a\in A\). As \(\neg a\in E\), we obtain \(\nu(E)\neq\mathsf{false}\).
As \(\nu(E)\neq\mathsf{false}\) in both of the cases above, we conclude that \(E\) is a valid lemma.
**Example 6**.: _(Example 1) We have \(\mathbb{F}_{5}[x_{1},x_{2}]\) and two unit clauses \(C_{1}=\{c_{1}\}=\{x_{1}^{2}-1=0\}\) and \(C_{2}=\{c_{2}\}=\{x_{1}x_{2}-x_{2}-1=0\}\). Assume the current trail is \(M=\llbracket x_{1}^{2}-1=0,\,x_{1}\mapsto 1\rrbracket\). We cannot add \(c_{2}\) as we have \(\neg\mathsf{comp}(c_{2},M)\). Towards a conflict, we propagate \(\neg c_{2}\). Then \(A=\{x_{1}x_{2}-x_{2}-1=0\}\), \(A_{=}=\{x_{1}x_{2}-x_{2}-1\}\), and \(A_{\neq}=\emptyset\). Using a weak zero decomposition procedure (cf. Example 7), we derive the zero decomposition \(\Delta=\{(\emptyset,\{x_{1}-1\})\}\) and generate \(E=\{\neg c_{2},x_{1}-1\neq 0\}\) to justify \(\neg c_{2}\) on \(M\). However, \(\neg c_{2}\) on \(M\) results in a conflict with \(C_{2}\)
_Thus, we resolve \(E\) with \(C_{2}\) and learn that \(x_{1}-1\neq 0\) must hold. We backtrack the assignment of \(x_{1}\) and end up with \(M=\llbracket x_{1}^{2}-1=0,x_{1}-1\neq 0\rrbracket\). Assigning \(x_{1}\mapsto 4\), we eventually reach Sat with \(M=\llbracket c_{1},x_{1}-1\neq 0,x_{1}\mapsto 4,c_{2},x_{2}\mapsto 2\rrbracket\) and \(\nu_{M}=\{x_{1}\mapsto 4,x_{2}\mapsto 2\}\)._
We next show that our explanations from Theorem 1 can be turned into explanations with finite basis. As such, using our explanation functions in the T-Prop rule ensures that MCSat terminates for our theory (Section 5).
**Theorem 2** (Finite Based Explanations).: _Every explanation function for the theory of \(\mathbb{F}_{q}[X]\) can be finite based._
Proof.: The proof relies on an application of Fermat's little theorem. We show that every polynomial \(p\) in the explanation clause can be translated to an equivalent polynomial \(p^{\prime}\) from a finite basis. Given \(\mathbb{F}_{q}\), by the generalized Fermat's little theorem, every element \(a\in\mathbb{F}_{q}\) satisfies \(a^{q}\equiv a\). Let \(t=c\prod_{i=1}^{r}x_{i}^{d_{i}}\) be a term of \(p\). Then by Fermat's little theorem an equivalent term \(t^{\prime}\) can be found such that \(d_{i}\leq q\) for all \(1\leq i\leq r\). When \(c\in\mathbb{F}_{q}\), \(r\) is finite, and \(d_{i}\leq q\) for \(1\leq i\leq r\), there is only a finite set \(T\) of different terms. As a polynomial is a sum of terms, there are \(2^{|T|}-1\) different polynomials that can be constructed from \(T\). By replacing all terms of \(p\) by an equivalent term from \(T\), we have an equivalent polynomial \(p^{\prime}\) from a finite basis.
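The exponent reduction behind this proof is easy to make concrete: since \(a^{q}=a\) for every \(a\in\mathbb{F}_{q}\), an exponent \(e\geq q\) can be lowered repeatedly by \(q-1\) until it lies in \(\{1,\ldots,q-1\}\). The following small normalization sketch is our own illustration (with hypothetical helper names) of mapping any term to a representative from such a finite basis.

```python
q = 5  # field order used for this illustration

def reduce_exponent(e):
    """Canonical exponent in {0, 1, ..., q-1}, using x^q = x,
    i.e. x^e = x^(e - (q-1)) for every e >= q."""
    if e == 0:
        return 0
    return ((e - 1) % (q - 1)) + 1

def normalize_term(exponents):
    """Normalize a term given as a dict {variable: exponent}."""
    return {v: reduce_exponent(e) for v, e in exponents.items() if e > 0}

# x1^7 * x2^4 agrees with x1^3 * x2^4 on every point of F_5^2.
print(normalize_term({"x1": 7, "x2": 4}))   # {'x1': 3, 'x2': 4}
```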
## 5 Explanation Functions over Finite Fields in MCSat
Section 4 established the generation of explanation clauses for the theory of polynomials over finite fields (Theorem 1) and proved the existence of a finite basis explanation function (Theorem 2).
The primary component of the provided explanation generation procedure is a weak projecting zero decomposition procedure. It remains to show in this section that such a procedure exists. We provide a novel method for giving such a decomposition that does not rely on field polynomials, as in [20]. This is the last piece to providing a finite basis explanation function and thus to turning the MCSat calculus of Figure 1 into an SMT solving approach over finite fields.
Projecting Zero Decomposition.Recall the generation of finite basis explanations in Section 4.2 for a trail \(M\) of level \(k\). Again, let \(\boldsymbol{\alpha}_{M}\in\mathbb{F}_{q}^{k-1}\) and let \(\mathcal{A}=(A_{=},A_{\neq})\) be the polynomial system of the constraint set \(A\) as defined in (3), such that \((\boldsymbol{\alpha}_{M},\beta)\notin\mathsf{zero}(\mathcal{A})\) for all \(\beta\in\mathbb{F}_{q}\). Depending on \(|A|\), \(|A_{=}|\), and \(|A_{\neq}|\), we utilize different projecting procedures to find explanation clauses \(E\). Each procedure takes \(\mathcal{A}\) and \(\boldsymbol{\alpha}_{M}\) as input and decomposes \(\mathcal{A}\) into the set of systems \(\Delta=\{\mathcal{A}_{1},\ldots,\mathcal{A}_{r}\}\), according to Definition 4. By the construction of \(E\), it thus suffices to return one constraint \(f_{\ell}\) of each system \(\mathcal{A}_{\ell}\in\Delta\) such that \(f_{\ell}(\boldsymbol{\alpha}_{M})=\mathsf{false}\).
Based on the structure of \(A\), we use single polynomial projections (Section 5.1) or SRS-based projections (Section 5.2) to derive the explanation constraints \(f_{\ell}\) of each system \(\mathcal{A}_{\ell}\).
### 5.1 Single Polynomial Projection for Deriving Explanation Constraints
In case \(|A|=1\), the coefficients of the single polynomial constraint in \(A\) can be used for projecting. By the construction of \(A\) we have that \(A=\{\neg f\}\), \(\mathsf{level}(\neg f)=k\), and \(\boldsymbol{\alpha}_{M}\) cannot be extended to satisfy \(\neg f\). We write \(\mathsf{poly}(\neg f)=c_{1}\cdot x_{k}^{d_{1}}+\cdots+c_{m}\cdot x_{k}^{d_{m}}\). By the definition of this polynomial, we have that each \(c_{i}\in\mathbb{F}_{q}[X_{k-1}]\) and thus \(c_{i}\) can be fully evaluated by \(\boldsymbol{\alpha}_{M}\). Let \(\gamma_{i}=c_{i}(\boldsymbol{\alpha}_{M})\)
for \(1\leq i\leq m\) and set \(F=\{c_{i}-\gamma_{i}\neq 0\mid 1\leq i\leq m\}\). Each \(f_{\ell}\in F\) represents one (single-polynomial) system which is returned by a zero decomposition procedure. We denote this procedure as \(\mathsf{Proj}_{\mathsf{Coeff}}\) and prove the following.
**Theorem 3** (Single Polynomial Weak Projection).: _Let \(\mathcal{A}\) be a polynomial system with a single polynomial \(a\in\mathbb{F}_{q}[X_{k}]\) and let \(\mathbf{\alpha}\in\mathbb{F}_{q}^{k-1}\) be an assignment that cannot be extended to be a zero of \(\mathcal{A}\). Then \(\mathsf{Proj}_{\mathsf{Coeff}}(\mathcal{A},\mathbf{\alpha})\) is a weak projecting zero decomposition procedure for \(\mathbf{\alpha}\)._
Proof.: Termination of \(\mathsf{Proj}_{\mathsf{Coeff}}\) is obvious. Let \(a=c_{1}\cdot x_{k}^{d_{1}}+\cdots+c_{m}\cdot x_{k}^{d_{m}}\). Furthermore, there is no \(\beta\in\mathbb{F}_{q}\) such that \((\mathbf{\alpha},\beta)\in\mathsf{zero}_{q}(\mathcal{A})\). Then \(\mathsf{Proj}_{\mathsf{Coeff}}(\mathcal{A},\mathbf{\alpha})\) returns a set of systems \(\Delta=\{(\emptyset,\{c_{i}-\gamma_{i}\})\mid 1\leq i\leq m\}\) where \(\gamma_{i}\in\mathbb{F}_{q}\) is \(c_{i}(\mathbf{\alpha})\). Let \(\mathbf{\xi}=(\xi_{1},\ldots,\xi_{k})\in\mathsf{zero}_{q}(\mathcal{A})\). Towards a contradiction, assume that \(\mathbf{\xi}\notin\mathsf{zero}_{q}(\mathcal{S})\) for all \(\mathcal{S}\in\Delta\). Then for all \(c_{i}\), we have that \(c_{i}(\mathbf{\xi})=\gamma_{i}=c_{i}(\mathbf{\alpha})\), i.e. all coefficients of \(a\) evaluate to the same value for \(\mathbf{\alpha}\) and \(\mathbf{\xi}\). As there is no value to be assigned to \(x_{k}\) such that \(\mathbf{\alpha}\) can be extended to a zero of \(\mathcal{A}\), \(\mathbf{\xi}\) cannot exist, as \(\xi_{k}\) would extend \(\mathbf{\alpha}\). Therefore, \(\mathbf{\xi}\in\mathsf{zero}_{q}(\mathcal{S})\) for some \(\mathcal{S}\in\Delta\). Furthermore, note that \(\mathbf{\alpha}\) is excluded from all systems in \(\Delta\) by construction.
**Example 7**.: _Filling the gap in Example 6, we use \(\mathsf{Proj}_{\mathsf{Coeff}}\) to decompose \(\mathcal{A}=(\{x_{1}x_{2}-x_{2}-1\},\emptyset)\) with \(\mathbf{\alpha}=(1)\). Let \(a\) be the polynomial in \(\mathcal{A}\). We write \(a=(x_{1}-1)x_{2}^{1}+(-1)x_{2}^{0}\). Evaluating the coefficients, we get \(\gamma_{1}=0\), \(\gamma_{0}=-1\) and generate \(F=\{(x_{1}-1)-0\neq 0,(-1)-(-1)\neq 0\}\). As there are no zeros in the second system, we return \(\Delta=\{(\emptyset,\{x_{1}-1\})\}\)._
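A minimal sketch of this single-polynomial projection, written for the concrete data of Example 7, is given below. The representation of coefficients as Python callables and the helper name proj_coeff are our own choices; the sketch collects the coefficients of the excluded polynomial with respect to \(x_{2}\), evaluates them at \(\boldsymbol{\alpha}\), and keeps only those residual constraints \(c_{i}-\gamma_{i}\neq 0\) that can actually be satisfied.

```python
q = 5

# The polynomial of Example 7, a = x1*x2 - x2 - 1, written as a map
# {degree in x2: coefficient as a function of the lower variable x1}.
coeffs = {
    1: lambda x1: (x1 - 1) % q,   # coefficient of x2^1
    0: lambda x1: (-1) % q,       # coefficient of x2^0
}

def proj_coeff(coeffs, alpha):
    """Return the systems (here: residual functions read as '!= 0' constraints)
    c_i - gamma_i excluding alpha.  Coefficients whose residual vanishes
    everywhere yield the unsatisfiable constraint 0 != 0 and are dropped."""
    systems = []
    for d, c in coeffs.items():
        gamma = c(alpha)
        residual = lambda x1, c=c, gamma=gamma: (c(x1) - gamma) % q
        if any(residual(v) != 0 for v in range(q)):
            systems.append((d, residual))
    return systems

systems = proj_coeff(coeffs, alpha=1)
print([d for d, _ in systems])     # [1] -> only the system for (x1 - 1) != 0 remains
print([r(1) for _, r in systems])  # [0] -> alpha = 1 itself is excluded
```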
### 5.2 SRS-Based Projection for Deriving Explanation Constraints
For \(|A|>1\), we use the procedure \(\mathsf{Proj}_{\mathsf{Reg}}(\mathcal{A},\mathbf{\alpha})\) as shown in Algorithm 1 and described next. Algorithm 1 is a weak projecting zero decomposition procedure for \(\mathbf{\alpha}\) that decomposes the system \(\mathcal{A}\). It utilizes SRS chains to calculate gcds that reduce the degree of \(x_{k}\). This idea is based on the algorithm \(\mathsf{RegSer}\) presented in [41, 42]. While this original work presents \(\mathsf{RegSer}\) for polynomials over fields with characteristic \(0\) only, the work of [26] claims validity of the approach also over finite fields. Algorithm 1 relies on this result and proceeds as follows.
Consider two polynomials \(p_{1},p_{2}\in\mathbb{F}_{q}[X_{k}]\) with \(\mathsf{lv}(p_{1})=\mathsf{lv}(p_{2})=x_{k}\) and let \(h_{2},\ldots,h_{r}=\mathsf{srs}(p_{1},p_{2},x_{k})\). Further, let \(l_{i}=\mathsf{lc}(h_{i},x_{k})\) for all \(2\leq i\leq r\). Then, by case distinction over the evaluation of \(l_{2},\ldots,l_{r}\in\mathbb{F}_{q}[X_{k-1}]\), the set of zeros can be decomposed to guarantee that each \(h_{i}\) is a gcd of \(p_{1}\) and \(p_{2}\) in one newly generated system. The gcd property is then used to reduce \(\mathsf{deg}(p_{1},x_{k})\) and \(\mathsf{deg}(p_{2},x_{k})\) in this system. The original approach of [41, 42] splits the zero set for every \(h_{i}\) and thus generates exponentially many systems. In our setting, a full decomposition can be avoided by guiding the search using \(\mathbf{\alpha}\). This is done by evaluating \(l_{2},\ldots,l_{r}\) with \(\mathbf{\alpha}\) and not further exploring systems that already exclude \(\mathbf{\alpha}\). Therefore, only a linear amount of systems is generated in Algorithm 1. This computation is performed until a polynomial is found that excludes \(\mathbf{\alpha}\). In case there are polynomials in \(x_{k}\) left, we exclude them using \(\mathsf{Proj}_{\mathsf{Coeff}}\).
The **return**_call_**if**\(c\) statements of Algorithm 1, where _call_ is a recursive call and the _guard_\(c\) is a polynomial constraint, are used to track which path the search takes. If \(c\) evaluates to \(\mathsf{true}\) under \(\mathbf{\alpha}\) then the recursive call is performed and its result is returned. In addition, the procedure keeps a set of tracked constraints \(C\) that is empty in beginning. Whenever a guard \(c\) is reached but \(\mathsf{false}\) under \(\mathbf{\alpha}\), it is added to \(C\), otherwise, \(\neg c\) is added. The constraints in \(C\) are added to the returned sets accordingly. Thus, the constraints in \(C\) describe the search space that was not visited during the search.
Recall the notion of \(\hat{P},\hat{Q}\) from Section 4.1. Lines 1-2 of Algorithm 1 return an excluding polynomial in case one is found. Line 5 ensures that \(\mathsf{lc}(p,x_{k})\neq 0\), which is a requirement for
any further gcd operation. Lines 6-12 are used to remove polynomials of \(P\) until only one is left, which is then used in lines 14-19 to remove polynomials from \(Q\). Lines 10 and 17 handle the special case \(\mathsf{l}\mathsf{v}(h_{r})<x_{k}\). By definition of SRS, \(\mathsf{l}\mathsf{v}(h_{i})=x_{k}\) for \(2\leq i\leq r-1\), but not necessarily for \(h_{r}\). Roughly speaking, \(\mathsf{l}\mathsf{v}(h_{r})<x_{k}\) denotes a constant gcd and, thus, divisor-free polynomials. Line 22 splits elements in \(Q\) to remove \(x_{k}\) in case all polynomials \(p\in P\) are free of \(x_{k}\).
```
 1: return ({p}, ∅) for any p ∈ P with lv(p) < x_k and p(α) ≠ 0
 2: return (∅, {q}) for any q ∈ Q with lv(q) < x_k and q(α) = 0
 3: if |P̂| > 0 then
 4:   select p ∈ P̂ with the smallest positive deg(p, x_k)
 5:   return P_Reg(P \ {p} ∪ {red(p, x_k)}, Q, α)  if lc(p, x_k)(α) = 0
 6:   if |P̂| > 1 then
 7:     select any p' ∈ P̂ \ {p}
 8:     compute h_2, ..., h_r = srs(p', p, x_k) and let l_i = lc(h_i, x_k) for 2 ≤ i ≤ r
 9:     if lv(h_r) < x_k then
10:       return P_Reg(P \ {p, p'} ∪ {h_r, h_{r-1}}, Q, α)  if l_{r-1}(α) ≠ 0
11:     for i = r, ..., 2 do
12:       return P_Reg(P \ {p, p'} ∪ {h_i, l_{i+1}, ..., l_r}, Q, α)  if l_i(α) ≠ 0
13:   else if |Q̂| > 0 and lv(p) = x_k then
14:     select any q ∈ Q̂
15:     compute h_2, ..., h_r = srs(q, p, x_k) and let l_i = lc(h_i, x_k) for 2 ≤ i ≤ r
16:     if lv(h_r) < x_k then
17:       return P_Reg(P \ {p} ∪ {pquo(p, h_r, x_k)}, Q \ {q}, α)  if l_r(α) ≠ 0
18:     for i = r, ..., 2 do
19:       return P_Reg(P \ {p} ∪ {pquo(p, h_i, x_k), l_{i+1}, ..., l_r}, Q, α)  if l_i(α) ≠ 0
20:   return P_Reg((P, Q) ∪ Proj_Coeff(p, α), α)
21: else if |Q̂| > 0 then
22:   forall q ∈ Q̂ do return P_Reg(P, Q \ {q} ∪ {red(q, x_k)}, α)  if lc(q, x_k)(α) = 0
23:   return P_Reg((P, Q \ Q̂) ∪ Proj_Coeff(∏_{q' ∈ Q̂} q', α), α)
```
**Algorithm 1**\(\mathsf{P}_{\mathsf{Reg}}(\mathcal{A}=(P,Q),\,\boldsymbol{\alpha})\)
**Example 8**.: _Assume we have the system \(\mathcal{A}=(\{x_{3}^{2}+x_{3}x_{2}+4\},\{x_{3}x_{2}+x_{1}\})\) in \(\mathbb{F}_{5}[x_{1},x_{2},x_{3}]\) and let \(\boldsymbol{\alpha}=(3,1)\). At line 15, \(\mathsf{P}_{\mathsf{Reg}}\) will calculate the first SRS according to Example 5. Eventually, the computation terminates with a zero decomposition represented by the constraints \(\{x_{2}=0,-x_{2}^{2}x_{1}-x_{2}^{2}+x_{1}^{2}\neq 0,-x_{2}^{4}+2x_{2}^{2}x_{1}\neq 0\}\), each representing one generated system._
**Theorem 4** (SRS-Based Weak Projection).: _Let \(\mathcal{A}\) be a polynomial system and let \(\boldsymbol{\alpha}\in\mathbb{F}_{q}^{k-1}\) be an assignment that cannot be extended to be a zero of \(\mathcal{A}\). Then \(\mathsf{P}_{\mathsf{Reg}}(\mathcal{A},\boldsymbol{\alpha})\) of Algorithm 1 is a weak projecting zero decomposition of \(\mathcal{A}\) for \(\boldsymbol{\alpha}\)._
Proof.: We show that \(\mathsf{P}_{\mathsf{Reg}}(\mathcal{A},\boldsymbol{\alpha})\) terminates and is a weak projecting zero decomposition for \(\boldsymbol{\alpha}\).
_Termination:_ As the first two loops in Algorithm 1 are bounded by the size \(r\) of the SRS decomposition and the size of an SRS is bounded, both loops certainly terminate. The third and last loop iterates over the finite number of elements in \(\hat{Q}\) and thus terminates. It remains to show that the recursion depth of Algorithm 1 is bounded. Note that for every recursive call of \(\mathsf{P}_{\mathsf{Reg}}\) the
degree in \(x_{k}\) for at least one polynomial in \(\mathcal{A}=(P,Q)\) decreases. Once \(\hat{P}=\hat{Q}=\emptyset\) no further recursive call is performed. We distinguish two cases:
_Case 1:_ Assume \(|\hat{P}|>0\). Then, the degree of \(x_{k}\) in polynomials of \(P\) is reduced in each recursive call of lines 4-19 of Algorithm 1. In case one polynomial \(p\in\hat{P}\) remains, we use \(\mathsf{Proj}_{\mathsf{Coeff}}\) to remove \(x_{k}\).
_Case 2:_ Assume \(|\hat{P}|=0\). Algorithm 1 proceeds by splitting polynomials in \(\hat{Q}\) in line 22. For a given polynomial \(q\in\hat{Q}\) the recursion depth of the call in line 22 is bounded by the number of coefficients of \(q\) in \(x_{k}\). For each call the leading coefficient is removed. Once \(x_{k}\) is removed, we have \(\mathsf{lc}(q,x_{k})=q\). Then, we either return in line 2 or the guard of the call in line 22 is false.
As all recursive calls eventually terminate, Algorithm 1 terminates. Note that it follows from the zero decomposition argument below that the recursion always ends in line 1 or 2. This usually happens before \(x_{k}\) is eliminated from all polynomials.
_Zero Decomposition:_ Results of [41; 42] imply that \(\mathsf{RegSer}\) is a zero decomposition procedure that generates a sequence of regular systems. Among other properties, for a regular system \((P,Q)\) it holds that either \(\hat{P}=\emptyset\) or \(\hat{Q}=\emptyset\). Furthermore, \(P\) is a triangular set, thus \(|\hat{P}|\leq 1\). With a very similar argument it can be proven that \(\mathsf{P}_{\mathsf{Reg}}\) performs a zero decomposition towards regular systems, although the systems are not fully computed. In \(\mathsf{P}_{\mathsf{Reg}}\) the decomposition ends after \(x_{k}\) has been fully processed, as no further decomposition is required for generating explanations.
Let \(\mathcal{A}^{\prime}=(P,Q)\) be one decomposed system from \(\mathcal{A}\). We first show that \(P,Q\in\mathbb{F}_{q}[X_{k-1}]\) and \(\boldsymbol{\alpha}\notin\mathsf{zero}_{q}(\mathcal{A}^{\prime})\). From the lemmas presented in [42] for \(\mathsf{RegSer}\), it follows that each decomposition step in lines 4-19 of Algorithm 1 as well as line 22 performs a zero decomposition according to equation (1); thus, \(\boldsymbol{\alpha}\) cannot be extended to a zero of any such generated system. In case \(\mathcal{A}^{\prime}\) is a system that was not further expanded in a conditional recursive call, i.e. the negation of the guard is in \(\mathcal{A}^{\prime}\), then the desired property holds by construction. In case \(\mathcal{A}^{\prime}\) contains a polynomial from \(\mathbb{F}_{q}[X_{k-1}]\) which excludes \(\boldsymbol{\alpha}\) directly, Algorithm 1 stops in lines 1 or 2, returning only this one polynomial. In case the regular decomposition procedure has concluded for \(x_{k}\) and no such polynomials can be found, by definition of a regular system, we end up with either exactly one polynomial \(p\in\hat{P}\) or \(\hat{Q}\neq\emptyset\), but not both. We distinguish two cases:
1. Assume \(\hat{Q}=\emptyset\), then \(\hat{P}=\{p\}\). Since \(\boldsymbol{\alpha}\) cannot be extended to a zero of \(\mathcal{A}^{\prime}\) but is not excluded by any other polynomial in \(\mathcal{A}^{\prime}\), we conclude that it cannot be extended to become a zero of \(p\). Therefore, we may call \(\mathsf{Proj}_{\mathsf{Coeff}}\) in line 20 to further decompose \(\mathcal{A}^{\prime}\). The weak projecting zero decomposition property of \(\mathsf{Proj}_{\mathsf{Coeff}}\) concludes the proof.
2. Assume \(\hat{Q}\neq\emptyset\), then \(\hat{P}=\emptyset\). Since the regular decomposition process has concluded, the recursive call in line 22 is not executed for any \(q\in\hat{Q}\). Since \(\boldsymbol{\alpha}\) cannot be extended such that all \(q\in\hat{Q}\) evaluate to a non-zero value, the product of all \(q\in\hat{Q}\) evaluates to zero when evaluated with \((\boldsymbol{\alpha},\beta)\) for every \(\beta\in\mathbb{F}_{q}\). We thus use \(\mathsf{Proj}_{\mathsf{Coeff}}\) to conclude the weak projecting zero decomposition property.
We finally show that \(\mathsf{P}_{\mathsf{Reg}}\) fulfills equation (2). Let \(\boldsymbol{\xi}\in\mathsf{zero}_{q}(\mathcal{A})\). As \(\mathsf{P}_{\mathsf{Reg}}\) performs a zero decomposition, there is a system \(\mathcal{A}^{\prime}\) such that \(\boldsymbol{\xi}\in\mathsf{zero}_{q}(\mathcal{A}^{\prime})\). If Algorithm 1 returns in lines 1 or 2, then \(\boldsymbol{\xi}\) is a zero of the returned single polynomial (sub-)system of \(\mathcal{A}^{\prime}\). If \(\mathcal{A}^{\prime}\) is not further expanded because of a guard \(c\) in a conditional recursive call, the polynomial of \(\neg c\) is in the corresponding set of \(\mathcal{A}^{\prime}\) and returned as a single polynomial sub-system of \(\mathcal{A}^{\prime}\). As \(\boldsymbol{\xi}\) is a zero of \(\mathcal{A}^{\prime}\), it is also a zero of the returned sub-system. Finally, in case \(\mathsf{P}_{\mathsf{Reg}}\) utilizes \(\mathsf{Proj}_{\mathsf{Coeff}}\) to remove a polynomial in \(x_{k}\) from \(\mathcal{A}^{\prime}\), we have by Theorem 3 that \(\boldsymbol{\xi}\) is a zero of one of the returned (projected) systems. In any case, \(\boldsymbol{\xi}\) is a zero of the decomposition and thus equation (2)
holds.
## 6 Implementation
We have implemented our MCSat approach for SMT solving over finite fields in a new prototype1, written in Python and using the computer algebra system Sage [36] for handling polynomials. While our work is not limited to a specific field order, practical implementation constraints (from our implementation as well as Sage) are a limiting factor in the prototype's ability to handle large(r) fields. Besides the general procedure of MCSat and the theory-specific details presented in Sections 4 and 5, the performance of our approach therefore depends on certain implementation details. In the sequel we discuss our design decisions.
Footnote 1: The source code of the prototype together with the generated test instances are available: [https://github.com/Ovascos/ffsat](https://github.com/Ovascos/ffsat)
Selecting literals for propagation.While the general MCSat framework does not restrict the application of theory propagation beyond the conditions of the T-Prop rule, it is up to the theory to determine whether a theory propagation is applicable and appropriate. For the theory of polynomials over finite fields, we utilize a similar propagation strategy as [25] uses for reals. Let \(\langle M,\mathcal{C}\rangle_{k}\) be the current state with feasible trail \(M\) and \(f\in C\) a literal of a previously selected (yet unsatisfied) clause \(C\in\mathcal{C}\) such that \(\mathsf{level}(f)=k\) and \(\mathsf{val}(f,M)=\mathsf{undef}\). If this happens, we use T-Prop to add \(f\) to \(M\) in order to satisfy \(C\).
Let \(\mathcal{X}_{k}\subseteq\mathbb{F}_{q}\) be the set of possible values for \(x_{k}\) that satisfy \(\nu[M](f)\). We can distinguish four different scenarios of propagation by comparing feasible values for \(x_{k}\), namely:
1. If \(\mathcal{X}_{k}=\mathbb{F}_{q}\), then \(f\) is propagated.
2. If \(\mathcal{X}_{k}=\emptyset\), then \(\neg f\) is propagated.
3. If \(\mathcal{X}_{k}\supseteq\mathsf{fslb}(M)\), i.e. \(f\) does not restrict the feasible values of \(M\), then \(f\) is propagated.
4. If \(\mathcal{X}_{k}\cap\mathsf{fslb}(M)=\emptyset\), then \(\neg f\) is propagated.
Informally, a propagation of \(f\) demonstrates the (theory) knowledge that \(C\) is fulfilled by \(M\). By propagating \(\neg f\) with explanation \(E\), it is very likely that a conflict will arise right away (cf. Example 6). The design of \(E\) results in an immediate resolution of \(E\) with \(C\). Because generating explanation clauses is costly, they are not generated at the moment of propagation but only when they are needed for conflict analysis.
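To make the four propagation scenarios concrete, the following is a minimal Python sketch of the corresponding decision logic; the function name, the representation of \(\mathcal{X}_{k}\) and \(\mathsf{fslb}(M)\) as plain sets, and the toy values are our own illustrative choices rather than the prototype's actual interface.

```python
def decide_propagation(X_k, fslb_M, F_q):
    """Return "f", "not f", or None, following the four scenarios above.

    X_k    -- values for x_k that satisfy nu[M](f)
    fslb_M -- feasible values of the trail M for x_k
    F_q    -- all elements of the finite field
    """
    if X_k == F_q:                 # scenario 1: f is satisfied by every value
        return "f"
    if not X_k:                    # scenario 2: f is satisfied by no value
        return "not f"
    if fslb_M <= X_k:              # scenario 3: f does not restrict M
        return "f"
    if X_k.isdisjoint(fslb_M):     # scenario 4: f excludes every feasible value
        return "not f"
    return None                    # otherwise, no propagation applies


F3 = {0, 1, 2}
print(decide_propagation({0, 1, 2}, {1}, F3))  # -> "f"
print(decide_propagation({2}, {0, 1}, F3))     # -> "not f"
```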
Storing feasible values.As we work with the finite set \(\mathbb{F}_{q}\) of theory values, feasible values for a reasonably small \(q\) can be enumerated. Even for larger field orders, the number of zeros of a polynomial given a partial assignment is still constrained by the length of the polynomial.
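The following Sage-based sketch illustrates such an enumeration for the top variable under a partial assignment; the helper name and the way the assignment is passed are illustrative and do not reflect the prototype's actual data structures.

```python
from sage.all import GF, PolynomialRing

def feasible_values(poly, assignment, x_k, field):
    """Enumerate all beta in `field` with poly(assignment, x_k = beta) == 0."""
    return {beta for beta in field
            if poly.subs({**assignment, x_k: beta}) == 0}

R = PolynomialRing(GF(3), names=("x1", "x2"))
x1, x2 = R.gens()
p = x1 * x2 + x2 + 1
print(feasible_values(p, {x1: 1}, x2, GF(3)))  # {1}, since 2*beta + 1 = 0 over F_3
```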
Variable Order.The order in which the theory variables are assigned in the trail can hugely influence the number of conflicts and thus generated explanations. Finding a beneficial variable order is a general consideration for MCSat-style approaches and computer algebra algorithms alike. While it is highly important for practical performance, our procedure is correct for any ordering; optimizing the variable order is an interesting task for future work.
## 7 Experiments and Discussion
We compare the performance of our Python prototype in solving polynomial systems to state-of-the-art Grobner basis techniques provided by Sage. It is important to note that by design
Grobner basis techniques require polynomial systems (cf. Section 4.1) as inputs and are incapable of handling polynomial constraints (i.e. disjunctions and conjunctions of polynomials). We therefore limit our experimental comparison to polynomial system benchmarks, a restricted subset of possible inputs, as Grobner basis algorithms with general polynomial constraints over finite fields would involve exponentially many calls (in the number of constraints) to the Grobner basis algorithm.
We further note that the SMT-LIB standard and repository [4] do not support finite field arithmetic. For this reason, we cannot yet directly compare our work to SMT-based solvers. To circumvent such limitations, we represent polynomial constraints directly in our Python framework and compare our work only to Grobner basis approaches supporting such input.
Experimental setup.For our experiments on SMT solving over finite fields, we created 250 polynomial systems over a range of finite field orders (3, 13, 211) and different numbers of variables (up to 64). To get better insights, we have utilized two different methods of polynomial system generation:
* Rand: All polynomials in this test set are fully randomly generated by Sage's random_element function. The degree of the polynomials is at most 4. The resulting systems are more frequently unsatisfiable and have fewer zeros on average. This category of tests has smaller systems and fewer variables since they are challenging to solve (for any strategy). It is ensured that at least one polynomial has a constant term to avoid trivial 0-solutions, as this would give our approach an unfair advantage.
* Craft: These polynomial systems are crafted to have multiple solutions by explicitly multiplying zeros. They tend to be easy to solve. Thus, these systems are considerably larger, with a large number of variables. Polynomial constraints are restricted to up to 5 distinct variables with up to 3 zeros each.
Each test set consists of 25 polynomial systems with fixed field order and a fixed number of variables and constraints; see Table 1. Our experiments were run on an AMD EPYC 7502 CPU with a timeout of 300 seconds per benchmark instance.
We compare our procedure (FFSat) to a Grobner basis approach (GB). The latter uses field polynomials to limit the solutions to those in the base field. To obtain an elimination ideal, and thus to be able to extract a satisfying assignment from a computed Grobner basis, one typically relies on a lexicographic term ordering, which is especially expensive. However, to "only" check whether a polynomial system is satisfiable without returning an assignment, it suffices to calculate the Grobner basis in any term ordering. In our experiments we therefore calculate two different bases. GB uses the (efficient) default ordering provided by Sage, while GB\({}_{\textsc{lex}}\) uses a lexicographic term ordering.
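For illustration, this GB baseline can be sketched in Sage roughly as follows; the function name and the toy system are ours, and unsatisfiability is detected by the Grobner basis collapsing to {1}.

```python
from sage.all import GF, PolynomialRing, Ideal

def gb_satisfiable(polys, ring, q):
    field_polys = [x**q - x for x in ring.gens()]           # restrict zeros to the base field
    basis = Ideal(list(polys) + field_polys).groebner_basis()
    return not (len(basis) == 1 and basis[0] == 1)           # basis == [1] means unsatisfiable

R = PolynomialRing(GF(3), names=("x", "y"), order="lex")     # lexicographic ordering, as in GB_LEX
x, y = R.gens()
print(gb_satisfiable([x * y + 1, x + y], R, 3))              # True: (x, y) = (1, 2) is a zero
```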
Experimental results.For analyzing our experimental findings, let us note that the already highly engineered Grobner basis algorithms written in C/C++ utilized by Sage have an inherent performance advantage compared to our Python implementation. Yet, Table 1 demonstrates that our approach works well for satisfiable cases. Table 1 also compares the number of instances solved by FFSat and GB within the predetermined timeout of 300s per instance. Each test is identified by its type, the finite field size \(q\), the number of variables \(n\), and the number of constraints per system \(c\).
Experimental analysis and discussions.We note the following key insights of our approach in comparison to Grobner basis approaches:
* Our Python prototype can already keep up with highly engineered Grobner basis approaches on some classes of instances, but further engineering work is required to match existing Grobner basis implementations consistently.
* The strength of our work comes with solving satisfiable instances. This is because we can often find a satisfying assignment without fully decomposing the polynomial system.
* While the MCSat approach is capable of detecting conflicts by deriving empty clauses, the point in time when an empty clause can be derived is highly dependent on the variable order.
* The lack of an order on finite fields leads to the generation of many inequality constraints when a partial assignment cannot be extended. Developing further optimizations to detect such cases, especially for unsat instances with large field orders, is a task for future work.
* Grobner basis approaches are saturation based; thus they have a conceptual advantage on unsatisfiable instances. This is due to the fact that Grobner basis methods terminate once a non-zero constant is determined, which is why we expect that inventing and employing highly efficient monomial orderings would aid our work.
* It is notable that our approach seems to show complementary performance characteristics to existing Grobner basis techniques, indicating that a portfolio approach could be valuable.
In summary, our current experiments show the general effectiveness of our approach, indicating also how present weaknesses can be mitigated with existing techniques from MCSat and CDCL solving (e.g. heuristics on the variable order, restarts, pre- and inprocessing techniques to reduce clause complexity, clause deletion, etc.).
## 8 Related Work
Grobner bases [8] and triangular sets [2, 3] have been introduced to compute the solution space of polynomial equations, by reducing the degree of polynomials through variable elimination. Solving polynomial equations in general entails finding all of their solutions in the algebraic closure of the underlying coefficient field. Yet, for the purpose of satisfiability, solutions in the base field are usually of the most interest. Obviously, there are only a finite number of solutions if the base field is finite; yet, enumerating all of the finitely many possibilities is not practically viable.
To limit the solutions of Grobner bases and triangular sets to finite fields, a common technique is to introduce and add the set of field polynomials to the set of polynomial equations [16, 22].
\begin{table}
\begin{tabular}{c|c c c|c c c} Type & \(q\) & \(n\) & \(c\) & FFSat & GB & GB\({}_{\text{LEX}}\) \\ \hline Rand & 3 & 8 & 8 & **25** & **25** & **25** \\ Rand & 3 & 16 & 16 & **12** & 11 & 0 \\ Craft & 3 & 32 & 32 & **25** & **25** & 0 \\ Craft & 3 & 64 & 64 & **25** & 24 & 0 \\ Rand & 13 & 8 & 4 & **25** & 0 & 0 \\ Rand & 13 & 8 & 8 & **1** & 0 & 0 \\ Craft & 13 & 32 & 16 & **19** & 18 & 1 \\ Rand & 211 & 8 & 4 & **17** & 0 & 0 \\ Rand & 211 & 8 & 16 & 0 & 0 & 0 \\ Craft & 211 & 16 & 8 & 24 & **25** & **25** \\ \end{tabular}
\end{table}
Table 1: Instances solved by FFSat, GB, and GB\({}_{\text{LEX}}\), out of 25 polynomial systems per test set.
Using field polynomials, though, greatly impacts practical performance, as showcased in [20]. Specialized ways for computing Grobner bases and triangular sets over finite fields have therefore been created, such as the XL algorithm [9], F4 [13], and F5 [14] for Grobner bases. Although all of these strategies are aimed at solving polynomial systems over finite fields, none of them explicitly addresses inequalities, even though inequalities may be converted into equalities using the Rabinowitsch trick [10, 4.2 Prop. 8].
Optimization concepts for triangular sets have been introduced in [42], including efficient characteristic set algorithms [17, 22] and polynomial decomposition into simple sets [26]. Although these approaches integrate reasoning over inequalities, none of them considers systems of clauses with polynomial constraints as needed for our SMT solving problem. Furthermore, they all require the generation of exponentially many sets to fully describe the systems. Our approach only explores a linearly sized decomposition on demand.
A related approach to our search procedure is given in the hybrid framework of [5, 6]. Here, a partial evaluation of the system is performed by fixing some variables before starting multiple Grobner basis computations. Instead, in our work we show that subresultant regular subchain computations allow us to avoid working with Grobner bases (and hence their double-exponential computational complexities).
Substantial effort has also been devoted to the special case of boolean polynomials, i.e. polynomials over the finite field with only two elements. PolyBoRi [6, 7] is fairly effective in this domain, but it does not generalize towards arbitrary finite fields, which is the focus of our work.
Recently, an algebraic SMT decision technique for computing satisfiability of polynomial equalities/inequalities over large prime fields has been introduced in [39]. As polynomial systems are a subset of our polynomial constraint clauses, our work complements this effort, by also establishing a computational approach for deriving explanation clauses within MCSat reasoning.
## 9 Conclusion
We introduce a novel reasoning approach for determining the satisfiability of a given system of non-linear polynomial constraints over finite fields. As a framework, we adopt an MCSat decision procedure and expand it with a specific theory propagation rule that allows variable propagation over finite fields by adding so-called explanation clauses. To show the existence of these explanation clauses over finite fields, we apply zero decomposition procedures over polynomial constraints. Based on the structure of the polynomial system, we construct explanation clauses to resolve conflicting variable assignments. We distinguish between single polynomial projections and projections of multiple polynomials using subresultant regular subchains. Our work avoids using field polynomials while reducing the size of the projected polynomials.
We aim to further optimize our prototype through specific design decisions. For example, we will investigate the effect the variable order has on SMT solving over finite fields. Furthermore, we wish to improve performance if the given polynomial system is unsatisfiable; in this case, we are also interested in generating proof certificates. Finally, integrating our prototype within a high-performance SMT solver is another line for future work.
Acknowledgements.We thank Nikolaj Bjorner for the fruitful discussion on this work. We acknowledge partial support from the ERC Consolidator Grant ARTIST 101002685, the TU Wien SecInt Doctoral College, and the FWF SFB project SpyCoDe F8504. |
2307.16033 | CoVid-19 Detection leveraging Vision Transformers and Explainable AI | Lung disease is a common health problem in many parts of the world. It is a
significant risk to people's health and quality of life all across the globe
since it is responsible for five of the top thirty leading causes of death.
Among them are COVID 19, pneumonia, and tuberculosis, to name just a few. It is
critical to diagnose lung diseases in their early stages. Several different
models including machine learning and image processing have been developed for
this purpose. The earlier a condition is diagnosed, the better the patient's
chances of making a full recovery and surviving into the long term. Thanks to
deep learning algorithms, there is significant promise for the autonomous,
rapid, and accurate identification of lung diseases based on medical imaging.
Several different deep learning strategies, including convolutional neural
networks (CNN), vanilla neural networks, visual geometry group based networks
(VGG), and capsule networks , are used for the goal of making lung disease
forecasts. The standard CNN has a poor performance when dealing with rotated,
tilted, or other aberrant picture orientations. As a result of this, within the
scope of this study, we have suggested a vision transformer based approach end
to end framework for the diagnosis of lung disorders. In the architecture, data
augmentation, training of the suggested models, and evaluation of the models
are all included. For the purpose of detecting lung diseases such as pneumonia,
Covid 19, lung opacity, and others, a specialised Compact Convolution
Transformers (CCT) model has been tested and evaluated on datasets such as the
Covid 19 Radiography Database. The model has achieved a better accuracy for
both its training and validation purposes on the Covid 19 Radiography Database. | Pangoth Santhosh Kumar, Kundrapu Supriya, Mallikharjuna Rao K, Taraka Satya Krishna Teja Malisetti | 2023-07-29T17:45:27Z | http://arxiv.org/abs/2307.16033v2 | # CoVid-19 Detection leveraging Vision Transformers and Explainable AI
###### Abstract
Lung disease is a common health problem in many parts of the world. It is a significant risk to people's health and quality of life all across the globe since it is responsible for five of the top thirty leading causes of death. Among them are COVID-19, pneumonia, and tuberculosis, to name just a few. It is critical to diagnose lung diseases in their early stages. Several different models including machine learning and image processing have been developed for this purpose. The earlier a condition is diagnosed, the better the patient's chances of making a full recovery and surviving into the long term. Thanks to deep learning algorithms, there is significant promise for the autonomous, rapid, and accurate identification of lung diseases based on medical imaging. Several different deep learning strategies, including convolutional neural networks (CNN), vanilla neural networks, visual geometry group-based networks (VGG), and capsule networks, are used for the goal of making lung disease forecasts. The standard CNN has a poor performance when dealing with rotated, tilted, or other aberrant picture orientations. As a result of this, within the scope of this study, we have suggested a vision-transformer-based end-to-end framework for the diagnosis of lung disorders. In the architecture, data augmentation, training of the suggested models, and evaluation of the models are all included. For the purpose of detecting lung diseases such as pneumonia, Covid-19, lung opacity, and others, a specialised Compact Convolution Transformers (CCT) model has been tested and evaluated on datasets such as the Covid-19 Radiography Database. The model has achieved a better accuracy for both its training and validation purposes on the Covid-19 Radiography Database. A number of different evaluation criteria, such as accuracy, recall, and the confusion matrix, have been used for the purpose of analysing the model. Beyond these, we also used XAI to evaluate the model.
## I Introduction
Infections of the lungs are persistent illnesses that have an effect on the human body's tissues and organs and make it difficult to breathe. A few examples of lung diseases include pneumonia, Covid-19, tuberculosis, lung cancer, and other lung problems. According to the Forum of International Respiratory Societies [1], approximately 334 million people around the world suffer from asthma. Additionally, each year 1.4 million people pass away as a result of tuberculosis, 1.6 million pass away as a result of lung cancer, and millions more pass away as a result of pneumonia. The COVID-19 pandemic resulted in the infection of millions of individuals, which in turn had a detrimental impact on healthcare systems across the globe [2]. There is no question that disorders affecting the lungs are among the leading causes of death and disability across the globe. Early detection considerably boosts both the patient's chances of making a full recovery and their chances of surviving for a longer period of time [3]. Healthcare informatics is the field that deals with the management and use of information in the healthcare industry. One example of how machine learning and deep learning algorithms can be used in healthcare informatics is in the detection of lung diseases, such as pneumonia. The widespread distribution of COVID-19 has increased interest in the early detection of lung diseases, as the virus can cause severe lung damage and respiratory issues [4]. In addition, other viral or bacterial infections can also contribute to pneumonia, a type of lung disease. Two commonly used diagnostic methods for finding lung disorders are computed tomography (CT) scans and chest X-ray (CXR) imaging [5]. CT scans use X-rays to produce detailed images of the inside of the body, while CXR imaging produces a two-dimensional projection image of the chest. These diagnostic methods can be used to identify abnormalities in the lungs that may indicate the presence of a lung disease.
On the other hand, medical imaging procedures like X-ray and CT-based screening are more often used since they are more readily accessible, they are quicker, and they are typically safe. Imaging using X-rays, as opposed to imaging with CT, is often used for COVID-19 screening since it needs less imaging time and is less expensive than CT imaging. Scanners that use X-ray technology are frequently available even in more remote locations. In recent years, academics have focused their attention on developing methods for diagnosing lung diseases using a variety of approaches, including classical
Fig. 1: The functionality of the proposed solution
machine learning and deep learning. CNN, VGGNet, ResNet, and LSTM are among the many algorithms that may be used in the process of diagnosing lung disorders. In this study, we suggest a transformer-based architecture for the classification and diagnosis of lung diseases. The following is a list of the significant contributions that this work makes:
* The use of CT Scan and CXR datasets for the purpose of training the transformer models.
* Constructed and analysed a sophisticated image classification model with transformers for the identification of lung disease.
* Conducting an analysis of the trained models by employing a variety of accuracy measures, such as precision and recall, amongst others.
## II Related Works
Several computer-aided diagnosis (CAD) systems have been developed throughout the years to assist physicians in deciphering medical images [6]. However, developing a reliable CAD system is challenging. More choices in how to build such a system have emerged because of the development of powerful graphics processing units (GPUs) and deep learning approaches like convolutional neural networks (CNNs). Chest X-rays, CT scans, histopathology images, etc. may all be used in conjunction with deep learning methods to diagnose lung disorders. We briefly summarise the latest studies that have been undertaken by various researchers to diagnose lung disease utilising chest X-ray and CT scan pictures in the next two subsections.
### _X-Ray Imaging for the Diagnosis of Lung Disease_
X-ray pictures provide a number of challenges for clinical interpretation, including various possible abnormalities and complicated backgrounds [7]. This calls for expert-level manual annotation (radiologists). The ability to automatically analyse X-ray images is rapidly becoming an important diagnostic resource. With recent advancements in this area [8], deep neural networks have become widely used and are often applied to the problem of X-ray picture categorization. COVID-Net [9] is the only open-source, actively developed, and maintained technology that can reliably distinguish COVID-19 from other pneumonias. From the provided specs and preliminary design prototype, COVID-Net is able to learn the architectural design starting point and go further with machine-driven design exploration. It operates on chest X-rays, classifying them as "normal," "pneumonia," or "COVID-19." Nahid et al. [10]'s method varied the gathering of discriminative features by fusing image processing techniques with a two-channel CNN. The model correctly identified cases of pneumonia with a sensitivity of 97.92%. If done by hand, however, the crop operation needed to remove the unwanted components would be time-consuming and difficult. In [11], a CNN model that makes use of GradCAM to draw attention to affected regions is shown. The model has a good validation accuracy of 84.8%. However, the authors did not use any augmentation methods to produce unique training samples; as a consequence, the samples were identical. In order to construct models for medical imaging, transfer learning is often used as a workaround for the scarcity of available training data. Chouhan et al. [12] employed a transfer learning technique to classify X-rays of pneumonia. They pooled the knowledge of five trained models to make pneumonia diagnosis more accurate. To diagnose pneumonia, Rahman et al. [13] used a transfer learning strategy based on convolutional neural networks. They employed models such as AlexNet, ResNet18, DenseNet201, and SqueezeNet to identify chest X-rays that showed germs, viruses, or were just normal. The accuracy rate after using their approach was 98
### _CT scan for the Diagnosis of Lung Disease_
Roy, Sirohi, and Patle [14] developed a method for detecting lung cancer nodules based on an adaptive thresholding model combined with fuzzy inference. Using grey transformation, this method boosts contrast in the visual field. Segmentation is performed using an active contour model once an image has been binarized. The diagnosis of cancer is categorised using fuzzy inference. Features are retrieved for use in training the classifier, and they include the area, mean, entropy, correlation, major axis length, and minor axis length. The total accuracy of the system is 94.12%. Given its constraints, the suggested model cannot be used to differentiate between benign and malignant tumours. The K-means unsupervised learning method is used by Sangamithra and Govindaraj [15] for classification or clustering purposes. It classifies the dataset of pixels according to several characteristics. In order to classify data, this model employs a back propagation neural network. Features including entropy, correlation, homogeneity, PSNR, and SSIM may be retrieved via the gray-level co-occurrence matrix (GLCM) method. Roughly 90.7% accuracy may be expected from the system. Using a median filter, which is often used for picture improvement, may aid our new model in filtering out noise and improving accuracy.
## III Research Methodology
In this subsection, we discuss our suggested architecture for disease diagnosis from X-Ray and CT scan pictures. This design primarily consists of three phases: image pre-processing, image augmentation, and our Vision Transformers architecture for automatically extracting ROIs, classifying images, and identifying diseases.
### _Contrast limited adaptive histogram equalization_
X-Ray and CT scan images used in the medical field often include noise, such as a hazy backdrop. The poor performance of prediction models may be attributed to the presence of noise in an X-Ray picture. Aside from the background noise, it is vital to extract the region of interest (ROI) in order to improve the performance of predictive algorithms, hence minimizing the amount of redundant information and the amount of work required for computational pattern recognition. The vast majority of the present architectural designs do not take these critical aspects
into account; instead, they force unfiltered imagery to fit into a predicted framework. Taking all of these considerations into account, the approach that has been suggested improves the quality of the picture and derives the ROI by contrast-limited adaptive histogram equalization (CLAHE). This strategy, in contrast to the designs that are now in use, lessens the amount of computing work required and boosts the predictive models' overall performance. The issue of contrast over-amplification is addressed by CLAHE, which is a modification of the technique known as adaptive histogram equalization (AHE). CLAHE works with discrete parts of a picture, which are referred to as tiles, rather than analyzing the whole image. Tiles that are next to one another are blended together using bilinear interpolation in order to get rid of arbitrary boundaries.
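As an illustration, the CLAHE step can be realized with OpenCV roughly as follows; the clip limit, tile grid size, and file names are placeholder values rather than the exact settings of our pipeline.

```python
import cv2

def apply_clahe(gray_image, clip_limit=2.0, tile_grid_size=(8, 8)):
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    return clahe.apply(gray_image)       # operates tile-wise on the 8-bit grayscale image

xray = cv2.imread("sample_xray.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
cv2.imwrite("sample_xray_clahe.png", apply_clahe(xray))
```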
### _Ben Graham Method:_
Contrast limited adaptive histogram equalization (CLAHE) is used together with Ben Graham's preprocessing approach, which involves eliminating the colour that is considered to be the local average. When compared to utilizing either of the algorithms on its own, this results in an improvement in the clarity of the vessels in the majority of the photos. However, after conducting a number of trials, it was shown that the performance of the combination of CLAHE and Ben Graham's method is marginally worse than that of CLAHE alone in certain circumstances, particularly on the Chase dataset. In order to address this issue, a feature fusion method is used, which consists of concatenating the solitary CLAHE preprocessed picture with the combined CLAHE and Ben Graham's preprocessed image. This method ranks highest among all of the tests that were carried out on all three datasets.
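A sketch of this combination is given below; the Gaussian sigma and the blending weights (4, -4, 128) follow the formulation of Ben Graham's preprocessing that is commonly used with OpenCV and stand in for whatever exact constants the pipeline employs.

```python
import cv2

def ben_graham(image, sigma=10):
    blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=sigma)   # local average colour
    return cv2.addWeighted(image, 4, blurred, -4, 128)        # subtract it and re-centre

def clahe_plus_ben_graham(gray_image):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return ben_graham(clahe.apply(gray_image))

img = cv2.imread("sample_xray.png", cv2.IMREAD_GRAYSCALE)     # hypothetical input file
cv2.imwrite("sample_xray_clahe_bg.png", clahe_plus_ben_graham(img))
```

The feature-fusion variant described above would then stack the CLAHE-only image and the CLAHE-plus-Ben-Graham image along the channel axis before feeding them to the model.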
### _Data Augmentation_
Because ViT lacks the inductive biases built into convolutional networks, it has only been validated as a benchmark design when it is trained on big datasets. In this study, we adapted the idea from the analysis presented by Steiner et al., and the approach we suggest applies image augmentation to ViT-based designs in a cautious manner, resulting in higher efficiency. In addition to this, the relevance of ViT frameworks may be increased by the visual alternatives that are provided through augmentation. As a result, the pictures of the ROI that were extracted are subjected to further processing, during which they are altered visually in a variety of ways using image enhancement methods such as Gaussian blur, random rotation, zooming, and flipping in a variety of orientations.
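An illustrative augmentation pipeline covering the transformations named above is sketched below with tf.keras preprocessing layers (available in recent TensorFlow 2 releases); the rotation and zoom factors are placeholders, and Gaussian blur, which Keras does not ship as a layer, is omitted here.

```python
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),   # up to +/- 10% of a full turn
    tf.keras.layers.RandomZoom(0.2),
])

roi_batch = tf.random.uniform((8, 224, 224, 1))   # dummy batch standing in for extracted ROIs
augmented = augment(roi_batch, training=True)
print(augmented.shape)                            # (8, 224, 224, 1)
```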
### _Compact Convolution Transformers_
The convolutional block in CCT allows the model to capture spatial relationships between patches of input data, which can be useful for tasks such as image classification. This is because a convolutional layer is able to learn local patterns in the data and is able to take into account the spatial structure of the input. By using a max pooling layer after the convolution, the model is able to reduce the dimensionality of the output while maintaining the most important information. The ReLU activation function helps the model learn non-linear relationships between the input and the output. Overall, the use of the convolutional block in CCT can improve the model's performance on certain tasks compared to the patch and embedding approach used in ViT.
The convolutional block in CCT is applied to the feature map of an input image. The feature map, represented by L, is a representation of the input image that encodes important information such as edges and textures. In the convolutional block, the feature map is first passed through a Conv2d operation with D filters, where D is the embedding dimension of the transformer backbone. This operation applies a set of D filters to the feature map, each of which is able to learn a different pattern in the data. Next, the output of the Conv2d operation is passed through a max pooling layer. This layer takes the maximum value of each filter output, which helps reduce the dimensionality of the output while preserving the most important information. The output of the max pooling layer is then passed through a ReLU activation function, which helps the model learn non-linear relationships between the input and the output.
\[Image(L)\in\mathbb{R}^{H\times W\times C} \tag{1}\]
\[L_{0}=MaxPool(ReLU(Conv2D(L))) \tag{2}\]
Overall, the use of the convolutional block in CCT allows the model to capture spatial relationships between patches of input data and improve its performance on tasks such as image classification. This is because the convolutional block is able to learn local patterns in the data and take into account the spatial structure of the input, which is useful for these types of tasks.
Incorporating a convolutional phase in CCT allows the technique to be more flexible and adaptable than techniques like ViT. This is because the convolutional phase allows the model to process input images of any resolution, rather than being limited to input images that are exactly divided by the patch size, as is the case with ViT. The use of convolutional blocks in CCT is also more effective in producing tokens for
Fig. 2: The Overview of Proposed Solution
the transformer. Because convolutional blocks are able to learn local patterns in the data, they are able to generate tokens that better capture the information in the input image. This can be useful for tasks such as image classification, where the model needs to be able to extract important features from the input image in order to make accurate predictions. Furthermore, the convolutional blocks in CCT can be repeated for additional downsampling, which allows the model to learn hierarchical representations of the input data. This can improve the model's performance on certain tasks, as it allows the model to learn both local and global patterns in the data. Additionally, the number of convolutional blocks and the downsampling ratio can be adjusted to suit the specific needs of the task at hand. In ViT, the transformer is used with class tokenization, which involves dividing the input data into patches and treating each patch as a separate token [16]. In CCT, the transformer is used with sequence pooling, which is an attention-based technique that pools information from the entire output token sequence. This is different from class tokenization, as it retains information from the entire sequence rather than just individual patches. The use of sequence pooling in CCT is motivated by the fact that the output token sequence contains important information that spans multiple regions of the input data. Retaining this information can improve the model's performance on certain tasks. Additionally, using sequence pooling reduces the number of tokens that need to be processed by the transformer, which can slightly reduce the computation required for the model. Overall, the use of sequence pooling in CCT can improve the model's performance and make it more efficient compared to using class tokenization in ViT as represented in [17]. The output sequence is mapped in this operation using the transformation \(T:\mathbb{R}^{B\times L\times D}\to\mathbb{R}^{B\times D}\)
\[Z_{S}=f(L_{0})\in\mathbb{R}^{B\times L\times D} \tag{3}\]
In CCT, the output of the transformer encoder, represented by \(Z_{S}\), is passed through a linear layer \(g(Z_{S})\). The linear layer, represented by \(g(Z_{S})\), maps the output of the transformer encoder to a vector of size \(D\times 1\), where D is the total embedded dimension of the input data. The transformer encoder architecture of CCT is inspired by [18]. This vector is then passed through a softmax activation function, which produces a probability distribution over the possible classes of the input data. The use of the linear layer and the softmax activation function in CCT allows the model to make predictions about the class of the input data. The linear layer maps the output of the transformer encoder to a vector of size \(D\times 1\), which encodes important information about the input data. The softmax activation function then converts this vector into a probability distribution over the possible classes, which allows the model to make a prediction about the class of the input data. Overall, this process allows CCT to effectively classify input data.
\[Z_{S}^{\prime}=softmax(g(Z_{S})^{T})\in\mathbb{R}^{B\times 1\times L} \tag{4}\]
\[Z=Z_{S}^{\prime}Z_{S}=softmax(g(Z_{S})^{T})\times Z_{S}\in\mathbb{R}^{B\times 1\times D} \tag{5}\]
Equation (4) generates an attention weight for each input token, which is then processed by equation (5). By flattening the output, \(Z\in\mathbb{R}^{B\times 1\times D}\), the model produces a representation of the input data that is suitable for input to the classifier. The use of sequence pooling in CCT allows the network to associate data from throughout the input information and evaluate the sequential embeddings of the latent space created by the transformer encoder. This allows the model to capture important information from the entire input sequence, rather than just individual patches, which can improve its performance on certain tasks. Overall, CCT is a model that incorporates a convolutional tokenizer, sequence pooling, and a transformer encoder. This combination of techniques allows CCT to effectively classify input data by capturing spatial relationships and important information from the entire input sequence.
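The two CCT-specific components discussed above can be sketched in tf.keras as follows; the embedding dimension, kernel sizes, input shape, and number of classes are illustrative, and the transformer encoder blocks that would sit between the tokenizer and the pooling are elided.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_tokenizer(x, embed_dim=128):
    """Eq. (2): L0 = MaxPool(ReLU(Conv2D(L))), flattened into a (B, L, D) token sequence."""
    x = layers.Conv2D(embed_dim, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.MaxPool2D(pool_size=3, strides=2, padding="same")(x)
    return layers.Reshape((-1, embed_dim))(x)

class SeqPool(layers.Layer):
    """Eqs. (4)-(5): attention-weighted pooling of the encoder output sequence."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.attn = layers.Dense(1)                        # g(Z_S)

    def call(self, z_s):                                   # z_s: (B, L, D)
        weights = tf.nn.softmax(self.attn(z_s), axis=1)    # softmax over the sequence axis
        z = tf.matmul(weights, z_s, transpose_a=True)      # (B, 1, D)
        return tf.squeeze(z, axis=1)                       # flattened to (B, D)

inputs = layers.Input(shape=(224, 224, 1))
tokens = conv_tokenizer(inputs)        # the transformer encoder blocks would be applied here
outputs = layers.Dense(4, activation="softmax")(SeqPool()(tokens))
model = tf.keras.Model(inputs, outputs)
model.summary()
```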
## IV Experimental Results and Analysis
### _Dataset Preperation_
For the purpose of training the ViT model, this experiment makes use of a benchmark dataset, the COVID-19 Radiography Database. This database is one of the most considerable datasets accessible among the public datasets that are currently available. The COVID-19 Radiography Training Database has a total of 36 participants, each with their own unique scenario. We gather random photos of each participant and identify them using a binary system according to whether or not they have Covid. This dataset contains frames with a resolution of 640 by 480, which is considered to be high
Fig. 3: The Ben Graham Processing and Data Augmentation of the images
in comparison to the resolution of other COVID19 datasets. Significant adjustments have been made to the size, position, and impressions of the chests included in the dataset. As a result, this benchmark dataset is suitable for demonstrating the efficacy and performance in real-world contexts.
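For illustration, such a dataset can be loaded with tf.keras as sketched below; the directory path and the assumption of one sub-folder per label are ours, and the binary label mode mirrors the Covid versus non-Covid labelling described above.

```python
import tensorflow as tf

dataset = tf.keras.utils.image_dataset_from_directory(
    "covid19_radiography/",    # hypothetical local path with one sub-folder per label
    label_mode="binary",       # Covid vs. non-Covid, as described above
    color_mode="grayscale",
    image_size=(224, 224),
    batch_size=32,
    shuffle=True,
    seed=42,
)
print(dataset.class_names)
```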
### _Experimental Setup and Computational Specification_
This section covers the hardware and software computing requirements for the proposed system. The framework was developed using the TensorFlow 2.0 and OpenCV libraries and was implemented in Python 3.9. The minimum requirements for the proposed system are listed in the table.
### _Distribution of Data_
The distribution of the colour values in the images is analysed per target class to extract useful insights. Insights for pixels from the plot of Mean vs Density:
* Covid Negative situations have a maximum pixel value that is more than 0.014 but less than 0.016.
* Greater than 0.004 but less than 0.006 is the maximum value for a pixel in circumstances when Covid is positive.
The Max versus Density plot reveals the following observations about pixels:
* For Covid Negative situations, the maximum pixel value must be larger than 0.035 and less than 0.040.
* The maximum pixel value for situations that are considered to be Covid Positive is 0.005.
The Minimum versus Density graphic reveals the following about pixels:
* The maximum pixel value for situations that are classified as Covid Negative is more than 0.4.
* In circumstances when Covid is positive, the maximum pixel value must be more than 0.0 and less than 0.1.
### _Evaluating Compact Convolutional Vision Transformers_
For the goal of conducting an analysis of the effectiveness of our ViT system, conventional benchmarks for judging classification models have been used. The learning curves of accuracy and loss that were recorded throughout the training and validation of the models are shown in the figure below. Because both the validation curve and the training curve maintain a point of stability with a minimal variation between them, these learning plots are indicative of an efficient learning method. The training of the efficient ViT model was developed to incorporate three distinct but interrelated tasks at the same time. These tasks are as follows: 1) the calculation of output; 2) the correction of mistakes, and 3) the fine-tuning of the hyper-parameters. The purpose of this design was to achieve the best possible results from the training of the model. With a particular combination of hyper-parameters, the maximum training and validation accuracy were determined to be 97% and 94.6%, respectively. This was discovered after a number of rounds during which the hyper-parameters were fine-tuned.
In order to conduct further assessments of the efficacy of the classification ViT model, calculations of hamming loss and binary cross-entropy are carried out. It has been determined that the cross-entropy loss for the trained ViT model is 0.6907 and that the corresponding hamming loss is 0.0673. The fact that the log loss was coming closer to zero was a sign that beneficial results were being achieved. Cross-entropy loss disproportionately penalizes erroneous predictions, which is desirable for a loss function but undesirable for a metric.
Fig. 4: Data Distribution with respect to mean, max and min value
Cross-entropy loss was the method that was used. As a result, scores for precision, recall, and F1, in addition to several other accuracy measures, were calculated and reported.
### _Grad-CAM: Gradient-weighted Class Activation Mapping_
Gradient-weighted Class Activation Mapping (Grad-CAM) is a technique used in deep learning to generate a heatmap highlighting the regions in an image that are most important for a specific prediction. This can be useful for understanding how a model is making a particular decision, and can also help with debugging and improving the model.
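A minimal sketch of this heatmap computation for a Keras model is given below, following the standard GradientTape recipe; the layer name passed as `last_conv_layer` is model-specific and therefore an assumption.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer, class_index=None):
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(last_conv_layer).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)                   # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))             # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam)
    cam = cam / (tf.reduce_max(cam) + 1e-8)                  # normalise to [0, 1]
    return cam.numpy()                                       # upsample and overlay on the X-ray
```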
The following are the insights obtained by the XAI:
* **Positive-1:** In its Grad-CAM picture, on the right mid-section of it we can see the blue colour highlighted piece, which is opacity owing to which it belongs to COVID's Positive Category. Because of this, it is a sample that has been given the Positive-1 designation.
* **Negative-1:** In its Grad-CAM picture, we are able to view the blue colour highlighted section that is between the Cardiac region and the Diaphragm. Since there was no opacity identified, it falls into the COVID-Negative Category.
## V Conclusion
We have developed Compact Convolutional Transformers for lung disease classification in this work. Apart from this, we have developed an end-to-end framework which consists of three phases: phase 1 consists of image preprocessing using contrast limited adaptive histogram equalization (CLAHE) and the Ben Graham method, phase 2 includes image augmentation, and phase 3 consists of the compact convolution transformer for classification. CLAHE operates on small regions of an image and is used to improve the visibility of hazy images, and the Ben Graham method is used to eliminate the colour that is considered to be the local average. In phase 2, we have used different data augmentation techniques such as rotation, zooming, and flipping. Finally, the CCT model is trained on the training dataset and obtains 97% and 94.6% training and validation accuracy, respectively. Explainable AI has been implemented in order to comprehend the reasoning behind a specific decision or action, or to provide an understanding of how the AI system operates. Grad-CAM is a technique used to visualize the region of an input that is used to predict the lesion with the ViT model.
|
2302.13795 | ChatGPT: A Meta-Analysis after 2.5 Months | ChatGPT, a chatbot developed by OpenAI, has gained widespread popularity and
media attention since its release in November 2022. However, little hard
evidence is available regarding its perception in various sources. In this
paper, we analyze over 300,000 tweets and more than 150 scientific papers to
investigate how ChatGPT is perceived and discussed. Our findings show that
ChatGPT is generally viewed as of high quality, with positive sentiment and
emotions of joy dominating in social media. Its perception has slightly
decreased since its debut, however, with joy decreasing and (negative) surprise
on the rise, and it is perceived more negatively in languages other than
English. In recent scientific papers, ChatGPT is characterized as a great
opportunity across various fields including the medical domain, but also as a
threat concerning ethics and receives mixed assessments for education. Our
comprehensive meta-analysis of ChatGPT's current perception after 2.5 months
since its release can contribute to shaping the public debate and informing its
future development. We make our data available. | Christoph Leiter, Ran Zhang, Yanran Chen, Jonas Belouadi, Daniil Larionov, Vivian Fresen, Steffen Eger | 2023-02-20T15:43:22Z | http://arxiv.org/abs/2302.13795v1 | # ChatGPT: A Meta-Analysis after 2.5 Months
###### Abstract
ChatGPT, a chatbot developed by OpenAI, has gained widespread popularity and media attention since its release in November 2022. However, little hard evidence is available regarding its perception in various sources. In this paper, we analyze over 300,000 tweets and more than 150 scientific papers to investigate how ChatGPT is perceived and discussed. Our findings show that ChatGPT is generally viewed as of high quality, with positive sentiment and emotions of joy dominating in social media. Its perception has slightly decreased since its debut, however, with joy decreasing and (negative) surprise on the rise, and it is perceived more negatively in languages other than English. In recent scientific papers, ChatGPT is characterized as a great opportunity across various fields including the medical domain, but also as a threat concerning ethics and receives mixed assessments for education. Our comprehensive meta-analysis of ChatGPT's current perception after 2.5 months since its release can contribute to shaping the public debate and informing its future development. We make our data available.1
Footnote 1: [https://github.com/NLG/ChatGPTReview](https://github.com/NLG/ChatGPTReview)
## 1 Introduction
ChatGPT2 -- a chatbot released by OpenAI in November 2022 which can answer questions, write fiction or prose, help debug code, etc. -- has seemingly taken the world by storm. Over the course of just a little more than two months, it has attracted more than 100 million subscribers, and has been described as the fastest growing web platform ever, leaving behind Instagram, Facebook, Netflix and TikTok3(Haque et al., 2022). Its qualities have been featured, discussed and praised by popular media,4 laymen5 and experts alike. On social media, it has (initially) been lauded as "Artificial General Intelligence",6 while more recent assessment hints at limitations and weaknesses e.g. regarding its reasoning and mathematical abilities (Borji, 2023; Frieder et al., 2023) (the authors of this work point out that, as of mid-February 2023, even after 5 updates, ChatGPT can still not accurately count the number of words in a sentence -- see Figure 13 -- a task primary school children would typically solve with ease.).
Footnote 2: chat.openai.com/
Footnote 3: [https://time.com/6253615/chatgpt-fastest-growing/](https://time.com/6253615/chatgpt-fastest-growing/)
However, while there is plenty of anecdotal evidence regarding the perception of ChatGPT, there is little hard evidence via analysis of different sources such as social media and scientific papers published on it. In this paper, we aim to fill this gap. We ask how ChatGPT is viewed from the perspectives of different actors, how its perception has changed over time and which limitations and strengths have been pointed out. We focus specifically on Social Media (Twitter), collecting over 300k tweets, as well as scientific papers from Arxiv and Semantic-Scholar, analyzing more than 150 papers.
We find that ChatGPT is overall characterized in different sources as of high quality, with positive sentiment and associated emotions of _joy_ dominating. In scientific papers, it is characterized predominantly as a (great) opportunity across various fields, including the medical area and various applications including (scientific) writing as well as for businesses, but also as a threat from an ethical perspective. The assessed impact in the education domain is more mixed, where ChatGPT is viewed both as an opportunity for shifting focus to teaching advanced writing skills (Bishop, 2023) and for making writing more efficient (Zhai, 2022) but also a threat to academic integrity and fostering dishon
esty (Ventayen, 2023). Its perception has, however, slightly decreased in social media since its debut, with _joy_ decreasing and _surprise_ on the rise. In addition, in languages other than English, it is perceived with more negative sentiment.
By providing a comprehensive assessment of its current perception, our paper can contribute to shaping the public debate and informing the future development of ChatGPT.
## 2 Analyses
### Social Media Analysis
We aim to acquire insights into public opinion and sentiment on ChatGPT and understand public attitudes toward different topics related to ChatGPT. We choose Twitter as our social media source and collect tweets since the publication date of ChatGPT. The following will introduce the data and the preprocessing steps.
DatasetWe obtain data through the use of a hashtag search tool _snscrape_,7 setting our search target as #ChatGPT. After acquiring the data, we deduplicate all the retweets and remove robots.8
Footnote 7: [https://github.com/JustAnotherArchivist/snscrape](https://github.com/JustAnotherArchivist/snscrape)
Our final dataset contains tweets in the time period from 2022-11-30 18:10:57 to 2023-02-09 17:24:45. The information is summarized in Table 1. We collect over 330k tweets from more than 168k unique user accounts. The average "age" over all user accounts is 2,807 days. On average, each user generates 1.99 tweets over the time period. The dataset contains tweets across 61 languages. Over 68% of them are in English, other major languages are Japanese (6.4%), Spanish (5.3%), French (5.0%), and German (3.3%). We translate all tweets into English via a multi-lingual machine translation model developed by Facebook.9
Footnote 8: The detection of robots involves the evaluation of two key metrics, the average time between each tweet and the number of total tweets in the examining period. In our analysis, we define the user as a robot account if the average tweet interval between two consecutive tweets is less than 2 hours. We discard 15 such users.
Sentiment AnalysisWe utilize the multi-lingual sentiment classifier from Barbieri et al. (2022) to acquire the sentiment label. This XLM-Roberta based language model is trained on 198 million tweets, and finetuned on Twitter sentiment dataset in eight different languages. The model performance on sentiment analysis varies among languages (e.g. the F1-score for Hindi is only 53%), but the model yields acceptable results in English with an F1-score of 71%. Thus we choose English as our sole input language and collect negative, neutral, and positive sentiments over time (represented as classes 0,1,2, respectively). Table 2 summarizes the sentiment distribution of all tweets. While the majority of the sentiment is neutral, there is a relatively large proportion of positive sentiment, with 100k instances, and a smaller but still notable number of tweets of negative sentiments, with 60k instances. Table 3 provides sample tweets belonging to different sentiment groups.
To examine the sentiment change over time, we plot the weekly average of sentiment and the weekly percentage of positive, neutral, and negative tweets in Figure 1. From the upper plot, we observe an overall downward trend of sentiment (black solid line) during the course of ChatGPT's first 2.5 months: an initial rise in average sentiment was followed by a decrease from January 2023 onwards. We note, however, that the decline is mild in absolute value: the average sentiment of a tweet decreases from a maximum of about 1.15 to a minimum of 1.10 (which also indicates that the average sentiment of tweets is slightly more positive than neutral). We also report the average sentiment of English tweets (dotted line) and non-English tweets(dashed line). Though the absolute difference is small, we can clearly identify the division of sentiment between English and non-English tweets. The difference in sentiment is narrowing
\begin{table}
\begin{tabular}{c c} \hline \hline
**Attribute** & **Detail** \\ \hline date range & 2022-11-30 to 2023-02-09 \\ number of tweets & 334,808 \\ language counts & 61 \\ English tweets & 228127 \\ number of users & 168,111 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Information of the collected Dataset
\begin{table}
\begin{tabular}{c c} \hline \hline
**Sentiment** & **Number of tweets** \\ \hline Positive & 100,163 \\ Neutral & 174,684 \\ Negative & 59,961 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Sentiment Distribution of all tweets.
over time, but overall tweets in English have a more positive perception of ChatGPT. This suggests that ChatGPT may be better in English, which constituted the majority of its training data; but see also our topic-based analysis below.
The bar plots in the lower part of the figure represent the count of tweets per week and the line plots show the percentage change of each sentiment class. While the percentage of negative tweets is stable over time, the percentage of positive tweets decreases and there is a clear increase in tweets with the neutral sentiment. This may indicate that the public view of ChatGPT is becoming more rational after an initial hype of this new "seemingly omnipotent" bot.
During the course of 2.5 months after ChatGPT's debut, OpenAI announced 5 new releases claiming various updates. Our data covers the period of the first three releases on the 15th of December 2022, the 9th of January, and the 30th of January in 2023. The two latest releases on the 9th of February and the 13th of February are not included in this study.10 The three update time points of ChatGPT are depicted as vertical dashed lines in the lower plot of Figure 1. We can observe small short-term increases in sentiment after each new release.
Footnote 10: [https://help.openai.com/en/articles/6825453-chatgpt-release-notes](https://help.openai.com/en/articles/6825453-chatgpt-release-notes)
Sentiment across language and topicWe notice from Figure 1 that the sentiments among English and non-English tweets vary. Here we analyze sentiment based on all 5 major languages in our ChatGPT dataset, namely English (en), Japanese (ja), Spanish (es), French (fr), and German (de). Figure 2 demonstrates the weekly average sentiment of each language over time. As indicated by our previous observation in Figure 1, tweets in English have the most positive view of ChatGPT. It
Figure 1: Upper: weekly average of sentiment over all languages (solid line), over English tweets (dotted line) and non-English tweets (dashed line). Lower: Tweet counts distribution and sentiment percentage change at weekly level aggregation.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Tweet** & Sentiment & Topic \\ \hline Here we had yet exchanged about the power of open \#KI APIs, now we are immersed in the amazing answers of \#ChatGPT. & 2 & science \& technology \\ I’ve been playing around with this for a few hours now and I can firmly say that i’ve never seen anything this developed before. Curious to see where this goes. \#ChatGPT & 2 & diaries \& daily life \\ \hline The U.S. company wants to add a filigrane to the texts generated by \#ChatGPT. [url] via @user \#tweetsrevenue \#cm \#transfonum & 1 & business \& en-trepreneurs \& \\ When you’re trying to be productive but the memes keep calling your name.\#TBT \#ChatGPT \#Memes & 1 & diaries \& daily life \\ \hline \hline @user I just tested this for myself and it’s TRUE. The platform should be shut down IMMEDIATELY \#chatgpt \#rascist \#woke \#leftwing & 0 & news \& social \\ I’m starting to think a student used \#ChatGPT for a term paper. If that’s the case, the technology isn’t ready yet. \#academichatter & 0 & concern \\ \hline \hline \end{tabular}
\end{table}
Table 3: Sample tweets of positive (2), neutral (1), and negative sentiment (0) along with their topic.
is also worth noting that over the time period, the sentiment of English, German, and French tweets is trending downward while Spanish and Japanese tweets start from a low point and trend upwards.
To answer why this is the case, we introduce topic labels into our analysis. To do so, we utilize the monolingual (English) topic classification model developed by Antypas et al. (2022). This Roberta-based model is trained on 124 million tweets and finetuned for multi-label topic classification on a corpus of over 11k tweets. The model has 19 classes of topics. We only focus on 5 major classes, which cover 86.3% of tweets in our dataset: science & technology (38.6%), learning & educational (15.2%), news & social concern (13.0%), diaries & daily life (10.2%), and business & entrepreneurs (9.3%). The upper plot of Figure 3 depicts the topic distribution in percentage by different languages. The share of the science & technology topic ranks the highest in all of the 5 languages. However, German and French tweets have a relatively higher share of learning & educational and news & social concern topics compared to English and Spanish. We report the sentiment distribution over different topics in Figure 4. From this plot, we notice that the topic business & entrepreneurs has the lowest proportion of negative tweets while the topic news & social concern contains the highest proportion of negative tweets. For the other three topics, even though their shares of positive tweets are similar, the diaries & daily life topic contains more negative tweets proportionally.
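The topic-labelling step can be sketched with the same pipeline API; the checkpoint name below is the public TweetTopic model released with Antypas et al. (2022) and is our assumption for the exact model used, and the 0.5 threshold on the sigmoid scores is illustrative.

```python
from transformers import pipeline

topic = pipeline("text-classification",
                 model="cardiffnlp/tweet-topic-21-multi", top_k=None)

scores = topic(["Now we are immersed in the amazing answers of #ChatGPT."])[0]
labels = [s["label"] for s in scores if s["score"] > 0.5]   # multi-label: keep all confident topics
print(labels)   # e.g. ['science_&_technology'] (label strings depend on the checkpoint config)
```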
This observation may explain the differences in sentiment distribution among different languages. Compared to other languages, English tweets have the highest proportion of business & entrepreneurs and science & technology, both of which contain the lowest share of negative views about ChatGPT. French and German tweets have a similar proportion of news & social concern topics, which may result in their slightly less positivity than English tweets, though the three of them have similar overall trends. The case for Japanese and Spanish is unique in terms of the low initial sentiment. The lower plot in Figure 3, which shows the topic distribution change over time for Japanese tweets, may explain this phenomenon. We can observe an evident increase in topics concerning business & entrepreneurs and science & technology, which contribute more positivity, and a decrease in news & social concern, which reduces the share of negative tweets. The same explanation may apply to Spanish tweets.
For a closer qualitative look, we manually inspect 20 random positive tweets from the period in which the general sentiment reaches its peak, and 20 random negative tweets from the second week to the fourth week of 2023, where the general sentiment declines. We are particularly interested in what users find positive/negative about ChatGPT, which in general could relate to many things, e.g., its quality, downtimes, etc.
Based on our analysis of a sample of 20 tweets during the first period, we observed a prevalent positive sentiment towards ChatGPT's ability to generate human-like and concise text. Specifically, 14 out of 20 users reported evident admiration for the model and the text it produced. Users particularly noted the model's capacity to answer complex medical questions, generate rap lyrics and tailor texts to specific contexts. Notably, we also discovered instances where users published tweets that ChatGPT completely generated.
As for the randomly selected negative tweets of the second period, 13 out of the 20 users expressed frustration with the model. These users voiced concerns about potential factual inaccuracies in the generated text and the detectability of the model-generated text. Additionally, a few users expressed ethical concerns, with some expressing worries about biased output or the potential increase in misinformation. Our analysis also revealed that a minority of users expressed concerns over job loss to models like ChatGPT. Overall, these findings suggest that negative sentiment towards ChatGPT was primarily driven by concerns about the model's limitations and its potential impact on society, particularly in generating inaccurate or misleading information.
As part of our analysis, we manually evaluated the sentiment categories for the samples analyzed. We found that 25% (5 out of 20) of the automatically classified sentiment labels were incorrect during the first period. In the second period, we found that 20% (4 out of 20) of the assigned labels were incorrect. The majority of the misclassified tweets were determined to have a neutral sentiment. Despite these misclassifications, we consider the overall error rate of 22.5% (9 out of 40) acceptable for our use case. In particular, errors may cancel out in our aggregated analysis, and it is worth pointing out that the main confusions were with the neutral class, not between the negative and positive labels.
**Emotion Analysis.** In addition to sentiment, we perform a more fine-grained analysis based on the emotions of the tweets. We use an emotion classifier (a BERT base model) finetuned on the GoEmotions dataset (Demszky et al., 2020), which contains texts from Reddit with emotion labels based on Ekman's taxonomy (Ekman, 1992), to categorize the translated English tweets into 7 dimensions: _joy, surprise, anger, sadness, fear, disgust_ and _neutral_.11 Among all 334,808 tweets, the great majority are labeled as _neutral_ (\(\sim\)70%), followed by those classified as _joy_ (17.6%) and _surprise_ (9.8%); the tweets classified with the remaining 4 emotions make up only 2.7% of the whole dataset.
Footnote 11: We use it to predict a single label for each tweet, even though it is a multilabel classifier ([https://huggingface.co/monologg/bert-base-cased-goemotions-ekman](https://huggingface.co/monologg/bert-base-cased-goemotions-ekman)).
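A minimal sketch of this single-label reduction is given below; the model identifier follows the footnote above, and treating the highest-scoring class as the tweet's emotion is our reading of that footnote.

```python
# Sketch: assign one Ekman emotion per (translated) tweet by taking the
# highest-scoring class of the multi-label classifier cited in footnote 11.
from transformers import pipeline

emotion_clf = pipeline(
    "text-classification",
    model="monologg/bert-base-cased-goemotions-ekman",
    top_k=None,  # scores for all 7 classes: joy, surprise, anger, sadness, fear, disgust, neutral
)

def single_emotion(text: str) -> str:
    scores = emotion_clf(text)[0]
    return max(scores, key=lambda s: s["score"])["label"]

print(single_emotion("I can't believe how well #ChatGPT answered this, amazing!"))
```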
We demonstrate the weekly changes in the emotion distribution of _joy_ and _surprise_ tweets in Figure 5. Here we only show the percentage distribution denoting the ratio of the tweets classified as a specific emotion to all tweets with emotions (i.e., the tweets which are not labeled as _neutral_). We observe that the percentage of _joy_ tweets generally decreases after the release, though it rises to some degree after each update, indicating that the users have less fun with ChatGPT over time. On the other hand, the percentage of _surprise_ tweets is overall in an uptrend with slight declines between the update time points.
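The statistic plotted in Figure 5 is simply the weekly share of a given emotion among all non-neutral tweets; a small pandas sketch is given below (the column names and the toy rows are our own, for illustration only).

```python
# Sketch of the Figure 5 statistic: weekly share of each emotion among all
# non-neutral tweets. Column names and the toy rows are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2022-12-01", "2022-12-02", "2022-12-08", "2022-12-09"]),
    "emotion": ["joy", "surprise", "neutral", "joy"],
})

non_neutral = df[df["emotion"] != "neutral"].copy()
non_neutral["week"] = non_neutral["date"].dt.to_period("W")
weekly_share = (
    non_neutral.groupby("week")["emotion"]
    .value_counts(normalize=True)  # per-week ratio of each emotion among non-neutral tweets
    .rename("share")
    .reset_index()
)
print(weekly_share)
```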
Figure 5: Weekly emotion distribution of (1) _joy_ and (2) _surprise_ tweets over time. The percentages denote the ratio of _joy_/_surprise_ tweets to non-neutral tweets. We mark the update time points with red dashed lines.

To gain more insights, we manually analyze five randomly selected tweets per emotion category for the release and each of the three update dates.12 Here, we focus on the _joy_ and _surprise_ tweets, as they dominate among the tweets with emotions; additionally, we also include an analysis of _fear_ tweets because of the observed peak in their distribution trend at the first two update time points, which we believe could provide more insight into the users' concerns across the different updates. We collect a total of 60 tweets for manual analysis (5 tweets \(\times\) 4 dates \(\times\) 3 emotions); we show one sample for each emotion in Table 4.
Figure 8: Social impact in papers found on Arxiv and SemanticScholar that include ChatGPT in their title or abstract. The labels indicate (based on abstract and title) which effect the authors believe ChatGPT will have on the social good. _NAN_ indicates that no social sentiment is given.
Figure 6: Topics of papers found on Arxiv and SemanticScholar that include ChatGPT in their title or abstract. _Ethics_ also comprises papers addressing AI regulations and cyber security. _Evaluation_ denotes papers that evaluate ChatGPT with respect to biases or more than a single domain.
Figure 7: Performance quality in papers found on Arxiv and SemanticScholar that include ChatGPT in their title or abstract. On a scale of 1 (bad performance) to 5 (good performance), this indicates how the performance of ChatGPT is described in the papers’ titles/abstracts. _NAN_ indicates that no performance sentiment is given.
A given paper could typically be classified into multiple classes, but we are interested in the dominant class.
3. their **impact** on society. We distinguish _Opportunity_, _Threat_, _Mixed_ (when a paper highlights both risks and opportunities) or _NAN_ (when the paper does not discuss this aspect).
Example annotations are shown in Table 5.
**Annotation outcomes.** Four co-authors of this paper (three male PhD students and one male faculty member) initially annotated 10 papers on all three dimensions independently, without guidelines. Agreements were low across all dimensions. After a discussion of disagreements, we devised guidelines for the subsequent annotation of 10 further papers. These included (among others) looking only at paper abstracts for classification, as the annotation process would otherwise be too time-consuming, and which labels to prioritize in ambiguous cases. Abstracts are a good compromise because they are (highly condensed) summaries of scientific papers, containing their main message. This time, agreements were high: averaged across annotator pairs, the kappa agreement is 0.63 for topic and 0.70 for impact, and the Spearman correlation is 0.80 for quality. In total, we annotated 48 papers from Arxiv and 104 additional papers from SemanticScholar.
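The agreement figures quoted above can be computed as in the following sketch: Cohen's kappa for the categorical dimensions (topic, impact) and Spearman's rho for the ordinal 1-5 quality scores. The tiny label arrays are made up purely for illustration.

```python
# Sketch: pairwise inter-annotator agreement as reported in the text.
# Cohen's kappa for categorical labels (topic, impact), Spearman's rho for the
# ordinal 1-5 quality scores. The label arrays below are illustrative only.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

topic_labels = {   # one list of topic labels per annotator, aligned by paper
    "A1": ["Education", "Medical", "Application", "Ethics"],
    "A2": ["Education", "Medical", "Evaluation", "Ethics"],
}
quality_scores = {  # 1 (bad) .. 5 (good); NAN papers omitted here for simplicity
    "A1": [4, 2, 5, 3],
    "A2": [5, 2, 4, 3],
}

kappas = [cohen_kappa_score(topic_labels[a], topic_labels[b])
          for a, b in combinations(topic_labels, 2)]
print("mean pairwise kappa (topic):", sum(kappas) / len(kappas))

rhos = [spearmanr(quality_scores[a], quality_scores[b]).correlation
        for a, b in combinations(quality_scores, 2)]
print("mean pairwise Spearman (quality):", sum(rhos) / len(rhos))
```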
**Analysis.** Figure 6 shows the topic distributions for the Arxiv papers and the papers from SemanticScholar. The main topics we identified for the Arxiv papers are _Education_ and the _Application_ in various use cases. Only a few papers were classified as _Medical_. Conversely, SemanticScholar papers are most frequently classified as _Medical_ and _Rest_. This indicates that _Medical_ is of great concern in more applied scientific fields not covered by Arxiv papers. Further, Figure 7 shows the distributions of the quality labels we annotated. The labels 4 and
Figure 10: Heatmap of performance quality and topic of papers from Arxiv and non-Arxiv papers retrieved from SemanticScholar. On a scale of 1 (bad performance) to 5 (good performance), the performance quality indicates how the performance of ChatGPT is described in the papers’ titles/abstracts. NAN indicates that no performance quality is given. _Ethics_ also comprises papers addressing AI regulations and cyber security. _Evaluation_ denotes papers that evaluate ChatGPT with respect to multiple aspects.
Figure 9: Heatmap of performance quality and social impact of papers from Arxiv and SemanticScholar. On a scale of 1 (bad performance) to 5 (good performance), the performance quality indicates how the performance of ChatGPT is described in the papers' titles/abstracts. |
2302.02840 | Emergence of Riemannian Quantum Geometry | In this chapter we take up the quantum Riemannian geometry of a spatial slice
of spacetime. While researchers are still facing the challenge of observing
quantum gravity, there is a geometrical core to loop quantum gravity that does
much to define the approach. This core is the quantum character of its
geometrical observables: space and spacetime are built up out of Planck-scale
quantum grains. The interrelations between these grains are described by spin
networks, graphs whose edges capture the bounding areas of the interconnected
nodes, which encode the extent of each grain. We explain how quantum Riemannian
geometry emerges from two different approaches: in the first half of the
chapter we take the perspective of continuum geometry and explain how quantum
geometry emerges from a few principles, such as the general rules of canonical
quantization of field theories, a classical formulation of general relativity
in which it appears embedded in the phase space of Yang-Mills theory, and
general covariance. In the second half of the chapter we show that quantum
geometry also emerges from the direct quantization of the finite number of
degrees of freedom of the gravitational field encoded in discrete geometries.
These two approaches are complementary and are offered to assist readers with
different backgrounds enter the compelling arena of quantum Riemannian
geometry. | Hal M. Haggard, Jerzy Lewandowski, Hanno Sahlmann | 2023-02-06T15:00:50Z | http://arxiv.org/abs/2302.02840v1 | # Emergence of Riemannian Quantum Geometry
###### Abstract
In this chapter we take up the quantum Riemannian geometry of a spatial slice of spacetime. While researchers are still facing the challenge of observing quantum gravity, there is a geometrical core to loop quantum gravity that does much to define the approach. This core is the quantum character of its geometrical observables: space and spacetime are built up out of Planck-scale quantum grains. The interrelations between these grains are described by spin networks, graphs whose edges capture the bounding areas of the interconnected nodes, which encode the extent of each grain. We explain how quantum Riemannian geometry emerges from two different approaches: in the first half of the chapter we take the perspective of continuum geometry and explain how quantum geometry emerges from a few principles, such as the general rules of canonical quantization of field theories, a classical formulation of general relativity in which it appears embedded in the phase space of Yang-Mills theory, and general covariance. In the second half of the chapter we show that quantum geometry also emerges from the direct quantization of the finite number of degrees of freedom of the gravitational field encoded in discrete geometries. These two approaches are complementary and are offered to assist readers with different backgrounds enter the compelling arena of quantum Riemannian geometry.
2305.17554 | Distinguishing different stackings in layered materials via luminescence
spectroscopy | Despite its simple crystal structure, layered boron nitride features a
surprisingly complex variety of phonon-assisted luminescence peaks. We present
a combined experimental and theoretical study on ultraviolet-light emission in
hexagonal and rhombohedral bulk boron nitride crystals. Emission spectra of
high-quality samples are measured via cathodoluminescence spectroscopy,
displaying characteristic differences between the two polytypes. These
differences are explained using a fully first-principles computational
technique that takes into account radiative emission from ``indirect'',
finite-momentum, excitons via coupling to finite-momentum phonons. We show that
the differences in peak positions, number of peaks and relative intensities can
be qualitatively and quantitatively explained, once a full integration over all
relevant momenta of excitons and phonons is performed. | Matteo Zanfrognini, Alexandre Plaud, Ingrid Stenger, Frédéric Fossard, Lorenzo Sponza, Léonard Schué, Fulvio Paleari, Elisa Molinari, Daniele Varsano, Ludger Wirtz, François Ducastelle, Annick Loiseau, Julien Barjon | 2023-05-27T19:09:05Z | http://arxiv.org/abs/2305.17554v1 | # Distinguishing different stackings in layered materials via luminescence spectroscopy
###### Abstract
Despite its simple crystal structure, layered boron nitride features a surprisingly complex variety of phonon-assisted luminescence peaks. We present a combined experimental and theoretical study on ultraviolet-light emission in hexagonal and rhombohedral bulk boron nitride crystals. Emission spectra of high-quality samples are measured via cathodoluminescence spectroscopy, displaying characteristic differences between the two polytypes. These differences are explained using a fully first-principles computational technique that takes into account radiative emission from "indirect", finite-momentum, excitons via coupling to finite-momentum phonons. We show that the differences in peak positions, number of peaks and relative intensities can be qualitatively and quantitatively explained, once a full integration over all relevant momenta of excitons and phonons is performed.
Layered boron nitride (BN) crystals are identified as strategic materials for the integration of graphene and 2D semiconductors in optoelectronic devices based on van der Waals heterostructures [1; 2; 3]. To this end, scalable crystal growth methods able to produce high-quality samples are desirable. The highest quality BN single crystals are mostly grown from a catalytic melt either at high pressure and high temperature (HPHT) [4; 5; 6] or, more recently, at intermediate or atmospheric pressure and high temperature [7; 8; 9; 10; 11]. The resulting crystals are limited in size or polycrystalline, which restricts their possible applications in optoelectronics. Up-scalable fabrication techniques at low pressure, such as chemical vapour deposition (CVD) or molecular beam epitaxy (MBE), allow instead for the controlled synthesis of BN thin films on large surfaces. However, they have met with limited success up to now due to the polymorphism of boron nitride. The layered bulk crystal can come, in principle, in six different polytypes [12], with the two most stable ones adopting the hexagonal (hBN) and rhombohedral (rBN) Bravais lattices. In hBN, two adjacent BN single layers differ by a \(\pi\) rotation, resulting in the so-called AA' stacking sequence, where boron and nitrogen atoms sit on top of each other (Figure 1a). Conversely, the unit cell of rBN crystals is composed of three BN monolayers, which are rigidly shifted along the same direction by the B-N planar interatomic distance: this stacking motif (ABC sequence) is shown in Figure 1b. While this stacking difference entails an extremely high energy cost associated with the transformation from rBN to hBN [13], these two polytypes are difficult to distinguish experimentally from a crystallographic point of view. Even from a computational point of view, the calculated stability difference of the two polytypes is close to the limit of accuracy of modern _ab initio_ methods [14; 12; 15]. In addition, the interaction with the substrate affects the abundance of stable rBN and hBN phases in synthetic products [16; 17; 18; 19]. For these reasons, the stacking sequence is rarely characterized in recent reports about BN multilayer growth, so that possible differences in the respective optoelectronic properties of the two polytypes might have been overlooked.
Figure 1: Stacking sequences of sp\({}_{2}\) BN considered in this work: in a), boron nitride with AA\({}^{\prime}\) stacking is shown, while in b), the three shifted layers forming the rBN unit cell are presented. Nitrogen and boron atoms are shown in gray and green, respectively.

In this work, we present a spectroscopic investigation of rBN using cathodoluminescence (CL) spectroscopy. By comparing CL spectra obtained for rBN with analogous results for hBN [21; 22], we demonstrate that the stacking sequence affects the emission fine structure of rBN and hBN crystals, making CL an ideal experimental probe to discriminate between the two polytypes. Our experimental observations are explained by _ab initio_ calculations of luminescence spectra for the two polytypes, explicitly including exciton-phonon interactions.
The reference sample investigated here is the rBN powder fabricated by T. Sato [16], which is known as the international standard for the crystallographic diffraction database.[23] To our knowledge, this is the highest quality rBN single crystal available today. Fig. 2 (a) presents a transmission electron microscopy (TEM) image of the powder. It consists of cylindrical rBN crystallites with a typical 200 nm diameter and a 50 nm thickness. The ABC stacking sequence can be observed in the high-resolution image of the transverse section reported in Fig. 2 (b). The distance between B and N in this projection is 0.072 nm, which cannot be resolved due to our 0.12 nm TEM resolution limit. Nevertheless, the positions of the B and N atomic columns can be identified in Fig. 2 (c) thanks to simulations performed under the conditions of the image acquisition in Fig. 2 (d) (see Supplemental Material [20] for details). The identification of the rBN structure is further confirmed by comparing its Raman spectrum with that of hBN, as presented in the Supplemental Material [20], section Raman spectroscopy. In the following, the properties of the reference rBN sample (ABC stacking) will be compared with a reference hBN crystal grown by HPHT [6].
We now turn to the discussion of the exciton-dominated luminescence spectra as studied by CL, using the setup detailed in the Supplemental Material [20]. A comparison between the experimental CL spectra of hBN and rBN at \(T=5\) K is shown in Fig. 3. The visible features are due to phonon-assisted excitonic recombinations, as will be discussed below. The two spectra display several key differences, including a redshift of the rBN features with respect to the corresponding hBN ones (which amounts to 15 meV for the highest peak) and, most importantly, the presence of two relevant structures at 5.847 and 5.919 eV only in rBN. The high accuracy of the experimental rBN spectrum is crucial to clearly resolve the fine structure of the intrinsic phonon-assisted peaks [24; 25], enabling us to explain these points in conjunction with the theoretical modelling in the following. Experimentally, these reported differences are fully significant, as we obtained almost identical spectra from a rBN sample grown by CVD on 6H-SiC. A detailed comparison between the two samples is included in the Supplemental Material [20], along with a discussion of the defect peaks appearing in the CL signal measured at frequencies lower than those shown in Fig. 3.
Figure 3: Comparison of experimental hBN (blue) and rBN (red) CL spectra at \(T=5\) K.

Figure 2: (a) Bright field TEM image of the reference rBN powder. (b) High resolution TEM image in the [10\(\bar{1}\)] zone axis of the crystallite indicated by the red arrow in (a). The traces of (101), (001) and (111) rBN planes reported with white lines are identified with the Fourier transform plotted in the inset. (c) Magnified image of the white rectangle in (b), where the atomic positions of B and N (colored spheres) are deduced from the simulation in (d), which has been performed with the illumination conditions used experimentally. Crystallographic notations refer here to the rhombohedral phase (see Supplemental Material [20] for details).

_Ab initio_ calculations [26] indicate that rBN is an indirect-bandgap insulator. The exciton dispersion resulting from the solution of the Bethe-Salpeter equation (BSE) at finite momentum has its minimum located near the point \(\Omega=[\frac{1}{6},\frac{1}{6},0]\) in the middle of the \(\Gamma\)K symmetry direction in the hexagonal Brillouin zone (hBZ). According to our calculation, the energy difference between the lowest-lying exciton (due to an indirect electronic transition) and the optically accessible (i.e., direct and dipole-allowed) \(\Gamma\) excitons is 230 meV (see Supplemental Material [20] for the exciton dispersion curve computed along this direction). This means that excitonic radiative recombination in rBN requires the assistance of phonons with a wave vector around the \(\Omega\) point, similarly to what happens in hBN.
The theoretical luminescence spectra have been computed using the expression [27; 28]:
\[I(E)\propto\sum_{\lambda}\sum_{\bf Q}\sum_{\nu}N(E_{\lambda}({\bf Q}))\Gamma_{ \lambda}^{\nu,{\bf Q}}(E) \tag{1}\]
where \(\lambda\) is an index running over exciton bands, \({\bf Q}\) is the exciton momentum and \(\nu\) denotes the phonon branches. \(N(E_{\lambda}({\bf Q}))=e^{-\frac{E_{\lambda}({\bf Q})-\mu}{k_{B}T_{\rm exc}}}\) is a Boltzmann distribution representing the exciton population from which light emission occurs, \(\mu\) being the energy of the lowest-energy exciton in the system and \(T_{\rm exc}\) the effective excitonic temperature. We fixed \(T_{\rm exc}\) to 20 K, which is its experimental value obtained for low sample temperatures (below 10 K, as in Fig. 3). (We have checked that our results are stable with respect to small changes of this parameter.)
The probability \(\Gamma_{\lambda}^{\nu,{\bf Q}}(E)\) describes photon emission by a finite-momentum exciton \(|\lambda,{\bf Q}\rangle\), assisted by a phonon \((\nu,{\bf Q})\). This quantity has been computed using second-order time-dependent perturbation theory, similarly to Refs. [29; 30], considering only phonon emission processes [31] (which dominate at low temperature):
\[\Gamma_{\lambda}^{\nu,{\bf Q}}(E)=\left|T_{\lambda}^{\nu,{\bf Q}}\right|^{2} \frac{(1+n_{\nu,{\bf Q}})\delta[E-E_{\lambda}({\bf Q})+\hbar\omega_{\nu,{\bf Q }}]}{E_{\lambda}({\bf Q})-\hbar\omega_{\nu,{\bf Q}}}, \tag{2}\]
with
\[T_{\lambda}^{\nu,{\bf Q}}=\sum_{\lambda_{2}}\frac{D_{\lambda_{2}}G_{\lambda_{ 2},\lambda}^{\nu}({\bf Q},-{\bf Q})}{E_{\lambda_{2}}(\Gamma)+\hbar\omega_{\nu,{\bf Q}}-E_{\lambda}({\bf Q})}. \tag{3}\]
In Eqs. (2) and (3), the index \(\lambda_{2}\) runs over the excitonic states at the \(\Gamma\) point with energy \(E_{\lambda_{2}}(\Gamma)\). The quantity \(D_{\lambda_{2}}\) is the excitonic optical dipole strength averaged over in-plane polarization directions. \(n_{\nu,{\bf Q}}\) corresponds to the Bose-Einstein phonon occupation factor, while \(E\) is the energy of the emitted photon; the Dirac delta guarantees energy conservation and has been numerically approximated with a Lorentzian function with FWHM equal to 5 meV in order to match the experimental peaks. Finally, the exciton-phonon coupling matrix element \(G_{\lambda_{2},\lambda}^{\nu}({\bf Q},-{\bf Q})\) describes the scattering amplitude for an exciton \(|\lambda,{\bf Q}\rangle\) to states \(|\lambda_{2},\Gamma\rangle\) while assisted by phonon mode \(\nu\)[28]:
\[G_{\lambda_{2},\lambda}^{\nu}({\bf Q},-{\bf Q})=\] \[\sum_{vcc^{\prime}{\bf k}}A_{\lambda_{2}}^{\rm ST}(v{\bf k},c{\bf k})A_{\lambda}^{\bf Q}(v{\bf k},c^{\prime}{\bf k}+{\bf Q})g_{cc^{\prime}}^{\nu}({\bf k}+{\bf Q};-{\bf Q})\] \[-\sum_{vv^{\prime}c{\bf k}}A_{\lambda_{2}}^{\rm ST}(v{\bf k},c{\bf k})A_{\lambda}^{\bf Q}(v^{\prime}{\bf k}-{\bf Q},c{\bf k})g_{v^{\prime}v}^{\nu}({\bf k};-{\bf Q}), \tag{4}\]
where \(A_{\lambda}^{\bf Q}(v{\bf k}_{h},c{\bf k}_{\rm c})\) is the envelope function for exciton \(|\lambda,{\bf Q}\rangle\), with \(v,v^{\prime}\)\((c,c^{\prime})\) running over the valence (conduction) states and \({\bf k}\) being the electronic wave vector in the hBZ. The electron-phonon coupling matrix element \(g_{n,n^{\prime}}^{\nu}({\bf k},{\bf Q})\) represents the scattering between single-particle states \(|n^{\prime},{\bf k}\rangle\) and \(|n,{\bf k}+{\bf Q}\rangle\)[32]. Importantly, within our numerical methodology, \(G_{\lambda_{2},\lambda}^{\nu}({\bf Q},-{\bf Q})\) is computed using the same single-particle Kohn-Sham states both for electron-phonon and excitonic quantities, thus overcoming phase mismatch problems as described in Ref. [30]. The \({\bf Q}\)-integration appearing in Eq. (1) has been performed in local neighbourhoods of the symmetry-equivalent \(\Omega\) points corresponding to the excitonic dispersion minima in the hBZ. The computational details [33] needed to reproduce the theoretical results are provided in the Supplemental Material [20].
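As a purely illustrative sketch, Eqs. (1)-(2) amount to a Boltzmann-weighted, Lorentzian-broadened sum over exciton bands, momenta and phonon branches. The snippet below assembles such a spectrum from synthetic placeholder arrays; it is not the ab initio pipeline used in the paper, whose inputs (exciton energies, phonon frequencies and the matrix elements of Eq. (3)) come from the first-principles calculations described in the Supplemental Material.

```python
# Illustrative-only implementation of Eqs. (1)-(2): I(E) as a Boltzmann-weighted,
# Lorentzian-broadened sum over exciton bands (lambda), momenta Q and phonon modes nu.
# All inputs are synthetic placeholders, not the ab initio data used in the paper.
import numpy as np

kB = 8.617333e-5          # Boltzmann constant, eV/K
T_exc, fwhm = 20.0, 5e-3  # effective excitonic temperature (K), broadening (eV)

n_lam, n_Q, n_nu = 2, 50, 4
rng = np.random.default_rng(0)
E_exc = 5.95 + 0.05 * rng.random((n_lam, n_Q))   # exciton energies E_lambda(Q), eV
w_ph = 0.02 + 0.18 * rng.random((n_nu, n_Q))     # phonon energies hbar*omega_nu(Q), eV
T2 = rng.random((n_lam, n_nu, n_Q))              # |T_lambda^{nu,Q}|^2 from Eq. (3)
n_ph = 1.0 / (np.exp(w_ph / (kB * T_exc)) - 1.0) # Bose-Einstein occupations

def lorentzian(x, gamma):
    # Normalized Lorentzian of full width at half maximum gamma
    return (gamma / (2 * np.pi)) / (x**2 + (gamma / 2) ** 2)

E = np.linspace(5.7, 6.0, 1500)  # photon energy grid (eV)
mu = E_exc.min()                 # lowest-energy exciton, reference of the Boltzmann factor
I = np.zeros_like(E)
for lam in range(n_lam):
    for nu in range(n_nu):
        for iQ in range(n_Q):
            boltz = np.exp(-(E_exc[lam, iQ] - mu) / (kB * T_exc))   # N(E_lambda(Q))
            amp = T2[lam, nu, iQ] * (1 + n_ph[nu, iQ]) / (E_exc[lam, iQ] - w_ph[nu, iQ])
            # Each phonon-emission satellite peaks at E = E_lambda(Q) - hbar*omega_nu(Q)
            I += boltz * amp * lorentzian(E - E_exc[lam, iQ] + w_ph[nu, iQ], fwhm)
```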
In Figure 4, we present the comparison between experimental CL spectra (black dots) and theoretical BSE results (continuous green lines) for hBN (Fig. 4a) and rBN (Fig. 4c). Figures 4b and 4d show the calculated in-plane phonon dispersion along the \(\Gamma\)K direction for hBN and rBN, respectively. We find very good agreement between experimental and theoretical data. The relative energy shift between the two spectra is reproduced theoretically. As the phonon energies in the two systems differ only by a few meV, the 15 meV shift closely matches the underlying difference between the lowest-lying, finite-momentum exciton levels (which is around 12 meV). In turn, this difference can be traced back to the combined effects of rBN having both a smaller quasiparticle band gap (by 166 meV) and exciton binding energy (by 150 meV) with respect to hBN around the \(\Omega\) points in momentum space. In both hBN and rBN, the spectra are dominated by the two peaks in the low-energy part of the spectrum. These are phonon-assisted satellites due to longitudinal optical phonons - denoted as LO\({}_{2}\)-LO\({}_{3}\) modes in the phonon dispersion - and transverse optical ones (the almost-degenerate pair [34] TO\({}_{2}\)-TO\({}_{3}\)). For hBN, these assignments are in good agreement with the results obtained in Refs. [29; 35] using a finite-difference approach. Furthermore, the experimental intensity ratio between these peaks is well reproduced by _ab initio_ calculations, with the LO peak being less intense than the TO one. The additional overtones appearing in the measurements in this energy region are due to higher-order scattering processes [36] and are thus not captured by our theoretical approach, which is restricted to first-order exciton-phonon interaction. The phonon branches involved in the emission process are explicitly labelled in Figs. 4b and 4d for the \(\Omega\) point only.[37]

Luminescence spectra of hBN and rBN are qualitatively different at higher energies, as confirmed by _ab initio_ results. In the case of hBN, we observe only two main peaks: the first (at about 5.86 eV) corresponds to a replica of the LO\({}_{1}\)-LA phonons, while the higher intensity structure at 5.89 eV is mainly due to TO phonons, with a small contribution from the almost-degenerate transverse acoustic mode (TA-TO\({}_{1}\)). _Ab initio_ results reproduce with great accuracy both the splitting between these peaks and their intensity ratio (the LO\({}_{1}\)-LA peak being less pronounced than the TO\({}_{1}\)-TA one), while they tend to overestimate their relative strengths with respect to the dominant low-energy satellites. (The agreement may be further improved with a more complete **Q**-point integration in Eq. (1).) We also note that, in agreement with the group theory analysis discussed in Ref. [35], no contributions from the out-of-plane phonon modes appear in the luminescence spectra. This selection rule, which is strictly respected by Eq. (4), can be slightly broken in a real experiment, leading to the appearance of a very small signal corresponding to this mode (usually 100 times smaller than the other peaks [38]).
Figure 4: Experimental (black dots) and theoretical (green lines) luminescence spectra for hBN (a) and rBN (c). In both (a) and (c), theoretical spectra are blueshifted by 1.04 eV to match the position of the highest intensity peak in the experimental spectrum. Phonon dispersions in hBN (b) and rBN (d) along the \(\Gamma\)-K direction: phonon branches contributing to the luminescence spectra are highlighted at the \(\Omega\) point, in the middle of the \(\Gamma\)-K direction. See the main text for the phonon mode labelling. Almost-degenerate phonon branches are paired with a hyphen.

In the case of rBN, the high-energy portion of the CL spectrum shows three large peaks, respectively at about 5.847 eV, 5.878 eV and 5.919 eV, instead of the two peaks appearing in hBN. They are also recovered in the _ab initio_ results. The first structure is a combination of phonon-assisted replicas due to the almost-degenerate LA-LO\({}_{1}\) branches, albeit with a relevant contribution from optical out-of-plane modes (denoted as ZO\({}_{2}\); see Supplemental Material [20] for a mode-resolved spectrum). Conversely, the peak at 5.878 eV is associated with the TA-TO\({}_{1}\) phonons, in analogy with the hBN case. We emphasise that _ab initio_ results correctly reproduce the intensity ratio among these peaks. Finally, the highest-energy structure at 5.919 eV turns out to be due to the out-of-plane optical mode ZO\({}_{1}\). This is forbidden for the centrosymmetric hBN luminescence while it is allowed in the rBN case because of the lowered symmetry of the crystal lattice.
In conclusion, we have demonstrated that cathodoluminescence is a viable tool to characterize fundamentally similar BN polytypes, which are hardly distinguishable otherwise. We have explained both experimentally and theoretically how the radiative emission spectrum is affected by the interaction between electronic excitations and lattice vibrations in rhombohedral and hexagonal boron nitride, two prototypical polytypes of low-dimensional layered materials with indirect band gap. Using a first-principles methodology which accounts for exciton-phonon interactions beyond the state of the art, we are able to provide a comprehensive and accurate description of the finite-momentum exciton states and phonon modes involved, thus showing the discriminating role of out-of-plane lattice vibrations assisting excitonic radiative recombination for rBN but not for hBN. We believe that our analysis and methodology could be useful for the growth and characterization of indirect-gap layered materials, which find widespread application as basic building blocks in novel 2D optoelectronic devices.
The authors would like to thank C. Vilar for the technical support on electron microscopy, K. Watanabe and T. Taniguchi for kindly providing a part of the rBN reference powder of T. Sato, and M. Chubarov and A. Henry for providing rBN whiskers on 6H-SiC. We thank C. Attaccalite and P. Lechifflart for useful discussions about exciton-phonon coupling calculations. This project has received funding from the European Union Horizon 2020 research and innovation programme under grant agreement No 785219 and No 881603 (Graphene Flagship core 2 and core 3), the French National Agency for Research (ANR) under grant agreement No ANR-14-CE08-0018 (GoBN: Graphene on Boron Nitride Technology), and MaX - MAterials design at the eXascale - a European Centre of Excellence funded by the European Union's program HORIZON-EUROHPC-JU-2021-COE-01 (Grant No. 101093374). D.V. and M.Z. also acknowledge financial support from ICSC - Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing, funded by the European Union - NextGenerationEU - PNRR, and the Italian national program PRIN2017, grant n. 2017BZPKSZ. L.W. acknowledges funding by the Fond National de Recherche (FNR), Luxembourg, via project INTER/19/ANR/13376969/ACCEPT. We acknowledge the EuroHPC Joint Undertaking for awarding us access to MeluXina at LuxProvide, Luxembourg, and CINECA for computational resources, awarded via the ISCRA Grants.
|